12 April 2005 Hardware-accelerated multimodality volume fusion
Helen Hong D.D.S., Juhee Bae, Heewon Kye, Yeong-Gil Shin
Abstract
In this paper, we propose a novel technique for multimodality volume fusion using graphics hardware. Our 3D texture-based volume fusion algorithm consists of three steps. First, two volumes of different modalities are loaded into texture memory on the GPU. Second, textured slices of the two volumes, sampled along the same proxy geometry, are combined with various compositing functions. Third, all the composited slices are alpha-blended. We have implemented our algorithm using HLSL (High Level Shader Language). Compared with software-based image integration, our method delivers the exact depth of each volume and realistic views at interactive rates. Experimental results using MR and PET brain images, and angiography with a stent, show that the over compositing operation is the most useful for clinical applications.
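The three steps above can be sketched in a few lines. The following is an illustrative Python sketch, not the authors' HLSL implementation: it models each textured slice as a premultiplied RGBA tuple, combines two co-registered slices with the Porter-Duff "over" operator (the compositing function the abstract found most useful), and then alpha-blends the composited slices back-to-front. The function names and the scalar RGBA model are assumptions for demonstration only.

```python
def composite_over(src, dst):
    """Porter-Duff 'over': src composited over dst.
    Each argument is a premultiplied RGBA tuple (r, g, b, a)."""
    rgb = tuple(s + d * (1.0 - src[3]) for s, d in zip(src[:3], dst[:3]))
    alpha = src[3] + dst[3] * (1.0 - src[3])
    return (*rgb, alpha)


def fuse_slices(slices_a, slices_b):
    """Step 2: combine corresponding slices of two modalities
    (here with 'over'; other compositing functions could be swapped in)."""
    return [composite_over(a, b) for a, b in zip(slices_a, slices_b)]


def blend_back_to_front(slices):
    """Step 3: alpha-blend the composited slices, ordered back-to-front."""
    accum = (0.0, 0.0, 0.0, 0.0)
    for s in slices:  # each nearer slice goes 'over' the accumulation
        accum = composite_over(s, accum)
    return accum
```

In the real pipeline each "slice" is a full 2D texture of RGBA texels and this per-pixel arithmetic runs in an HLSL pixel shader, which is what makes the fusion interactive.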
© (2005) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Helen Hong D.D.S., Juhee Bae, Heewon Kye, and Yeong-Gil Shin "Hardware-accelerated multimodality volume fusion", Proc. SPIE 5744, Medical Imaging 2005: Visualization, Image-Guided Procedures, and Display, (12 April 2005); https://doi.org/10.1117/12.594578
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Image fusion
Opacity
Positron emission tomography
Magnetic resonance imaging
Visualization
Volume rendering
Surgery
