Multi-atlas methods have become a popular approach to robust, automated image segmentation. In general, these methods first transfer prior manual segmentations, i.e., label maps, from a set of atlases to a given target image through image registration. These multiple label maps are then fused to produce a segmentation of the target image, either by a voting strategy or by statistical fusion, e.g., STAPLE. STAPLE simultaneously estimates the true segmentation and the performance level of each label map, but it has been shown to be inaccurate for multi-atlas segmentation because it relies entirely on the propagated label maps without considering the target image intensities. We develop a new method, called iSTAPLE, that incorporates the target image intensities into a maximum likelihood estimation (MLE) framework similar to that of STAPLE, combining the strengths of intensity-based segmentation and of statistical label fusion based on atlas consensus and performance level. The MLE problem is solved with a modified EM algorithm that simultaneously estimates the intensity profiles of the structures of interest as well as the true segmentation and the atlas performance levels. Unlike other methods, iSTAPLE does not require the target image to have the same image contrast and intensity range as the atlas images, which greatly extends the applicability of the atlases. Experiments on whole-brain segmentation showed that iSTAPLE performed consistently better than STAPLE.
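To make the statistical fusion step concrete, the following is a minimal sketch of the classical binary STAPLE EM updates (not the iSTAPLE extension itself, which would additionally fold a target-intensity likelihood for each structure into the E-step). The array shapes, initialization values, and the flat foreground prior are illustrative assumptions.

```python
import numpy as np

def staple_binary(label_maps, n_iters=20, eps=1e-8):
    """Simplified binary STAPLE: fuse propagated label maps by EM.

    label_maps : (R, N) array of 0/1 decisions from R registered atlases
                 over N voxels (a toy stand-in for the full formulation).
    Returns the voxel-wise posterior probability of the foreground label.
    """
    D = np.asarray(label_maps, dtype=float)    # atlas decisions
    R, N = D.shape
    p = np.full(R, 0.9)                        # sensitivity (performance) init
    q = np.full(R, 0.9)                        # specificity (performance) init
    prior = D.mean()                           # global foreground prior (assumed flat)

    for _ in range(n_iters):
        # E-step: posterior of the true segmentation given performance levels.
        # iSTAPLE would also multiply in an intensity likelihood term here.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b + eps)
        # M-step: re-estimate each atlas's performance level.
        p = (D * W).sum(axis=1) / (W.sum() + eps)
        q = ((1 - D) * (1 - W)).sum(axis=1) / ((1 - W).sum() + eps)
    return W
```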
Harmonic phase (HARP) motion analysis is widely used in the analysis of tagged magnetic resonance images of the heart. HARP motion tracking can yield gross errors, however, when there is a large amount of motion between successive time frames. Methods that exploit the spatial continuity of motion, so-called refinement methods, have previously been reported to reduce these errors. This paper describes a new refinement method based on shortest-path computations. The method uses a graph representation of the image and seeks an optimal tracking order from a specified seed to every point in the image by solving a single-source shortest-path problem. This minimizes the potential for the path-dependent solutions found in other refinement methods. Experiments on cardiac motion tracking show that the proposed method tracks the whole tissue more robustly and is also computationally efficient.
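As a rough illustration of the single-source shortest-path idea, the sketch below runs Dijkstra's algorithm on a 4-connected pixel grid and returns the order in which pixels are settled, which could serve as a tracking order. The per-pixel cost (e.g., some phase-inconsistency measure between neighbours) is a hypothetical input, not the cost actually used in the paper.

```python
import heapq
import numpy as np

def tracking_order(cost, seed):
    """Single-source shortest-path ordering on a 2D pixel grid (Dijkstra).

    cost : 2D array of non-negative costs for entering each pixel
           (a hypothetical stand-in for the paper's edge weights).
    seed : (row, col) of the manually specified seed point.
    Returns dist (accumulated path cost to each pixel) and the settling order.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    visited = np.zeros((h, w), dtype=bool)
    order = []
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if visited[r, c]:
            continue
        visited[r, c] = True
        order.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist, order
```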
To understand the role of the tongue in speech production, it is desirable to directly image the motion and strain of the muscles within the tongue. Magnetic resonance tagging, originally developed for cardiac imaging, has previously been applied to image both two-dimensional and three-dimensional tongue motion during speech. However, to quantify three-dimensional motion and strain, multiple images yielding two-dimensional motion must be acquired at different orientations and then interpolated, a time-consuming task in both image acquisition and processing. Recently, a new MR imaging and image processing method called zHARP was developed to encode and track 3D motion from a single slice without increasing acquisition time. zHARP was originally developed and applied to cardiac imaging. Its application to the tongue is not straightforward, because the tongue in repetitive speech does not move as consistently as the heart does over its beating cycle. Tongue images are therefore more susceptible to motion artifacts, and these artifacts are greatly exaggerated relative to conventional tagging because of the nature of the zHARP acquisition. In this work, we re-implemented the zHARP imaging sequence and optimized it for tongue motion analysis. We also optimized image acquisition by designing and developing a specialized MRI scanner triggering method and vocal repetition protocol to better synchronize speech repetitions. Our method was validated on a moving phantom. Results of 3D motion tracking and strain analysis in tongue experiments demonstrate the effectiveness of the method.
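For readers unfamiliar with the strain-analysis step, a common way to obtain strain from a tracked 3D displacement field is the Green-Lagrange tensor computed from the deformation gradient. The sketch below assumes a hypothetical dense displacement field `u` (e.g., produced by zHARP-style tracking) and is not the paper's specific pipeline.

```python
import numpy as np

def green_lagrange_strain(u, spacing=(1.0, 1.0, 1.0)):
    """Green-Lagrange strain tensor field from a 3D displacement field.

    u : array of shape (3, Z, Y, X) with displacement components
        (a hypothetical output of 3D motion tracking).
    Returns E with shape (Z, Y, X, 3, 3).
    """
    # du_i/dX_j stacked so the last two axes index (i, j)
    grads = np.stack([np.stack(np.gradient(u[i], *spacing), axis=-1)
                      for i in range(3)], axis=-2)
    F = grads + np.eye(3)                                # deformation gradient
    E = 0.5 * (np.einsum('...ki,...kj->...ij', F, F) - np.eye(3))
    return E
```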
Fiducial markers are often employed in image-guided surgical procedures to provide positional information based on pre-operative images. In the standard technique, centroids of three or more markers are localized in both image space and physical space. The localized positions are used in a closed-form algorithm to determine the three-dimensional rigid-body transformation that will register the two spaces in the least-squares sense. In this work we present (1) a method for determining the orientation of the axis of symmetry of a cylindrical marker in a tomographic image and (2) an extension to the standard approach to rigid-body registration that utilizes the orientation of marker axes as an adjunct to the positions of their centroids. The extension is a closed-form, least-squares solution. Unlike the standard approach, the extension is capable of three-dimensional registration with only two markers. We evaluate the accuracy of the former method by means of CT and MR images of markers attached to a phantom and the accuracy of the latter method by means of computer simulations.
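The "standard technique" referred to above is the closed-form least-squares solution for a rigid-body transformation from corresponding point sets, commonly solved with an SVD of the cross-covariance matrix. The sketch below shows that centroid-only baseline; it does not implement the paper's extension that also uses marker-axis orientations.

```python
import numpy as np

def rigid_register(image_pts, physical_pts):
    """Closed-form least-squares rigid registration of corresponding points.

    image_pts, physical_pts : (N, 3) arrays of corresponding marker centroids.
    Returns rotation R and translation t such that R @ x + t maps image-space
    points to physical space in the least-squares sense.
    """
    p = np.asarray(image_pts, dtype=float)
    q = np.asarray(physical_pts, dtype=float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)          # centroids of each set
    H = (p - pc).T @ (q - qc)                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = qc - R @ pc
    return R, t
```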
A virtual endoscopy system (VES) is a computer-aided diagnosis method that processes 3D image slices to provide simulated visualizations of specific organs similar to those produced by standard endoscopy. Compared with real endoscopy, VES has many advantages and will find more applications in the future. We constructed a virtual bronchus endoscopy system based on techniques from image analysis and computer graphics. Exploiting the characteristics of the bronchus, we adopted an improved 3D region-growing algorithm, which we call the 3D scanline algorithm, to extract the bronchus from DICOM-formatted medical images; a 3D polyhedral surface model of the bronchus is then obtained by triangulation with the marching cubes algorithm. The user can then navigate freely inside the bronchus along its axis. We adopted a surface rendering method for the rendering process. In practice, the system meets real-time navigation requirements and produces good-quality displays.
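To illustrate the surface-model step, a minimal sketch of marching-cubes triangulation over a binary bronchus mask is shown below, using scikit-image rather than the authors' implementation; the mask, spacing, and iso-level are assumptions.

```python
import numpy as np
from skimage import measure

def bronchus_surface(mask, spacing=(1.0, 1.0, 1.0)):
    """Triangulate a binary bronchus mask into a polygonal surface mesh.

    mask : hypothetical (Z, Y, X) boolean array from the segmentation step.
    Returns mesh vertices, triangle faces, and per-vertex normals.
    """
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals
```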
In this paper we present an automated region-growing algorithm designed to extract pipe-like organs, e.g., the bronchus, colon, and blood vessels, from 3D CT images acquired by a helical CT scanner. The algorithm, which we call the 3D scanline-filling algorithm, is a variant of region growing. It is very fast, and its parameters can be adjusted automatically. It can be used in a virtual endoscopy system (VES), an important computer-aided diagnosis method: the data produced by the algorithm can be sent to VES for further processing, visualization, and on-screen display, so that the physician can observe the interior of the organ on screen much as with a real endoscope, but without any discomfort to the patient.
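As a rough reference for the region-growing family the algorithm belongs to, the sketch below is a plain threshold-based 3D flood fill from a seed voxel; the paper's scanline-filling variant instead grows whole voxel runs per step for speed, and the thresholds here are illustrative assumptions.

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, lo, hi):
    """Threshold-based 3D region growing (6-connected flood fill) from a seed.

    volume : (Z, Y, X) CT intensity array.
    seed   : (z, y, x) voxel inside the structure of interest.
    lo, hi : inclusive intensity thresholds for accepting a voxel.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and lo <= volume[nz, ny, nx] <= hi):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```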