Purpose: In sequential imaging studies, rich information from past studies can be used in prior-image-based reconstruction (PIBR) as a form of improved regularization to yield higher-quality images in subsequent studies. PIBR methods, such as reconstruction of difference (RoD), have demonstrated substantial improvements in the image quality of reconstructed subsequent anatomy, even when CT data are acquired at very low exposure settings.
Approach: However, to effectively use information from past studies, two major elements are required: (1) registration, usually deformable, must be applied between the current and prior scans; such registration is greatly complicated by the potential ambiguity between patient motion and anatomical change, which is often the very target of the follow-up study; and (2) regularization parameters must be selected for reliable and robust reconstruction of features.
Results: We address these two major issues and apply a modified RoD framework to the clinical problem of lung nodule surveillance. Specifically, we develop a modified deformable registration approach that enforces a locally smooth/rigid registration around the change region and extend previous analytic expressions relating reconstructed contrast to the regularization parameter and other system dependencies for reliable representation of image features. We demonstrate the efficacy of this approach using a combination of realistic digital phantoms and clinical projection data. Performance is characterized as a function of the size of the locally smooth registration region of interest as well as x-ray exposure.
Conclusions: This modified framework effectively separates patient motion from anatomical change, directly highlighting anatomical change in lung nodule surveillance.
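As an illustrative sketch of the kind of spatially weighted penalty a locally smooth/rigid registration might employ (a minimal stand-in under assumed weights, not the authors' implementation):

```python
import numpy as np

def weighted_smoothness_penalty(u, roi_mask, w_roi=10.0, w_bg=1.0):
    """Spatially weighted smoothness penalty on a 2-D displacement field.

    u        : (2, H, W) displacement field (ux, uy).
    roi_mask : (H, W) bool, True inside the region of suspected change,
               where the registration should stay locally smooth/rigid.
    Returns a scalar: squared finite-difference gradients of the
    displacement, weighted more heavily inside the ROI.
    """
    w = np.where(roi_mask, w_roi, w_bg)
    penalty = 0.0
    for comp in u:                       # ux, then uy
        gy, gx = np.gradient(comp)       # finite-difference gradients
        penalty += np.sum(w * (gx**2 + gy**2))
    return penalty

# Toy usage: a random field is penalized more where it varies inside the ROI.
rng = np.random.default_rng(0)
u = rng.normal(scale=0.5, size=(2, 64, 64))
roi = np.zeros((64, 64), bool); roi[24:40, 24:40] = True
print(weighted_smoothness_penalty(u, roi))
```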
In previous works, we have presented concepts for dynamic beam attenuators (DBAs) allowing for substantial dose reductions. So far, we have used a tube current modulation (TCM) scheme where the tube current is proportional to the square root of the maximum object attenuation in each projection. As DBAs show particularly beneficial behavior in region-of-interest (ROI) imaging, the question arose whether the employed TCM method would still be the most meaningful in this case. We present a simulation framework calculating i) the dose distribution in the object and ii) the image quality in the ROI of the reconstructed images for a given primary fluence. The dose to every voxel was calculated from individual Monte Carlo simulations for every beamlet. From this we can approximate, for any fluence distribution, the effective dose, which incorporates tissue-specific weighting factors describing the stochastic health risk. Using the same fluence distribution, the image variance according to the propagated fluence can be calculated for a given ROI. We employed a homogeneous phantom with a centered ROI and a female thorax phantom where the ROI is defined by either the heart or the spine. Finally, we optimized the tube current according to the product of the mean image variance in the ROI and the patient dose. Furthermore, the tube current obtained was compared with a heuristic square-root TCM (hsqTCM) method. The optimized TCM matches the hsqTCM well for the idealized case of a central ROI in an elliptical, homogeneous phantom. For a more complex case with an off-center ROI, the optimized TCM differs substantially from the hsqTCM rule, offering an additional dose reduction of up to 30%.
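A minimal sketch of the optimization step described above, assuming per-view dose and ROI-variance sensitivities have already been precomputed; all arrays here are placeholders, not Monte Carlo results:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical precomputed quantities, one entry per projection view:
#   dose_per_mAs[v] : effective dose contributed by view v at unit tube current
#   var_per_mAs[v]  : mean ROI image variance from view v at unit current;
#                     variance scales as 1/current.
n_views = 72
rng = np.random.default_rng(1)
dose_per_mAs = 1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, n_views))
var_per_mAs = 1.0 + 0.8 * rng.random(n_views)

def objective(log_i):
    i = np.exp(log_i)                    # tube current per view, kept > 0
    dose = np.sum(dose_per_mAs * i)      # dose grows with current
    var = np.mean(var_per_mAs / i)       # ROI variance falls with current
    return dose * var                    # the product used as figure of merit

res = minimize(objective, x0=np.zeros(n_views), method="L-BFGS-B")
i_opt = np.exp(res.x)
print("optimized current range:", i_opt.min(), i_opt.max())
```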
In this work, we present a new concept for dynamic beam attenuation, the z-aligned sheet-based dynamic beam attenuator (z-sbDBA). Like the previously presented sbDBA, it allows the x-ray intensity to be adapted dynamically over the projection angle and also over the fan beam angle by changing the tilt angle of the z-sbDBA. In contrast to the sbDBA, however, the absorbing sheets of the z-sbDBA are parallel to the detector rows. In addition, the height of the absorbing sheets varies over the fan angle: very low near the central beam and increasing toward larger fan beam angles. This facilitates designing the absorption profile of the z-sbDBA, which was not feasible with the sbDBA. Due to the changed orientation of the attenuation sheets, pronounced structures along the fan beam width are avoided, reducing the risk of ring artifacts in the reconstructed image.
A prototype of the z-sbDBA, mounted on a drive, has been built and investigated. Using a clinical computed tomography (CT) scanner, we experimentally demonstrate that variable and smooth intensity profiles can be realized by the controlled change of the angular position of the z-sbDBA. Reconstructed images do not reveal substantial artifacts, thus proving the necessary stability of the acquisition technique. We also show that the variance across a reconstructed image can be changed as a function of the tilt angle.
Our experimental results demonstrate that the new z-sbDBA concept maintains the main advantage of the sbDBA concept, namely the dynamic fluence field modulation (FFM) of the emitted x-ray beam. In addition, our findings show that, due to the improved structuring of the z-sbDBA, several drawbacks of the sbDBA can be overcome by a) avoiding pronounced structures along the fan beam angle, b) requiring only small tilt angles, and c) allowing for a flexible design of the transmission profile propagated toward the patient.
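As a toy model of the tilt dependence (not the prototype's actual geometry or materials; the sheet-height profile and attenuation coefficient below are assumptions), the transmission can be illustrated as follows:

```python
import numpy as np

MU_SHEET = 12.0  # illustrative linear attenuation of the sheet material, 1/cm

def z_sbdba_transmission(fan_angle_deg, tilt_deg):
    """Toy transmission model for a z-aligned sheet attenuator.

    Sheet height grows away from the central ray; the absorber path seen by
    a ray scales with sheet height times the sine of the tilt angle, so small
    tilts suffice and the untilted profile is fully transparent.
    """
    height = 0.02 + 0.10 * (np.abs(fan_angle_deg) / 25.0) ** 2   # cm
    path = height * np.sin(np.radians(np.abs(tilt_deg)))
    return np.exp(-MU_SHEET * path)

gamma = np.linspace(-25, 25, 11)          # fan beam angle, degrees
for tilt in (0.0, 2.0, 5.0):
    print(tilt, np.round(z_sbdba_transmission(gamma, tilt), 3))
```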
In our previous study we successfully built a novel sheet-based dynamic beam attenuator (sbDBA) for fluence field modulation in X-ray computed tomography (CT) and performed a first experimental validation. In this work, we focus on the optimization of the DBA transmission properties for a given object. In clinical routine, CT scanners must cope with attenuation properties that vary from patient to patient. Typically, the attenuation of patients is high in the center of the fan beam and decreases toward the periphery. The attenuation profile of an object can also change for different X-ray tube positions. These variations cause unfavorable imbalances of image quality in the reconstructed object. Typically, the peripheral region, which is generally of minor diagnostic relevance, has relatively lower noise than the central region because the rays contributing to the peripheral region are less attenuated. This imbalance can be reduced by using beam-shaping prefilters, e.g., bowtie filters, which attenuate the propagated intensity toward the periphery in a predefined, static profile. Bowtie filters, however, are not capable of dynamically adapting their attenuation to the attenuation profile of the patient. This can be accomplished by using dynamic beam attenuators (DBAs), where the fan beam intensity can be modulated dynamically on a view-by-view basis, reducing noise inhomogeneities and enabling region-of-interest (ROI) imaging. Different scenarios (no attenuator, tube current modulation, conventional bowtie filter, the sbDBA, and an ideal DBA) are compared in terms of image quality. The optimized sbDBA with tube current modulation (TCM) not only reduces the total radiation dose but also allows for the spatial selection of intensity required for ROI imaging.
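A minimal sketch of the view-by-view modulation idea: an ideal dynamic attenuator can be set so that the combined filter-plus-object attenuation is flat in every view, which no static bowtie can achieve. The elliptical water object and all numbers are illustrative:

```python
import numpy as np

def object_attenuation(view_angle, channels):
    """Line attenuation (natural-log units) through an elliptical water object."""
    a, b = 18.0, 12.0                        # semi-axes, cm
    c, s = np.cos(view_angle), np.sin(view_angle)
    half_width = np.sqrt((a * c) ** 2 + (b * s) ** 2)
    t = np.clip(channels / half_width, -1, 1)
    return 0.2 * 2 * half_width * np.sqrt(1 - t**2)   # mu_water ~ 0.2/cm

channels = np.linspace(-20, 20, 9)           # detector channel positions, cm
for view in (0.0, np.pi / 2):
    A_obj = object_attenuation(view, channels)
    A_dba = A_obj.max() - A_obj              # attenuator fills up to the max
    print(np.round(A_obj + A_dba, 2))        # flat total attenuation per view
```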
Cardiac motion (or functional) analysis has shown promise not only for the non-invasive diagnosis of cardiovascular diseases but also for the prediction of future cardiac events. Current imaging modalities have limitations that can degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. An experiment using a synthesized digital phantom showed promising results for motion analysis.
We have developed a digitally synthesized patient, the “Zach” (Zero-millisecond Adjustable Clinical Heart) phantom, which provides access to the ground truth and enables assessment of image-based cardiac functional analysis (CFA) using CT images with clinically realistic settings. The study using the Zach phantom revealed a major problem with image-based CFA: false dyssynchrony. Even though the true motion of wall segments is synchronous, it may appear dyssynchronous in the reconstructed cardiac CT images. This is attributed to how cardiac images are reconstructed and how wall locations are updated over cardiac phases. The presence and degree of false dyssynchrony may vary from scan to scan, which could degrade the accuracy and the repeatability (or precision) of image-based CT-CFA exams.
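A toy demonstration of the qualitative effect: two wall segments with identical true motion appear phase-shifted when their reconstructed positions are sampled at different effective temporal centers, as half-scan reconstruction can introduce. The segment-dependent phase offsets below are invented for illustration:

```python
import numpy as np

# True motion: both wall segments move in perfect synchrony.
phases = np.linspace(0, 1, 20, endpoint=False)       # cardiac phases
true_motion = lambda p: np.sin(2 * np.pi * p)

# Each voxel's effective temporal center depends on the gantry angle at
# which its dominant rays were acquired; model this as a segment-dependent
# phase offset (illustrative values only).
offset = {"septal": 0.00, "lateral": 0.06}

for seg, d in offset.items():
    apparent = true_motion(phases - d)   # wall position as "seen" in images
    peak = phases[np.argmax(apparent)]
    print(f"{seg}: apparent peak phase = {peak:.2f}")
# Identical true motion, different apparent peak phases: false dyssynchrony.
```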
In recent years, iterative algorithms have become popular in diagnostic CT imaging to reduce noise or radiation dose to the patient. The non-linear nature of these algorithms leads to non-linearities in the imaging chain. However, the methods to assess the performance of CT imaging systems were developed assuming the linear process of filtered backprojection (FBP). Those methods may not be suitable any longer when applied to non-linear systems. In order to evaluate the imaging performance, a phantom is typically scanned and the image quality is measured using various indices. For reasons of practicality, cost, and durability, those phantoms often consist of simple water containers with uniform cylinder inserts. However, these phantoms do not represent the rich structure and patterns of real tissue accurately. As a result, the measured image quality or detectability performance for lesions may not reflect the performance on clinical images. The discrepancy between estimated and real performance may be even larger for iterative methods, which sometimes produce “plastic-like”, patchy images with homogeneous patterns. Consequently, more realistic phantoms should be used to assess the performance of iterative algorithms. We designed and constructed a biological phantom consisting of porcine organs and tissue that models a human abdomen, including liver lesions. We scanned the phantom on a clinical CT scanner and compared basic image quality indices between filtered backprojection and an iterative reconstruction algorithm.
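A minimal sketch of the kind of basic image quality indices such a comparison might compute, using placeholder noise images in place of the actual FBP and iterative reconstructions:

```python
import numpy as np

def basic_iq_indices(img, roi_slice):
    """ROI mean, ROI noise (std), and mean 2-D noise power spectrum
    estimated from the mean-subtracted ROI patch."""
    patch = img[roi_slice]
    patch = patch - patch.mean()
    nps = np.abs(np.fft.fft2(patch)) ** 2 / patch.size
    return img[roi_slice].mean(), img[roi_slice].std(), nps.mean()

# Placeholder reconstructions of the same slice (stand-ins for FBP / IR).
rng = np.random.default_rng(2)
fbp = 40 + 12 * rng.standard_normal((256, 256))
ir = 40 + 6 * rng.standard_normal((256, 256))    # smoother, lower noise

roi = (slice(96, 160), slice(96, 160))
for name, img in (("FBP", fbp), ("IR", ir)):
    m, s, p = basic_iq_indices(img, roi)
    print(f"{name}: mean={m:.1f} HU, noise={s:.1f} HU, mean NPS={p:.1f}")
```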
This research aims to develop a new feature-guided motion estimation method for the left ventricular wall in gated cardiac imaging. The guiding feature is the “footprint” of one of the papillary muscles, i.e., the attachment of the papillary muscle on the endocardium. Myocardial perfusion (MP) PET images simulated from the 4-D XCAT phantom, which features papillary muscles and realistic cardiac motion with a known motion vector field (MVF), were employed in the study. The 4-D MVF of the heart model of the XCAT phantom was used as a reference. For each MP PET image, the 3-D “footprint” surface of one of the papillary muscles was extracted and its centroid was calculated. The motion of the centroid of the “footprint” throughout a cardiac cycle was tracked and analyzed in 4-D. This motion was extrapolated throughout the entire heart to build a papillary-muscle-guided initial estimate of the 4-D cardiac MVF. A previous motion estimation algorithm was applied to the simulated gated myocardial PET images to estimate the MVF. Three different initial MVF estimates were used in the estimation: zero initial (0-initial), the papillary muscle guided initial (P-initial), and the true MVF from the phantom (T-initial). Qualitative and quantitative comparison between the estimated MVFs and the true MVF showed that the P-initial provided more accurate motion estimation in longitudinal motion than the 0-initial, with over 70% improvement, and accuracy comparable to that of the T-initial. We concluded that when the footprint can be tracked accurately, this feature-guided approach will significantly improve the accuracy and robustness of traditional optical-flow-based motion estimation methods.
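An illustrative sketch of the guiding idea, with the footprint extraction assumed done and a simple Gaussian falloff standing in for the extrapolation step (not the authors' exact scheme):

```python
import numpy as np

def centroid(points):
    """Centroid of the extracted 3-D 'footprint' surface points, shape (N, 3)."""
    return points.mean(axis=0)

def initial_mvf(footprint_by_gate, grid_points, sigma=30.0):
    """Papillary-muscle-guided initial MVF: the centroid displacement of the
    footprint between gates, spread over the heart with a Gaussian falloff
    (a simple stand-in for the extrapolation step)."""
    c = np.array([centroid(p) for p in footprint_by_gate])   # (G, 3)
    disp = np.diff(c, axis=0)                                # per-gate motion
    mvf = []
    for d, c0 in zip(disp, c[:-1]):
        r = np.linalg.norm(grid_points - c0, axis=1)
        w = np.exp(-(r / sigma) ** 2)                        # falloff weights
        mvf.append(w[:, None] * d[None, :])                  # (P, 3) per gate
    return np.array(mvf)

# Toy data: a footprint translating in z over 4 gates, sparse heart grid.
rng = np.random.default_rng(3)
fp = [rng.normal(size=(50, 3)) + [0, 0, g * 2.0] for g in range(4)]
grid = rng.uniform(-40, 40, size=(200, 3))
print(initial_mvf(fp, grid).shape)   # (3 gate intervals, 200 points, 3)
```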
One of the major obstacles toward photon counting detector (PCD)-based clinical x-ray CT systems is the high count rate: when operated under too intense an x-ray flux, pulse pileup effects (PPEs) due to coincident photons distort the spectrum recorded by PCDs. In this paper we discuss a strategy using a hybrid detector, which consists of PCDs for the central part of the detector [which corresponds to a central small field-of-view (FOV) of the object] and energy integrating detectors (EIDs) for the peripheral part, to achieve the following three goals: 1) to minimize the PPEs; 2) to produce accurate spectral images for the small FOV; and 3) to provide conventional CT images for the entire FOV. The third goal requires a solution to the exterior problem, because the central part of the EID data is missing. The spectral data obtained by PCDs carry richer information than the intensity data obtained by EIDs; however, performing a simple weighted summation of counts from the multi-energy windows of the PCD would not produce realistic EID data, as the spectrum recorded by the PCD can be skewed by spectral response effects (SREs) and PPEs. We propose a unique approach for the hybrid PCD/EID-CT system in this paper.
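For concreteness, a sketch of the naive energy-weighted bin summation referred to above, which is only accurate when the recorded spectrum is undistorted; the window edges and counts are hypothetical:

```python
import numpy as np

# Hypothetical energy windows of the PCD (keV) and counts per window.
bin_edges = np.array([20, 40, 60, 80, 100, 120], float)
counts = np.array([1.2e4, 2.3e4, 1.9e4, 9.0e3, 2.5e3])

# Naive EID synthesis: an energy-integrating detector weights each photon
# roughly by its energy, so sum counts times the window-center energy.
e_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
eid_like = np.sum(counts * e_centers)
print(f"synthesized EID signal: {eid_like:.3e} (arb. units)")
# As noted above, SREs and PPEs skew the per-window counts at high rates,
# so this synthesized value deviates from true EID data.
```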
The x-ray spectrum recorded by a photon-counting x-ray detector (PCXD) is distorted by the following physical effects, which are independent of the count rate: finite energy resolution, Compton scattering, charge sharing, and K-escape. If left uncompensated, the spectral response (SR) of a PCXD due to the above effects will result in image artifacts and inaccurate material decomposition. We propose a new SR compensation (SRC) algorithm using the sinogram restoration approach. The two main contributions of our proposed algorithm are: (1) it uses an efficient conjugate gradient method in which the first and second derivatives of the cost functions are calculated directly and analytically, whereas a slower optimization method requiring numerous function evaluations was used in other work; (2) it guarantees convergence by combining the non-linear conjugate gradient method with line searches that satisfy the Wolfe conditions, whereas the algorithm in other work is not backed by theorems from optimization theory that guarantee convergence. In this study, we validate the performance of the proposed algorithm using computer simulations. The bias was reduced from 11% to zero, and image artifacts were removed from the reconstructed images. Quantitative K-edge imaging is possible only when SR compensation is performed.
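A minimal sketch of the optimization machinery named above, Polak-Ribiere nonlinear conjugate gradient with Wolfe-condition line searches, applied here to a generic smooth toy cost standing in for the SRC sinogram-restoration cost (not the authors' actual cost function):

```python
import numpy as np
from scipy.optimize import line_search

def ncg_wolfe(f, grad, x0, max_iter=100, tol=1e-8):
    """Polak-Ribiere nonlinear conjugate gradient with Wolfe line searches.
    scipy's line_search enforces the (strong) Wolfe conditions, which is
    what underpins the convergence guarantee mentioned above."""
    x, g = x0, grad(x0)
    d = -g
    for _ in range(max_iter):
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:   # line search failed: restart along steepest descent
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-4
        x = x + alpha * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        g = g_new
    return x

# Toy quadratic cost standing in for the SRC sinogram-restoration cost.
A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(ncg_wolfe(f, grad, np.zeros(2)))   # converges to the solution of A x = b
```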
In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronics noise when scanning strongly attenuating obese patients or when using an ultra-low X-ray dose. Unlike CT systems based on energy integrating detectors, a system with a quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly affects the spectral resolution of these detectors. Therefore, it does not contribute to the image noise in spectrally non-resolved CT images. This promises improved image quality due to image noise reduction in clinical CT examinations with the lowest X-ray tube currents or with obese patients. To quantify the benefits of quantum counting detectors in clinical CT, we have carried out an extensive simulation study of the complete scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling of the X-ray source, beam attenuation in the patient, and calculation of the detector response. Moreover, in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based, theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens Definition Flash scanner with a conventional energy integrating CT detector. We demonstrate and quantify the improvement from image noise reduction achievable with quantum counting techniques in CT examinations with ultra-low X-ray dose and strong attenuation.
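A toy numerical illustration of the detector difference, under simplified assumptions (monoenergetic deposits, Gaussian electronic noise, invented noise levels, not the simulation chain described above):

```python
import numpy as np

rng = np.random.default_rng(4)
n_photons = rng.poisson(20, size=10_000)   # ultra-low flux: ~20 photons/reading
e_mean = 60.0                               # mean deposited energy, keV (toy)
sigma_eid = 300.0                           # electronic noise on the EID signal
sigma_keV = 4.0                             # electronic noise on pulse height

# EID: every reading integrates photon energies PLUS the electronic noise.
eid = n_photons * e_mean + rng.normal(0.0, sigma_eid, n_photons.size)

# Counting detector: electronic noise only jitters pulse heights around the
# threshold; with a 25 keV threshold, essentially every photon is still counted.
pcd = np.array([np.sum(rng.normal(e_mean, sigma_keV, n) > 25.0)
                for n in n_photons])

print("EID relative noise:", eid.std() / eid.mean())   # inflated by electronics
print("PCD relative noise:", pcd.std() / pcd.mean())   # ~ pure Poisson limit
```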
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer-generated NURBS-surface-based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools, which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantage of utilizing a realistic model of human anatomy and physiological motions without voxelization, together with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library consisting of functions to read in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners and in the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
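A sketch of the line-integral step given the sorted ray-surface intersections, with the NURBS intersection computation itself assumed done; as described above, the overlapping order decides which object's material fills each nested interval:

```python
import numpy as np

def line_integral(intersections, priority, mu):
    """Attenuation line integral along one ray.

    intersections : list of (t, obj_id) where the ray crosses an object
                    surface, t being the distance along the ray.
    priority      : obj_id -> overlap order (higher wins where objects nest).
    mu            : obj_id -> linear attenuation coefficient, 1/mm.
    """
    events = sorted(intersections)               # walk the ray front to back
    inside = set()
    total, t_prev = 0.0, None
    for t, obj in events:
        if inside and t_prev is not None:
            top = max(inside, key=lambda o: priority[o])
            total += mu[top] * (t - t_prev)      # segment filled by top object
        inside.symmetric_difference_update({obj})  # enter or exit the object
        t_prev = t
    return total

# Toy case: soft-tissue body (id 0) containing a bone (id 1) along the ray.
hits = [(0.0, 0), (30.0, 0), (10.0, 1), (14.0, 1)]
print(line_integral(hits, priority={0: 0, 1: 1}, mu={0: 0.02, 1: 0.05}))
# = 0.02 * (10 + 16) + 0.05 * 4 = 0.72
```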
KEYWORDS: 3D modeling, Computed tomography, Heart, Arteries, Image segmentation, Data modeling, Ischemia, Medical imaging, Instrument modeling, 3D image processing
A realistic 3D coronary arterial tree (CAT) has been developed for the heart model of the computer-generated 3D XCAT phantom. The CAT allows generation of a realistic model of the location, size, and shape of the regional ischemia or infarction associated with a given coronary arterial stenosis or occlusion. This in turn can be used in medical imaging applications. An iterative rule-based generation method that systematically utilized anatomic, morphometric, and physiologic knowledge was used to construct a detailed, realistic 3D model of the CAT in the XCAT phantom. The anatomic details of the myocardial surfaces and large coronary arterial vessel segments were first extracted from cardiac CT images of a normal patient with right coronary dominance. Morphometric information derived from porcine data in the literature, after adjustment by scaling laws, provided statistically nominal diameters, lengths, and connectivity probabilities of the generated coronary arterial segments in modeling the CAT of an average human. The largest six orders of the CAT were generated based on the physiologic constraints defined in the coronary generation algorithms. When combined with the heart model of the XCAT phantom, the realistic CAT provides a unique simulation tool for the generation of realistic regional myocardial ischemia and infarction. Together with the existing heart model, the new CAT provides an important improvement over the current 3D XCAT phantom, offering a more realistic model of the normal heart and the potential to simulate myocardial diseases in the evaluation of medical imaging instrumentation, image reconstruction, and data processing methods.
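An illustrative single growth step of such a rule-based generator; the morphometric tables below are invented placeholders, not the actual porcine-derived statistics:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical order-indexed morphometric statistics (mean, sd), loosely in
# the spirit of scaled porcine data; NOT the real tables from the literature.
diam_mm = {6: (3.2, 0.4), 5: (2.1, 0.3), 4: (1.3, 0.2)}
len_mm = {6: (28.0, 9.0), 5: (14.0, 6.0), 4: (8.0, 4.0)}
p_connect = {(6, 5): 0.8, (6, 4): 0.2, (5, 4): 0.9}

def grow_segment(parent_order):
    """One rule-based growth step: pick the daughter order according to the
    connectivity probabilities, then sample its diameter and length from the
    order-dependent statistics."""
    orders = [o for (p, o) in p_connect if p == parent_order]
    probs = np.array([p_connect[(parent_order, o)] for o in orders])
    order = rng.choice(orders, p=probs / probs.sum())
    d = rng.normal(*diam_mm[order])
    l = max(rng.normal(*len_mm[order]), 1.0)   # lengths kept positive
    return order, d, l

print(grow_segment(6))   # e.g. daughter order, diameter (mm), length (mm)
```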
We investigate the effect of heart rate on the quality and artifact generation in coronary artery images obtained using multi-slice computed tomography (MSCT) with the purpose of finding the optimal time resolution for data acquisition. To perform the study, we used the 4D NCAT phantom, a computer model of the normal human anatomy and cardiac and respiratory motions developed in our laboratory. Although capable of being far more realistic, the 4D NCAT cardiac model was originally designed for low-resolution imaging research, and lacked the anatomical detail to be applicable to high-resolution CT. In this work, we updated the cardiac model to include a more detailed anatomy and physiology based on high-resolution clinical gated MSCT data. To demonstrate its utility in high-resolution dynamic CT imaging research, the enhanced 4D NCAT was then used in a pilot simulation study to investigate the effect of heart rate on CT angiography. The 4D NCAT was used to simulate patients with different heart rates (60-120 beats/minute) and with various cardiac plaques of known size and location within the coronary arteries. For each simulated patient, MSCT projection data was generated with data acquisition windows ranging from 100 to 250 ms centered within the quiet phase (mid-diastole) of the heart using an analytical CT projection algorithm. CT images were reconstructed from the projection data, and the contrast of the plaques was then measured to assess the effect of heart rate and to determine the optimal time resolution required for each case. The 4D NCAT phantom with its realistic model for the cardiac motion was found to provide a valuable tool from which to optimize CT cardiac applications. Our results indicate the importance of optimizing the time resolution with regard to heart rate and plaque location for improved CT images at a reduced patient dose.
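A back-of-envelope sketch of why the time resolution must track heart rate: the in-window coronary displacement grows with both the acquisition window duration and the heart rate. The velocity model below is purely illustrative, not a clinical one:

```python
import numpy as np

def residual_motion_mm(heart_rate_bpm, window_ms):
    """Rough in-window coronary displacement during mid-diastole.

    Assumes a mid-diastolic coronary speed that rises with heart rate as
    diastasis shortens (an illustrative linear model)."""
    speed_mm_s = 5.0 + 0.5 * (heart_rate_bpm - 60)   # mm/s at mid-diastole
    return speed_mm_s * window_ms / 1000.0

for hr in (60, 90, 120):
    for w in (100, 250):
        print(f"HR {hr} bpm, window {w} ms: "
              f"{residual_motion_mm(hr, w):.1f} mm blur")
```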
A detailed four-dimensional model of the coronary artery tree has great potential in a wide variety of applications especially in biomedical imaging. We developed a computer generated three-dimensional model for the coronary arterial tree based on two datasets: (1) gated multi-slice computed tomography (MSCT) angiographic data obtained from a normal human subject and (2) statistical morphometric data obtained from porcine hearts. The main coronary arteries and heart structures were segmented from the MSCT data to define the initial segments of the vasculature and geometrical details of the boundaries. An iterative rule-based computer generation algorithm was then developed to extend the coronary artery tree beyond the initial segmented branches. The algorithm was governed by the following factors: (1) the statistical morphometric measurements of the connectivities, lengths, and diameters of the arterial segments, (2) repelling forces from other segments and boundaries, and (3) optimality principles to minimize the drag force at each bifurcation in the generated tree. Using this algorithm, the segmented coronary artery tree from the MSCT data was optimally extended to create a 3D computational model of the largest six orders of the coronary arterial tree. The new method for generating the 3D model is effective in imposing the constraints of anatomical and physiological characteristics of coronary vasculature. When combined with the 4D NCAT phantom, a computer model for the human anatomy and cardiac and respiratory motions, the new model will provide a unique tool to study cardiovascular characteristics and diseases through direct and medical imaging simulation studies.
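A minimal sketch of the bifurcation optimality principle: minimizing Poiseuille pumping power plus a volume-proportional maintenance cost yields radii obeying a cube law, so flow conservation then fixes the daughter diameters. This is Murray's classical argument, used here as a stand-in for the paper's drag-minimization rule; all constants are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_radius(q, mu=0.035, length=10.0, b=1.0):
    """Radius minimizing Poiseuille pumping power plus a metabolic
    maintenance cost proportional to vessel volume."""
    def cost(r):
        pumping = 8 * mu * length * q**2 / (np.pi * r**4)
        maintenance = b * np.pi * r**2 * length
        return pumping + maintenance
    return minimize_scalar(cost, bounds=(1e-4, 10.0), method="bounded").x

# Flow conservation at a bifurcation: q0 = q1 + q2.
q1, q2 = 0.6, 0.4
r0, r1, r2 = (optimal_radius(q) for q in (q1 + q2, q1, q2))
print(r0**3, r1**3 + r2**3)   # cube law: r0^3 ~= r1^3 + r2^3
```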
Coronary artery imaging with multi-slice helical computed tomography is a promising noninvasive imaging technique. The current major issues include insufficient temporal resolution and a large patient dose. We propose an image reconstruction method which provides a solution to both of these problems. The method uses an iterative approach repeating the following four steps until the difference between the two projection data sets in step 4 falls below a certain criterion: 1) estimating or updating the cardiac motion vectors, 2) reconstructing the time-resolved 4D dynamic volume images using the motion vectors, 3) calculating the projection data from the current 4D images, and 4) comparing them with the measured ones. In this study, we obtain the first estimate of the motion vector. We use the 4D NCAT phantom, a realistic computer model for the human anatomy and cardiac motions, to generate the dynamic fan-beam projection data sets as well as to provide a known truth for the motion. Then, half-scan reconstruction with the sliding time-window technique is used to generate cine images f(t, r), where r denotes the spatial position. Here, we use one heart beat for each position r so that the time information is retained. Next, the magnitude of the first derivative of f(t, r) with respect to time, |df/dt|, is calculated and summed over a region-of-interest (ROI); this sum is called the mean absolute difference (MAD). The initial estimate of the vector field is obtained using the MAD for each ROI. Results of the preliminary study are presented.
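A minimal sketch of the MAD figure described above, computed on a toy cine series; its minimum over time locates the quiet phase used to seed the motion-vector estimate:

```python
import numpy as np

def mad_curve(cine, roi):
    """Sum of |df/dt| over the ROI for each time interval of a cine series.

    cine : (T, H, W) reconstructed cine images f(t, r).
    roi  : boolean (H, W) mask.
    """
    dfdt = np.abs(np.diff(cine, axis=0))        # |f(t+1) - f(t)| per voxel
    return dfdt[:, roi].sum(axis=1)             # one MAD value per interval

# Toy cine: a bright disc whose edge oscillates, quietest at mid-diastole.
T, H, W = 20, 64, 64
yy, xx = np.mgrid[:H, :W]
phases = np.linspace(0, 2 * np.pi, T, endpoint=False)
radius = 15 + 4 * np.sin(phases)                # wall motion over the cycle
cine = np.array([(np.hypot(yy - 32, xx - 32) < r).astype(float)
                 for r in radius])
roi = np.hypot(yy - 32, xx - 32) < 25
mad = mad_curve(cine, roi)
print("quietest interval:", int(np.argmin(mad)))  # where |df/dt| is smallest
```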