In recent years, x-ray photon-counting detectors (PCDs) have become increasingly popular due to their energy discrimination capability and low noise levels. However, technical issues (e.g., charge splitting and pulse pileup effects) can degrade data quality by distorting the energy spectrum. To address these issues, building on a deep neural network-based approach that uses a Wasserstein generative adversarial network (WGAN) framework for PCD data correction, we evaluate the effectiveness of pre-trained and trained-from-scratch convolutional neural networks (CNNs) as perceptual loss functions for charge splitting and pulse pileup correction in photon-counting computed tomography (CT) data. Several CNN architectures are evaluated, including VGG11, VGG13, VGG16, VGG19, ResNet50, and Xception. Our findings indicate that training the CNNs from scratch on our dataset produces better results than using a pre-trained network, and that the choice of CNN architecture as the perceptual loss significantly affects performance in the WGAN framework. Furthermore, because the recent explosive interest in transformers suggests their potential for computer vision tasks, we also evaluate transformers to maximize the attribute-related information contained in image features through texture feature extraction. Our study emphasizes the importance of selecting an appropriate network architecture and training strategy when implementing the WGAN framework for photon-counting CT data correction.
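To make the perceptual-loss idea concrete, here is a minimal PyTorch sketch of a VGG-feature perceptual loss combined with a WGAN adversarial term; the layer cut-off, loss weight, and function names are illustrative assumptions, not the exact configuration used in the study.

```python
# Minimal sketch of a perceptual loss built on a VGG-style feature
# extractor inside a WGAN generator objective (illustrative only).
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    def __init__(self, pretrained=False):
        super().__init__()
        # weights=None gives a randomly initialized extractor; in the
        # from-scratch setting it would first be trained on the PCD data.
        vgg = models.vgg16(weights="IMAGENET1K_V1" if pretrained else None)
        self.features = vgg.features[:16]  # up to relu3_3 (assumed cut-off)
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, corrected, target):
        # Replicate single-channel CT data to 3 channels for VGG input.
        c = corrected.repeat(1, 3, 1, 1)
        t = target.repeat(1, 3, 1, 1)
        return self.criterion(self.features(c), self.features(t))

def generator_loss(critic, corrected, target, perc_loss, lam=0.1):
    adv = -critic(corrected).mean()  # WGAN generator term
    return adv + lam * perc_loss(corrected, target)
```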
Single photon emission computed tomography (SPECT) is commonly used with radioiodine scintigraphy to evaluate patients with multiple diseases such as thyroid cancer. The clinical gamma camera for SPECT contains a mechanical collimator that greatly compromises dose efficiency and limits diagnostic performance. The Compton camera is emerging as a promising alternative for mapping the distribution of radiopharmaceuticals in the thyroid, since it requires no mechanical collimation and in principle rejects no gamma-ray photons. In this study, a high-efficiency tomographic imaging system is designed with a Compton camera for thyroid cancer imaging. A Timepix3-based Compton camera is selected for collecting gamma photons emitted from an I-131 phantom, and the backprojection filtration algorithm is applied for image reconstruction. The results demonstrate the feasibility of the Compton camera for high-efficiency SPECT imaging, as well as limitations that require further effort to address.
Multimodal imaging has shown great potential in cancer research by concurrently providing anatomical, functional, and molecular information in live, intact animals. During preclinical imaging of small animals like mice, anesthesia is required to prevent movement and improve image quality. However, their high surface area-to-body weight ratio predisposes mice, particularly nude mice, to hypothermia under anesthesia. To address this, we developed a detachable mouse scanning table with a heating function for hybrid x-ray and optical imaging modalities, without introducing metal artifacts. Specifically, we employed Polylactic Acid (PLA) 3D printing technology to fabricate a customized scanning table compatible with both CT and optical imaging systems. This innovation enables seamless transportation of the table between different imaging setups, while its detachable design facilitates maintaining a clutter-free operational environment within the imaging systems, which is crucial for accommodating various projects within the same scanner. The table features fixation points positioned to secure mice, ensuring positional consistency across imaging modalities. Additionally, we integrated a carbon nanotube-based heating pad into the table to regulate the body temperature of mice during examinations, providing an ethical and effective temperature maintenance solution. Our evaluations confirmed the table's ability to maintain a 30 g water bag at approximately 40 °C, effectively regulating mouse body temperature to an optimal 36 °C during preclinical imaging sessions. This scanning table serves as a versatile tool in preclinical cancer research while upholding animal welfare standards.
KEYWORDS: Luminescence, Tomography, Fluorescence tomography, Inverse problems, 3D modeling, Spatial resolution, Monte Carlo methods, Biomedical optics, 3D image processing, 3D acquisition
We propose an end-to-end reconstruction approach for Mesoscopic Fluorescence Molecular Tomography (MFMT) using deep learning. Herein, an optimized deep network based on back-projection with a residual channel attention mechanism is implemented to output 3D reconstructions directly from 2D measurements, diminishing the computational burden while overcoming computer memory limitations during reconstruction. The network is trained on a large synthetic dataset produced through Monte Carlo simulation and validated with in silico data and a phantom experiment. Our results suggest that this approach can reconstruct fluorescence inclusions in scattering media at a mesoscopic scale.
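For illustration, here is a minimal PyTorch sketch of a residual channel attention block of the kind named above; the channel counts, kernel sizes, and reduction ratio are assumptions, not the exact architecture of the study.

```python
# Hedged sketch of a residual channel attention block
# (squeeze-and-excitation style); layer sizes are illustrative.
import torch
import torch.nn as nn

class ResidualChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: global pooling followed by a bottleneck MLP.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y * self.attn(y)  # residual connection
```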
The x-ray photon-counting detector (PCD) offers low noise, high resolution, and spectral characterization, representing the next generation of CT and enabling new biomedical applications. It is well known that involuntary patient motion may induce image artifacts in conventional CT scanning, and this problem becomes more serious with PCDs due to their fine detector pitch and extended scan time. Furthermore, PCDs often come with a substantial number of bad pixels, making analytic image reconstruction challenging and ruling out state-of-the-art motion correction methods that are based on analytic reconstruction. In this paper, we extend our previous locally linear embedding (LLE) cone-beam motion correction method to the helical scanning geometry, which is especially desirable given the high cost of large-area PCDs. In addition to adapting LLE-based parametric searching to the helical cone-beam photon-counting CT geometry, we introduce an unreliable-volume mask to improve motion estimation accuracy and perform incremental updating on gradually refined sampling grids to optimize both accuracy and efficiency. Our numerical results demonstrate that our method reduces estimation errors near the two longitudinal ends of the reconstructed volume and improves overall image quality. The experimental results on clinical photon-counting scans of patient extremities show significant resolution improvement after motion correction with our method, revealing subtle fine structures previously hidden by motion blurring and artifacts.
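To illustrate the LLE-based parametric searching idea, the following NumPy sketch estimates motion parameters by embedding a measured projection among reprojections simulated on a grid of candidate parameters; the array shapes, neighbor count, and regularization are illustrative assumptions rather than the paper's exact procedure.

```python
# Conceptual sketch of LLE-based parametric searching: a measured
# projection is embedded among reprojections simulated at sampled
# motion parameters, and the parameter is estimated as the LLE-weighted
# combination of its nearest neighbors.
import numpy as np

def lle_estimate(measured, reprojections, params, k=3, reg=1e-3):
    """measured: (npix,); reprojections: (ngrid, npix); params: (ngrid, ndof)."""
    d = np.linalg.norm(reprojections - measured, axis=1)
    idx = np.argsort(d)[:k]                  # k nearest reprojections
    Z = reprojections[idx] - measured        # local coordinates
    G = Z @ Z.T                              # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)       # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                             # LLE reconstruction weights
    return w @ params[idx]                   # interpolated motion parameters
```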
Nano-CT enables 3D imaging of micro/nano-structures and is becoming an indispensable tool. At such high resolution, specimens must have a diameter small enough to fit within the instrument's field of view, typically a few tens to a few hundred micrometers. As a result, samples are commonly glued to the tip of a steel pin for alignment before imaging. Ideally, data are collected from the part of a specimen above the pin, so the x-ray-opaque pin does not interfere with imaging. However, the tiny sample size makes precise mounting very tricky, and a region adjacent to the pin is often found to be of interest after mounting. Sometimes the sample is too fragile to remount; other times, removing the specimen and repeating the tedious remounting steps is impractical due to time constraints. Here we find that the information occluded by the metal pin can be almost fully recovered via iterative reconstruction with a simple metal-trace mask applied to a regular scan of an imperfectly mounted specimen. Specifically, combining metal artifact reduction and interior tomography techniques, a metal-trace mask in the sinogram is first extracted from a low-resolution global reconstruction covering the whole cross-section of the pin; then the desired high-resolution region of interest is iteratively reconstructed, excluding any contribution from the metal trace. Our method is demonstrated with a 42.35 nm reconstruction of a portion of a sea urchin tooth, scanned at a synchrotron with the pin moving across the field of view during sample rotation, showing that streaking artifacts caused by pin occlusion can be greatly suppressed to achieve image quality close to that without occlusion. These results suggest that our method has great potential to simplify specimen preparation and relax proficiency requirements, significantly facilitating nano-CT imaging applications.
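As a toy illustration of the masking idea (not the authors' exact algorithm), the sketch below runs a SIRT-like iteration with scikit-image's radon/iradon pair while simply excluding the metal-trace bins from the data term; the step size and iteration count are assumptions.

```python
# Toy metal-trace-masked iterative reconstruction; 'trace' is a boolean
# sinogram mask marking bins behind the pin, which are ignored in the
# residual so they contribute nothing to the update.
import numpy as np
from skimage.transform import radon, iradon

def masked_sirt(sino, trace, angles, n_iter=50, step=0.1):
    img = np.zeros((sino.shape[0], sino.shape[0]))
    for _ in range(n_iter):
        resid = sino - radon(img, theta=angles)
        resid[trace] = 0.0                    # exclude metal-trace data
        img += step * iradon(resid, theta=angles, filter_name=None)
        img = np.clip(img, 0, None)           # nonnegativity constraint
    return img
```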
Ultrahigh-resolution CT is highly desirable in many clinical applications, such as cochlear implantation and coronary stenosis assessment. Despite the wide accessibility of micro-CT for preclinical research, there are no such micro-CT scanners for clinical use, due to several issues including high radiation dose, patient movement, limited x-ray source power, and long imaging time. To meet the challenge, here we design a robotic-arm-based clinical photon-counting micro-CT system in the interior tomography mode for temporal bone imaging at 50 µm resolution. We adopt twin collaborative robotic arms, which can be operated near humans without danger of injury due to collision. One robot steers the source, and the other the detector. The six-axis robot pair defines a volume of interest (VOI) more flexibly than traditional rotating gantries and C-arm systems, allowing an arbitrary imaging geometry. However, the required spatial resolution is on the same order as the robots' mechanical precision; therefore, the geometric errors due to robotic coordination, patient motion, and system misalignment must be addressed. In this paper, we propose a locally linear embedding (LLE)-based motion correction method that corrects all these geometric errors in a unified framework with a nine-degree-of-freedom model. Different from conventional motion correction methods that rely on gradient-based parametric optimization, our method utilizes a parametric searching mechanism through iterative reconstruction without assuming any smoothness of the errors as a function of the view angle. The effectiveness of our method is first demonstrated with numerical simulation and further verified on a set of limited-angle cone-beam scans of a sacrificed mouse using our robotic-arm-based imaging system. Sharper images with clearer anatomical details, free of misalignment artifacts, are obtained after our data-driven geometric calibration, suggesting great potential of our method in biomedical and industrial applications.
Radiation dose reduction is one of the most important topics in the field of computed tomography (CT). Over the past years, deep-learning-based denoising methods have been demonstrated to be effective in reducing radiation dose while improving image quality. Since paired low-dose and normal-dose CT scans are usually not available in clinical scenarios, various learning paradigms have been studied, including fully-supervised learning based on simulation data, weakly-supervised learning based on unpaired noise-clean or paired noise-noise data, and self-supervised learning based on noisy data only. Requiring neither clean nor noisy reference data, unsupervised/self-supervised low-dose CT (LDCT) denoising methods are promising for processing real data and images. In this study, we propose the first-of-its-kind Self-Supervised Dual-Domain Network (SSDDNet) for LDCT denoising. SSDDNet consists of three modules: a projection-domain network, a reconstruction layer, and an image-domain network. During training, a projection-domain loss, a reconstruction loss, and an image-domain loss are simultaneously used to optimize the denoising model end-to-end using a single LDCT scan. Our experimental results show that the dual-domain network is effective and superior to single-domain networks in the self-supervised learning setting.
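A hedged sketch of how the three losses might be combined end-to-end follows; the networks, the differentiable reconstruction layer, the masking-based self-supervision, and the weights are placeholders rather than the exact SSDDNet formulation.

```python
# Placeholder sketch of a dual-domain self-supervised objective:
# projection-domain, reconstruction, and image-domain losses summed
# with assumed weights, trained from a single noisy scan.
import torch

def ssddnet_loss(proj_net, recon_layer, img_net, noisy_proj, mask, w=(1.0, 1.0, 1.0)):
    """mask: random binary mask for Noise2Self-style supervision (assumed)."""
    # Projection domain: predict masked bins from unmasked neighbors.
    denoised_proj = proj_net(noisy_proj * (1 - mask))
    loss_p = torch.mean(((denoised_proj - noisy_proj) * mask) ** 2)
    # Reconstruction consistency across the two domains.
    recon = recon_layer(denoised_proj)            # differentiable FBP layer
    loss_r = torch.mean((recon - recon_layer(noisy_proj)) ** 2)
    # Image domain: keep the image network close to the reconstruction.
    denoised_img = img_net(recon)
    loss_i = torch.mean((denoised_img - recon.detach()) ** 2)
    return w[0] * loss_p + w[1] * loss_r + w[2] * loss_i
```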
Material decomposition algorithms enable discrimination and quantification of multiple contrast agents and tissue compositions in spectral image datasets acquired by photon-counting computed tomography (PCCT). Image denoising has been shown to improve PCCT image reconstruction quality and feature recognition while preserving fine image detail. Reduction of image artifacts and noise could also improve the accuracy of material decomposition, but the effects of denoising on material decomposition have not been investigated. In particular, deep learning methods can reduce inherent PCCT image noise without using a system-based or assumed prior noise model. Therefore, the objective of this study was to investigate the effects of image denoising on quantitative material decomposition in the absence of an influence of spatial resolution on feature recognition. Phantoms comprising multiple pure and spatially uniform contrast agent (gadolinium, iodine) and tissue (calcium, water) compositions were imaged by PCCT with four energy thresholds chosen to normalize photon counts and leverage contrast agent k-edges. Image denoising was performed by the established block-matching and 3D filtering (BM3D) algorithm or by deep learning using convolutional neural networks. Material decomposition was performed on as-acquired, BM3D-denoised, and deep-learning-denoised datasets using constrained maximum likelihood estimation and compared to known material concentrations in the phantom. Image denoising by BM3D and deep learning improved the quantitative accuracy of material concentrations determined by material decomposition compared to ground truth, as measured by the root-mean-squared error. Material classification was not improved by image denoising compared with as-acquired images, suggesting that material decomposition was robust against inherent acquisition noise when feature recognition was not challenged by the system spatial resolution. Deep-learning-denoised images balanced preservation of local detail against the more aggressive smoothing of BM3D, as measured by line profiles across features.
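As a simple illustration of per-pixel material decomposition, the sketch below uses nonnegative least squares with a calibrated signature matrix; NNLS stands in here for the constrained maximum likelihood estimator used in the study, and the array layout is an assumption.

```python
# Illustrative per-pixel material decomposition: with calibrated
# per-bin attenuation signatures A, solve for nonnegative material
# concentrations from the multi-bin measurements.
import numpy as np
from scipy.optimize import nnls

def decompose(measurements, A):
    """measurements: (n_bins, npix); A: (n_bins, n_materials)."""
    out = np.zeros((A.shape[1], measurements.shape[1]))
    for j in range(measurements.shape[1]):
        out[:, j], _ = nnls(A, measurements[:, j])  # enforce c >= 0
    return out
```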
Deep-learning-based methods have achieved promising results for CT metal artifact reduction (MAR) by learning to map an artifact-affected image or projection data to the artifact-free image in a data-driven manner. Basically, existing methods simply select a single window in Hounsfield units (HU), followed by a normalization operation, to preprocess all training and testing images, based on which a neural network is trained to reduce metal artifacts. However, if the selected window contains the whole range of HU values, the model is challenged to make accurate predictions within dedicated narrow windows, since the contribution of small HU values to the training loss may not be sufficiently weighted relative to that of large HU values. On the other hand, if the selected window is small, the opportunity is lost to train the network effectively on features with large HU values. In practice, various tissues and organs in CT images are inspected under different window settings. Therefore, here we propose a multiple-window learning method for CT MAR. The basic idea of multiple-window learning is that content at large HU values may help improve features at small HU values, and vice versa. Our method can precisely process multiple specified windows by simultaneously and interactively learning to remove metal artifacts within these windows. Experimental results on both simulated and clinical datasets demonstrate the effectiveness of the proposed method. Due to its simplicity, the proposed multiple-window network can be easily incorporated into other deep learning frameworks for CT MAR.
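The windowing operation itself is straightforward; a minimal sketch follows, with typical clinical window settings given as assumptions rather than the paper's exact choices.

```python
# Each HU window (level, width) yields a normalized view of the same
# image; a multi-window network would learn on all views jointly.
import numpy as np

def apply_window(hu, level, width):
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)  # normalize to [0, 1]

# Example window settings (level, width) in HU; illustrative values.
windows = {"soft_tissue": (40, 400), "bone": (500, 2000), "full": (0, 4000)}
# views = [apply_window(img, *w) for w in windows.values()]  # network inputs
```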
Micro-/nano-CT has been widely used in practice to offer noninvasive 3D high-resolution (HR) imaging. However, increased resolution often comes at the cost of a reduced field of view. Although data truncation does not corrupt high-contrast structural information in the filtered back-projection (FBP) reconstruction, the quantitative interpretation of image values is seriously compromised by the induced shifting and cupping artifacts. State-of-the-art deep-learning-based methods promise fast and stable solutions to the interior reconstruction problem compared to analytic and iterative algorithms. Nevertheless, given the huge effort required to obtain HR global scans as the ground truth for network training, deep networks cannot be developed in the typical supervised training mode. To overcome this issue, here we propose to train the network with a low-resolution (LR) dataset generated from LR global scans, which are relatively easy to obtain, and we achieved excellent results.
Stimulated emission depletion (STED), one of the emerging super-resolution techniques, offers state-of-the-art image resolution and has developed into a universal fluorescence imaging tool over the past several years. The best lateral resolution currently offered by STED is around 20 nm, but in real live-cell imaging applications the practical resolution is around 100 nm, limited by phototoxicity. Many critical biological structures are below this resolution level. Hence, it would be invaluable to improve STED resolution through postprocessing techniques. We propose a deep adversarial network that significantly improves STED resolution: it takes an STED image as input, relies on physical modeling to obtain training data, and outputs a “self-refined” counterpart image at a higher resolution level. In other words, we use prior knowledge of the STED point spread function and structural information about the cells to generate simulated labeled data pairs for network training. Our results suggest that 30 nm resolution can be achieved from a 60 nm resolution STED image; in our simulations and experiments, the structural similarity index values between the label and the output reached around 0.98, significantly higher than those obtained using the Lucy–Richardson deconvolution method and a state-of-the-art U-Net-based super-resolution network.
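A minimal sketch of generating simulated training pairs under an assumed Gaussian PSF and Poisson photon statistics is shown below; the PSF width and photon budget are illustrative, not the study's calibrated values.

```python
# Generate a (noisy input, clean label) pair by blurring a nonnegative
# synthetic structure with an assumed Gaussian STED PSF and adding
# Poisson noise, mimicking the physics-based data simulation.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_pair(label, psf_sigma_px=2.0, photons=200.0):
    blurred = gaussian_filter(label, psf_sigma_px)      # PSF convolution
    noisy = np.random.poisson(blurred * photons) / photons
    return noisy.astype(np.float32), label              # (input, target)
```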
Tomographically measuring the temperature distribution inside the human body has important and immediate clinical applications, including thermal ablative and hyperthermic treatment of cancers, and will enable novel solutions such as opening the blood-brain barrier for therapeutics and robotic surgery in the future. A high-intensity focused ultrasound (HIFU) device can heat tumor tissues to 50-90 °C locally within seconds. Thus, accurate, real-time control of the thermal dose is critical to eliminate tumors while minimizing damage to surrounding healthy tissue. This study investigates the feasibility of using deep learning to improve the accuracy of low-dose CT (LDCT) thermometry. CT thermometry relies on thermal expansion coefficients, which are prone to inter-patient variability, and is also affected by image noise and artifacts. To demonstrate that deep neural networks can compensate for these factors, 1,000 computer-generated CT phantoms with simulated heating spots were used to train both a “divide and conquer” and an “end to end” approach. In the first strategy, one encoder-decoder network corrected for beam hardening and Poisson noise in the image domain, while a second network fine-tuned the differences between predicted and ground-truth heat maps. The second strategy is identical to the first, except that only a single convolutional autoencoder was used and the CT images were not pre-cleaned. Ultimately, the two-part divide-and-conquer network increased thermal accuracy substantially, demonstrating exciting future potential for the use of deep learning in this field.
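For orientation, conventional CT thermometry converts an HU change into a temperature change through a thermal sensitivity coefficient; the value below is a typical literature figure for water-equivalent tissue, not a result of this study.

```python
# Back-of-the-envelope CT thermometry: water-equivalent tissue loses
# roughly 0.4-0.5 HU per degree C of heating via thermal expansion.
K_HU_PER_C = -0.45  # assumed thermal sensitivity (HU per degree C)

def delta_temperature(hu_hot, hu_baseline):
    return (hu_hot - hu_baseline) / K_HU_PER_C

# Example: a -13.5 HU change maps to about +30 degrees C above baseline.
print(delta_temperature(-13.5, 0.0))  # 30.0
```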
Mice are used as models of almost all human diseases and are routinely scanned by micro-CT scanners. Mouse phantoms are often used for image-quality assessment. With recent developments in deep-learning-based preclinical imaging, there is a major need for large micro-CT datasets in which the ground truth is known. In this study, we investigate the feasibility of making cost-effective, deformable, and reconfigurable mouse phantoms to generate real micro-CT datasets that reflect realistic underlying physical characteristics. Such datasets are highly desirable; for example, complicated photon-counting micro-CT datasets are needed for deep-learning-based material decomposition. In our scheme, mouse body parts are 3D-printed with high precision using rigid or flexible materials. Liquid tissue surrogates (LTSs) or bioinks/cell lines could be used to emulate mouse organs and physiological fluids in the animals. LTSs provide realistic x-ray properties of their biological counterparts. The LTS organs can be contained not only in 3D-printed chambers but also in dialysis tubing, which emulates the cell membrane. Furthermore, through bioprinting and tissue engineering, organs and tissues can be made even more realistic for micro-CT and other types of tomographic scanning.
X-ray photon-counting detectors (PCDs) have become increasingly popular, with applications in medical imaging, material science, and other areas. In this paper, we propose a non-uniformity data correction method for photon-counting detectors based on first- and second-moment correction. Using three measured datasets, we demonstrate the method's efficacy in reducing the spatial variance of pixel counts. The results show that both open-beam and projection data can be corrected to nearly ideal Poisson counting behavior in both time and space when the photon flux is within the detector's linear response range.
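One way to realize a first- and second-moment correction is a per-pixel affine rescaling that matches each pixel's mean and variance to the Poisson ideal (variance equal to the mean); the affine form below is an assumption for illustration, not necessarily the paper's exact estimator.

```python
# Two-moment (gain/offset) non-uniformity correction: rescale each
# pixel's counts over repeated open-beam frames so that its mean and
# variance both equal the global target count.
import numpy as np

def moment_correct(frames):
    """frames: (n_frames, rows, cols) of raw open-beam counts."""
    m = frames.mean(axis=0)                 # per-pixel first moment
    v = frames.var(axis=0)                  # per-pixel second central moment
    target = m.mean()                       # desired uniform mean count
    gain = np.sqrt(target / np.maximum(v, 1e-9))  # variance -> target
    offset = target - gain * m                    # mean -> target
    return gain * frames + offset
```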