In glioblastoma, an aggressive brain tumor with a low survival rate, accurately predicting patient survival is crucial for effective treatment planning. For a deep learning model developed using MRI images for glioblastoma prognosis, it is essential to identify which MRI sequence – T1, T2, T1 contrast-enhanced (T1CE), or FLAIR – yields the most informative prognostic features. This study uses a specialized autoencoder architecture, a U-Net with an attention mechanism trained with a custom-designed loss function, to reconstruct tumor subregions from multimodal MRI scans. Once trained, the autoencoder is used to extract imaging features. These features are then fed to a Cox regression neural network, which predicts the survival probabilities of glioblastoma patients. The method was evaluated with 5-fold cross-validation, and survival prediction performance was assessed separately for each MRI sequence. The concordance index (C-index) values obtained were 0.62 for T1, 0.63 for T1CE, 0.63 for T2, and 0.66 for FLAIR. The area under the curve (AUC) values were 0.76 for T1, 0.74 for T1CE, 0.76 for T2, and 0.82 for FLAIR, indicating variation in predictive accuracy across sequences. These findings show a discernible difference in the effectiveness of different MRI sequences for glioblastoma survival prognosis. Notably, the FLAIR sequence emerged as the most informative, providing the highest level of prognostic detail compared to T1, T1CE, and T2. This highlights the significance of sequence selection in improving the accuracy of survival predictions in glioblastoma.
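The abstract leaves the Cox regression network unspecified; the following is a minimal PyTorch sketch of a Cox proportional-hazards head trained with the negative partial log-likelihood. The class name CoxPHNet, the layer sizes, and the Breslow-style loss (no tie handling) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CoxPHNet(nn.Module):
    """Small MLP mapping autoencoder features to a scalar log-risk score."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one log-risk value per patient

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation).

    risk:  (N,) predicted log-risk scores
    time:  (N,) survival or censoring times
    event: (N,) 1.0 if death observed, 0.0 if censored
    """
    order = torch.argsort(time, descending=True)  # risk set becomes a prefix
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)  # log sum of exp(risk) over each risk set
    # clamp avoids division by zero in an all-censored batch
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1.0)

Under the Cox model, each patient's predicted survival curve follows from the baseline hazard scaled by exp(risk), and the C-index reported in the abstract would be computed from these risk scores on the held-out fold.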
KEYWORDS: Polyps, Education and training, Object detection, Video, Data modeling, Deep learning, Cancer detection, Error control coding, Colorectal cancer, Colon, Artificial intelligence, Medical imaging
To cope with the growing prevalence of colorectal cancer (CRC), screening programs for polyp detection and removal have proven their usefulness. Colonoscopy is considered the best-performing procedure for CRC screening. To ease the examination, deep-learning-based methods for automatic polyp detection have been developed for conventional white-light imaging (WLI). Compared with WLI, narrow-band imaging (NBI) can improve polyp classification during colonoscopy but requires special equipment. We propose a CycleGAN-based framework that converts images captured with regular WLI to synthetic NBI (SNBI) as a pre-processing step to improve object detection on WLI when NBI is unavailable. This paper first shows that polyp detection achieves better results on NBI than on a comparable WLI dataset. Second, experimental results demonstrate that the proposed modality translation improves polyp detection on SNBI images generated from WLI compared to the original WLI, because the WLI-to-SNBI translation model enhances the observation of polyp surface patterns in the generated SNBI images.
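As an illustration of how such a translation model would be used at inference time, the sketch below applies a trained CycleGAN generator to white-light frames before detection. The checkpoint name g_wli2snbi.pt, the 256x256 input size, and the polyp_detector call are hypothetical placeholders, not details from the paper.

import torch
from torchvision import transforms
from PIL import Image

# Hypothetical trained CycleGAN generator (WLI -> SNBI), exported with TorchScript;
# the checkpoint path is an assumption for this sketch.
G_wli2snbi = torch.jit.load("g_wli2snbi.pt").eval()

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # CycleGAN convention: inputs in [-1, 1]
])

@torch.no_grad()
def wli_to_snbi(frame: Image.Image) -> torch.Tensor:
    """Translate one white-light frame to synthetic NBI before detection."""
    x = to_tensor(frame).unsqueeze(0)  # (1, 3, 256, 256)
    y = G_wli2snbi(x)                  # generator output in [-1, 1]
    return (y + 1.0) / 2.0             # rescale to [0, 1] for the detector

# snbi = wli_to_snbi(Image.open("frame.png").convert("RGB"))
# detections = polyp_detector(snbi)    # hypothetical downstream detector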
Operating in a visual environment degraded by darkness can pose a threat to navigation safety. Systems developed for navigating in darkness depend on differences between objects, such as temperature or reflectivity at various wavelengths. However, adding sensors for these systems increases complexity by introducing multiple components that can create alignment and calibration problems. An approach that is passive and simple is needed for widespread acceptance. Our approach uses an augmented display to show continuously updated fused images from visible and thermal sensors. Because the raw fused image has an unnatural color appearance, we used a look-up-table-based color transfer to replace the false colors with a colormap derived from a daytime reference image, obtained from a public database using the GPS coordinates of the vehicle. Although the database image was not perfectly registered, we were able to produce nighttime imagery rendered in daylight colors. Such an approach could improve the safety of nighttime navigation.
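The abstract does not specify how the look-up table is constructed. One plausible reading, sketched below under that assumption, indexes a 2D LUT by quantized (visible, thermal) intensity pairs and fills each cell with the mean daytime color of roughly corresponding reference pixels, so that imperfect registration is averaged out; the bin count and image layout are illustrative choices.

import numpy as np

def build_colormap_lut(vis, thm, reference_rgb, bins=32):
    """Derive a (bins x bins x 3) look-up table mapping quantized
    (visible, thermal) intensity pairs to mean daytime colors.

    vis, thm:      (H, W) float night-band images in [0, 1]
    reference_rgb: (H, W, 3) roughly registered daytime image in [0, 1]
    Assumption: approximate pixel correspondence suffices, since each
    LUT cell averages the reference colors of many pixels.
    """
    iv = np.clip((vis * bins).astype(int), 0, bins - 1)
    it = np.clip((thm * bins).astype(int), 0, bins - 1)
    lut = np.zeros((bins, bins, 3))
    count = np.zeros((bins, bins, 1))
    np.add.at(lut, (iv, it), reference_rgb)  # accumulate reference colors per cell
    np.add.at(count, (iv, it), 1.0)
    return lut / np.maximum(count, 1.0)      # mean color; empty cells stay black

def apply_colormap(vis, thm, lut):
    """Recolor a fused night image by indexing the LUT per pixel."""
    bins = lut.shape[0]
    iv = np.clip((vis * bins).astype(int), 0, bins - 1)
    it = np.clip((thm * bins).astype(int), 0, bins - 1)
    return lut[iv, it]  # (H, W, 3) natural-color rendering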
We presented a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands of images, thermal and visible, to enhance night vision imagery; however, the fused image gave an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the false-color appearance with a colormap derived from a daytime reference image obtained from a public database using the GPS coordinates of the vehicle. Because of the computational demand of deriving the colormap from the reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a compact local database to retrieve one of a limited number of colormaps representing several driving environments. Each colormap in the local database was stored with the image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with the histograms of the images in the local database. The colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps in this way offered a convenient way to color night vision imagery.
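A minimal sketch of the retrieval step described above, assuming normalized per-channel histograms as image signatures and histogram intersection as the similarity measure; the paper's exact comparison metric is not stated, so both choices are illustrative.

import numpy as np

def histogram_signature(img, bins=32):
    """Normalized per-channel histogram used as a compact image signature."""
    h = [np.histogram(img[..., c], bins=bins, range=(0.0, 1.0))[0]
         for c in range(img.shape[-1])]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def retrieve_colormap(fused, local_db):
    """Pick the stored colormap whose source image best matches the fused frame.

    local_db: list of (signature, colormap) pairs, one per driving
    environment, with signatures precomputed by histogram_signature().
    """
    sig = histogram_signature(fused)
    scores = [np.minimum(sig, db_sig).sum()  # histogram intersection
              for db_sig, _ in local_db]
    return local_db[int(np.argmax(scores))][1]

Because the local database holds only a handful of colormaps, this comparison is cheap enough to run continuously on each fused frame, which is what makes the on-the-fly recoloring practical.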