Pulmonary nodules are the principal indicator of lung cancer, and their malignancy is mainly related to size, morphological, and textural features. Computational deep representations are today the most common tool for characterizing lung nodules, but they remain limited in capturing nodule variability. Consequently, nodule malignancy classification from CT observations remains an open problem. This work introduces a multi-head attention network that takes advantage of volumetric nodule observations and robustly represents textural and geometrical patterns learned from a discriminative task. The proposed approach starts by computing 3D convolutions, exploiting textural patterns of volumetric nodules. This convolutional representation is enriched by a multi-scale projection using receptive field blocks, followed by multiple volumetric attentions that exploit non-local nodule relationships. These attentions are fused to enhance the representation and achieve more robust malignancy discrimination. The proposed approach was validated on the public LIDC-IDRI dataset, achieving an F1-score of 91.82%, a sensitivity of 91.19%, and an AUC of 92.43% for binary classification. The reported results outperform state-of-the-art strategies based on 3D nodule representations.
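The pipeline in this abstract lends itself to a compact sketch. The PyTorch code below is a minimal, illustrative reading of the description, not the published architecture: a small 3D convolutional backbone for textural features, a dilated multi-branch block standing in for the receptive field blocks, and a multi-head self-attention layer over the resulting voxel tokens whose fused output drives the malignancy classifier. All layer sizes, pooling choices, and the fusion rule are assumptions.

```python
import torch
import torch.nn as nn

class RFBlock3D(nn.Module):
    """Multi-scale projection: parallel 3D branches with different dilations."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 3)
        ])
        self.fuse = nn.Conv3d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class NoduleAttentionNet(nn.Module):
    def __init__(self, channels=32, heads=4, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(              # textural 3D features
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.rfb = RFBlock3D(channels, channels)    # multi-scale enrichment
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, vol):                         # vol: (B, 1, D, H, W)
        feat = self.rfb(self.backbone(vol))         # (B, C, d, h, w)
        tokens = feat.flatten(2).transpose(1, 2)    # voxels as tokens: (B, dhw, C)
        fused, _ = self.attn(tokens, tokens, tokens)  # non-local relationships
        return self.classifier(fused.mean(dim=1))   # pooled fusion + classifier


logits = NoduleAttentionNet()(torch.randn(2, 1, 16, 32, 32))
print(logits.shape)  # torch.Size([2, 2])
```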
Segmentation or delineation of specific organs and structures in medical images is an important task in clinical diagnosis and treatment, since it allows pathologies to be characterized through imaging measures (biomarkers). In brain imaging, segmentation of the main tissues or of specific structures is challenging due to anatomical variability and complexity, and to the presence of image artifacts (noise, intensity inhomogeneities, partial volume effect). In this paper, an automatic segmentation strategy is proposed, based on sparse representations and coupled dictionaries. Image intensity patterns are related to tissue labels at the level of small patches, and this information is gathered in coupled intensity/segmentation dictionaries. These dictionaries are used within a sparse representation framework to find the projection of a new intensity image onto the intensity dictionary; the same projection is then used with the segmentation dictionary to estimate the corresponding segmentation. Preliminary results obtained with two publicly available datasets suggest that the proposal is capable of estimating adequate segmentations for gray matter (GM) and white matter (WM) tissues, with an average overlap of 0.79 for GM and 0.71 for WM (with respect to the original segmentations).
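The coupled-dictionary step can be sketched in a few lines. The snippet below, using NumPy and scikit-learn's orthogonal matching pursuit, is only an assumed, patch-level illustration: a new intensity patch is sparse-coded on the intensity dictionary and the same coefficients are transferred to the segmentation dictionary. Dictionary contents, patch size, sparsity level, and the final thresholding are placeholders.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
patch_dim, n_atoms = 5 * 5 * 5, 200           # 5x5x5 patches, 200 coupled atoms

# Coupled dictionaries built from co-registered intensity/label training patches
# (random here just so the sketch runs).
D_int = rng.standard_normal((patch_dim, n_atoms))
D_int /= np.linalg.norm(D_int, axis=0)        # unit-norm atoms, as usual in sparse coding
D_seg = rng.random((patch_dim, n_atoms))      # soft tissue-label values per atom

def segment_patch(intensity_patch, sparsity=10):
    """Project the patch onto D_int, then transfer the sparse code to D_seg."""
    code = orthogonal_mp(D_int, intensity_patch, n_nonzero_coefs=sparsity)
    soft_labels = D_seg @ code                # estimated segmentation patch
    return (soft_labels > 0.5).astype(np.uint8)

new_patch = rng.standard_normal(patch_dim)
print(segment_patch(new_patch).reshape(5, 5, 5).shape)  # (5, 5, 5)
```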
In structural Magnetic Resonance Imaging (MRI), neurodegenerative diseases generally present complex brain patterns that can be correlated with different clinical onsets of these pathologies. An objective method aimed at determining both global and local changes is not usually available in clinical practice, so the interpretation of these images depends strongly on the radiologist's skills. In this paper, we propose a strategy that interprets brain structure using a framework that highlights discriminant brain patterns for neurodegenerative diseases. This is accomplished by combining a probabilistic learning technique, which identifies and groups regions with similar visual features, with a visual saliency method that exposes relevant information within each region. The association of such patterns with a specific disease is herein evaluated in a classification task, using a dataset including 80 Alzheimer's disease (AD) patients and 76 healthy subjects (NC). Preliminary results show that the proposed method reaches a maximum classification accuracy of 81.39%.
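The region-grouping component described here is in the spirit of topic models; the NumPy sketch below implements a plain pLSA EM loop over visual-word counts as one possible reading. Region, word, and topic counts are chosen arbitrarily, and random counts stand in for real visual features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_words, n_topics = 60, 100, 5
counts = rng.poisson(2.0, size=(n_regions, n_words)).astype(float)   # n(d, w)

p_wz = rng.random((n_topics, n_words)); p_wz /= p_wz.sum(1, keepdims=True)   # P(w|z)
p_zd = rng.random((n_regions, n_topics)); p_zd /= p_zd.sum(1, keepdims=True) # P(z|d)

for _ in range(50):                                    # EM iterations
    # E-step: P(z|d,w) proportional to P(z|d) * P(w|z)
    post = p_zd[:, :, None] * p_wz[None, :, :]         # shape (d, z, w)
    post /= post.sum(axis=1, keepdims=True) + 1e-12
    # M-step: re-estimate P(w|z) and P(z|d) from the weighted counts
    weighted = counts[:, None, :] * post               # n(d,w) * P(z|d,w)
    p_wz = weighted.sum(axis=0)
    p_wz /= p_wz.sum(axis=1, keepdims=True)
    p_zd = weighted.sum(axis=2)
    p_zd /= p_zd.sum(axis=1, keepdims=True)

region_groups = p_zd.argmax(axis=1)                    # dominant latent topic per region
print(np.bincount(region_groups, minlength=n_topics))
```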
Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields a better and more accurate diagnosis. Therefore, an appropriate semantic representation of the image content is useful in several analysis tasks, such as cancer classification, tissue retrieval, and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that describes histopathological concepts in a form suitable for classification. The approach identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrences between atoms while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images fed one Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is, on average, almost 6% more sensitive than the bag-of-features representation.
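A rough sketch of this representation, under assumed sizes and an assumed exponential distance penalty, is shown below: scikit-learn's MiniBatchDictionaryLearning stands in for the dictionary learning stage, each patch is assigned its dominant atom, and a distance-weighted atom co-occurrence matrix becomes the image descriptor fed to the classifier.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 8 * 8))                 # 500 randomly sampled 8x8 patches
coords = rng.random((500, 2)) * 1000               # patch centers in pixels

learner = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = learner.fit_transform(patches)             # sparse codes, shape (500, 32)
atoms = np.abs(codes).argmax(axis=1)               # dominant atom per patch

def cooccurrence_descriptor(atoms, coords, n_atoms=32, sigma=100.0):
    """Count atom pairs, down-weighting spatially distant pairs."""
    C = np.zeros((n_atoms, n_atoms))
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.exp(-d / sigma)                         # spatial distance penalty
    for a in range(n_atoms):
        for b in range(n_atoms):
            C[a, b] = w[np.ix_(atoms == a, atoms == b)].sum()
    return C.ravel() / (C.sum() + 1e-12)           # normalized descriptor

descriptor = cooccurrence_descriptor(atoms, coords)
print(descriptor.shape)                            # (1024,) -> fed to the per-class SVM
```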
KEYWORDS: Magnetic resonance imaging, Super resolution, Heart, 3D modeling, Cardiovascular magnetic resonance imaging, 3D image processing, Image processing, Image segmentation, Image analysis
Acquisition of proper cardiac MR images is highly limited by continuous heart motion and by the duration of apnea (breath-hold) periods. A typical acquisition results in volumes with inter-slice separations of up to 8 mm. This paper presents a super-resolution strategy that estimates a high-resolution image from a set of low-resolution image series acquired in different non-orthogonal orientations. The proposal is based on a Bayesian approach that implements a Maximum a Posteriori (MAP) estimator combined with a Wiener filter. A pre-processing stage was also included to correct or eliminate differences in the image intensities and to transform the low-resolution images to a common spatial reference system. The MAP estimation includes an observation image model that represents the different contributions to the voxel intensities based on a 3D Gaussian function. A quantitative and qualitative assessment was performed using synthetic and real images, showing that the proposed approach produces a high-resolution image with significant improvements (about 3 dB in PSNR) with respect to simple trilinear interpolation. The Wiener filter shows little contribution to the final result, demonstrating that the MAP uniformity prior is able to filter out a large amount of the acquisition noise.
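A toy version of this MAP formulation helps fix ideas. The NumPy/SciPy sketch below assumes a single slice-direction degradation (Gaussian PSF plus decimation) shared by all low-resolution stacks, i.e., registration and intensity correction are taken as already done, and it omits the Wiener filter; PSF width, regularization weight, and step size are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, zoom

def degrade(hr, factor=4, sigma=1.5):
    """Observation model: Gaussian PSF along the slice axis, then slice decimation."""
    return gaussian_filter(hr, sigma=(sigma, 0, 0))[::factor]

def map_super_resolution(lr_stacks, shape, factor=4, lam=0.1, step=0.5, iters=100):
    hr = zoom(lr_stacks[0], (factor, 1, 1), order=1)        # interpolated initial guess
    for _ in range(iters):
        grad = lam * -laplace(hr)                           # uniformity (smoothness) prior
        for lr in lr_stacks:                                # data term, one per LR series
            residual = degrade(hr, factor) - lr
            up = np.zeros(shape)
            up[::factor] = residual                         # adjoint of the decimation
            grad += gaussian_filter(up, sigma=(1.5, 0, 0)) / len(lr_stacks)
        hr -= step * grad                                   # gradient descent update
    return hr

rng = np.random.default_rng(0)
truth = gaussian_filter(rng.random((32, 32, 32)), 2)        # synthetic HR phantom
observations = [degrade(truth) + 0.01 * rng.standard_normal((8, 32, 32))
                for _ in range(3)]
estimate = map_super_resolution(observations, truth.shape)
print(estimate.shape)  # (32, 32, 32)
```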
Accurate diagnosis of Alzheimer's disease (AD) from structural Magnetic Resonance (MR) images is difficult due to the complex alteration of patterns in brain anatomy that could indicate the presence or absence of the pathology. Currently, an effective approach that allows the disease to be interpreted in terms of global and local changes is not available in clinical practice. In this paper, we propose an approach for the classification of brain MR images, based on finding pathology-related patterns through the identification of regional structural changes. The approach combines a probabilistic Latent Semantic Analysis (pLSA) technique, which identifies image regions through latent topics inferred from the brain MR slices, with a bottom-up Graph-Based Visual Saliency (GBVS) model, which calculates maps of relevant information per region. Regional saliency maps are finally combined into a single map on each slice, obtaining a master saliency map for each brain volume. The proposed approach includes a one-to-one comparison of the saliency maps, which feeds a Support Vector Machine (SVM) classifier to group test subjects into normal or probable AD subjects. A set of 156 brain MR images from healthy (76) and pathological (80) subjects, split into a training set (10 non-demented and 10 demented subjects) and a testing set (136 subjects), was used to evaluate the performance of the proposed approach. Preliminary results show that the proposed method reaches a maximum classification accuracy of 87.21%.
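The final classification stage can be illustrated with a short scikit-learn sketch, shown below with random placeholder maps: per-region saliency maps are fused into a master map per subject (voxel-wise maximum is an assumed fusion rule), flattened, and used to train and test a linear SVM.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_regions, map_shape = 40, 6, (32, 32)

def master_saliency_map(regional_maps):
    """Fuse per-region saliency maps into a single map (here, voxel-wise max)."""
    return np.max(regional_maps, axis=0)

# Placeholder regional saliency maps per subject (e.g., GBVS output per pLSA region).
subjects = [master_saliency_map(rng.random((n_regions, *map_shape)))
            for _ in range(n_subjects)]
X = np.stack([m.ravel() for m in subjects])          # one feature vector per subject
y = rng.integers(0, 2, n_subjects)                   # 0 = NC, 1 = probable AD

clf = SVC(kernel="linear").fit(X[:20], y[:20])       # small training split
print("test accuracy:", clf.score(X[20:], y[20:]))
```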