Diabetic Retinopathy (DR) is a worldwide eye disease that causes visual damage and can lead to blindness. Therefore, detecting DR in its early stages is highly recommended. However, early DR diagnosis is often delayed by the low ratio of ophthalmologists to patients, the scarcity of diagnostic equipment, and the limited mobility of elderly patients. In this paper, the main objective is to provide a mobile-aided screening system for moderate DR. To this end, we propose a classifier-based method that detects the Hard Exudate (HE) lesions occurring at the moderate DR stage. A set of features is extracted to ensure accurate and robust detection despite the modest quality of the fundus images. Moreover, the detection requires only low-complexity processing, making it suitable for mobile devices. The targeted system corresponds to the implementation of the method on a smartphone fitted with an optical lens for capturing fundus images. The system reached satisfactory screening performance: an accuracy of 98.36%, a sensitivity of 100%, and a specificity of 96.45% are registered on the DIARETDB1 fundus image database. Moreover, the screening is performed in an average execution time of 2.68 seconds.
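To illustrate the kind of low-complexity candidate extraction such a mobile screening pipeline relies on, here is a minimal sketch of bright-lesion (exudate-like) candidate detection on the green channel of a fundus image. It is not the paper's actual feature set or classifier; the function name, window size, and threshold are our own illustrative assumptions.

```python
import numpy as np

def exudate_candidates(green, bg_window=31, thresh_offset=30):
    """Bright-lesion candidate mask from the green channel of a fundus image.

    Illustrative sketch only: the paper's real method extracts a richer
    feature set and classifies it; here we simply flag pixels that are much
    brighter than a local background estimate (a box mean computed with an
    integral image, keeping the processing low-complexity).
    green: 2-D uint8 array (exudates contrast best in the green channel).
    """
    g = green.astype(np.float32)
    pad = bg_window // 2
    padded = np.pad(g, pad, mode="edge")
    # Integral image gives each window sum in O(1)
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = g.shape
    win = bg_window
    sums = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
            - ii[win:win + h, :w] + ii[:h, :w])
    background = sums / (win * win)
    # Candidates: pixels well above the local background brightness
    return (g - background) > thresh_offset
```

In a full system, the resulting candidate mask would feed the feature extraction and classification stages described in the abstract.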
Several leading-edge applications such as pathology detection, biometric identification, and face recognition rely mainly on blob and line detection. To address this problem, eigenvalue computing has been commonly employed due to its accuracy and robustness. However, eigenvalue computing requires intensive computation, heavy memory data access, and data overlapping, which entail long execution times. To overcome these limitations, we propose in this paper a new parallel strategy to implement eigenvalue computing on a graphics processing unit (GPU). Our contributions are (1) optimizing instruction scheduling to reduce computation time, (2) efficiently partitioning the processing into blocks to increase the occupancy of streaming multiprocessors, (3) providing efficient input data splitting on shared memory to benefit from its lower access time, and (4) proposing a new data management scheme for shared memory to avoid access conflicts and reduce memory bank accesses. Experimental results show that our proposed GPU parallel strategy for eigenvalue computing achieves speedups of 27× compared with a multithreaded implementation, 16× compared with a predefined function in the OpenCV library, and 8× compared with a predefined function in the cuBLAS library, all of which run on a quad-core multi-CPU platform. Next, our parallel strategy is evaluated through an eigenvalue-based method for retinal thick vessel segmentation, which is essential for detecting ocular pathologies. Eigenvalue computing is executed in 0.017 s on images from the Structured Analysis of the Retina (STARE) database. Accordingly, we achieved real-time thick retinal vessel segmentation with an average execution time of about 0.039 s.
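The per-pixel operation that such a GPU strategy parallelizes can be sketched on the CPU as follows: for vessel (line) detection, the eigenvalues of the 2×2 Hessian of image intensity are computed at every pixel, and the symmetric 2×2 case has a closed form. This is a generic reference sketch, not the paper's implementation; the function name and the use of `np.gradient` for the second derivatives are our assumptions.

```python
import numpy as np

def hessian_eigenvalues(image):
    """Per-pixel eigenvalues of the 2-D intensity Hessian.

    For a symmetric 2x2 matrix [[Ixx, Ixy], [Ixy, Iyy]] the eigenvalues are
        (Ixx + Iyy)/2 ± sqrt(((Ixx - Iyy)/2)^2 + Ixy^2),
    which is exactly the independent per-pixel arithmetic a GPU kernel
    would evaluate in parallel. CPU sketch using finite differences.
    """
    img = image.astype(np.float64)
    Iy, Ix = np.gradient(img)          # first derivatives (axis0 = y)
    Ixy, Ixx = np.gradient(Ix)         # second derivatives of Ix
    Iyy, _ = np.gradient(Iy)
    tr_half = (Ixx + Iyy) / 2.0
    disc = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    return tr_half + disc, tr_half - disc  # lam1 >= lam2 everywhere
```

In vessel segmentation, a pixel lying on a thick dark vessel is then characterized by one large-magnitude eigenvalue and one near-zero eigenvalue.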
Glaucoma, cataract, age-related macular degeneration (AMD), and diabetic retinopathy (DR) are among the leading retinal diseases. Thus, there is an active effort to create and develop methods that automate the screening of retinal diseases. Many Computer-Aided Diagnosis (CAD) systems have been developed and are widely used for ocular diseases. Recently, Deep Neural Networks (DNNs) have been adopted in ophthalmology and applied to fundus images, achieving the detection of retinal abnormalities from retinal images. There are essentially two approaches: the first is a hybrid method that employs image processing for preprocessing, feature extraction, and post-processing, with the DNN used only for classification; the second is the fully deep approach, where the DNN is used for both feature extraction and classification. Several DNN models and their variants have been proposed for detecting retinal abnormalities, such as AlexNet, VGG, GoogleNet, Inception, U-Net, Residual Net (ResNet), and DenseNet. The aim of this work is to provide the background and the methodology to conduct a benchmarking analysis, including the computational aspects, of the representative DNNs proposed in the state of the art for detecting DR. For each DNN, different characteristics, performance indices (i.e., model complexity, computational complexity, inference time, memory use), and disease detection performance (i.e., accuracy rate) must be taken into account to identify the most accurate model. The public-domain datasets used for training and testing the DNN models, such as Kaggle, MESSIDOR, and EyePACS, are outlined and analyzed, in particular for DR detection.
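Two of the model-complexity indices mentioned above, parameter count and multiply-accumulate count per convolution layer, can be tabulated directly from the layer shapes. A minimal sketch (the helper names are ours) using the standard formulas:

```python
def conv_params(kh, kw, c_in, c_out, bias=True):
    """Parameter count of one 2-D convolution layer: each of the c_out
    filters holds kh*kw*c_in weights plus an optional bias term."""
    return (kh * kw * c_in + (1 if bias else 0)) * c_out

def conv_macs(kh, kw, c_in, c_out, h_out, w_out):
    """Multiply-accumulate count of the same layer at a given output
    resolution, a common proxy for computational complexity."""
    return kh * kw * c_in * c_out * h_out * w_out
```

For example, the first 3×3 convolution of VGG-16 (3 input channels, 64 filters) contributes `conv_params(3, 3, 3, 64) = 1792` parameters; summing such terms over all layers yields the model-complexity index compared across the benchmarked DNNs.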
This paper presents the real-time implementation of deep neural networks on smartphone platforms to detect and classify diabetic retinopathy from eye fundus images. This implementation extends a previously reported one by considering all five stages of diabetic retinopathy. Two deep neural networks are first trained using transfer learning on fundus images from the EyePACS and APTOS datasets: one to detect four stages and the other to further classify the last stage into two more stages. It is then shown how these trained networks are turned into a smartphone app, in both Android and iOS versions, to process images captured by smartphone cameras in real time. The app is designed so that fundus images can be captured and processed in real time by smartphones together with commercially available lens attachments. The developed real-time smartphone app provides a cost-effective and widely accessible approach for conducting first-pass diabetic retinopathy eye exams in remote clinics or areas with limited access to fundus cameras and ophthalmologists.
Several retinal pathologies lead to severe damage that may culminate in vision loss. Moreover, some damage requires expensive treatment, while other damage is irreversible due to the lack of therapies. Therefore, early diagnosis is highly recommended to control ocular diseases. However, the early stages of several ocular pathologies produce symptoms that patients cannot perceive. Moreover, population ageing is an important prevalence factor for ocular diseases, as is the case in most industrialized countries. Further, ageing involves reduced mobility, which is a limiting factor for performing periodic eye screening. These constraints lead to late ocular diagnosis, and hence a large number of ocular pathology patients are registered. Forecast statistics indicate that the affected population will increase in the coming years.
Several devices allowing the capture of the retina have recently been proposed. They are composed of optical lenses that can be snapped onto a smartphone, providing fundus images of acceptable quality. Hence, the challenge is to perform automatic ocular pathology detection on smartphone-captured fundus images that achieves high detection performance while respecting the timing constraints of clinical use. This paper presents a survey of smartphone-captured fundus image quality and of the existing methods that use such images for detecting retinal structures and abnormalities.
For this purpose, we first summarize the works that evaluate the quality and field of view (FOV) of smartphone-captured fundus images. Then, we report the capability of detecting abnormalities and ocular pathologies from those fundus images. Thereafter, we propose a flowchart of the processing pipeline of detection methods for smartphone-captured fundus images, and we investigate the implementation environment required to perform the detection of retinal abnormalities.
Ocular pathology detection from fundus images presents an important challenge in health care. In fact, each pathology has different severity stages that may be deduced by verifying the existence of specific lesions. Each lesion is characterized by morphological features, and several lesions of different pathologies share similar features. We note that a patient may be affected by several pathologies simultaneously. Consequently, ocular pathology detection is a multiclass classification problem with a complex resolution principle. Several methods for detecting ocular pathologies from fundus images have been proposed. Those based on deep learning are distinguished by higher detection performance, owing to their capability to configure the network with respect to the detection objective.
This work proposes a survey of ocular pathology detection methods based on deep learning. First, we study the existing methods for either lesion segmentation or pathology classification. Afterwards, we extract the principal processing steps and analyze the proposed neural network structures. Subsequently, we identify the hardware and software environment required to deploy the deep learning architectures. Thereafter, we investigate the experimental protocols used to evaluate the methods and the databases used for the training and testing phases. The detection performance ratios and execution times are also reported and discussed.
Blood vessel segmentation in fundus images is a required step for detecting retinopathies. A high-performing segmentation method was proposed in [12]. It consists of three dependent stages: producing two binary images to extract wide vessels, computing features of the remaining pixels in the binary images to extract fine vessels, and then combining the wide and fine vessels. The segmentation execution time is about 3-12 seconds for fundus images with resolutions between 768×584 and 999×960. These resolutions are considerably smaller than those provided by current retinographs, which leads to a steep rise in execution time. In this paper, we propose a parallelization strategy for the segmentation approach targeting a Shared-Memory Parallel Machine (SMPM). First, the two binary images are produced in parallel. Thereafter, feature processing is split according to computational complexity. In the final stage, the wide-vessel and fine-vessel images are subdivided appropriately for a parallel combination. The parallel strategy is implemented using OpenCV and then assessed on the STARE public dataset. Experimental analyses of execution time and efficiency are presented and discussed.
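The task graph of this strategy, two independent binary-image computations running concurrently, followed by a combination stage, can be sketched as follows. The stage bodies are placeholders for the real operators of [12], and Python threads only illustrate the dependency structure; the paper's implementation uses OpenCV on a shared-memory machine, where the parallel stages run on separate cores.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def binary_image_a(green):
    """Placeholder for the first wide-vessel binary image of stage 1."""
    return green < np.percentile(green, 25)

def binary_image_b(green):
    """Placeholder for the second wide-vessel binary image of stage 1."""
    return green < np.percentile(green, 15)

def segment(green):
    # Stage 1: the two binary images have no mutual dependency,
    # so they are submitted concurrently.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa = pool.submit(binary_image_a, green)
        fb = pool.submit(binary_image_b, green)
        a, b = fa.result(), fb.result()
    wide = a & b
    # Stage 2 (placeholder): fine-vessel extraction restricted to
    # pixels not already labelled as wide vessels.
    fine = (~wide) & a
    # Stage 3: parallel combination of wide and fine vessel maps
    # (done sequentially in this sketch).
    return wide | fine
```

The point of the sketch is the dependency structure, not the operators: only stage 3 needs both stage-1 results, which is what makes the stage-1 computations safely parallelizable.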
Fundus image processing is widely used in retinopathy detection. Detection approaches always begin by identifying the retinal components, of which the optic disk is one of the principal ones. It is characterized by higher brightness than the rest of the eye fundus, a circular shape, and the convergence of blood vessels toward it. Consequently, different approaches for optic disk detection have been proposed. To ensure high-performing detection, these approaches vary in the set of characteristics chosen to detect the optic disk. Even though their performances are only slightly different, we distinguish a significant gap in computational complexity and hence in execution time. This paper focuses on a survey of the approaches for optic disk detection. To identify an efficient approach, it is relevant to explore the chosen characteristics and the processing proposed to locate the optic disk. For this purpose, we analyze the computational complexity of each detection approach. Then, we propose a classification of the approaches in terms of computational efficiency. In this comparative study, we identify a relation between computational complexity and the characteristic set chosen for OD detection.
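The cheapest of the three characteristics, brightness, already gives a rough localizer and illustrates why characteristic choice drives computational complexity. A minimal stand-in (the function name and window size are our assumptions) that returns the centre of the brightest sliding window; approaches that add shape or vessel-convergence cues pay more computation for greater robustness:

```python
import numpy as np

def locate_optic_disk(gray, win=32):
    """Return (row, col) of the centre of the brightest win x win window.

    Brightness-only sketch: the survey's approaches combine this cue with
    circular shape and vessel convergence, at higher computational cost.
    """
    h, w = gray.shape
    best_mean, best_yx = -1.0, (0, 0)
    # Slide with half-window stride: O(h*w) window evaluations overall
    for y in range(0, h - win + 1, win // 2):
        for x in range(0, w - win + 1, win // 2):
            m = gray[y:y + win, x:x + win].mean()
            if m > best_mean:
                best_mean, best_yx = m, (y + win // 2, x + win // 2)
    return best_yx
```

On clean images the brightest region usually is the optic disk, but bright exudates can fool a brightness-only detector, which is precisely the trade-off the comparative study quantifies.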