The purpose of this project is to collect a large ocular image dataset for developing Artificial Intelligence (AI) systems for the early detection of ocular diseases. The aim is to decrease blindness rates and promote better vision quality. The developed systems will contribute to early disease detection, maximize the accuracy of diagnosis and treatment decisions, and reduce the burden on medical professionals. Two different Optical Coherence Tomography (OCT) machines at the retina and glaucoma clinics of King Abdulaziz Medical City in Riyadh, Saudi Arabia were used to collect the images and the associated clinical information. A total of 58114 retinal images were captured between 2007 and 2021 for 4863 patients (2394 male and 2469 female) aged 4 to 99 years; 40722 images were extracted from the Heidelberg Engineering OCT and 17392 images from the Topcon OCT. The images will be visually labeled and manually annotated by multiple specialized ophthalmologists. The developed AI systems will serve the population and the health care system in the early detection of preventable blinding diseases. Moreover, these systems will reduce the high cost of late detection of ocular diseases. Updated information about the data will be available at: https://kaimrc.med.sa/?page_id=11767072
KEYWORDS: Breast cancer, Mammography, Artificial intelligence, Magnetic resonance imaging, Breast, Cancer, Ultrasonography, Medicine, Medical research, Data centers
The purpose of this project is to prepare an image dataset for developing AI systems that serve research on breast cancer screening and diagnosis. Early detection can have a positive impact on decreasing mortality, as it offers more options for successful intervention and therapies that reduce the chance of malignant and metastatic progression. Six students, one research technologist, and one consultant radiologist collected the images and the patients' information. The images were extracted from three imaging modalities: the Hologic 3D mammography system, Philips and Super Sonic ultrasound machines, and GE and Philips MRI machines. The cases were graded by a trained radiologist. A total of 3085 DICOM-format images were collected for the period 2008-2020 from 890 female patients aged 18 to 85. The largest portion of the data consists of mammograms (51.3%), followed by ultrasound (31.7%) and MRI exams (17%). There were 593 malignant cases and 2492 benign cases. The diagnosis was confirmed by biopsy after the mammogram and ultrasound exams. The data will continue to be collected in the future to serve the artificial intelligence research field and the public health community. Updated information about the data will be available at: https://kaimrc.med.sa/?page_id=11767072
KEYWORDS: Computing systems, Image segmentation, Gold, Photography, Optic nerve, Digital photography, Retinal scanning, Macula, Image analysis, FDA class II medical device development
The purpose of this study was to evaluate the ability of a developed computer-aided glaucoma screening system to screen for glaucoma using a Food and Drug Administration (FDA) Class II diagnostic digital fundus photography system used for diabetic retinopathy screening (DRS). Fundus photos were collected from participants who underwent a comprehensive eye examination as well as non-mydriatic 45° single-photograph retinal imaging centered on the macula. Optic nerve images within the 45° non-mydriatic and non-stereo DRS images (the Retinal fundus Images for Glaucoma Analysis: the RIGA2 dataset) were evaluated by a computer-aided automated segmentation system to determine the vertical cup-to-disc ratio (VCDR). The VCDR from the clinical assessment was considered the gold standard, and the VCDR results from the computer system were compared against it. Grading agreement was assessed by computing the intraclass correlation coefficient (ICC); in addition, sensitivity and specificity were calculated. Among 245 fundus photos, 166 images met quality specifications for analysis. The ICC value for the VCDR between the gold-standard clinical exam and the automated segmentation system was 0.41, indicating fair agreement. At a VCDR cutoff of 0.6, the specificity and sensitivity were 76% and 47%, respectively.
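As an illustration of the statistics reported above, the following Python sketch shows one way the agreement and screening metrics could be computed from two aligned arrays of VCDR values (clinical exam vs. automated system). The ICC variant (two-way random effects, single measure), the function names, and the use of the 0.6 cutoff as the positivity rule are illustrative assumptions, not taken from the study's own code.

```python
import numpy as np

def icc_2way_random(x, y):
    """ICC(2,1): two-way random-effects, single-measure agreement for two raters.

    x, y: VCDR values for the same images from the clinical exam and the
    automated system, in the same order (assumed layout).
    """
    data = np.column_stack([x, y])           # n images x 2 raters
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

def screening_metrics(truth_vcdr, test_vcdr, cutoff=0.6):
    """Sensitivity and specificity of the automated VCDR at a given cutoff,
    taking the clinical VCDR as the reference standard (illustrative rule)."""
    truth_pos = np.asarray(truth_vcdr) >= cutoff
    test_pos = np.asarray(test_vcdr) >= cutoff
    tp = np.sum(truth_pos & test_pos)
    tn = np.sum(~truth_pos & ~test_pos)
    fn = np.sum(truth_pos & ~test_pos)
    fp = np.sum(~truth_pos & test_pos)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```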
The current health care approach to chronic conditions such as glaucoma is limited in its ability to provide access to expert care and to meet the growing needs of a larger population of older adults who will develop glaucoma. Computer-aided diagnosis (CAD) systems show great promise for filling this gap. Our purpose is to expand the initial fundus dataset, Retinal fundus Images for Glaucoma Analysis (RIGA), in order to develop collaborative image processing methods that automate quantitative optic nerve assessments from fundus photos. All subjects were women enrolled in an IRBMED protocol. The fundus photographs were taken using a Digital Retinography System (DRS) dedicated to diabetic retinopathy screening. Of the initial 245 photos, 166 met quality assurance metrics for analysis and serve as the RIGA2 dataset. Three glaucoma fellowship-trained ophthalmologists performed various tasks on these photos. In addition, the cup-to-disc ratio (CDR) and the neuroretinal rim thickness of the subjects were assessed by slit-lamp biomicroscopy and served as the gold-standard measure. The RIGA2 dataset is an additional resource of 2D color disc photos, with multiple extracted features, that serves the research community as a form of crowd-sourced analytical power in the growing teleglaucoma field.
Ahmed Almazroa, Sami Alodhayb, Essameldin Osman, Eslam Ramadan, Mohammed Hummadi, Mohammed Dlaim, Muhannad Alkatee, Kaamran Raahemifar, Vasudevan Lakshminarayanan
Glaucomatous optic neuropathy is a major cause of irreversible blindness worldwide. Current models of chronic care will not be able to close the gap created by the growing prevalence of glaucoma and the challenges of access to healthcare services. Teleophthalmology is being developed to close this gap. In order to develop automated techniques for glaucoma detection that can be used in teleophthalmology, we have developed a large retinal fundus dataset. A de-identified dataset of retinal fundus images for glaucoma analysis (RIGA) was derived from three sources for a total of 750 images. The optic cup and disc boundaries of each image were marked and annotated manually by six experienced ophthalmologists, including cup-to-disc ratio (CDR) estimates. Six parameters were extracted and assessed across the ophthalmologists: the disc area and centroid, the cup area and centroid, and the horizontal and vertical cup-to-disc ratios. The inter-observer annotations were compared by calculating the standard deviation (SD) for every image across the six ophthalmologists in order to identify outliers among the six and to filter the corresponding images. The dataset will be made available to the research community in order to crowd-source further analyses from other research groups and to develop, validate, and implement analysis algorithms appropriate for teleglaucoma assessment. The RIGA dataset can be freely accessed online through the University of Michigan Deep Blue website (doi:10.7302/Z23R0R29).
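As a minimal sketch of the inter-observer comparison described above (not the authors' actual code), the snippet below computes the per-image standard deviation of one extracted parameter across the six ophthalmologists and applies an illustrative cutoff to filter images; the array contents and the filtering rule are assumed placeholders.

```python
import numpy as np

# Hypothetical array: one row per image, one column per ophthalmologist,
# holding a single extracted parameter (e.g., cup area) per annotation.
annotations = np.random.rand(750, 6) * 100.0   # placeholder values

# Standard deviation across the six ophthalmologists for every image.
per_image_sd = annotations.std(axis=1, ddof=1)

# Illustrative filtering rule (an assumption, not the paper's exact criterion):
# keep images whose inter-observer SD falls below a chosen threshold.
sd_threshold = np.percentile(per_image_sd, 90)
kept = annotations[per_image_sd < sd_threshold]
print(f"kept {kept.shape[0]} of {annotations.shape[0]} images")
```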
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for the automatic screening of ONH abnormalities. The main contribution of this paper is a novel OD segmentation algorithm based on applying a level set method to a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated on a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low-quality images, a double level set is applied, in which the first level set serves as a localization of the OD. Five hundred and fifty images are used to test the algorithm's accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement between the algorithm's results and the manual markings is observed in 379 images.
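The paper's exact level set formulation is not reproduced here; the following Python sketch only illustrates the general pipeline described above (suppressing vessels by inpainting a localized optic-disc crop, then running a level set segmentation), using OpenCV inpainting and scikit-image's morphological Chan-Vese as stand-ins. The file name, the black-hat vessel mask, and all parameters are assumptions for illustration.

```python
import cv2
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Load a localized optic-disc crop (placeholder path) and work on one channel;
# vessels contrast well against the disc in the red channel.
bgr = cv2.imread("od_crop.png")
red = bgr[:, :, 2]

# Rough vessel mask via black-hat filtering (an assumption; the paper's own
# vessel handling may differ), then inpaint the vessels away so they do not
# interfere with the level set evolution.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(red, cv2.MORPH_BLACKHAT, kernel)
_, vessel_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
inpainted = cv2.inpaint(red, vessel_mask, 5, cv2.INPAINT_TELEA)

# Level set segmentation of the optic disc on the inpainted image
# (morphological Chan-Vese used here as a stand-in for the paper's level set).
seg = morphological_chan_vese(inpainted.astype(float), num_iter=200,
                              init_level_set="disk", smoothing=3)

# Report the disc area and centroid, two of the parameters assessed above.
disc_area_px = int(seg.sum())
ys, xs = np.nonzero(seg)
centroid = (float(xs.mean()), float(ys.mean()))
print("disc area (px):", disc_area_px, "centroid:", centroid)
```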