Medical imaging datasets typically contain few training images and are usually insufficient for training deep learning networks. We propose an approach based on a deep residual variational auto-encoder and a generative adversarial network that can generate a synthetic retinal fundus image dataset with corresponding blood vessel annotations. In a comparison of structural statistics between real and artificial images, our model performed better than existing methods. The generated blood vessel structures achieved a structural similarity (SSIM) value of 0.74, and the artificial dataset achieved a sensitivity of 0.84 and a specificity of 0.97 on the blood vessel segmentation task. The successful application of generative models to the synthesis of medical data will not only help to mitigate the small-dataset problem but will also address the privacy concerns associated with such medical datasets.
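To make the reported evaluation metrics concrete, the following sketch computes sensitivity and specificity for binary vessel masks, together with a simplified SSIM. Note that SSIM is normally computed over sliding local windows (as in `skimage.metrics.structural_similarity`); the single-window global variant below is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) for binary blood vessel segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)     # background marked as vessel
    fn = np.sum(~pred & truth)     # vessel pixels missed
    return tp / (tp + fn), tn / (tn + fp)

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM over the whole image.
    (Standard SSIM averages this quantity over local windows.)"""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2  # original SSIM formulation
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images the SSIM value is 1.0; it decreases as structural agreement between the generated and real vessel maps degrades.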
Fundus imaging is widely used for the diagnosis of retinal diseases. Major ophthalmic diseases such as glaucoma, diabetic retinopathy (DR), and age-related macular degeneration (AMD) are diagnosed by examining retinal fundus images. Efficient and reliable diagnosis therefore depends largely on the resolution of the images. In different disease conditions, different pathologies and landmarks of the retina (haemorrhages, microaneurysms, exudates, blood vessels, optic disc and optic cup, fovea) are affected. In clinical situations, it is often not possible to obtain good high-resolution images, and super-resolution techniques can be applied. The objective of super-resolution is to obtain a high-resolution image from a low-resolution input image. In this paper, we present results of applying enhanced deep residual networks for single image super-resolution (EDSR) to retinal fundus images. This network is based on the SRResNet architecture and involves skip connections. Using the public RIGA dataset, which consists of glaucoma and normal fundus images, we trained the model at 2x, 4x and 8x scaling with three different optimizers each (namely ADAM, Stochastic Gradient Descent and RMSprop) to determine which optimizer is best for each scale. We also report results obtained by varying the number of residual blocks in the network.
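The core building block of EDSR is a residual block without batch normalization (the main modification relative to SRResNet): conv, ReLU, conv, an optional residual scaling factor, and a skip connection. The NumPy sketch below illustrates this structure for a single-channel feature map; the explicit 3x3 convolution loop and the 0.1 residual scale stand in for the framework layers an actual implementation would use, and are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution on a 2D single-channel map, stride 1."""
    padded = np.pad(x, 1, mode="constant")
    out = np.zeros_like(x, dtype=float)
    h, wd = x.shape
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def edsr_residual_block(x, w1, w2, scale=0.1):
    """EDSR-style residual block: conv -> ReLU -> conv, scaled, plus the
    identity skip connection. Unlike SRResNet, no batch normalization."""
    y = conv3x3(x, w1)
    y = np.maximum(y, 0.0)   # ReLU
    y = conv3x3(y, w2)
    return x + scale * y     # skip connection preserves the input signal
```

Because of the skip connection, the block reduces to the identity when the learned residual is zero, which is what makes very deep stacks of such blocks trainable.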