Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review

Lauren Coan, Bryan Williams, Krishna Adithya Venkatesh, Swati Upadhyaya, Ala Al Kafri, Silvester Czanner, Rengaraji Venkatesh, Colin E Willoughby, Srinivasan Kavitha, Gabriela Czanner

Research output: Contribution to journal › Article › peer-review


Glaucoma is a leading cause of irreversible vision impairment, and case numbers are rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. Glaucoma can be detected by examining the optic nerve head via fundus imaging, at the centre of which is the assessment of the optic cup and disc boundaries. Fundus imaging is non-invasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments.

A timely question to ask is: “Can artificial intelligence mimic glaucoma assessments made by experts?”. Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy?

We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images, and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011-2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modelling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
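To make the first approach concrete, a minimal sketch of a logical rule-based step is given below. It computes the vertical cup-to-disc ratio (CDR) from binary cup and disc segmentation masks and applies a single threshold rule. The function names, mask format, and the 0.6 cutoff are illustrative assumptions for exposition, not a clinically validated rule or any specific framework from the reviewed papers.

```python
import numpy as np

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks.

    Each mask is a 2-D boolean array in which True marks pixels
    inside the optic cup or optic disc, respectively.
    """
    cup_rows = np.flatnonzero(cup_mask.any(axis=1))
    disc_rows = np.flatnonzero(disc_mask.any(axis=1))
    if disc_rows.size == 0:
        raise ValueError("empty disc mask")
    cup_height = cup_rows.max() - cup_rows.min() + 1 if cup_rows.size else 0
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

def is_glaucoma_suspect(cup_mask: np.ndarray, disc_mask: np.ndarray,
                        threshold: float = 0.6) -> bool:
    # A single-threshold rule; 0.6 is an illustrative cutoff only,
    # not a clinically validated decision boundary.
    return vertical_cdr(cup_mask, disc_mask) >= threshold
```

In practice, rule-based frameworks combine several such hand-crafted criteria, whereas machine learning frameworks learn the decision boundary from labelled segmented images.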
Original language: English
Number of pages: 42
Journal: Survey of Ophthalmology
Early online date: 17 Aug 2022
Publication status: Published - 17 Aug 2022
