ESCRS - FP16.09 - Deep Learning For Evaluation Of Fuchs Endothelial Dystrophy From In-Vivo Confocal Microscopy Imaging: A Pilot Study

Published 2025 - 43rd Congress of the ESCRS

Reference: FP16.09 | Type: Free paper | DOI: 10.82333/3avd-pb53

Authors: Arthur B Cummings¹, Brendan Cummings²

¹Cataract & Refractive, Wellington Eye Clinic, Dublin, Ireland; ²Cataract & Refractive, Wellington Eye Clinic, Dublin, Ireland; Oculoplastics, Royal Victoria Eye & Ear Hospital, Dublin, Ireland

Purpose

Fuchs endothelial corneal dystrophy (FECD) is a bilateral, progressive disease characterized by the presence of corneal guttae and loss of corneal endothelial cells. Objective diagnostic tests, such as in vivo confocal microscopy (IVCM), are valuable tools for evaluating the corneal endothelium in FECD. Novel technologies, including computer vision and artificial intelligence, are expected to further enhance diagnostic accuracy.

The aim of this study was to evaluate the diagnostic efficacy of deep learning models in characterizing FECD from IVCM images.

Setting

Hacettepe University Faculty of Medicine, Ankara, Turkey

Hacettepe University Faculty of Computer Engineering, Ankara, Turkey

Methods

The study was conducted using the PyTorch 2.2 deep learning framework, together with the albumentations and wandb Python libraries for image augmentation (supporting robust model training) and collaborative experiment tracking, respectively. We used 768×576-pixel images collected from 50 patients with Fuchs dystrophy and 104 healthy individuals. We chose EfficientNet architectures (B0–B7) for our deep learning models because of their (1) reduced computational requirements, (2) scalability from mobile to large-scale applications, and (3) superior accuracy-efficiency trade-off. To obtain robust and reliable results in our limited-data regime, a fivefold cross-validation evaluation strategy was used.
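The fivefold cross-validation described above can be sketched as follows. This is a minimal illustration only, not the authors' pipeline: the function name, the fixed seed, and the choice to split at the subject level (so that multiple images from one subject never straddle the train/test boundary) are all assumptions.

```python
import random

def five_fold_splits(subject_ids, k=5, seed=42):
    """Partition subject IDs into k disjoint folds for cross-validation.

    Splitting by subject rather than by image avoids data leakage when
    one subject contributes several IVCM images.
    """
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::k] for i in range(k)]  # k disjoint folds
    # Each fold serves once as the held-out test set; the rest is training data.
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

# Example with the study's cohort sizes: 50 FECD + 104 healthy subjects.
subjects = [f"fecd_{i}" for i in range(50)] + [f"healthy_{i}" for i in range(104)]
splits = five_fold_splits(subjects)
for train_ids, test_ids in splits:
    assert not set(train_ids) & set(test_ids)  # train/test never overlap
```

In each of the five rounds, one EfficientNet model would be trained on the training folds and evaluated on the held-out fold, and the per-fold metrics averaged.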

Results

EfficientNet-B0 (the smallest model), operating on images resized to 224×224 pixels, achieved a mean accuracy of 98.42% (±2.1% S.D.) and a mean specificity of 97.56% (±3.3% S.D.). The EfficientNet-B1 model achieved a mean accuracy of 97.89% (±2.5% S.D.) and a mean specificity of 96.79% (±3.9% S.D.). The more parameter-rich B2 model achieved a mean accuracy of 98.42% (±1.2% S.D.) and a mean specificity of 97.56% (±1.9% S.D.). The EfficientNet-B3 model achieved a mean accuracy of 98.94% (±1.2% S.D.) and a mean specificity of 98.39% (±1.9% S.D.). The higher-resolution EfficientNet-B5 model, running on 456×456 images, achieved a mean accuracy of 97.36% (±2.8% S.D.) and a mean specificity of 95.89% (±4.5% S.D.).
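The mean ± S.D. figures above are aggregates over the five cross-validation folds. A small helper shows how such summaries are computed; the per-fold values below are hypothetical placeholders for illustration, not the study's raw fold results.

```python
from statistics import mean, stdev

def summarize(fold_values):
    """Mean and sample standard deviation over per-fold metric values."""
    return mean(fold_values), stdev(fold_values)

# Hypothetical per-fold accuracies (one value per held-out fold).
fold_acc = [0.99, 0.97, 0.985, 0.995, 0.98]
m, s = summarize(fold_acc)
print(f"accuracy: {m * 100:.2f}% (±{s * 100:.1f}% S.D.)")
# → accuracy: 98.40% (±1.0% S.D.)
```

The same aggregation applies to specificity or any other per-fold metric.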

Conclusions

This study confirms the utility of deep learning for precise FECD evaluation from in-vivo confocal microscopy. The guttae area ratio emerges as a compelling morphometric parameter that aligns closely with clinical grading. Deep learning shows great promise in detecting, diagnosing, grading, and measuring disease, although standardized reporting is needed to improve the algorithms. These findings support potential applications in the assessment of FECD patients, as well as in the monitoring of novel FECD therapies. The inclusion of more data or the use of more advanced algorithms will likely improve on these results, potentially approaching 100% accuracy in discriminating Fuchs dystrophy patients.