NLM DIR Seminar Schedule
UPCOMING SEMINARS
April 8, 2025 Jaya Srivastava
Leveraging a deep learning model to assess the impact of regulatory variants on traits and diseases

April 15, 2025 Pascal Mutz
TBD

April 18, 2025 Valentina Boeva, Department of Computer Science, ETH Zurich
Decoding tumor heterogeneity: computational methods for scRNA-seq and spatial omics

April 22, 2025 Stanley Liang
TBD

April 29, 2025 MG Hirsch
TBD
RECENT SEMINARS
April 1, 2025 Roman Kogay
Horizontal transfer of bacterial operons into eukaryote genomes

March 25, 2025 Yifan Yang
Adversarial Manipulation and Data Memorization in Large Language Models for Medicine

March 11, 2025 Sofya Garushyants
Tmn – bacterial anti-phage defense system

March 4, 2025 Sanasar Babajanyan
Evolution of antivirus defense in prokaryotes depending on the environmental virus load

Feb. 25, 2025 Zhizheng Wang
GeneAgent: Self-verification Language Agent for Gene Set Analysis using Domain Databases
Scheduled Seminars on April 7, 2022
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Cataract is the leading cause of blindness in the elderly and accounts for half of global blindness worldwide. Its prevalence is predicted to increase due to the aging population in many countries. It forms as an opacity in the crystalline lens that develops slowly and causes visual impairment. In its severe stages it requires surgical treatment, so early diagnosis is important. Diagnosis usually requires in-person evaluation by an ophthalmologist, which can be difficult to obtain. However, color fundus photographs (CFPs) are broadly taken outside ophthalmology clinics, which presents an opportunity to expand cataract screening through an automated algorithm. We developed DeepOpacityNet to detect cataract and highlight its most relevant features in CFPs.

We used a balanced dataset of 17,514 CFPs from 2,573 participants obtained from the Age-Related Eye Diseases Study 2 dataset. The ground truth labels were transferred from slit lamp examination and reading center grading of anterior segment photographs for different cataract types. The dataset was split at the participant level into training, validation, and test sets (70%, 10%, and 20% of participants, respectively). DeepOpacityNet and other methods were trained and evaluated on these sets. In addition, 100 random test CFPs were used to compare DeepOpacityNet's performance to that of three ophthalmologists and to visually grade the output class activation maps (CAMs).

On the test set, DeepOpacityNet outperformed other methods with an accuracy of 0.6683 and an AUC of 0.6686. On the random test subset, DeepOpacityNet outperformed the ophthalmologists, with an accuracy of 0.6610 and an AUC of 0.6612 compared to 0.6025 and 0.5988, respectively. The visual grading of output CAMs by ophthalmologists showed that DeepOpacityNet highlights more interpretable features than other methods.
In conclusion, DeepOpacityNet could detect cataract from CFPs with interpretable outputs and reasonable performance, superior to that of ophthalmologists on this difficult dataset.
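The participant-level split described in the abstract (70%/10%/20%) ensures that all photographs from one participant land in exactly one of the training, validation, or test sets, preventing leakage between splits. A minimal sketch of such a split is shown below; the function and variable names are hypothetical illustrations, not from the DeepOpacityNet code.

```python
import random

def participant_level_split(participant_ids, train_frac=0.7, val_frac=0.1, seed=0):
    """Split unique participant IDs into train/val/test groups.

    Splitting on IDs (rather than on photos) guarantees that every
    photo from a given participant ends up in exactly one split.
    """
    ids = sorted(set(participant_ids))          # unique participants
    random.Random(seed).shuffle(ids)            # reproducible shuffle
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    train = set(ids[:n_train])
    val = set(ids[n_train:n_train + n_val])
    test = set(ids[n_train + n_val:])           # remainder (~20%)
    return train, val, test

# Example: assign each photo to a split via its participant ID.
photos = [(f"img_{i:03d}", f"p{i % 10:02d}") for i in range(50)]
train_ids, val_ids, test_ids = participant_level_split(p for _, p in photos)
train_photos = [img for img, p in photos if p in train_ids]
```

Each photo is then routed to the split that contains its participant, so per-participant image counts never straddle the train/test boundary.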