NLM DIR Seminar Schedule
RECENT SEMINARS
Dec. 17, 2024 - Joey Thole: Training set associations drive AlphaFold initial predictions of fold-switching proteins
Dec. 10, 2024 - Amr Elsawy: AI for Age-Related Macular Degeneration on Optical Coherence Tomography
Dec. 3, 2024 - Sarvesh Soni: Toward Relieving Clinician Burden by Automatically Generating Progress Notes
Nov. 19, 2024 - Benjamin Lee: Reiterative Translation in Stop-Free Circular RNAs
Nov. 12, 2024 - Devlina Chakravarty: Fold-switching reveals blind spots in AlphaFold predictions
Scheduled Seminars on Jan. 12, 2023
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Although the powerful applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML can be defined as measuring potential bias in algorithms with respect to characteristics such as race, gender, and age. In this paper, we perform a comparative study and systematic analysis to detect bias caused by imbalanced group representation in sample medical datasets. We investigate bias in major medical tasks across three datasets: the UCI Heart Disease dataset (cardiac disease classification), the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction), and a chest X-ray (CXR) dataset (lung segmentation). Our results show differences in the performance of state-of-the-art models across different groups. To mitigate this disparity, we explore three bias mitigation approaches and demonstrate that integrating them into ML models can improve fairness without degrading overall performance.
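To make the group-level evaluation described above concrete, the following is a minimal sketch of how per-group performance and a fairness gap might be computed. It is an illustration under assumed inputs (binary labels and a single protected attribute per sample), not the paper's actual implementation; the function name group_performance and the synthetic data are hypothetical.

import numpy as np

def group_performance(y_true, y_pred, groups):
    # Compute accuracy separately for each protected-attribute group,
    # then report the largest accuracy gap between any two groups.
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float(np.mean(y_true[mask] == y_pred[mask]))
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Illustrative use on synthetic predictions (not real study data):
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
groups = rng.choice(["A", "B"], size=200)
per_group, gap = group_performance(y_true, y_pred, groups)
print(per_group, gap)

A nonzero gap on a real dataset would indicate the kind of performance disparity across groups that the bias mitigation approaches in the paper aim to reduce.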