NLM DIR Seminar Schedule
UPCOMING SEMINARS
RECENT SEMINARS
Dec. 2, 2025  Qingqing Zhu
CT-Bench & CARE-CT: Building Reliable Multimodal AI for Lesion Analysis in Computed Tomography

Nov. 25, 2025  Jing Wang
MIMIC-EXT-TE: Millions Clinical Temporal Event Time-Series Dataset

Oct. 21, 2025  Yifan Yang
TBD

Oct. 14, 2025  Devlina Chakravarty
TBD

Oct. 9, 2025  Ziynet Nesibe Kesimoglu
TBD
Scheduled Seminars on Jan. 12, 2023
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Although the powerful applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML involves measuring the potential bias of algorithms with respect to characteristics such as race, gender, and age. In this work, we perform a comparative study and systematic analysis to detect bias caused by imbalanced group representation in medical datasets. We investigate bias in major medical tasks on three datasets: the UCI Heart Disease dataset (cardiac disease classification), the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction), and a chest X-ray dataset (CXR lung segmentation). Our results show differences in the performance of state-of-the-art models across demographic groups. To mitigate this disparity, we explore three bias mitigation approaches and demonstrate that integrating them into ML models can improve fairness without degrading overall performance.
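The abstract describes quantifying bias by comparing model performance across demographic groups. As a minimal illustrative sketch (not the speaker's code, and using synthetic toy labels rather than any of the datasets named above), one common way to express such a disparity is to compute accuracy per group and report the largest between-group gap:

```python
# Hedged sketch: per-group accuracy and a simple fairness gap.
# All data below is synthetic; group names "A"/"B" are placeholders,
# not the demographic attributes studied in the talk.
from collections import defaultdict

def group_accuracies(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def fairness_gap(accs):
    """Largest between-group accuracy difference; 0 means parity."""
    vals = list(accs.values())
    return max(vals) - min(vals)

# Toy example: a classifier that happens to perform worse on group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

accs = group_accuracies(y_true, y_pred, groups)
print(accs)                # {'A': 1.0, 'B': 0.25}
print(fairness_gap(accs))  # 0.75
```

A large gap like this is the kind of disparity the mitigation approaches in the talk aim to shrink without lowering overall accuracy; the same per-group framing extends to other metrics (e.g., true-positive rate for equalized odds).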