NLM DIR Seminar Schedule
UPCOMING SEMINARS

July 3, 2025 - Matthew Diller
Using Ontologies to Make Knowledge Computable

July 8, 2025 - Noam Rotenberg
TBD
RECENT SEMINARS

July 1, 2025 - Yoshitaka Inoue
Graph-Aware Interpretable Drug Response Prediction and LLM-Driven Multi-Agent Drug-Target Interaction Prediction

June 10, 2025 - Aleksandra Foerster
Interactions at pre-bonding distances and bond formation for open p-shell atoms: a step toward biomolecular interaction modeling using electrostatics

June 3, 2025 - MG Hirsch
Interactions among subclones and immunity controls melanoma progression

May 29, 2025 - Harutyun Sahakyan
In silico evolution of globular protein folds from random sequences

May 20, 2025 - Ajith Pankajam
A roadmap from single cell to knowledge graph
Scheduled Seminars on Jan. 12, 2023
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Although the powerful applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML can be defined as measuring the potential bias in algorithms with respect to characteristics such as race, gender, and age. In this paper, we perform a comparative study and systematic analysis to detect bias caused by imbalanced group representation in sample medical datasets. We investigate bias in major medical tasks across three datasets: the UCI Heart Disease dataset (cardiac disease classification), the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction), and the ChestX-ray dataset (CXR lung segmentation). Our results show differences in the performance of state-of-the-art models across different groups. To mitigate this disparity, we explore three bias mitigation approaches and demonstrate that integrating these approaches into ML models can improve fairness without degrading overall performance.
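The kind of disparity the abstract describes is typically detected by comparing a model's performance within each demographic group. The sketch below is a minimal, illustrative example of that idea (it is not the authors' code; the function names, toy data, and group labels are invented for illustration): compute per-group accuracy and report the largest gap between any two groups as a simple fairness metric.

```python
# Hedged sketch of group-wise performance auditing, as described in the
# abstract. All names and data below are illustrative, not from the paper.
from collections import defaultdict

def group_accuracies(y_true, y_pred, groups):
    """Accuracy of a classifier computed separately within each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two groups -- one simple
    way to quantify bias from imbalanced group representation."""
    accs = group_accuracies(y_true, y_pred, groups)
    return max(accs.values()) - min(accs.values())

# Toy example: group "B" is under-represented and misclassified more often.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B"]
print(group_accuracies(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.333...}
print(accuracy_gap(y_true, y_pred, groups))
```

A gap near zero suggests comparable performance across groups; a large gap flags the disparity that mitigation approaches (e.g., reweighting or resampling the under-represented group) aim to reduce.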