NLM DIR Seminar Schedule
RECENT SEMINARS
Dec. 17, 2024 - Joey Thole
Training set associations drive AlphaFold initial predictions of fold-switching proteins

Dec. 10, 2024 - Amr Elsawy
AI for Age-Related Macular Degeneration on Optical Coherence Tomography

Dec. 3, 2024 - Sarvesh Soni
Toward Relieving Clinician Burden by Automatically Generating Progress Notes

Nov. 19, 2024 - Benjamin Lee
Reiterative Translation in Stop-Free Circular RNAs

Nov. 12, 2024 - Devlina Chakravarty
Fold-switching reveals blind spots in AlphaFold predictions
Scheduled Seminars on April 2, 2024
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
The promise of artificial intelligence (AI) in healthcare, from diagnosis to treatment optimization, is undeniable. However, as AI technologies such as large language models (LLMs) and medical imaging AI become integral to clinical practice, their inherent biases pose significant challenges. These biases can exacerbate healthcare disparities, making the pursuit of equity in AI applications not just a technical challenge but a moral imperative.
Our talk will cover two studies. The first reveals biases in language models used to predict healthcare outcomes, showing a tendency to replicate societal disparities in treatment recommendations and prognoses. The second addresses fairness in medical imaging AI, introducing a causal fairness module that improves equity by adjusting for biases related to sensitive attributes without compromising diagnostic performance.
Addressing biases in AI is crucial for ensuring these technologies serve all patients fairly, regardless of their background. Our studies highlight the importance of continual assessment and adjustment of AI models to reflect ethical considerations alongside technical advancements.
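For readers unfamiliar with how such bias is typically measured, the sketch below is a generic illustration only, not the speakers' causal fairness module or either study's method: it compares a binary classifier's true-positive rate across groups defined by a hypothetical sensitive attribute and reports the largest gap, a common equalized-odds-style audit.

```python
# Illustrative subgroup fairness audit (generic; not the talk's method).
# Computes per-group true-positive rates and the largest pairwise gap.
import numpy as np

def tpr_gap_by_group(y_true, y_pred, group):
    """Return per-group TPR and the maximum TPR gap across groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = {}
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)      # true positives possible in group g
        if positives.sum() == 0:
            continue                                   # skip groups with no positive cases
        tprs[g] = (y_pred[positives] == 1).mean()      # recall within the group
    gap = max(tprs.values()) - min(tprs.values()) if tprs else 0.0
    return tprs, gap

# Hypothetical example: labels, model predictions, and a sensitive attribute
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(tpr_gap_by_group(y_true, y_pred, group))
```

A large gap indicates the model misses true cases more often in one group than another, the kind of disparity the talk argues must be continually assessed and corrected.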