NLM DIR Seminar Schedule
UPCOMING SEMINARS
April 8, 2025  Jaya Srivastava
Leveraging a deep learning model to assess the impact of regulatory variants on traits and diseases

April 15, 2025  Pascal Mutz
TBD

April 18, 2025  Valentina Boeva, Department of Computer Science, ETH Zurich
Decoding tumor heterogeneity: computational methods for scRNA-seq and spatial omics

April 22, 2025  Stanley Liang
TBD

April 29, 2025  MG Hirsch
TBD
RECENT SEMINARS
April 1, 2025  Roman Kogay
Horizontal transfer of bacterial operons into eukaryote genomes

March 25, 2025  Yifan Yang
Adversarial Manipulation and Data Memorization in Large Language Models for Medicine

March 11, 2025  Sofya Garushyants
Tmn – bacterial anti-phage defense system

March 4, 2025  Sanasar Babajanyan
Evolution of antivirus defense in prokaryotes depending on the environmental virus load

Feb. 25, 2025  Zhizheng Wang
GeneAgent: Self-verification Language Agent for Gene Set Analysis using Domain Databases
SCHEDULED SEMINARS ON APRIL 2, 2024
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
The promise of artificial intelligence (AI) in healthcare, from diagnosis to treatment optimization, is undeniable. However, as AI technologies such as large language models (LLMs) and medical imaging AI become integral to clinical practice, their inherent biases pose significant challenges. These biases can exacerbate healthcare disparities, making the pursuit of equity in AI applications not just a technical challenge but a moral imperative.
This talk covers two studies. The first reveals biases in language models that predict healthcare outcomes, showing a tendency to replicate societal disparities in treatment recommendations and prognoses. The second addresses fairness in medical imaging AI, introducing a causal fairness module that improves equity by adjusting for biases tied to sensitive attributes without compromising diagnostic performance.
Addressing biases in AI is crucial for ensuring these technologies serve all patients fairly, regardless of their background. Our studies highlight the importance of continual assessment and adjustment of AI models to reflect ethical considerations alongside technical advancements.