NLM DIR Seminar Schedule
UPCOMING SEMINARS

Feb. 17, 2026 - Zhaohui Liang
Heterogeneous Graph Re-ranking for CLIP-based Medical Cross-modal Retrieval

Feb. 19, 2026 - Jean Thierry-Mieg
On Magic2, an innovative hardware-friendly RNA-seq analyzer

Feb. 24, 2026 - Ajith Viswanathan Asari Pankajam
TBD

March 3, 2026 - Gianlucca Goncalves Nicastro
TBD

March 5, 2026 - Hasan Balci
TBD
RECENT SEMINARS

Feb. 5, 2026 - Lana Yeganova
From Algorithms to Insights: Bridging AI and Topic Discovery for Large-Scale Biomedical Literature Analysis

Jan. 29, 2026 - Mehdi Bagheri Hamaneh
FastSpel: A simple peptide spectrum predictor that achieves deep learning-level performance at a fraction of the computational cost

Jan. 22, 2026 - Mario Flores
AI Pipeline for Characterization of the Tumor Microenvironment

Jan. 20, 2026 - Anastasia Gulyaeva
Diversity and evolution of the ribovirus class Stelpaviricetes

Jan. 8, 2026 - Won Gyu Kim
LitSense 2.0: AI-powered biomedical information retrieval with sentence and passage level knowledge discovery
SEMINAR ON APRIL 2, 2024
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
The promise of artificial intelligence (AI) in healthcare, from diagnosis to treatment optimization, is undeniable. However, as AI technologies such as large language models (LLMs) and medical imaging AI become integral to clinical practice, their inherent biases pose significant challenges. These biases can exacerbate healthcare disparities, making the pursuit of equity in AI applications not just a technical challenge but a moral imperative.
Our talk will cover two studies. The first reveals biases in language models that predict healthcare outcomes, showing a tendency to replicate societal disparities in treatment recommendations and prognoses. The second addresses fairness in medical imaging AI, introducing a causal fairness module that improves equity by adjusting for biases tied to sensitive attributes without compromising diagnostic performance.
Addressing biases in AI is crucial for ensuring these technologies serve all patients fairly, regardless of their background. Our studies highlight the importance of continual assessment and adjustment of AI models to reflect ethical considerations alongside technical advancements.
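To make the kind of bias assessment the abstract calls for concrete, here is a minimal, illustrative sketch of a group-fairness audit: it compares a classifier's true-positive rate across two demographic groups (an equalized-opportunity check). This is a generic example, not the speakers' actual method; all function names and the toy data are hypothetical.

```python
# Illustrative group-fairness audit (equalized-opportunity gap).
# NOT the speakers' method; a generic sketch of per-group evaluation.

def true_positive_rate(y_true, y_pred):
    """Fraction of actual positives the model correctly flags."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def tpr_gap(y_true, y_pred, group):
    """Absolute TPR difference between groups labeled 'A' and 'B'."""
    rates = {}
    for g in ("A", "B"):
        idx = [i for i, gi in enumerate(group) if gi == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return abs(rates["A"] - rates["B"])

# Toy data in which group B's positive cases are under-detected.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0]
group  = ["A", "A", "A", "B", "B", "B"]
print(tpr_gap(y_true, y_pred, group))  # 1.0 (group A TPR=1.0, group B TPR=0.0)
```

A large gap on held-out clinical data would signal exactly the kind of disparity the talk examines; a mitigation module would aim to shrink it without degrading overall diagnostic accuracy.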