NLM DIR Seminar Schedule
UPCOMING SEMINARS
Feb. 17, 2026 Zhaohui Liang
Heterogeneous Graph Re-ranking for CLIP-based Medical Cross-modal Retrieval

Feb. 19, 2026 Jean Thierry-Mieg
On Magic2, an innovative hardware-friendly RNA-seq analyzer

Feb. 24, 2026 Ajith Viswanathan Asari Pankajam
TBD

March 3, 2026 Gianlucca Goncalves Nicastro
TBD

March 5, 2026 Hasan Balci
TBD
RECENT SEMINARS
Feb. 5, 2026 Lana Yeganova
From Algorithms to Insights: Bridging AI and Topic Discovery for Large-Scale Biomedical Literature Analysis

Jan. 29, 2026 Mehdi Bagheri Hamaneh
FastSpel: A simple peptide spectrum predictor that achieves deep learning-level performance at a fraction of the computational cost

Jan. 22, 2026 Mario Flores
AI Pipeline for Characterization of the Tumor Microenvironment

Jan. 20, 2026 Anastasia Gulyaeva
Diversity and evolution of the ribovirus class Stelpaviricetes

Jan. 8, 2026 Won Gyu Kim
LitSense 2.0: AI-powered biomedical information retrieval with sentence and passage level knowledge discovery
Scheduled Seminars on Feb. 27, 2024
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
In medical imaging, Artificial Intelligence (AI) can significantly enhance the precision and efficiency of radiology report generation. Our research introduces two complementary methodologies that together aim to refine both the generation and the assessment of these reports by integrating AI with the expertise of radiology professionals.
First, we focus on improving report preparation by utilizing longitudinal chest X-ray (CXR) data along with historical reports from the MIMIC-CXR dataset. We developed the Longitudinal-MIMIC dataset, a comprehensive collection that incorporates a patient's historical and current visit data, enabling a more informed analysis. This data powers a transformer-based model featuring a cross-attention mechanism and a memory-driven decoder, which pre-fills the 'findings' section of radiology reports by analyzing a patient's past and present CXRs and reports. This technique reduces reporting errors and improves report accuracy by incorporating extensive patient history.
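The cross-attention idea described above can be illustrated in miniature: current-visit image tokens attend over a context built from learned memory slots plus prior-visit images and report embeddings. This is a minimal NumPy sketch, not the speakers' actual model; the tensor names, dimensions, and number of memory slots are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product attention: queries (current visit) attend
    # over keys/values (memory slots + longitudinal context)
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    return weights @ values, weights

rng = np.random.default_rng(0)
d = 32
current_cxr  = rng.normal(size=(10, d))  # hypothetical current-visit image tokens
prior_cxr    = rng.normal(size=(10, d))  # prior-visit image tokens
prior_report = rng.normal(size=(20, d))  # embedded tokens of the prior report
memory       = rng.normal(size=(4, d))   # learned slots of a memory-driven decoder

# concatenate longitudinal context and fuse it into the current visit
context = np.concatenate([memory, prior_cxr, prior_report], axis=0)
fused, attn = cross_attention(current_cxr, context, context)
print(fused.shape)  # (10, 32)
```

In a real decoder these fused representations would condition token-by-token generation of the 'findings' text; here they simply show how historical visits enter the computation.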
Moving to the evaluation phase, we integrate the expertise of professional radiologists with the computational efficiency of Large Language Models (LLMs), such as GPT-3.5 and GPT-4. Employing methods like In-Context Instruction Learning (ICIL) and Chain of Thought (CoT) reasoning, our approach aligns AI evaluations with the nuanced judgment of radiology experts. This collaborative model significantly outperforms traditional evaluation metrics, offering a more accurate and detailed assessment of AI-generated reports. The validation of our approach through detailed annotations from radiology professionals sets a new standard for the accurate evaluation of medical reports.
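A prompt combining these two techniques might be assembled as below. This is a hypothetical sketch, not the speakers' actual prompts: the function name, wording, and scoring scale are assumptions, and the call to an LLM API is omitted. The ICIL portion supplies worked grading examples in context; the final instruction triggers Chain-of-Thought reasoning before the score.

```python
def build_eval_prompt(reference, candidate, examples):
    """Assemble an evaluation prompt that pairs In-Context Instruction
    Learning (worked scoring demonstrations) with a Chain-of-Thought
    instruction, so the LLM reasons finding-by-finding before scoring."""
    parts = [
        "You are an expert radiologist grading an AI-generated report.",
        "Score agreement with the reference from 1 (contradictory) to 5 (equivalent).",
    ]
    for ex in examples:  # ICIL: graded report pairs with reasoning shown
        parts.append(
            f"Reference: {ex['reference']}\nCandidate: {ex['candidate']}\n"
            f"Reasoning: {ex['reasoning']}\nScore: {ex['score']}"
        )
    parts.append(f"Reference: {reference}\nCandidate: {candidate}")
    # CoT trigger: ask for step-by-step reasoning before the final score
    parts.append(
        "Think step by step: list each finding in the reference, state whether "
        "the candidate captures it, then give a final Score."
    )
    return "\n\n".join(parts)

demo = [{
    "reference": "No acute cardiopulmonary abnormality.",
    "candidate": "Lungs are clear; heart size is normal.",
    "reasoning": "Both convey normal findings with no contradictions.",
    "score": 5,
}]
prompt = build_eval_prompt(
    "Mild cardiomegaly, no edema.",
    "Heart is mildly enlarged; no pulmonary edema.",
    demo,
)
```

The resulting string would then be sent to a model such as GPT-4, whose scores can be compared against radiologist annotations.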
Together, these methodologies represent a synergistic approach to improving radiology report generation and evaluation. By combining longitudinal patient data with expert radiological insight and AI innovation, our work promises to significantly enhance the quality and efficiency of patient care in the field of radiology.