NLM DIR Seminar Schedule
UPCOMING SEMINARS
Feb. 17, 2026 Zhaohui Liang
Heterogeneous Graph Re-ranking for CLIP-based Medical Cross-modal Retrieval
Feb. 19, 2026 Jean Thierry-Mieg
On a new RNA-seq aligner
Feb. 24, 2026 Ajith Viswanathan Asari Pankajam
TBD
March 3, 2026 Gianlucca Goncalves Nicastro
TBD
March 5, 2026 Hasan Balci
TBD
RECENT SEMINARS
Feb. 5, 2026 Lana Yeganova
From Algorithms to Insights: Bridging AI and Topic Discovery for Large-Scale Biomedical Literature Analysis
Jan. 29, 2026 Mehdi Bagheri Hamaneh
FastSpel: A simple peptide spectrum predictor that achieves deep learning-level performance at a fraction of the computational cost
Jan. 22, 2026 Mario Flores
AI Pipeline for Characterization of the Tumor Microenvironment
Jan. 20, 2026 Anastasia Gulyaeva
Diversity and evolution of the ribovirus class Stelpaviricetes
Jan. 8, 2026 Won Gyu Kim
LitSense 2.0: AI-powered biomedical information retrieval with sentence and passage level knowledge discovery
The NLM DIR holds a public weekly seminar series for NLM trainees, staff scientists, and investigators to share details on current and exciting research projects at NLM. Seminars take place on Tuesdays at 11:00 AM EST and some Thursdays at 3:00 PM EST, in the B2 Library of Building 38A on the main NIH campus in Bethesda, MD.
To schedule a seminar, click the “Schedule Seminar” button to the right, select an appropriate date on the calendar to sign up, and then complete the form. You will need an NIH PIV card to access the “Schedule Seminar” page.
Seminars by invited visiting scientists are also part of the NLM DIR seminar series; these need not fall on a Tuesday or Thursday. To schedule a seminar by a visiting scientist, click the “Schedule Seminar” button and complete the form. Contact NLMDIRSeminarScheduling@mail.nih.gov with questions. Please follow this link to subscribe to or unsubscribe from the NLM DIR seminar mailing list.
Titles and Abstracts for Upcoming Seminars
Heterogeneous Graph Re-ranking for CLIP-based Medical Cross-modal Retrieval
Cross-modal retrieval of medical radiographs is a critical component of clinical decision support, cohort discovery, and large-scale data reuse. While CLIP-based vision–language models enable effective zero-shot retrieval, ranking based solely on embedding similarity does not explicitly capture higher-order relationships among images, reports, and clinical semantics. We propose a heterogeneous graph re-ranking framework that augments CLIP-based retrieval with structured relational reasoning while keeping the backbone representation model frozen. Starting from an initial CLIP ranking, the method constructs a heterogeneous k-nearest-neighbor graph over image and report embeddings and applies relation-aware message passing to refine candidate rankings.
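As a simplified illustration of the re-ranking idea described in the abstract, the sketch below builds a k-nearest-neighbor graph over candidate embeddings and applies one round of GraphSAGE-style mean aggregation before re-scoring candidates against the query. This is a minimal sketch under strong assumptions: it collapses images and reports into a single node type, uses an untrained mean aggregator with a fixed mixing weight, and the function names and parameters are illustrative, not the authors' implementation (which uses a heterogeneous graph and trained, relation-aware GNN layers over frozen CLIP embeddings).

```python
from math import sqrt

def cos(u, v):
    """Cosine similarity between two vectors (lists of floats)."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def knn_graph(nodes, k):
    """For each node, indices of its k most similar other nodes."""
    nbrs = []
    for i, u in enumerate(nodes):
        sims = sorted(((cos(u, v), j) for j, v in enumerate(nodes) if j != i),
                      reverse=True)
        nbrs.append([j for _, j in sims[:k]])
    return nbrs

def sage_refine(nodes, nbrs, alpha=0.5):
    """GraphSAGE-style update: mix each node with the mean of its neighbors."""
    refined = []
    for i, u in enumerate(nodes):
        dim = len(u)
        mean = [sum(nodes[j][d] for j in nbrs[i]) / len(nbrs[i])
                for d in range(dim)]
        refined.append([alpha * a + (1 - alpha) * m for a, m in zip(u, mean)])
    return refined

def rerank(query, candidates, k=2):
    """Re-rank candidates for a query using graph-refined embeddings.

    Returns candidate indices ordered by refined cosine similarity.
    """
    nbrs = knn_graph(candidates, k)
    refined = sage_refine(candidates, nbrs)
    return sorted(range(len(candidates)),
                  key=lambda i: cos(query, refined[i]),
                  reverse=True)
```

In this toy form, the neighborhood averaging pulls mutually similar candidates toward each other, so a candidate supported by a cluster of related items can move up the ranking relative to an isolated near-duplicate of the query; the real system learns this refinement rather than hard-coding it.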
We instantiate the framework using three representative graph neural network layer variants (GraphSAGE, GCN, and GAT), and evaluate it on chest radiograph retrieval using the OpenI-CXR and MIMIC-CXR datasets under both within-dataset validation and cross-dataset transfer. On the smaller OpenI dataset, heterogeneous graph re-ranking yields substantial improvements, with GraphSAGE increasing Strong MRR by 47.7%, Precision@10 by 58.2%, and mAP@10 by 45.3%, alongside consistent gains in nDCG. Text-to-image retrieval benefits most, with MRR improving from 0.254 to 0.384 (50.8%). On the larger MIMIC-CXR dataset, gains are more moderate but consistent: GAT improves Strong Precision@10 by 8.5% and mAP@20 by 4.9%, while GraphSAGE enhances weak retrieval performance and normal CXR screening accuracy by up to 3.1%. Cross-dataset experiments further show that heterogeneous graph re-ranking improves robustness relative to embedding-only retrieval, with attention-based models providing the most stable transfer performance.
Overall, these results demonstrate that heterogeneous graph re-ranking is an effective and practical extension to CLIP-based medical cross-modal retrieval, improving ranking quality, clinically relevant screening performance, and generalization without modifying the underlying vision–language encoder.
On a new RNA-seq aligner
TBD
TBD