NLM DIR Seminar Schedule
UPCOMING SEMINARS
RECENT SEMINARS
Dec. 2, 2025  Qingqing Zhu
CT-Bench & CARE-CT: Building Reliable Multimodal AI for Lesion Analysis in Computed Tomography

Nov. 25, 2025  Jing Wang
MIMIC-EXT-TE: Millions Clinical Temporal Event Time-Series Dataset

Oct. 21, 2025  Yifan Yang
TBD

Oct. 14, 2025  Devlina Chakravarty
TBD

Oct. 9, 2025  Ziynet Nesibe Kesimoglu
TBD
Scheduled Seminars on May 12, 2022
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Videos that semantically correspond to a text query provide highly condensed information that can fully answer the query. Videos relevant to medical instructional questions (e.g., how to use a tourniquet) are especially useful for first aid, medical emergencies, and medical education. However, no publicly available benchmark datasets of medical instructional videos exist. We therefore introduce two new datasets to push research toward designing and comparing algorithms that can recognize medical instructional videos and locate visual answers to natural language queries within them. The datasets, MedVidCL and MedVidQA, support the tasks of Medical Video Classification (MVC) and Medical Visual Answer Localization (MVAL), two tasks that emphasize multi-modal (language and video) understanding. The MedVidCL dataset includes 6,117 annotated videos for the MVC task, while the MedVidQA dataset contains 3,010 annotated questions with corresponding answer segments from 899 videos for the MVAL task. We benchmark both tasks with deep learning models, establishing competitive baselines for future research.
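The abstract does not state how MVAL predictions are scored; a common metric for temporal answer localization is the intersection-over-union (IoU) between the predicted and annotated answer segments. Below is a minimal Python sketch of that metric; the function name and the metric choice are illustrative assumptions rather than details drawn from the MedVidQA benchmark itself.

# Temporal IoU for answer-segment localization (MVAL-style tasks).
# Illustrative sketch only: the metric choice is an assumption, not
# a documented detail of the MedVidQA benchmark.
def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    """IoU of two (start_sec, end_sec) answer segments."""
    pred_start, pred_end = pred
    gold_start, gold_end = gold
    # Overlap between the two segments (zero if they are disjoint).
    inter = max(0.0, min(pred_end, gold_end) - max(pred_start, gold_start))
    # Union counts the overlapping stretch only once.
    union = (pred_end - pred_start) + (gold_end - gold_start) - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted span of 12-45 s against an annotated span of 15-50 s.
print(temporal_iou((12.0, 45.0), (15.0, 50.0)))  # prints ~0.79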