NLM DIR Seminar Schedule
UPCOMING SEMINARS
March 25, 2025: Yifan Yang
TBD

April 1, 2025: Roman Kogay
TBD

April 8, 2025: Jaya Srivastava
TBD

April 15, 2025: Pascal Mutz
TBD

April 18, 2025: Valentina Boeva, Department of Computer Science, ETH Zurich
Decoding tumor heterogeneity: computational methods for scRNA-seq and spatial omics
RECENT SEMINARS
March 11, 2025: Sofya Garushyants
Tmn – bacterial anti-phage defense system

March 4, 2025: Sanasar Babajanyan
Evolution of antivirus defense in prokaryotes depending on the environmental virus load

Feb. 25, 2025: Zhizheng Wang
GeneAgent: Self-verification Language Agent for Gene Set Analysis using Domain Databases

Feb. 18, 2025: Samuel Lee
Efficient predictions of alternative protein conformations by AlphaFold2-based sequence association

Feb. 11, 2025: Po-Ting Lai
Enhancing Biomedical Relation Extraction with Directionality
Scheduled Seminars on Feb. 11, 2025
Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.
Abstract:
Biological relation networks contain rich information for understanding the biological mechanisms behind the relationships of entities such as genes, proteins, diseases, and chemicals. The rapid growth of the biomedical literature poses significant challenges for keeping this network knowledge up to date. The recent Biomedical Relation Extraction Dataset (BioRED) provides valuable manual annotations, facilitating the development of machine-learning and pre-trained language model approaches for automatically identifying novel document-level (inter-sentence context) relationships. Nonetheless, its annotations lack directionality (subject/object) for the entity roles, which is essential for studying complex biological networks. Herein we annotate the entity roles of the relationships in the BioRED corpus and subsequently propose a novel multi-task language model with soft-prompt learning to jointly identify the relationship, novel findings, and entity roles. Our results include an enriched BioRED corpus with 10,864 directionality annotations. Moreover, our proposed method outperforms existing large language models, including the state-of-the-art GPT-4 and Llama-3, on two benchmarking tasks.
Microsoft Teams
Meeting ID: 224 443 106 522
Passcode: 5he64w7k