NLM DIR Seminar Schedule

Scheduled Seminars on July 1, 2025

Speaker: Yoshitaka Inoue
PI/Lab: Augustin Luna
Time: 11 a.m.
Presentation Title: Graph-Aware Interpretable Drug Response Prediction and LLM-Driven Multi-Agent Drug-Target Interaction Prediction
Location: Hybrid
In-person: Building 38A/B2N14 NCBI Library or Meeting Link

Contact NLMDIRSeminarScheduling@mail.nih.gov with questions about this seminar.

Abstract:

Machine learning (ML) holds promise for accelerating drug discovery, a lengthy and expensive process. However, the black-box nature of deep learning (DL) models hinders their clinical applicability. Here, we propose two methods: 1) drGT, a graph-aware interpretable method for drug response prediction (DRP), and 2) DrugAgent, a large language model (LLM)-driven multi-agent tool for drug-target interaction (DTI) prediction.

drGT is a graph neural network (GNN)-based approach that utilizes a heterogeneous network (e.g., a graph whose nodes represent genes, drugs, and cell lines). drGT was evaluated for DRP under randomly masked 5-fold cross-validation and on unseen drugs and cell lines. It achieved an AUROC of up to 94.5% under random splitting, 84.4% for unseen drugs, and 70.6% for unseen cell lines, comparable to existing benchmark methods, while also providing interpretability. Crucially, 63.67% of the drug-gene associations identified by drGT are independently supported by PubMed literature or an established DTI prediction model, validating its interpretability.
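The heterogeneous-network idea behind drGT can be sketched in a few lines. Everything below is illustrative: the node names, edge list, embedding size, and the single mean-aggregation message-passing step are toy assumptions, not the actual drGT architecture or data.

```python
import numpy as np

# Toy heterogeneous graph: gene, drug, and cell-line nodes
# (names and edges are hypothetical examples, not drGT's data).
nodes = ["TP53", "EGFR", "gefitinib", "A549"]
node_index = {name: i for i, name in enumerate(nodes)}
edges = [  # typed, undirected edges: drug-gene target, gene-cell expression
    ("gefitinib", "EGFR"),
    ("EGFR", "A549"),
    ("TP53", "A549"),
]

# Random initial embeddings for each node.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(nodes), 8))

# Build an adjacency list over the typed nodes.
adj = {i: [] for i in range(len(nodes))}
for u, v in edges:
    adj[node_index[u]].append(node_index[v])
    adj[node_index[v]].append(node_index[u])

# One mean-aggregation message-passing step (the core GNN idea):
# each node's new embedding averages itself with its neighbors.
new_emb = np.stack([
    np.mean(emb[[i] + adj[i]], axis=0) for i in range(len(nodes))
])

# Score a drug/cell-line pair by embedding similarity, a stand-in
# for a learned response predictor.
score = float(new_emb[node_index["gefitinib"]] @ new_emb[node_index["A549"]])
```

Because messages flow along drug-gene and gene-cell edges, attention weights on those edges (as in a graph transformer) can later be read off to explain which genes linked a drug to a cell line, which is the kind of interpretability the abstract highlights.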

DrugAgent, our multi-agent LLM system for DTI prediction, combines multiple specialized perspectives with transparent reasoning. We adapt and extend existing multi-agent frameworks by (1) applying a coordinator-based architecture to the DTI domain, (2) integrating domain-specific data sources (including ML predictions, knowledge graphs, and literature evidence), and (3) incorporating Chain-of-Thought (CoT) and ReAct (Reason + Act) prompting for transparent DTI reasoning. In comprehensive experiments on a kinase inhibitor dataset, our multi-agent LLM method significantly outperformed a non-reasoning GPT-4o mini baseline, achieving a 45% higher F1 score (0.514 vs. 0.355).
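The coordinator-based architecture can be sketched as below. This is a minimal, hypothetical skeleton: the agent names, canned verdicts, and confidence-weighted vote are assumptions for illustration, whereas in DrugAgent each specialist would be an LLM reasoning (via CoT/ReAct) over its own evidence source.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    agent: str
    interacts: bool      # does the drug target this protein?
    confidence: float    # in [0, 1]
    rationale: str       # CoT-style explanation, kept for transparency

# Stub specialists with canned answers; real agents would query an
# ML model, a knowledge graph, and the literature, respectively.
def ml_agent(drug, target):
    return Verdict("ml", True, 0.8, f"model scores {drug}-{target} highly")

def kg_agent(drug, target):
    return Verdict("kg", True, 0.6, f"path {drug} -> kinase family -> {target}")

def literature_agent(drug, target):
    return Verdict("lit", False, 0.3, "no direct co-mention found")

def coordinator(drug, target):
    """Aggregate specialist verdicts by a confidence-weighted vote."""
    verdicts = [a(drug, target) for a in (ml_agent, kg_agent, literature_agent)]
    yes = sum(v.confidence for v in verdicts if v.interacts)
    no = sum(v.confidence for v in verdicts if not v.interacts)
    return yes > no, verdicts  # decision plus full reasoning trace

decision, trace = coordinator("imatinib", "ABL1")
```

Returning the full trace alongside the decision is what makes the pipeline auditable: a reviewer can see which evidence source drove the prediction rather than receiving a bare yes/no.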