Beyond the search bar: Using AI to screen, read & appraise scientific literature
Part of the series "Coffee Lectures Health & Science" | Spring Semester 2026
The volume of scientific publications now exceeds what researchers can realistically screen, read, and critically evaluate. This 15-minute Coffee Lecture presents a practical AI-supported workflow for navigating information overload—while maintaining methodological rigor. We demonstrate how AI-powered discovery tools such as ResearchRabbit and Connected Papers expand exploration beyond keyword searches, and how evidence-oriented platforms like Consensus and scite.ai help assess how findings are supported, cited, or contradicted in the broader literature. For screening, active-learning systems such as ASReview and Rayyan can substantially reduce title–abstract review time. For in-depth analysis, RAG-based environments including Elicit and NotebookLM support structured reading and semi-automated data extraction. Using predefined variables (e.g., sample size, intervention characteristics, primary outcomes), these systems can generate extraction tables and highlight potential methodological red flags—always requiring expert verification. The session addresses key concerns across disciplines, including hallucinations, algorithmic bias, data protection, and reproducibility. Participants will leave with immediately applicable tools and a clear understanding of where AI enhances scientific workflows—and where critical human appraisal remains indispensable.
Good to know
- Duration: 15 minutes
- Location: Online event