Build confidence in evaluating AI-generated research by spotting common failure modes and applying practical verification workflows—including detecting when AI is used to spread misinformation or disinformation.

Format: Live online (Zoom or equivalent)
Length: One week (three touchpoints)
Capacity: Up to 25 attendees
Online Tool Provided: OpenNotebook

Session 1 — Intro + Q&A (≈60 min)

Build shared understanding of failure modes and evaluation basics

Session 2 — Office Hour / Open Lab (≈60 min)

Practice verification workflows on real examples and patron scenarios

Session 3 — Follow-up Discussion + Reflections (≈60 min)

Refine workflows and translate them into service scripts and staff guidance

Patrons increasingly arrive with AI-generated "research" that may be inaccurate, incomplete, or misleading—or with polished content designed to spread misinformation. This offering helps staff recognize common issues (hallucinations, outdated claims, missing sources, overconfident tone) and apply structured evaluation methods. We practice verification strategies, compare AI-assisted search approaches, and discuss how to guide patrons toward responsible, evidence-based use.

Who it's for

  • Reference and information staff
  • Anyone supporting research help, fact-checking, or information literacy
  • Staff who want clearer playbooks for "AI said X—how do we verify?"

What you'll learn

  • The most common failure modes in AI-generated "research"
  • Red flags: missing sources, outdated claims, confident tone without evidence
  • Structured evaluation and verification workflows
  • When (and how) AI-assisted search can help—and when it can mislead
  • Ways to coach patrons toward responsible, evidence-based use

Leave with

  • A practical verification checklist for AI-generated claims
  • Increased confidence when responding to patrons who bring AI-assisted "research"
  • Approaches for guiding patrons to credible sources and better questions