We recently became aware of the work of Kathryn FitzGerald and Benjamin Charles Germain Lee analyzing AI policies in public libraries. They assembled the first publicly available collection of public library AI policies and performed an intensive qualitative analysis of them; their work is impressive!
We decided to piggyback on their work by having an AI agent analyze the policies that FitzGerald and Germain Lee collected. We downloaded their corpus from their Internet Archive repository and subjected each policy to a series of prompts probing different aspects of the documents. So basically we spent an hour trying to emulate their research without having a background in library information science :-) It was a lot of fun!
Obviously their work is more in-depth and grounded, based on rigorous analysis by actual academics. This is something that Claude and I threw together on a Monday afternoon! But I did think it was interesting that we reached some of the same conclusions with respect to approved and excluded uses, ethical considerations, and more!
Figure 1. AI-Assisted Policy Analysis Pipeline: a four-step workflow using conversational AI prompts to transform unstructured policy documents into structured survey data and a comprehensive analytical report.

Step 1: Develop Evaluation Criteria
- Input: 5 sample policies uploaded to the AI chatbot as representative examples
- Prompt: "Brainstorm a hierarchically grouped list of potential questions we can ask when surveying all similar documents…"
- Output: Categorized list of 100+ potential survey questions
- Key action: Human reviews the AI-generated questions and identifies which are most relevant.

Step 2: Assemble the Survey Instrument
- Input: Hand-selected question IDs chosen from the Step 1 results
- Prompt: "Assemble questions {23, 9, 11, 12…} into a survey instrument as survey_template.txt"
- Output: Structured, reusable blank survey template (survey_template.txt)
- Key action: Human curation ensures only meaningful, non-redundant questions make the final instrument.

Step 3: Batch-Process Every Policy
- Input: policies/ folder plus survey_template.txt from Step 2
- Prompt: "Run each policy against the survey template. Save completed results in a results/ folder."
- Output: One completed survey per policy document (results/ folder)
- Key action: The AI applies the same survey consistently to every document, ensuring comparable data.

Step 4: Generate Comprehensive Report
- Input: results/ folder of completed surveys from Step 3
- Prompt: "Prepare a comprehensive analysis in layperson terms based on the completed surveys."
- Output: Thorough analytical report written in accessible, plain language
- Key action: The AI synthesizes structured data into narrative insights; a human reviews for accuracy.
Below is the executive summary that Claude produced based on its findings. You can also download the entire looooong report!
PUBLIC LIBRARY AI POLICY ANALYSIS
Date: March 31, 2026
Scope: 15 public library AI policies — United States and Canada (2023–2025)
Method: 66-question structured survey instrument applied to each policy via AI agent; findings synthesized across the full set
What We Had the AI Do
We instructed an AI agent to analyze 15 public library AI policies using a standardized 66-question survey covering nine areas: purpose and framing, scope, legal grounding, permitted uses, prohibited uses, human oversight, transparency and disclosure, training and education, and accountability. The libraries range from a small rural New Hampshire library (Holderness) to the Toronto Public Library, one of the largest urban systems in North America. Adoption dates run from December 2023 to September 2025.
The Bottom Line
Most libraries have gotten the basics right — and most have stopped there.
Virtually every policy studied protects patron data, requires human review of AI outputs, and frames AI as both a useful tool and a genuine risk. These are the field's consensus minimums, and it is good news that they are widely adopted.
But beyond those fundamentals, policy quality varies dramatically. A small number of libraries — most notably Kenosha (WI), Toronto (ON), Oakville (ON), and Schaumburg (IL) — have written genuinely sophisticated governance frameworks. The majority have written short staff conduct documents that leave significant governance questions unanswered. One library (Johnson County, KS) adopted a county government policy by reference with almost no library-specific content.
Key Findings
What the Field Is Doing Well
- Patron data protection is addressed in every substantive policy — this is the field's strongest consensus
- Human review of AI outputs before publication is required by 14 of 15 libraries
- AI is treated as exceptional technology warranting dedicated governance, not just another IT tool
- Dual framing — AI as both opportunity and risk — is consistent and appropriate
Where the Field Falls Short
| Gap | How many libraries address it |
|---|---|
| Staff training required | 7 of 15 |
| Patron disclosure in reference interactions | 3 of 15 |
| Employment decision prohibitions | 5 of 15 |
| Patron AI literacy programs | 2 of 15 |
| Records retention for AI content | 1 of 15 |
| Environmental sustainability | 2 of 15 |
| Formal tool vetting with named criteria | ~5 of 15 |
| Specific legal citations | ~7 of 15 |
The Four-Tier Policy Landscape
Tier 1 — Comprehensive Governance Frameworks: Kenosha WI, Toronto ON, Oakville ON, Schaumburg IL
These documents define terms, cite specific laws, establish formal vetting processes, require training, and address transparency with specificity.
Tier 2 — Substantive Mixed Documents: Crandall NY, Holderness NH, Wolfeboro NH, Naples NY
Thoughtful policies with meaningful depth but significant gaps. Holderness leads the entire field on patron education; Crandall leads on legal grounding.
Tier 3 — Standard Staff Conduct Documents: DeKalb IL, Hastings MI, Hinsdale IL, Houston County GA, St. Charles IL, White House TN
Competent acceptable-use policies for staff. Cover the basics; leave most governance questions unanswered.
Tier 4 — Administrative Adoption: Johnson County KS
One paragraph; no library-specific substance.
Standout Libraries
- Kenosha (WI): Field leader overall. Public approved-tool list, Human-AI-Human model, cross-referencing requirement, record retention, full patron literacy program, broadest legal citations.
- Toronto (ON): Unique three-gate vetting process (IT Security + Privacy Impact + Human Rights AI Impact Assessment). Strongest data definitions. Most rigorous pre-deployment standards of any library studied.
- Holderness (NH): Most patron-education-focused. Explicitly positions library as community AI educator; commits to workshops for all ages. The first policy in this study (December 2023).
- Oakville (ON): Only library to name all 11 institutional values in its AI policy. Strongest value grounding.
- Crandall (NY): Best legal grounding among U.S. libraries. Uses the NYS statutory definition of AI; cites four state statutes; acknowledges workforce impacts.
The Most Significant Missed Opportunity
Patron AI literacy is the largest gap between what libraries could be doing and what they are doing. Libraries are trusted public institutions with access to communities that most need high-quality AI education. Only Holderness and Kenosha have developed substantive patron literacy programs. The other 13 libraries have no patron-facing AI education component in their policies — their AI governance is entirely inward-facing.
Recommendations for Library Practitioners
Every library needs:
1. A clear prohibition on patron data in unapproved AI tools
2. Required human review of AI outputs before use
3. Some form of tool vetting or approval process
4. At least basic staff training on AI risks and responsibilities

Most libraries should add:
5. Required patron disclosure when AI is used in reference interactions
6. Specific legal citations (especially applicable library privacy statutes)
7. Employment decision prohibitions (hiring, discipline, performance review)
8. A starting commitment to patron AI literacy programming

Field leaders are also doing:
9. A formal named model for AI use in patron service (e.g., Human-AI-Human)
10. A public-facing list of approved tools with transparency about vetting
11. Multiple vetting gates (security, privacy, and human rights impact assessments)
12. Records retention guidance for AI-generated content
13. Environmental sustainability criteria in tool procurement
A Final Note
Good AI governance is not a document — it is a practice. The libraries best positioned for the future are those that have grounded their policies in clear values, connected them to specific legal obligations, invested in staff training, and built in regular review cycles. The technology will continue to change; the policies that survive will be the ones designed to change with it.
Analysis based on policy documents as they existed at time of survey completion (March 31, 2026; Claude Sonnet 4.6). Full analysis report, completed surveys, and survey instrument are available in the project working files.