From Planning to Synthesis
Course Overview
AI tools can help evaluators move faster, but speed only matters when the work stays credible, transparent, and fit for purpose. This hands-on lab introduces three practical workflows that evaluators can use right away: scoping an evaluation, drafting instruments, and producing a decision-ready synthesis memo. Throughout the session, participants practice documentation and quality checks that keep human judgment at the center. The result is work that is easier to explain and more likely to hold up to client and peer review.
What You Will Learn
Participants learn how to use AI to refine evaluation questions and build a simple analysis frame that aligns with purpose and intended use. They practice drafting and strengthening an interview guide or short survey section, using structured prompts to improve clarity and reduce bias. They also practice turning a small set of notes into a concise findings memo that maintains traceability to underlying evidence. Across all workflows, participants build habits for documenting AI use and applying practical quality checks, including hallucination checks, bias checks, and triangulation prompts.
Course Format
This course is delivered as a 3-hour live, virtual, instructor-led session that combines short demonstrations with guided practice and peer exchange. Participants work with realistic evaluation examples and leave with a workflow toolkit they can reuse immediately.
Module Breakdown
The session begins by clarifying where AI adds value in evaluation work and where it introduces risk. Participants then work through three practical workflows: turning a messy request into clear evaluation questions and a simple analysis frame; drafting and improving an interview guide or short survey section, using prompts that strengthen clarity and applying bias checks; and producing a structured synthesis memo from a small set of qualitative inputs, with a focus on keeping conclusions connected to evidence. Throughout the lab, participants maintain a documentation log that captures what they asked AI to do, what they accepted or rejected, and why. The session closes with quality routines and a short action plan for immediate use.
Who Should Attend
This lab is designed for evaluators, researchers, learning professionals, and program staff who want a practical, hands-on approach to using AI in evaluation work. It is also appropriate for evaluation managers who want a simple workflow and documentation approach they can apply across a team.
Prerequisites
No prior AI experience is required. Participants should have basic familiarity with evaluation or research work products such as evaluation questions, interview guides, surveys, and findings.

Instructor: Valentine Gandhi
