Setting Boundaries for Safe Use
Course Overview
AI governance brings structure to AI-supported evaluation work. It helps teams work responsibly and transparently, and keeps their work defensible. Rather than adding more tools, governance sets clear boundaries. It clarifies appropriate use, protects sensitive information, and documents key decisions so the work holds up to scrutiny.
This course gives evaluators and evaluation managers a practical governance approach they can use right away. First, it helps teams decide when AI fits the task and when it does not. Next, it addresses privacy and third-party tool risks. Then, it shows how to communicate AI use clearly to colleagues and clients. Finally, it supports consistent practice by establishing shared norms and escalation triggers.
What You Will Learn
Participants explore what AI governance means in an evaluation context and why it strengthens credibility and trust. They build decision rules for when to use AI and when not to, based on task risk, data sensitivity, and intended use. In addition, the course covers practical safeguards for handling sensitive information and selecting tools. For example, participants learn what never belongs in an AI system and what requires extra controls.
Participants also build transparency habits through documentation and disclosure. They practice using an AI decision log to record purpose, inputs, outputs, and key judgments. They also learn clear ways to describe AI use in client-facing products. Finally, the course introduces risk management routines that teams can apply consistently. These include review points, escalation triggers, and response steps when something goes wrong, such as a privacy concern or an output that raises credibility questions.
Course Format
This course runs as two live, virtual, instructor-led modules (or one day in-person). Each module lasts three hours. The sessions combine short demonstrations with guided practice and peer exchange. Participants leave with a ready-to-use governance toolkit.
Module Breakdown
Module 1: Governance Foundations and Decision Rules (3 hours)
Participants define governance for evaluation work and learn a practical decision framework for when AI fits. The module covers data sensitivity and privacy basics, third-party tool considerations, and disclosure fundamentals. Participants also begin an AI decision log and draft a simple set of team norms.
Module 2: Risk Management, Review Routines, and Applied Practice (3 hours)
Participants put governance into day-to-day evaluation work. The module focuses on review points, escalation triggers, and consistent handling of common risks. Participants apply governance to real evaluation work products. They also finalize a governance kit that includes an AI decision log, disclosure language, and a checklist for review and sign-off.
Who Should Attend
This course fits evaluators, researchers, and learning professionals who use or oversee AI-supported work. It also supports evaluation managers who want consistent team norms, documentation practices, and client communication.
Prerequisites
No prior AI experience is required. Participants should have basic familiarity with evaluation or research work products such as evaluation questions, interview guides, surveys, and findings.

Instructor: Rachel Beck
