Building Credible Evidence for Evaluation
Course Overview
Data quality anchors credible evaluation. When datasets include gaps, inconsistencies, or weak documentation, even strong methods produce findings that stakeholders question and struggle to use. This course builds practical data quality skills for evaluators working with quantitative and qualitative data. The course keeps evaluation at the center and uses examples from both evaluation studies and monitoring datasets, since many teams support both.
Participants learn to define “quality” based on evaluation purpose and intended use. They set practical standards for collection and management, then apply routines that reduce error and increase confidence. The course also includes a focused segment on AI-supported workflows. Participants apply the same fundamentals when tools help with cleaning, summarizing, coding, or synthesis. The course stays grounded in sound evaluation practice and treats AI as a real-world context where data quality matters even more.
What You Will Learn
Participants learn to assess and strengthen data quality across the evaluation cycle. They identify common threats to quality, including unclear definitions, missing data, inconsistent coding, measurement error, and documentation gaps that limit interpretation. The course teaches routines for completeness, consistency, and accuracy, along with documentation practices that support defensible evidence. Participants also learn how these routines strengthen monitoring data by improving indicator clarity and consistency over time.
The course also covers data quality when teams introduce AI tools. Participants prepare data so tools do not amplify existing issues, then apply review routines that check AI-supported outputs against the underlying data and documentation.
Course Format
This course is delivered as two live, virtual, instructor-led modules or as a one-day in-person session. Each module combines short demonstrations with guided practice and peer exchange. Participants work with realistic evaluation examples and leave with a take-home toolkit that includes a data quality checklist, a codebook and documentation template, and a set of reusable quality routines.
Module Breakdown
Module 1: Defining and Diagnosing Data Quality
The course begins by clarifying what “data quality” means in evaluation and how expectations should vary depending on purpose, intended use, and risk. Participants learn practical ways to define indicators and variables clearly, develop consistent codebooks, and identify common issues such as missingness, inconsistent entries, duplicate records, and unclear categories. Examples include measures commonly used in monitoring, so the skills carry over to keeping datasets analysis-ready and interpretable. Participants also practice quick diagnostic checks that help them determine whether a dataset is usable as-is or needs targeted improvement before analysis.
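To give a flavor of the quick diagnostic checks described above, here is a minimal sketch in plain Python. The field names ("district", "participants") and the codebook categories are hypothetical examples, not course materials; the course itself is tool-agnostic.

```python
# Illustrative sketch of quick diagnostic checks on a small dataset.
# Field names and codebook categories are hypothetical examples.
from collections import Counter

records = [
    {"district": "North", "participants": 24},
    {"district": "North", "participants": 24},    # duplicate record
    {"district": "south", "participants": None},  # off-codebook entry, missing value
    {"district": "East", "participants": 31},
]

# Missingness: count records with a missing value in any field.
missing = sum(1 for r in records if any(v is None for v in r.values()))

# Duplicates: count extra copies of records that appear more than once.
counts = Counter(tuple(sorted(r.items(), key=lambda kv: kv[0])) for r in records)
duplicates = sum(n - 1 for n in counts.values() if n > 1)

# Category consistency: flag entries outside the agreed codebook.
codebook = {"North", "South", "East", "West"}
off_codebook = [r["district"] for r in records if r["district"] not in codebook]

print(f"missing: {missing}, duplicates: {duplicates}, off-codebook: {off_codebook}")
```

Checks this simple are often enough to decide whether a dataset is usable as-is or needs targeted cleaning before analysis.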
Module 2: Quality Routines, Documentation, and Working with AI-Supported Outputs
The second module focuses on the routines that protect quality during collection, cleaning, analysis, and reporting. Participants learn how to set up repeatable checks for accuracy and consistency and how to document decisions so results remain interpretable and defensible. The module then applies these same routines to AI-supported workflows. Participants learn where tools commonly introduce errors or remove context, and they practice straightforward review steps to confirm outputs remain aligned with the underlying data and agreed definitions. The course closes with a short action plan so participants can adopt the routines immediately.
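The repeatable checks and decision documentation described in this module can be organized as a simple routine that runs every check and logs its outcome. The sketch below is illustrative only; the check names, fields, and log format are hypothetical, and teams would adapt them to their own data and tools.

```python
# Illustrative sketch of a repeatable quality routine with a decision log.
# Check names, fields, and log format are hypothetical examples.

def check_completeness(records, required):
    """Return IDs of records missing any required field."""
    return [r["id"] for r in records
            if any(r.get(f) in (None, "") for f in required)]

def check_consistency(records, field, allowed):
    """Return IDs of records whose value falls outside agreed categories."""
    return [r["id"] for r in records if r.get(field) not in allowed]

def run_checks(records):
    """Run every check and log the outcome so decisions stay documented."""
    log = []
    log.append(("completeness", check_completeness(records, required=["site", "score"])))
    log.append(("consistency", check_consistency(records, field="site", allowed={"A", "B"})))
    return log

records = [
    {"id": 1, "site": "A", "score": 7},
    {"id": 2, "site": "C", "score": 5},    # site outside the agreed categories
    {"id": 3, "site": "B", "score": None}, # missing score
]
log = run_checks(records)
print(log)
```

Because the checks are functions, they can be re-run after every round of collection or cleaning, and the log gives reviewers a record of what was checked and what was found.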
Who Should Attend
This course is designed for evaluators, researchers, analysts, and evaluation support staff who work with qualitative or quantitative data and want a clear, practical approach to improving data quality. It is also useful for evaluation managers and MEL staff who want a shared set of expectations and routines across a team, with an emphasis on the requirements of credible evaluation.
Prerequisites
There are no prerequisites. Familiarity with basic data concepts such as missing values, duplicates, and simple cleaning routines is helpful but not necessary.


Instructors: Kerry Bruce and Jade Lamb
