Standards, Ethics, and Quality in Evaluation Practice
Course Overview
Master the art of defensible judgment with this foundational course from the Illuminate Learning Center. Evaluation quality is more than a checklist: it requires methodological rigor and professional integrity, even under pressure. This course helps you navigate the gray areas of evaluation practice, including budget constraints, political influence, and the governance challenges of AI-enabled work. By the end of the module, you will have a structured Navigation Protocol for turning professional discomfort into transparent, defensible judgment.
What You Will Learn
Participants develop the mental muscles needed to distinguish among, and apply, four critical framework categories: Professional Ethics, Quality Standards, Evaluative Criteria, and Technology and Data Governance. The course builds competency in matching the right professional lens to the tension at hand, whether you are balancing equity with efficiency or defending findings under stakeholder pressure. You will also learn to operationalize responsible use of AI and digital tools through human-in-the-loop verification and transparency practices, ensuring that tools assist the work while the evaluator remains accountable for final conclusions.
Course Format
This is a self-paced online module designed for flexible professional development. The learning experience is centered on the Application Lab, where you move through real-world scenarios using an interactive decision guide. The course emphasizes practical application over theory, concluding with a template-driven exercise where you draft a Defensible Justification for a current professional dilemma.
Module Breakdown
- Defensible Judgment: Applying a five-part justification structure so your reasoning can stand up to professional and public scrutiny.
- Navigating the Landscape of Practice: Understanding evaluation as a series of professional collisions and identifying where responsibility lies.
- Core Frameworks: Key standards and criteria that shape evaluation practice, including the AEA Guiding Principles, OECD DAC criteria, and the NIST AI Risk Management Framework.
- Responsible Technology Controls: Four essential safeguards for AI-assisted analysis: human verification, output auditing, transparency, and limitations statements.
- The Application Lab: Scenarios covering budget versus rigor, political interference in reporting, equity tradeoffs, and responsible use of AI and digital tools.
Who Should Attend
This course is designed for evaluation practitioners and program managers who lead work in complex, high-stakes environments. It is a core requirement for the Certificate in Applied Evaluation Practice (CAEP), a program for professionals leading through complexity who want to strengthen their credibility through transparent, evidence-based reasoning.
Prerequisites
There are no formal prerequisites for this course.

Instructor: Beeta Tahmassebi
