From Concepts to Frameworks
Course Overview
Monitoring and evaluation are closely related and often managed together, but they serve different purposes. Monitoring tracks progress and performance over time. Evaluation examines results and performance more deeply to understand what is working, what is not, and why. This introductory course builds a clear foundation in both. Participants learn core concepts, why M&E matters, and how to design a practical framework that teams can use in real settings. The course uses multiple real-world examples and includes brief, optional illustrations of how generative AI can support early-stage drafting and planning tasks, with attention to appropriate use and limitations.
What You Will Learn
Participants build a shared understanding of common M&E terms and how M&E supports accountability, learning, and decision-making. They learn how to clarify purpose and intended use, define roles and participation, and translate program intent into a theory of change or logic model. Participants practice developing evaluation questions, defining what success looks like, and selecting components of an M&E framework that fit a real context. They also learn how to translate design choices into usable monitoring, evaluation, and learning plans, including indicators, data sources, and planning structures. By the end of the course, participants can develop a complete, basic M&E framework and follow a clear process for building and refining it.
Course Format
This course is delivered as four live, virtual, instructor-led modules. Sessions are interactive and combine short instruction, discussion, and applied exercises. Participants work through realistic examples and receive feedback during practice activities. Participants who complete the modules receive a certificate of completion.
Module Breakdown
Module 1: Monitoring and Evaluation Fundamentals, Clarifying Concepts and Purpose
The course begins by defining monitoring and evaluation and explaining how they work together in practice. Participants learn the basic purposes of M&E, how monitoring differs from evaluation, and how both support accountability, learning, and decision-making. The module clarifies common terminology, including M&E, MEL, MERL, and MEAL, and explains how evaluation differs from research. The session also introduces ethics as a foundational element of M&E system design. Participants leave with a clear and shared understanding of what M&E is, why it matters, and how teams apply it.
Module 2: Defining Requirements and Parameters for M&E
This module focuses on early design decisions that shape effective M&E systems. Participants learn how to identify requirements, clarify purpose and intended use, and define participation and roles. The module introduces practical approaches for developing a program theory or logic model, identifying evaluation questions, and defining success in context. The session also includes examples of how generative AI can support early-stage M&E tasks, such as drafting evaluation questions, refining logic models, and synthesizing stakeholder input, while reinforcing appropriate use, limitations, and ethical considerations.
Module 3: Developing Monitoring, Evaluation, and Learning Plans
Participants translate design decisions into actionable plans. The module covers the core components of a basic M&E framework and clarifies the distinct roles of monitoring plans, evaluation plans, and learning plans. The session emphasizes how these plans work together to support implementation, assessment, and learning over time. Participants also review examples of how generative AI can assist with planning tasks, such as drafting indicators, organizing data collection plans, and supporting learning agendas, with a focus on maintaining rigor, transparency, and evaluator oversight. Participants leave able to outline practical M&E plans that are clear, aligned, and fit for real-world use.
Module 4: Applied Practice, Designing an M&E Framework
The final module is hands-on and application-focused. Using a case study, participants develop an M&E framework and associated monitoring and evaluation plans. Participants receive real-time feedback from the instructor and peers and refine their work through structured activities. Where useful, participants may also explore generative AI as a design support tool, with emphasis on human review and clear documentation. The module builds confidence and reinforces practical skills so participants can contribute to M&E design efforts in their professional roles.
Who Should Attend
This course is designed for professionals who are new to monitoring and evaluation or who want a clear foundation in core concepts and practical framework design. It is well suited for program staff, project managers, learning staff, researchers, and early-career evaluators who support program planning, implementation, reporting, or evaluation.
Prerequisites
There are no prerequisites, and no prior M&E experience is needed. Familiarity with basic program concepts and an interest in planning or using evidence for decision-making will be helpful.

Instructor: Bianca Montrosse-Moorhead
