Author: Illuminate Team

  • Pro Tips for Planning an Evaluation

    10 Moves That Prevent Scope Creep and Protect Use

    The fastest way to derail an evaluation is to act (and plan) as if you can answer everything. Strong evaluation plans protect use by making smart choices early. They focus on what matters most for the program and the decisions it needs to support.

    Here are ten practical ways to prevent scope creep and protect use:

    1. Start with the end in mind: Be clear about who will use the findings and what they will do with them.
    2. Name the primary users: You can listen widely, but one group typically owns use. Design for them.
    3. Get clear on what success means: If “good” is undefined, you will end up with opinions instead of evidence.
    4. Build a shared program picture: Do not plan around assumptions. Confirm how the program actually operates.
    5. Make the logic visible: A simple program story beats an overbuilt model. Clarity matters more than polish.
    6. Ask fewer, better questions: A short list of high-value questions will outperform a long list every time.
    7. Match methods to questions, not habits: Do not default to what you have always done. Choose what fits what you need to learn.
    8. Use what already exists: Good planning starts with existing data, documents, and routine reporting, then fills gaps thoughtfully.
    9. Protect feasibility and trust: Time, access, burden, and sensitivity are not details. They are design drivers.
    10. Plan for use, not just reporting: Decide early how insights will travel, who will discuss them, and what will happen next.

    AI² Tips: Upgrade Your Evaluation Planning with AI

    AI can help you move faster in the evaluation planning phase. Use it to generate a first draft of evaluation questions, suggest indicator options, or help you populate a draft evaluation planning grid. Then bring in your own judgment, along with the people who will use the findings, to refine what truly fits.

    Two guardrails to keep in mind:

    1) Protect confidentiality: Do not paste raw transcripts, identifiable details, or sensitive internal information into public AI tools. Instead, de-identify or summarize the material, use a synthetic example, or reserve sensitive work for approved tools and environments (see the sketch after these guardrails).

    2) Treat outputs as drafts: AI can speed up first passes, but you are responsible for what goes into the plan. Review, refine, and validate before anything becomes “final.”
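
    To make the first guardrail concrete, here is a minimal Python sketch of a pattern-based redaction pass you might run before sharing text with a public tool. The deidentify helper and its patterns are illustrative assumptions rather than a complete solution: they catch only formatted identifiers, so names and contextual details still need human review or an approved de-identification tool.

    ```python
    import re

    def deidentify(text: str) -> str:
        """Replace pattern-based identifiers before text leaves your environment.
        Illustrative only: catches formatted identifiers, not names or context clues."""
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)           # email addresses
        text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # US-style phone numbers
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[ID]", text)                 # SSN-like patterns
        return text

    sample = "Reach the participant at jane.doe@example.org or 555-123-4567."
    print(deidentify(sample))
    # Reach the participant at [EMAIL] or [PHONE].
    ```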

  • How to Choose the Right Evaluation Methods

    A Practical Path to Stronger Evaluation Design

    Rather than searching for a perfect method, we help clients choose an evaluation approach that fits their context, aligns with their values, and supports the intended use of findings. This guide reflects how we think about method selection in our work with clients.

    Begin with clarity on purpose and use

    Strong evaluation design starts with a clear purpose. We begin by asking who will use the evaluation, what decisions it will inform, and what learning the organization hopes to gain. This leads to a set of Key Evaluation Questions that anchor the evaluation plan.

    As part of this early framing, we often explore the strengths that already exist within the organization. Understanding what works provides essential grounding for designing questions that support meaningful improvement and verifiable progress.

    We also clarify which outcomes matter most and how the evaluation will shed light on progress toward them.

    Our approach emphasizes:

    • Questions that support learning and adaptation
    • Questions that honor both outcomes and process
    • Questions that reflect what success truly looks like

    Once the question types are defined, we match them with appropriate evaluation methods. We revisit the program’s theory of change or results pathway to ensure that methods align with how change is expected to happen. When it is helpful, we explore whether technology-enabled data collection tools may support accuracy or timely evidence.

    As a rule of thumb:

    • Descriptive questions benefit from tools that surface what happened with accuracy and detail
    • Causal questions are well served by sensemaking workshops, contribution analysis, or comparative case studies
    • Evaluative questions often use rubrics or criteria co-created with stakeholders
    • Action questions are supported through facilitated reflection, design sessions, or scenario planning

    Aligning questions and methods creates a stronger evaluation design and more useful results.
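
    If it helps to make the rule of thumb concrete during planning, the pairing can be jotted down as a simple lookup, for example when seeding a draft evaluation planning grid. The sketch below is purely illustrative: the METHOD_OPTIONS entries and the suggest_methods helper are assumptions drawn from the examples in this guide, not a fixed taxonomy.

    ```python
    # Illustrative mapping from question type to candidate methods; a starting
    # point for discussion, not a prescription.
    METHOD_OPTIONS = {
        "descriptive": ["surveys", "monitoring data review", "document review"],
        "causal": ["sensemaking workshops", "contribution analysis", "comparative case studies"],
        "evaluative": ["co-created rubrics", "criteria-based assessment"],
        "action": ["facilitated reflection", "design sessions", "scenario planning"],
    }

    def suggest_methods(question_type: str) -> list[str]:
        """Return candidate methods for a question type (empty list if unrecognized)."""
        return METHOD_OPTIONS.get(question_type.strip().lower(), [])

    print(suggest_methods("Causal"))
    # ['sensemaking workshops', 'contribution analysis', 'comparative case studies']
    ```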

    Think beyond data collection

    Evaluation methods shape more than data. They influence how people engage throughout the evaluation journey and how learning unfolds. We often design activities that support real-time learning so that teams can adjust strategies as new insights emerge.

    Illuminate supports method selection across every phase, including:

    • Framing the evaluation
    • Co-creating theories of change or learning agendas
    • Collecting qualitative and quantitative data
    • Facilitating sensemaking with stakeholders
    • Communicating findings in clear and accessible ways
    • Supporting teams as they use findings to make decisions

    Throughout these activities, we create space to surface challenges and strengths, helping teams build on what is already working well.

    This whole-process approach helps organizations get more value from their evaluation and strengthen learning systems.

    Honor the context and complexity

    Every evaluation takes place in a specific context. We consider the stage of development, visibility of outcomes, the ecosystem of partners, and the complexity of the environment. Understanding the maturity of an initiative and the visibility of outcomes helps us select methods that can credibly assess progress, even when attribution is difficult.

    We also assess feasibility based on available resources, staff capacity, and existing evidence. When it is useful, we explore digital tools that support data quality, reduce burden, or improve access to timely evidence.

    Illuminate focuses on practical and credible evaluation methods that fit real-world conditions. When timing or resources are limited, we help clients choose right-sized approaches that still generate meaningful insight.

    Blend methods to create a fuller picture

    Strong evaluations rarely rely on a single method. We often blend qualitative and quantitative approaches to bring both depth and pattern recognition.

    Examples include:

    • Surveys paired with interviews
    • Learning sessions paired with document reviews
    • Case studies supported by monitoring data
    • Reflection workshops that validate and enrich results

    This combination improves accuracy, reduces bias, and helps stakeholders see their experiences reflected in the evidence. It also strengthens interpretation by linking findings back to the program’s theory and intended outcomes.

    Use a simple evaluation matrix to maintain alignment

    For every project, Illuminate builds an evaluation matrix that links questions, data sources, methods, and analysis. A matrix:

    • Ensures adequate data for each question
    • Creates triangulation across sources
    • Supports deliberate tool design
    • Helps stakeholders understand the evaluation plan

    It is a simple but powerful tool for organizing complex evaluations.
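
    As a minimal sketch, one row of such a matrix can be a handful of linked fields, which also makes simple completeness checks easy. The program question, sources, and analysis below are invented for illustration only; the fields mirror the links described above.

    ```python
    # Hypothetical row of an evaluation matrix, linking a key evaluation question
    # to its data sources, methods, and analysis approach.
    evaluation_matrix = [
        {
            "question": "To what extent did participants build job-readiness skills?",
            "data_sources": ["pre/post participant survey", "facilitator interviews"],
            "methods": ["survey", "semi-structured interviews"],
            "analysis": "descriptive statistics triangulated with thematic coding",
        },
    ]

    # Quick completeness check: flag questions resting on a single data source,
    # since triangulation generally calls for at least two.
    for row in evaluation_matrix:
        if len(row["data_sources"]) < 2:
            print(f"Consider a second source for: {row['question']}")
    ```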

    Confirm what is feasible

    Before finalizing an evaluation design, we assess feasibility by mapping:

    • Timing and sequencing
    • Availability of key informants
    • Skills required to implement each method
    • Technological and platform needs
    • The balance between ambition and available resources

    This prevents overdesign and ensures that the evaluation can be implemented with quality. It also helps identify risks to evidence quality early in the process so that mitigation strategies can be built into the design.

    Invite review and strengthen the design

    Peer review or expert consultation strengthens evaluation design and supports credibility. A short design review often clarifies assumptions, sharpens methods, and highlights opportunities to improve analytic pathways or incorporate technology that strengthens the reliability of the evidence.

    Stay flexible and transparent

    Evaluations evolve. Illuminate encourages documentation of changes so that stakeholders understand why adaptations were made and how they affect findings. We also check in with stakeholders to ensure that adaptation continues to reflect their insight, needs, and lived realities. Flexibility, when paired with transparency, supports credible and useful results.

    Final thought

    Choosing the right evaluation methods is not about following a fixed formula. It is about clarity, alignment, and thoughtful judgment. At Illuminate, we support organizations in selecting evaluation methods that help them learn, navigate complexity, and make better decisions. When methods are selected with clarity about use, strengths, theory, technology, and outcomes, evaluations generate credible evidence and help organizations move forward with confidence and momentum.

    If you are planning an evaluation and want to ensure your design is practical, feasible, and aligned with the outcomes that matter, our team is here to support you. Get in touch to learn more.

  • Using the AI² Approach to Avoid Common AI Pitfalls

    Transform Failure into Success

    The AI Implementation Challenge is Real

    When MIT’s NANDA initiative released its 2025 report The GenAI Divide: State of AI in Business, one finding grabbed headlines: 95% of enterprise AI pilots fail to deliver measurable business results.

    After billions of dollars poured into AI, how could so many initiatives be stuck at the starting line?

    The problem isn’t that the technology is broken; the models work. What breaks down is how organizations adopt, integrate, and learn from them. AI isn’t failing. Organizations are failing when they don’t build the right systems for learning.

    That’s where the opportunity lies.

    Why So Many AI Pilots Stall: 5 Common Pitfalls

    1. Unclear goals.
    Pilots launch without a sharp definition of the problem they’re solving or the value they’re expected to deliver. When success isn’t defined, it’s nearly impossible to measure or justify scaling.

    2. Shallow integration.
    AI runs in isolation, disconnected from core systems and workflows. Tools never move beyond “sandbox experiments.”

    3. Limited readiness.
    AI adoption is treated as a tech project, not an organizational change. Without the right mix of talent, collaboration, and leadership sponsorship, even strong pilots fizzle.

    4. Lack of training.
    Teams get access but little guidance. Without structured onboarding and “unlearning” old workflows, adoption is inconsistent and shallow.

    5. No quality assurance.
    Organizations assume “human in the loop” equals safe. But without clear QA processes—expert checkpoints, feedback loops, and traceability—errors slip through and trust erodes.

    Enter AI²: 5 Principles for Turning Pilots Into Success Stories

    1. Start with strengths.
    Target AI where your organization already has momentum—strong data systems, reliable processes, or teams ready to innovate. Quick wins create visible impact. (Illuminate helps uncover these bright spots through appreciative assessments and facilitation.)

    2. Embed learning loops.
    Define outcomes up front, capture both numbers and stories, and create rapid cycles of reflection and adjustment. Everyday challenges like HR inquiries, report writing, or product feedback become opportunities for learning—not just experiments.

    3. Scale what works.
    Not every pilot will succeed everywhere. Identify where AI is making a real difference and expand from there. Bright spots become models to replicate, while less effective pilots are adapted or set aside.

    4. Invest in people.
    The real measure of AI success isn’t just speed or savings—it’s what it makes possible for people. Successful pilots free staff from repetitive tasks, enable professional development, and allow teams to focus on higher-level, mission-driven work. (Illuminate builds feedback systems that capture these human gains alongside business results.)

    5. Set realistic expectations.
    AI isn’t magic. Pilots succeed when they’re grounded in achievable goals and when leaders are willing to learn from both progress and setbacks. Small, well-measured wins often create more momentum than overhyped promises of transformation.

    Flipping the 95%

    The 95% failure rate isn’t a verdict on AI. It’s a signal that companies need a smarter path forward. With AI², organizations can shift from pilots that stall to solutions that scale by:

    • Defining clear objectives tied to business value,
    • Integrating tools into real workflows,
    • Building the culture and talent to adapt,
    • Investing in their people, and
    • Setting realistic expectations.

    The promise of AI can only be unlocked by organizations that know how to learn, adapt, and grow.

    Be Part of the 5%

    If you’re investing in AI, you don’t have to become another statistic. With AI², your organization can shift from experiments that fade to solutions that transform.

    At Illuminate, we help organizations:

    • Align AI with strategy and strengths,
    • Build evaluation and feedback systems, and
    • Scale successful pilots into enterprise-wide change.

    The AI² Readiness Toolkit