Tag: organizational learning

  • How to Choose the Right Evaluation Methods

    A Practical Path to Stronger Evaluation Design

    Rather than searching for a perfect method, we help clients choose an evaluation approach that fits their context, aligns with their values, and supports the intended use of findings. This guide reflects how we think about method selection in our work with clients.

    Begin with clarity on purpose and use

    Strong evaluation design starts with a clear purpose. We begin by asking who will use the evaluation, what decisions it will inform, and what learning the organization hopes to gain. This leads to a set of Key Evaluation Questions that anchor the evaluation plan.

    As part of this early framing, we often explore the strengths that already exist within the organization. Understanding what works provides essential grounding for designing questions that support meaningful improvement and verifiable progress.

    We also clarify which outcomes matter most and how the evaluation will shed light on progress toward them.

    Our approach emphasizes:

    • Questions that support learning and adaptation
    • Questions that honor both outcomes and process
    • Questions that reflect what success truly looks like

    Once the question types are defined, we match them with appropriate evaluation methods. We revisit the program’s theory of change or results pathway to ensure that methods align with how change is expected to happen. When it is helpful, we explore whether technology-enabled data collection tools may support accuracy or timely evidence.

    As a rule of thumb:

    • Descriptive questions benefit from tools that surface what happened with accuracy and detail
    • Causal questions are well served by sensemaking workshops, contribution analysis, or comparative case studies
    • Evaluative questions often use rubrics or criteria co-created with stakeholders
    • Action questions are supported through facilitated reflection, design sessions, or scenario planning

    Aligning questions and methods creates a stronger evaluation design and more useful results.
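To make the rule of thumb above concrete, here is a minimal illustrative sketch in Python of how question types might be paired with candidate methods. The category names and method lists simply restate the bullets above; this is a thinking aid, not a fixed taxonomy.

```python
# Illustrative sketch only: a simple lookup pairing each question type
# with candidate evaluation methods, following the rules of thumb above.
# The categories and method lists are examples, not a fixed taxonomy.

METHOD_GUIDE = {
    "descriptive": ["surveys", "monitoring data", "document review"],
    "causal": ["sensemaking workshops", "contribution analysis",
               "comparative case studies"],
    "evaluative": ["co-created rubrics", "criteria-based assessment"],
    "action": ["facilitated reflection", "design sessions",
               "scenario planning"],
}

def suggest_methods(question_type: str) -> list:
    """Return candidate methods for a question type, or an empty list."""
    return METHOD_GUIDE.get(question_type.strip().lower(), [])
```

In practice the mapping is a starting point for conversation with stakeholders, not a lookup to apply mechanically.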

    Think beyond data collection

    Evaluation methods shape more than data. They influence how people engage throughout the evaluation journey and how learning unfolds. We often design activities that support real-time learning so that teams can adjust strategies as new insights emerge.

    Illuminate supports method selection across every phase, including:

    • Framing the evaluation
    • Co-creating theories of change or learning agendas
    • Collecting qualitative and quantitative data
    • Facilitating sensemaking with stakeholders
    • Communicating findings in clear and accessible ways
    • Supporting teams as they use findings to make decisions

    Throughout these activities, we create space to surface challenges and strengths, helping teams build on what is already working well.

    This whole-process approach helps organizations get more value from their evaluation and strengthen learning systems.

    Honor the context and complexity

Every evaluation takes place in a specific context. We consider the stage of development, the visibility of outcomes, the ecosystem of partners, and the complexity of the environment. Understanding the maturity of an initiative and the visibility of outcomes helps us select methods that can credibly assess progress, even when attribution is difficult.

    We also assess feasibility based on available resources, staff capacity, and existing evidence. When it is useful, we explore digital tools that support data quality, reduce burden, or improve access to timely evidence.

    Illuminate focuses on practical and credible evaluation methods that fit real-world conditions. When timing or resources are limited, we help clients choose right-sized approaches that still generate meaningful insight.

    Blend methods to create a fuller picture

    Strong evaluations rarely rely on a single method. We often blend qualitative and quantitative approaches to bring both depth and pattern recognition.

    Examples include:

    • Surveys paired with interviews
    • Learning sessions paired with document reviews
    • Case studies supported by monitoring data
    • Reflection workshops that validate and enrich results

    This combination improves accuracy, reduces bias, and helps stakeholders see their experiences reflected in the evidence. It also strengthens interpretation by linking findings back to the program’s theory and intended outcomes.

    Use a simple evaluation matrix to maintain alignment

For every project, Illuminate builds an evaluation matrix that links questions, data sources, methods, and analysis. The matrix:

    • Ensures adequate data for each question
    • Creates triangulation across sources
    • Supports deliberate tool design
    • Helps stakeholders understand the evaluation plan

    It is a simple but powerful tool for organizing complex evaluations.
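As a hypothetical illustration, an evaluation matrix can be represented as a small set of rows, one per question, each linking sources, methods, and analysis. The sample questions and sources below are invented for the sketch; the triangulation check mirrors the idea that each question should draw on more than one source.

```python
# Illustrative sketch only: an evaluation matrix as a list of rows,
# each linking a question to its data sources, methods, and analysis.
# The sample rows are hypothetical, not from a real evaluation plan.

matrix = [
    {
        "question": "Did participants apply new skills on the job?",
        "sources": ["post-training survey", "supervisor interviews"],
        "methods": ["survey", "semi-structured interviews"],
        "analysis": "descriptive statistics plus thematic coding",
    },
    {
        "question": "How did the program contribute to observed change?",
        "sources": ["case records", "stakeholder workshops"],
        "methods": ["contribution analysis"],
        "analysis": "evidence weighting against the theory of change",
    },
]

def under_triangulated(matrix, minimum=2):
    """Flag questions backed by fewer than `minimum` data sources."""
    return [row["question"] for row in matrix
            if len(row["sources"]) < minimum]

# Both sample rows draw on two sources, so nothing is flagged here.
```

A spreadsheet serves the same purpose in practice; the value is in forcing every question to show its sources, methods, and analysis side by side.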

    Confirm what is feasible

    Before finalizing an evaluation design, we assess feasibility by mapping:

    • Timing and sequencing
    • Availability of key informants
    • Skills required to implement each method
    • Technological and platform needs
    • The balance between ambition and available resources

    This prevents overdesign and ensures that the evaluation can be implemented with quality. It also helps identify risks to evidence quality early in the process so that mitigation strategies can be built into the design.

    Invite review and strengthen the design

    Peer review or expert consultation strengthens evaluation design and supports credibility. A short design review often clarifies assumptions, sharpens methods, and highlights opportunities to improve analytic pathways or incorporate technology that strengthens the reliability of the evidence.

    Stay flexible and transparent

    Evaluations evolve. Illuminate encourages documentation of changes so that stakeholders understand why adaptations were made and how they affect findings. We also check in with stakeholders to ensure that adaptation continues to reflect their insight, needs, and lived realities. Flexibility, when paired with transparency, supports credible and useful results.

    Final thought

    Choosing the right evaluation methods is not about following a fixed formula. It is about clarity, alignment, and thoughtful judgment. At Illuminate, we support organizations in selecting evaluation methods that help them learn, navigate complexity, and make better decisions. When methods are selected with clarity about use, strengths, theory, technology, and outcomes, evaluations generate credible evidence and help organizations move forward with confidence and momentum.

    If you are planning an evaluation and want to ensure your design is practical, feasible, and aligned with the outcomes that matter, our team is here to support you. Get in touch to learn more.

  • Using the AI² Approach to Avoid Common AI Pitfalls

    Transform Failure into Success

    The AI Implementation Challenge is Real

    When MIT’s NANDA initiative released its 2025 report The GenAI Divide: State of AI in Business, one finding grabbed headlines: 95% of enterprise AI pilots fail to deliver measurable business results.

With billions of dollars poured into AI, how could so many initiatives still be stuck at the starting line?

The problem isn’t that the technology is broken; the models work. What breaks down is how organizations adopt, integrate, and learn from them. AI isn’t failing. Organizations are, when they don’t build the right systems for learning.

    That’s where the opportunity lies.

    Why So Many AI Pilots Stall: 5 Common Pitfalls

    1. Unclear goals.
    Pilots launch without a sharp definition of the problem they’re solving or the value they’re expected to deliver. When success isn’t defined, it’s nearly impossible to measure or justify scaling.

    2. Shallow integration.
    AI runs in isolation, disconnected from core systems and workflows. Tools never move beyond “sandbox experiments.”

    3. Limited readiness.
    AI adoption is treated as a tech project, not an organizational change. Without the right mix of talent, collaboration, and leadership sponsorship, even strong pilots fizzle.

    4. Lack of training.
    Teams get access but little guidance. Without structured onboarding and “unlearning” old workflows, adoption is inconsistent and shallow.

    5. No quality assurance.
    Organizations assume “human in the loop” equals safe. But without clear QA processes—expert checkpoints, feedback loops, and traceability—errors slip through and trust erodes.

    Enter AI²: 5 Principles for Turning Pilots Into Success Stories

    1. Start with strengths.
    Target AI where your organization already has momentum—strong data systems, reliable processes, or teams ready to innovate. Quick wins create visible impact. (Illuminate helps uncover these bright spots through appreciative assessments and facilitation.)

    2. Embed learning loops.
    Define outcomes up front, capture both numbers and stories, and create rapid cycles of reflection and adjustment. Everyday challenges like HR inquiries, report writing, or product feedback become opportunities for learning—not just experiments.

    3. Scale what works.
    Not every pilot will succeed everywhere. Identify where AI is making a real difference and expand from there. Bright spots become models to replicate, while less effective pilots are adapted or set aside.

    4. Invest in people.
    The real measure of AI success isn’t just speed or savings—it’s what it makes possible for people. Successful pilots free staff from repetitive tasks, enable professional development, and allow teams to focus on higher-level, mission-driven work. (Illuminate builds feedback systems that capture these human gains alongside business results.)

    5. Set realistic expectations.
    AI isn’t magic. Pilots succeed when they’re grounded in achievable goals and when leaders are willing to learn from both progress and setbacks. Small, well-measured wins often create more momentum than overhyped promises of transformation.

    Flipping the 95%

    The 95% failure rate isn’t a verdict on AI. It’s a signal that companies need a smarter path forward. With AI², organizations can shift from pilots that stall to solutions that scale by:

    • Defining clear objectives tied to business value,
    • Integrating tools into real workflows,
    • Building the culture and talent to adapt,
    • Investing in their people, and
    • Setting realistic expectations.

    The promise of AI can only be unlocked by organizations that know how to learn, adapt, and grow.

    Be Part of the 5%

    If you’re investing in AI, you don’t have to become another statistic. With AI², your organization can shift from experiments that fade to solutions that transform.

    At Illuminate, we help organizations:

    • Align AI with strategy and strengths,
    • Build evaluation and feedback systems, and
    • Scale successful pilots into enterprise-wide change.

    The AI² Readiness Toolkit

  • Illuminating Learning: Building the Skills of Tomorrow’s Leaders

    Empowering leaders to turn evidence into impact.

    Recent Highlights: Training in Action

    Our Learning Center has the privilege of delivering training programs for partners who are tackling some of today’s most pressing challenges.

    These programs represent meaningful milestones in Illuminate’s early journey. We are grateful to our clients for the opportunity to work on trainings that reflect our commitment to people-centered excellence.

    Why It Matters

    Capacity building often creates value that isn’t immediately visible. A training session can spark a conversation that shifts how a team works. A new framework can inspire an organization to measure what really matters. A connection made in class can grow into a partnership that advances shared goals.

    At Illuminate, we design every course with this bigger picture in mind. We focus on:

    • Interactive learning that connects ideas to practice.
    • Real-world application so skills transfer directly into workplace contexts.
    • Community and networks that keep the learning alive long after the course ends.

    The result? A ripple effect that strengthens not only individual participants but the organizations and communities they serve.

    What’s Next: Introducing AI Essentials

We’re excited to announce our newest course: AI Essentials for Evaluation and Research: Ethics, Insights, and Practical Applications.

    Understanding AI is no longer optional. It’s a core skill for today’s professionals. This course welcomes participants with all levels of AI experience and introduces how AI tools can transform research and evaluation practices. By the end, you’ll walk away with an ethical framework and hands-on competencies to integrate AI effectively into your projects.

    In this interactive three-hour session, you will:

    • Explore essential AI concepts — including prompt engineering and the distinctions between generative and deterministic AI
    • Examine practical applications through real-world examples
    • Learn about key ethical considerations such as data privacy, bias, and model limitations
    • Experiment with leading large language models (LLMs) like ChatGPT, Claude, and Llama
    • Build practical introductory skills in AI-assisted research, statistical analysis, and qualitative thematic coding

    This course is designed for leaders and practitioners who want to cut through the hype and understand what AI truly means for their work. It’s not about coding or algorithms. Instead, it’s about building confidence to use AI responsibly, strategically, and ethically in evaluation, leadership, and organizational learning contexts.

    Participants will walk away with:

    • A practical, plain-language introduction to AI.
    • Case examples of how mission-driven organizations are applying these tools.
    • Strategies to explore AI adoption with integrity and purpose.

    This course builds on Illuminate’s broader commitment to advancing what works and equipping organizations with the skills and mindsets they need to navigate change.

    Join Us on the Learning Journey

    The Learning Center is growing, and so is the community of professionals who learn with us. Whether you’re working in evaluation, leadership, health, or social innovation, our courses are designed to give you practical tools and fresh perspectives that you can put into action right away.

    At Illuminate, we believe that when people learn together, they create sparks that drive lasting change. We’re excited to keep building those sparks with you.