IEU examines what makes GCF projects ready for evaluation
The IEU held its final Learning Talk of 2025 on 11 December, focusing on a core question for the Fund: what does it actually mean for a GCF project to be ready for evaluation?
The session was presented by Susumu Yoshida, Evaluation Specialist, and Alejandro Gonzalez-Caro, Evaluation Data Associate, and moderated by Jeehyun Yoon, Evaluation Specialist. The discussion drew on findings from the IEU’s ongoing evaluability assessment series, which has examined hundreds of GCF funding proposals since 2018.
Rather than focusing on project results, the Learning Talk examined what is in place at a project's design stage, and why those early decisions matter for learning, accountability, and future evaluation.
Evaluation starts at design
A key message from the session was that evaluation does not begin at mid-term or completion. It begins at project design.
IEU evaluability assessments review approved funding proposals to understand whether projects are designed in ways that allow credible and evidence-based evaluation later on. This includes whether proposals clearly explain intended outcomes, how change is expected to happen, and how progress will be measured.
Strong implementation alone cannot compensate for a design that lacks clarity on these fundamentals.
What IEU evaluability assessments look at
Evaluability assessments are desk-based reviews of approved funding proposals. They do not assess project performance or effectiveness. Instead, they focus on quality at entry from an evaluation perspective.
The assessments examine four core areas:
- theory of change and causal logic
- measurement of change and reporting
- implementation fidelity and performance tracking
- data collection and monitoring arrangements
Across these areas, proposals are reviewed to see whether key elements such as assumptions, risks, baselines, and monitoring plans are clearly articulated in the proposal itself.
What “risk” means in evaluability assessments
In the context of evaluability, “risk” does not refer to financial, fiduciary, or institutional risk.
It refers to the risk that a project cannot be credibly evaluated on the basis of the information provided at approval. When project logic, indicators, and data plans are clearly described, evaluability risk is lower. When these elements are missing or unclear, risk is higher because evaluators may lack a sufficient basis to assess outcomes later.
Trends across the GCF portfolio
Drawing on multiple rounds of assessments, the session showed a consistent decline in the proportion of funding proposals assessed as carrying high evaluability risk.
This suggests that, over time, project designs across the GCF portfolio have become clearer and more measurable. The presenters linked this trend to institutional learning, revised guidance, and changes to funding proposal templates.
Direct access and international access entities
The session also explored differences between projects submitted by direct access entities and international access entities. At the portfolio level, evaluability assessments show broadly comparable results across the two groups. While some specific challenges were identified, particularly around articulating causal pathways and monitoring requirements, the overall differences were limited.
This challenges common assumptions that direct access entities consistently face greater evaluability challenges at the design stage.
Why templates and guidance matter
Changes to funding proposal templates, including the introduction of the Integrated Results Management Framework, were associated with improvements in evaluability indicators.
The findings suggest that clearer templates help strengthen how project designs are articulated, particularly in relation to indicators, baselines, and measurement approaches. At the same time, the session emphasized that templates alone are not sufficient. Clear project logic and design choices remain essential.
Supporting learning and future evaluation
Beyond identifying trends, the discussion explored how evaluability assessments can support learning across the GCF.
The findings help identify recurring design gaps, inform guidance to accredited entities, and support dialogue on strengthening project design. Participants also discussed how this work could be linked with evaluation quality assessments and impact evaluations to better understand how design quality affects evaluation outcomes.
To explore IEU’s evaluability assessment series, visit the IEU website:
https://ieu.greenclimate.fund/evaluations/evaluability-assessment