Assessing GPE grant performance: A review of recently closed programs

Key findings and lessons learned from the review of GPE grants approved under GPE 2020 and earlier and completed between 2019 and 2022.

December 06, 2023 by Kyoko Yoshikawa Iwasaki, GPE Secretariat, and Priyanka Pandey, GPE Secretariat
A schoolgirl reads a book in a classroom at GEPS Ndogpassi 3A in Douala, Cameroon. A GPE-funded program is helping improve the quality of education in 300 host-community schools with refugees and 200 host-community schools with internally displaced persons in Cameroon.
Credit: World Bank/O. Hebga

The GPE Secretariat recently conducted a review of grant completion reports submitted between 2019 and 2022 for grants approved under GPE 2020 and earlier.

The review covers the full set of 26 completion reports submitted during this period and assesses the performance of closed programs, primarily along 3 dimensions: relevance, efficacy and efficiency.

Here are the major findings and lessons learned from the review.

Were program objectives relevant to country contexts and to GPE’s goals?

Completion reports show that all programs were aligned with countries’ education sector plans and continued to respond to countries’ evolving needs and priorities throughout implementation by restructuring programs, reallocating funds and obtaining additional financing. For example, in Cameroon, in response to an influx of refugees, savings from other program components helped provide new benches in schools hosting refugees.

Program objectives were well aligned with GPE’s goals of improving learning outcomes and promoting equity and inclusion, although some did not measure results at the outcome level.

The largest number of programs addressed learning, followed by access and organizational capacity.

In the area of learning, all but one of the objectives aimed to improve learning outcomes. However, fewer than half of these objectives had an indicator to measure progress in learning outcomes through learning assessments.

The other objectives used a proxy (e.g., primary completion rates) or measured output-level results (e.g., number of teachers trained) or intermediate outcome-level results (e.g., increased competency of trained teachers), but did not measure improvements in students’ learning outcomes in their results frameworks.

To what extent did programs achieve their objectives?

All programs achieved most of their objectives, a slight improvement since the last review in 2019. However, looking at the achievement level of individual objectives, performance was rated “modest” for 8 of the 46 objectives, and 7 of those 8 aimed to improve learning outcomes.

These objectives were rated “modest” not only because of their low achievement status, but also because of issues related to data and measurement. For example, a lack of comparable baseline values and a lack of robust evidence of improved learning outcomes were frequently cited as reasons for a “modest” efficacy rating.

Analysis of the achievement of program objective indicators revealed that learning had the lowest share of indicators meeting targets.

Were the objectives achieved in an efficient manner?

Our analysis of 18 World Bank-managed grants (whose completion reports usually include independently validated efficiency ratings) showed that more than half (10 out of 18) had low overall efficiency ratings. Almost all programs extended their completion dates, with an average extension of 18 months.

Consistent with findings from a previous analysis conducted for GPE’s Grant Performance Report 2019, programs that experienced delays in the early stages of implementation tended to require long extensions: more than 18 months for the programs included in this review.

The main factors behind implementation delays related to procurement, such as a lack of procurement staff with strong technical and administrative knowledge and sufficient bandwidth.

Completion reports often assess program efficiency through cost-benefit analysis and unit cost analysis. However, both approaches have limitations: without any discussion of the effectiveness or cost-effectiveness of the interventions chosen, they say little about potential efficiency gains in learning or access.
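To illustrate with hypothetical numbers: suppose one program component distributes textbooks at $5 per book and another trains teachers at $200 per teacher. Both unit costs may compare well with regional benchmarks, yet they say nothing about which investment improves learning more per dollar spent. If the training measurably raises test scores while the textbooks go unused, the “cheaper” intervention is in fact the less cost-effective one, which a unit cost analysis alone cannot reveal.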

A more holistic approach to assessing programs’ value for money may be needed.

Lessons learned

The review found that programs were relevant and achieved most of their objectives. However, it also highlighted challenges in improving and measuring learning outcomes and in achieving program objectives efficiently.

These findings reaffirm that a robust monitoring and evaluation (M&E) mechanism is fundamental to evidence-based program management, to learning from and course-correcting implementation, and to ensuring a program achieves its expected results by the time it closes.

Identifying key facets of good M&E design and establishing more robust standards for assessing M&E aspects in program documents will help partner countries identify appropriate measurements, indicators and targets.

Procurement of goods and services (e.g., to construct classrooms or distribute textbooks) has always been the biggest barrier to program implementation. Closed grants facing procurement problems typically provided capacity-building support to overcome delays, suggesting the need for a more rigorous assessment of procurement capacity and arrangements at the preparation stage.

Alternatively, the most efficient way of procuring goods and services in a partner country can be sought while investing in strengthening procurement capacity over the long term. It’s also important to ensure that lessons learned from a closed grant inform the design of a subsequent grant in the same country, particularly to prevent a recurrence of the same implementation challenges.

Finally, the review reflects on the importance of taking a more holistic approach when assessing program value for money.

Such an approach should consider the effectiveness or cost-effectiveness of interventions chosen, for example by examining:

  • whether there’s evidence of the intervention’s effectiveness or cost-effectiveness in improving outcomes in similar country settings
  • to what extent inputs in the chosen intervention are obtained at least cost
  • how efficiently the interventions are implemented.
