This blog was co-authored by Silvia Montoya (UIS), Hetal Thukral and Melissa Chiappetta (USAID), Diego Luna-Bazaldua and Joao Pedro De Azevedo (World Bank), Manuel Cardoso (UNICEF), Rona Bronwin (FCDO), Ramya Vivekanandan (GPE) and Clio Dintilhac (BMGF).
We are in the midst of a global learning crisis: Reports on learning poverty suggest that 7 in 10 children in low- and middle-income countries (LMICs) cannot read with comprehension by age 10. However, in most of the developing world, we can only estimate how many children can read, because we lack reliable data to measure learning outcomes and progress over time.
In 86% of LMICs, we do not know how much learning has been lost to COVID-19 school closures. If we are to measure progress within and across countries, we need data that measures what matters and is reliable and comparable over time.
Thankfully, we are closer than ever to this objective. We have political momentum: Education partners came together recently at the United Nations' Transforming Education Summit to commit to action on foundational learning, including better data on learning. Most importantly, countries now have access to methods to strengthen their assessments and anchor their measurement in expert advice.
Why has the collection of comparable, reliable learning data been so difficult?
Despite the growth of national and international assessments, collecting comparable learning data over time and between countries is no simple feat. This is because most assessments:
- Don't measure what matters: Most assessments do not measure the specific sub-skills that lead to reading with meaning, and they often prioritize content knowledge instead. Measuring sub-skills matters because it allows education actors to identify and target the specific gaps of learners who cannot read with comprehension.
- Are not comparable over time: Many assessments are not designed to be psychometrically comparable over time. Changes in the subject or grade assessed also break comparability.
- Are not comparable between countries: Different countries' assessments test different skills at different grades, and their difficulty levels vary, making it hard to learn from or benchmark against other countries.
Furthermore:
- International assessments may enable comparability, but their coverage is low in low-income and lower-middle-income countries, particularly in the early grades of primary school. Moreover, primary-grade international assessments run in cycles of five to six years, too infrequently to provide timely information and inform decisions. This contrasts with lower-secondary assessments, which take place every three years.
- Learning assessments within donor projects are often restricted to the projects' beneficiaries and timelines, which undermines the sustainability of these efforts.
This explains why data gaps remain so large. For example, 24 countries in sub-Saharan Africa did not report data for the 2022 learning poverty report. The UIS map below highlights these data gaps.