There has been a surge of interest in “system diagnostics” among those seeking to improve education in poor countries. The interest is seemingly driven by recognition that numerous efforts to improve learning have not had the desired impact and that there is limited understanding of why this is the case.
Advocates of ‘system thinking’ argue that developing a better understanding of the education sector will help foster a transformative reform agenda, targeting interventions at key challenges and increasing the likelihood that they will lead to sustained improvements in learning outcomes.
In developing countries, education sector plans embody the sector reform agenda, financed mainly from domestic resources and supplemented by international aid.
In theory, these sector plans are developed in an inclusive manner to set the stage for government-led collective action, the mobilization of domestic and international financing, and the alignment of all stakeholders’ support to the education sector.
In practice, the picture is more nuanced. Country-level evaluations recently conducted by GPE highlight that while the overall quality of education sector plans continues to improve, the link between planning and implementation remains to be strengthened.
Nevertheless, sector plans are important tools to improve accountability and have potential to influence gains in student learning.
In 2016, DFID commissioned Moira Faul to scan available ‘education system diagnostics’ in the global education space. No tools existed at the time that truly served as a “system diagnostic”, in the sense of seeking to understand how a particular system works and why. 1
This was not entirely surprising; the use of the phrase ‘education systems’ was then nascent. It has since become common, though it means very different things to different people.
What is a ‘system diagnostic’?
Terminology matters, and there is currently a lack of coherence about what a system diagnostic is and what it should entail. This has led to a proliferation of differing, and sometimes competing, approaches.
First, there are different understandings of the scope of a diagnostic. For some, a “diagnostic” refers to a description of symptoms, whether in the form of a narrative (such as sector performance reviews or reports that document policy intent only) or a collection of statistics.
Others apply a medical analogy, defining a diagnosis as a step further: the process of determining which condition explains those symptoms. We subscribe to this view: diagnostics (whether system-level or used within a system) must ask why. This differentiates diagnostic tools from education sector assessments or descriptive analytical tools.
Second, there are different uses of the term system. For some, the term is a synonym for “sector.” We (and others) use the term in a more technical sense, defining a system as a set of parts that perform some collective function.
Applied to education, this means that realizing learning outcomes depends on the contributions of many actors: students, teachers, parents, local communities, school administrators and ministry officials among others.
No one actor or input can affect learning outcomes alone because that outcome depends on the ways the actors in that system interact with one another.
We therefore define system diagnostics as a holistic consideration of the inputs and resources of a system (infrastructure, textbooks, information, finance), the relationships within the system (actors and institutions), the functions of the system (in theory and in practice), and the politics and feedback loops within the system (see also DFID Education Policy 2018).
A third difference relates to the use of a diagnostic. While it is important to understand a system as it is, for actors seeking to influence change in education outcomes, this is not sufficient.
Thus, a diagnostic cannot just be an explanation of the root causes of the identified symptoms. It must also pave the way for remediation through informed decision-making, targeted prioritization and, most importantly, collective action.
The goal of a system diagnostic is to prompt a new mindset and bridge the gap between education analysis, policy intent and implementation reality. We want to ensure that system diagnosis links an understanding of what is feasible (politically and in terms of capacity) with the best available evidence, so that governments can make informed choices and prioritize. We are enthusiastic about supporting a refined design-to-implementation approach in education that follows the model of diagnose – test – learn – adapt.
Why might a system diagnostic be useful?
Our hypothesis at the outset was that better understanding of the system would lead to more contextually appropriate, politically feasible, institutionally possible application of the best available evidence to produce reform agendas, education sector plans, strategies and interventions that could have impact for all children, at scale.
While the 2016 report found system diagnostic tools in other sectors, none had been tracked for evidence of attitudinal or behavioral change, so no such evidence was available from those sectors.
What system diagnostics exist?
In 2018, GPE and DFID co-funded Moira Faul to write a follow-up report to assess progress since 2016 and map the three analytical tools described in that report, so that governments around the world could make informed choices about which to use when seeking to understand their sector better.
This second report was finished in October 2019 and is available upon request from the authors (see details below). The global debate has moved a fair amount since then, perhaps in part prompted by some of the questions we were raising.
New tools and approaches are being developed and tested both by major global education actors and by practitioners in-country. Some tools are presented as education system diagnostics but do not fit our definition; they are better described as diagnostics used within education systems, or as sector analytical (not diagnostic) tools. Others are explicitly not being associated with the ‘system diagnostic’ trend; perhaps this phrase is already losing traction.
The country feedback that Moira collected – from a small sample (5–8 respondents per tool; 6 in total) – is, to us, the most interesting part of her report.
So there remains value in exploring this space.
Over the last few years there has been an increase in the availability of robust evidence on what works in education. Our goal, including in the context of reflection and consultation on GPE’s next strategic plan, is to explore how to better support countries so that funding – domestic and external resources – and delivery capacities are directed to evidence-based, context-appropriate interventions with the greatest potential for impact.
To date this discussion has taken place in the global education space, among global actors. A next major and essential step is to test the hypothesis and the new tools and approaches in-country, from diagnosis through policy design to implementation.
The report Education System Diagnostics: What is an ‘education system diagnostic’, why might it be useful, and what currently exists? is available upon request. To obtain a soft copy, please contact: Raphaelle Martinez or Laura Savage
- Faul, Moira. 2016. Scoping Study: Education System Diagnostics Tools. HEART.