I recently read a blog post by Jean-Marc Bernard, “Data Are Not Just to Please Statisticians,” in which he makes a strong case for getting the “data revolution” done.
I was intrigued by many good points he makes and strongly agree that we need to address the lack of education data more vigorously. Let me add a few concrete suggestions about what to do and maybe not do.
I’ll do it in the two main areas he suggests: household surveys and learning assessments.
On household surveys:
Many countries already do surveys, but the data are not always used. So we need to make sure the data can help resolve controversial issues. For example, we need to find out why children don’t complete primary school: is it because they drop out, or because they never start school in the first place? We should create public debate around the whole issue of a “crisis in the foundation years,” including early childhood development.
We should generate less data per survey, but do surveys more often. It would also be useful to include questions on children with disabilities, on school safety, and on the interaction between poverty, ethnicity, and gender. We could add elements or rotate them as we go along, rather than trying to capture everything at once. Some of the household surveys RTI has done are massive, and even I wonder whether all that data get used.
Surveys need to detect overall progress. Sub-national data are useful for local planning purposes and should be managed locally. But what we really need is a picture of overall progress, especially for socially disadvantaged groups. This requires reasonable sample sizes, but not the sizes you need for sub-national surveys.
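To make the sample-size point concrete, here is a minimal sketch of the standard formula for estimating a proportion, with illustrative numbers I have chosen (a ±3 percentage-point margin, 95% confidence, and a design effect of 2 typical of clustered household surveys) — none of these figures come from the post itself:

```python
import math

def sample_size(p=0.5, margin=0.03, z=1.96, deff=2.0):
    """Households needed to estimate a proportion p within
    +/- margin at ~95% confidence (z=1.96), inflated by a
    design effect (deff) to account for cluster sampling."""
    n = (z**2 * p * (1 - p)) / margin**2
    return math.ceil(n * deff)

# One national estimate: a single sample of ~2,100 households.
national = sample_size()

# The same precision in each of, say, 10 regions multiplies
# the total roughly tenfold -- this is why sub-national
# precision drives survey costs up so sharply.
subnational_total = 10 * sample_size()

print(national, subnational_total)
```

This is why a survey aimed at tracking national progress (including for disadvantaged groups, via oversampling) can stay far smaller and cheaper than one that must deliver precise estimates for every district.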
Part of the limitation is cost. While Jean-Marc’s estimate seems about right, we need to work harder to optimize cost in general. We need to convene the experts who’ve done education-oriented household surveys and seek ways to lower the cost per survey so we can do more of them.
Surveys need to be more frequent: Some countries change quickly. Doing a survey every 5-10 years is not enough. Large surveys could be done less frequently, but more specialized ones that are smaller and cheaper should be done more often.
More emphasis on capacity building. We need more local capacity in data collection, data analysis, and the use of data for better-informed decision-making.
Innovate with the use of technology. Mobile phone surveys, for example, have limitations (but so do face-to-face surveys), yet they can be amazingly fast and inexpensive and can produce automatic tabulation. While not every household has a mobile phone, we can still take advantage of their speed and low cost. We could also integrate GIS technology into the survey design, implementation, and even analysis stages.
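The “automatic tabulation” that mobile platforms enable is simple in principle — responses arrive already coded, so counts and shares can be computed the moment data collection closes. A minimal sketch, with hypothetical answer codes of my own invention (not from any real survey):

```python
from collections import Counter

# Hypothetical coded responses to one question ("Why did the
# child not complete primary school?") as they might arrive
# from an SMS/IVR survey platform. Codes are illustrative.
responses = [
    "never_enrolled", "dropped_out", "completed",
    "dropped_out", "never_enrolled", "completed",
    "never_enrolled",
]

def tabulate(answers):
    """Automatic tabulation: count and share per answer code."""
    counts = Counter(answers)
    total = len(answers)
    return {code: (n, round(n / total, 2)) for code, n in counts.items()}

print(tabulate(responses))
```

The point is not the code but the turnaround: where a paper survey needs weeks of data entry before anyone sees a table, a digitally collected survey can report the same breakdowns immediately.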
Linking to Education Management Information Systems (EMIS) and joining forces is key. EMIS are the standard administrative systems that gather data on enrollments and so on directly from schools. EMIS teams can learn to use survey results, and can in turn help develop approaches to sampling.
Repository of data: We need to develop and maintain a repository of information about methodology and results for each survey.
The last couple of points bring up a hugely important issue: what is the institutional base of such surveys? Do we need a loose partnership or, as Jean-Marc suggests, some kind of network, panel, or task force of experts? Ownership and leadership matter, as successful surveys such as the Multiple Indicator Cluster Surveys (MICS), run by UNICEF, and the Demographic and Health Surveys (DHS), funded by USAID, have shown. While loose networks or panels of experts can provide technical support, real action probably requires funding, institutionalization, and leadership. How this is to be sorted out remains an open question.
On learning assessments:
Assessments are key for global analysis and comparison. While learning assessments are not going to fix the problem, it is obvious that without them we won’t know much. We need to encourage local measurement, but experience around the world suggests that local experts derive a lot of support and skill from participating in internationally-comparable, rigorously-constructed assessments.
Common metrics are useful. As noted by Jean-Marc, ACER, in Australia, has proposed methods and approaches whereby one could have common constructs and metrics, even if actual assessments are carried out by different institutions.
Assessments help to improve teaching and learning. While international comparisons are useful, the real purpose of assessments is to support teachers. Let’s look at Latin America: countries there increasingly use assessments to drive classroom practice, teacher coaching, and support, but it was a long process getting there. The actual “transmission belt” between assessment and teaching and learning did not receive sufficient attention at the beginning. Hence, it is important to create successful examples of how assessments can be used — otherwise they may wither or even regress.
It will also be important to avoid a situation where assessments create a “closed loop of measurement” and very narrow forms of support, such as demand for specific textbooks and techniques whose only merit is that they improve performance on the assessments. We should not let assessments create a market for books that merely teach children to “answer” the assessment questions, nor a market for correspondingly narrow forms of teacher training.
Overall, I think it would be worthwhile to support Jean-Marc’s call on both topics. Maybe we should start by getting an expert group together to develop concept notes that can be floated to funders, and to coordinate with already ongoing initiatives in these areas.