1, 2, 3 testing: Assessing learning of what, for what, and for whom?
The debate needs to shift from whether we should be assessing learning to what is being tested, what is happening with all the data being collected, and how they can be better used to improve learning, particularly for those at risk of being left behind.
November 30, 2016 by Pauline Rose, Research for Equitable Access and Learning Center
8 minutes read
A student identifies letters during the GPE supported National Learning Assessment in Sudan. Credit: GPE/Kelley Lynch

Heated debates persist on whether or not to include a target for learning in the early grades of school and whether or not to develop a global learning metric as part of the Sustainable Development Goals.

Meanwhile, the reality is that children in all classes in Pakistan (and probably elsewhere) are being tested multiple times a year. The real question then shifts from whether we should be assessing learning to what is being tested, what is happening with all the data being collected, and how they can be better used to improve learning, particularly for those at risk of being left behind.

What is being tested?

In a recent visit to rural schools in Pakistan as part of our Teaching Effectively All Children (TEACh) research, teachers complained that the tests administered on a monthly basis are too difficult for their students.

They consider these to be set at the pace of ‘city children’ rather than of the poor communities their schools serve, where parents themselves have often not been to school. They also expressed concern that large numbers of children are failing on the basis of the questions being asked. As one teacher put it: ‘How can our students be expected to know when the wheel was invented if it isn’t in the syllabus?’

A review of a sample of monthly tests given to children in classes 3-5 supports teachers’ concerns.

One question for class 4 students who are around 9-10 years old is:

When you sacrifice one thing to buy the other, it is called:

a. economic choice          b. economic decision      c. opportunity cost          d. economic services

Perhaps my favorite question for these students is an open-ended question to ‘Define democracy’. It would be interesting to know what scores full marks for this question in different countries at the moment!

We know from ASER Pakistan data that, by grade 4, only around one in five rural students from poor households can read a sentence. It is therefore highly unlikely that those taking the test will be able to read these questions, let alone know what the correct answer is.

A first step is to improve the nature of the assessments so that the questions asked provide useful information on the extent to which children are learning.

How are data from school assessments being used?

In each school visited, the first thing one sees on the outside wall is a noticeboard ranking the school within the local area. One school visited had been ranked 23 out of 25.

Within the school, a chart posted on the head teacher’s wall grades teachers on the basis of their children’s performance in the test and of observations made during monitoring visits.

In one school visited, teachers were graded C or D. When we asked the teachers what support they received to help them improve, it appeared that they received none. It is therefore not apparent what such grading achieves, other than potentially to demotivate teachers who are, in effect, being criticized for teaching children from poor backgrounds.

This is not to say that teachers themselves do not adopt strategies to try to improve their students’ learning. Indeed, in one of the schools visited, teachers had compiled a list of ‘slow learners’ in each class.

They told us that they gathered this information based on the experience of teaching the children, and did so to identify those who needed additional support and to seat them at the front of the class so they would receive more attention.

Using assessment data to improve learning

At a policy dialogue organized by the Institute of Development and Economic Alternatives (IDEAS) in Lahore to discuss our TEACh project, one question posed was how to make sure that all these data being collected for high-stakes purposes can be better used to improve learning for those at most risk.

One starting point is to make the data available in ways that teachers can use to identify who in the class is falling behind, and in what areas they need particular support.

Another proposal is to make sure the data are available in an appropriate form to researchers to enable them to identify where improvements in learning are happening, and what is facilitating this.

To give one example, a question frequently raised is whether strategies to improve the learning of children progressing at a slower pace come at the expense of stronger learners. With the data already available, it should be possible to track this.
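Tracking this with the kind of data already being collected is a short computation: compare average score gains for the two groups of learners over the same period. The sketch below is purely illustrative, with hypothetical student records and group labels; none of the names or numbers come from the TEACh or ASER data.

```python
# Hypothetical records: (student_id, group, score_before, score_after).
# Groups and scores are invented for illustration only.
records = [
    ("s1", "slower", 20, 35),
    ("s2", "slower", 25, 33),
    ("s3", "stronger", 70, 78),
    ("s4", "stronger", 65, 74),
]

def mean_gain(rows, group):
    """Average score gain for one group of learners."""
    gains = [after - before for _, g, before, after in rows if g == group]
    return sum(gains) / len(gains)

# If stronger learners' gains remain positive while slower learners'
# gains rise, the feared trade-off is not showing up in the data.
print(f"slower: {mean_gain(records, 'slower'):+.1f}")
print(f"stronger: {mean_gain(records, 'stronger'):+.1f}")
```

With real assessment data the same comparison could be run per school or per class, which is exactly the kind of question the data already collected could answer.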

At the moment, however, fears that this might be the case are resulting in education systems set at the pace of the strongest learners, leaving many children invisible within the classroom – and at risk of dropping out before they can even read a sentence.

None of this is an argument not to assess children’s learning. Governments will continue to do so in any case. And if collected in the right way, the data provide potentially useful information on how well the education system is working, and for whom.

Instead, the discussion needs to shift from whether to assess to what to assess, and to how to make sure the data are used to improve learning for those at risk rather than to penalize schools and teachers who are often working against the odds to educate children from disadvantaged backgrounds.

Yes, those are fair and amusing points, Pauline. But does that suggest the direction of travel should be towards atomisation of assessment to local, school and classroom levels; or is there any merit in the pursuit of a universal, discriminating (in the good sense!) assessment approach that is comparable across school systems that subscribe to the SDGs?

In reply to by Jake Ross

Hi Jake

In my view, assessment should first and foremost be aimed at informing practice at the school and classroom level - providing feedback to teachers and students, particularly on those who need the most support, with the ultimate aim of raising their learning. However, this does not negate the need for national and internationally comparable accountability, although it does not mean a global assessment infrastructure is needed - there are ways to compare across school systems using robust data collected for national purposes. The UNESCO Institute for Statistics and partners are working on this.

Some points related to this are in my article: Is a global system of international large-scale assessments necessary for tracking progress of a post-2015 learning target? Compare Volume 45, 2015 - Issue 3. http://www.tandfonline.com/doi/abs/10.1080/03057925.2015.1027514?journa…
[Let me know if you are unable to access the article]


In reply to by Pauline Rose

I agree with Pauline. The most critical element of testing is to inform and support teachers. When data are collected at the local level, we unfortunately use them either to label children or to blame teachers, neither of which is of any use.
Recently we used an external assessment of primary students to guide a teacher development programme. The surprise was that only 60% of children in Grade 1 were able to understand place value, and by Grade 2 this declined to a mere 40%, which immediately had an impact on students' ability to tackle basic operations in Grade 2 and above. However, here is the interesting fact: the students could not answer WRITTEN questions correctly, but ORAL testing showed they knew their operations very well and could tackle complex oral word problems requiring the use of multiple operations in the same question.
This makes me seriously question the limited way in which we are using and analysing assessment data!

When it comes to Math Education, "using assessment data to improve learning" amounts, by definition, to conducting Formative Assessment first.
I am hypothesizing (based on my own clinical practice and research) the following 3 LEVELS:

LEVEL 1 : Formative Assessment is not a brief end-of-the-lesson Quiz to "see if the class is doing ok". It is a process teachers use to elicit periodic feedback from students as they teach them. Establishing such a feedback loop on a continuous basis helps teachers adjust their teaching whenever the feedback indicates a dip in student learning, even at the tiniest incremental levels. So, in a sense, the teacher is continuously correcting/adjusting her/his teaching trajectory to minimize any dips in learning at the other end, dips that would otherwise go unnoticed and grow larger as the instruction and engagement proceeds. I found that during a 45-min class time, at least 5 "pauses" are needed to craft such engagements between teacher and student. The assessment data in such cases is therefore fluid and dynamic. It is held in the teacher's head, processed, and then fed back to boost student learning whenever or wherever it dips. By the end of the class, both teachers and students feel that they achieved optimal learning, at least to the extent that such formative assessments make possible.

LEVEL 2 requires that the teacher assesses (say at the end of each month) the cumulative learning that took place during that month of teaching with formative assessments. For this, something like a Quiz needs to be designed that serves three purposes simultaneously:
(a) to assess the pattern of learning-deficits that persists across a cumulative spread of items taught and learned. This data will help the teacher make any additional modifications to formative assessments e.g. if the deficit suggests a particular sub-topic, say, in Math, that's proving to be a little problematic for a majority of learners.
(b) To test the students' ability to stretch their knowledge and understanding into problem-solving areas that fall outside their typical classroom experiences. This is designed to provide practice in elasticity and flexible thinking in order to engage in problem-solving. Students do not just do typical textbook "math problems" as in the classroom; they use math to unravel interesting, teasing, thought-provoking problems that demand tenacious numerical reasoning.
(c) To help orient students for Examination-Readiness (Summative Assessments). The quiz will be a sheet with printed Questions, and the topic-spread will increase incrementally in subsequent quizzes.

In other words the Quiz needs to be designed to show three "Faces":

1. to help adjust/modify Formative Assessment practices even further, in specific areas
2. to orient students to use Math to solve problems by "stretching" their knowledge and understanding of the Math they learned during that month
3. to serve as an introduction to Summative Assessments which contain an increasing "spread" of Topics being tested. Measurable Data can be obtained from these Quizzes to improve Learning Outcomes.

LEVEL 3 : The Summative Assessment (or annual or bi-annual Examination). This builds upon the 2nd Level. A Summative Examination would have to cover all the items covered in the total number of monthly Quizzes over the school year, so its spread would be very extensive. In order to anticipate this, each consecutive Monthly Quiz should include a few Big Idea topics from the earlier Quizzes, so that each Quiz's topic-spread is a little more extended than the preceding Quiz's.

Needless to state, the challenge would be to (i) introduce formative assessment practices in the Math classroom, perhaps using technology, (ii) design a new generation of Quizzes that can generate the three types of data, and (iii) design Summative Assessments that build upon the preceding two Levels seamlessly.

This is something that I am presently engaged in producing for Math Education projects designed to teach rural and culturally deprived populations in Pakistan.

My work on Assessment is focused entirely on local schools' use, which is why it needs to be designed exclusively as a catalytic agent to stimulate better classroom practices and pivot them to boost student learning outcomes. Equally significantly, it is also designed to integrate such practices into the school's eco-system so that teachers may gradually increase their capacity to conduct formative and summative assessments meaningfully, and with a clear purpose.
An important aspect of developing such assessment Levels is to link them directly to the courseware in use, which in turn represents the National Curriculum.

It is very true that assessment scores in most instances are merely used to grade and segregate students into different silos. None of the assessment data is further arranged or analysed to bring value to either the students or teachers within the system. My best friend Ashley teaches 3rd graders at a local convent school two blocks away from home. She has spent about six years with the institution now and says she has found it very difficult to help students succeed, as she is not able to pinpoint exactly what keeps them from getting better. One instance she recounted was a math problem she included in a class test, which just 6 out of 30 children understood. When she saw that most failed to understand what was asked, she read out the same question and provided an anecdote, after which almost everyone in the class recited the correct answer aloud. The same question, recited rather than written, was fully understood by the students, which left her asking: does that mean those who do poorly in written tests are actually capable of scoring well in oral tests? Is it the complexity of our conventional methods that hinders their progress within the classroom?

