Going to scale is the Holy Grail of donor-funded international education development projects. Scale is seen as both a requisite for and the logical culmination of any “successful” donor- or government-funded education program.
The focus on scale makes sense: Who doesn’t want the greatest number of teachers instructed in the most cost-efficient manner or every child in every region to receive the same high-quality literacy approach? Yet, from what I’ve seen, our notions of scale are grounded in a series of myths—misconceptions about what scale is, how easy it is to do, and what types of interventions should be scaled—that paradoxically serve to undermine the quality outcomes we say we want. Below I explore what I see as the five most common myths associated with “scaling up” in international teacher education projects.
Myth #1: We know what “scale” is
The most common definitions of “going to scale” or “scaling up” typically focus on replication, expansion, and quantification, synonyms that raise a number of questions. If an NGO introduces a reading program in one primary school and later expands it to all schools in a region, does that qualify as scale? Should we scale quality projects that are complex, or simple projects that lack measurable indicators of quality but seem easy to replicate? Are we scaling up, down, or out?
The most comprehensive definition of scale includes such components as depth, spread, shift in ownership, and sustainability (Coburn, 2003). This is a far more complex definition than the simpler ones to which we often adhere in international education projects. Further, its focus on changing behaviors, norms, and beliefs, and on cultivating ownership, implies an intense dedication of effort and resources that is long-term, responsive, and multidimensional.
Myth #2: Our project is worth scaling
In fact, many of our educational projects never undergo the kind of meaningful or rigorous impact evaluation that would determine whether they are indeed worth scaling (1). Two other “worth”-related issues often arise as well. First, acquiring evidence of success in education is challenging because impact and change take years to accrue. Second, in addition to “why,” there is often confusion about “what” exactly we are scaling. Is it the innovation itself (a particular literacy approach)? The program (our particular variation of that approach)? A practice associated with the innovation (guided reading)? Or everything at once? (World Bank, 2003:11) And are all of these “pieces” of the overall literacy approach equally impactful, and how do we know?
Myth #3: We know how to do it
Scaling innovations is hard to do: in part because “scale” itself is so poorly understood; in part because we often don’t know what exactly we are scaling; in part because the notion that a good model alone is sufficient for scaling remains unproven (Carrigg et al., 2005); and in part because ministries of education and NGOs may simply lack the personnel, resources, time, or know-how to carry out and manage everything needed to scale an innovation.
Effective replication often depends on standardizing the context within which a program operates (Bradach, 2003). Yet transplanting a successful initiative to another area is difficult precisely because every context is different. It doesn’t help that we triage many of our initiatives, starting in the “easiest” locations (e.g., capital cities) where technical and organizational capacity exist, and then attempting to scale to far more difficult locations (for example, rural areas) where such capacity is limited or nonexistent. (Call it the “New York, New York” approach to scale: if we can make it there, we can make it anywhere.)
Myth #4: Scale disseminates best practices and standardizes quality implementation
Given the near-fetishization of scale, quantity, rather than quality implementation, becomes an end in itself for many educational projects. Because we so often begin with the goal of going to scale, donors, policymakers, and implementers frequently undermine the very quality we say we seek, draining the complexity and richness from an intervention in order to make it scalable.
As an example of this quantity-quality paradox, one need look no further than the ubiquitous “cascade” (train-the-trainers) approach to teacher training and capacity building. We keep using the cascade approach for teacher professional development despite research showing that it has no impact, despite its extremely low rates of implementation, and despite the fact that it often disseminates not good practice but malpractice (Ono & Ferreira, 2010; Navarro & Verdisco, 2000). In contrast, far more effective, evidence-based approaches, such as coaching (Fixsen et al., 2005; Joyce & Showers, 2002), are often ignored because they are deemed “complex” and expensive.
Education (especially teacher in-service education) is primarily about changing practices and beliefs—particularly complex things to scale. If complexity makes replication difficult, we should ask ourselves a most fundamental question—is the goal of our intervention quality or is it scale? Sometimes the two are incompatible.
Myth #5: Scale is good because more is always better
But in fact, more is often less when it comes to our focus on scale.
The focus on “more” often results in a sort of “Through the Looking Glass” approach to teacher professional development, where inputs (teachers trained) are treated as outcomes, quantity as quality, and training as implementation, and where we expend more energy meeting numerical targets than promoting quality program implementation.
This obsession with scale often leads us to ignore small, successful programs with demonstrated quality impact in favor of large-scale projects with little proven impact. Because so many donor-funded programs work in highly diverse countries, professional development programs must respond to local teacher needs, which may mean they need to stay small to do so effectively. But failure to scale is not equivalent to failure to thrive.
Funders, program designers, and implementers all want to promote reform or effect meaningful change for the greatest number of beneficiaries. But we’ve lost much in the mythology we’ve constructed around scale. Lost in this focus on “more” are the “localized” characteristics that constitute successful implementation: adaptive planning and instruction tailored to the needs of local participants, and ongoing monitoring and support of teachers (Fixsen et al., 2005).
Lost in our obsession with quantification is the harm we do to teacher professional development by focusing on simplistic interventions that often fail to address the most essential and germane teacher needs.
Lost in this unexamined assumption that scale must always be the end result of any intervention is open and honest discourse about the difficulty and complexity associated with human and organizational change and system reform.
And lost in the whole mythology of scale are incentives to modify the design and delivery of international education projects so that we are truly providing the best and most meaningful learning opportunities to the greatest number of people who need them most.
(1) Many international evaluations test a null hypothesis. But the small sample sizes of our studies (relative to the larger population from which the sample is drawn) mean high standard errors and low statistical power, so when such studies do produce statistically significant results (high t values, low p values), those results often exaggerate the true effect and can be misleading.
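The sampling problem in this footnote can be illustrated with a quick simulation. This is a sketch under assumed, hypothetical parameters (sample size, effect size, number of trials are all illustrative, not drawn from any actual evaluation): when the true effect is small and samples are small, the few trials that do reach statistical significance systematically overstate the effect.

```python
# Illustrative simulation (hypothetical parameters): significant results
# from small, underpowered studies overstate a small true effect.
import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # true effect in standard-deviation units (assumed)
N = 20              # small pilot-study sample size (assumed)
TRIALS = 4000       # number of simulated evaluations
T_CRIT = 2.093      # two-tailed 5% critical t value for df = 19

significant_means = []
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(N)  # standard error of the mean
    t = mean / se
    if t > T_CRIT:  # a "significant" positive finding
        significant_means.append(mean)

avg_sig = statistics.mean(significant_means)
share_sig = len(significant_means) / TRIALS
print(f"True effect: {TRUE_EFFECT}")
print(f"Average estimated effect among significant results: {avg_sig:.2f}")
print(f"Share of trials reaching significance: {share_sig:.0%}")
```

In runs like this, only a minority of trials reach significance, and the average estimated effect among those “successes” is well above the true effect, which is exactly why a single small significant evaluation is weak grounds for scaling.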
Bradach, J. L. (2003, Spring). Going to scale: The challenge of replicating social programs. Stanford Social Innovation Review. Retrieved from http://tinyurl.com/kr35rke
Carrigg, F., Honey, M., & Thorpe, R. (2005). Putting local schools behind the wheel of change: The challenge of moving from successful practice to effective policy. In C. Dede, J. P. Honan, & L. C. Peters (Eds.), Scaling up success: Lessons learned from technology-based educational improvement (pp. 1-26).
Coburn, C. E. (2003, August/September). Rethinking scale: Moving beyond numbers to deep and lasting change. Educational Researcher, 32(6), 3-12.
Fixsen, D. L., Naoom, S. F., Blase, K. A., & Friedman, R. F. (2005). Implementation research: A synthesis of the literature. Retrieved from http://tinyurl.com/bn6w5k8
Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: ASCD.
Navarro, J. C., & Verdisco, A. (2000). Teacher training in Latin America: Innovations and trends. Washington, DC: Inter-American Development Bank.
Ono, Y., & Ferreira, J. (2010). A case study of continuing teacher professional development through lesson study in South Africa. South African Journal of Education, 30, 59-74.
World Bank. (2003, June). Scaling-up the impact of good practices in rural development: A working paper to support implementation of the World Bank’s Rural Development Strategy (Report No. 26031). Washington, DC: Author.