Jutta Treviranus is Professor and Director, Inclusive Design Research Centre and Inclusive Design Institute, at OCAD University.
During times of financial constraint, governmental and educational policy-makers face difficult decisions, and heightened public scrutiny of public spending only intensifies the difficulty. No distribution of limited funds can make everyone happy. A rational solution is to guide and justify difficult decisions using scientific research and evidence.
The legitimization of policy and public commitments through scientific empirical validation is not new. The pendulum between policy steered by ideology and policy driven by rationalist knowledge has swung back and forth through countless transitions of administrations around the globe.1 The current resurgence, or reanimation, of the appeal to evidence-based governance is at least in part fueled by the fervor that accompanies new technologies.2 Big data and data analytics, including learning analytics, have been nicknamed the "power tools" for more responsible governance.3 This new, more proficient, and better-connected means of measuring and monitoring the impact of candidate initiatives has captured the imagination of educational decision-makers at all levels.
Evidence gathered through learning analytics has been recommended both as a way to set spending priorities and as a reasonable gate that must be passed to justify any specific investment of public funds.4 Given the rising popularity of big data, opponents of these data-supported strategies are cast not only as anti-science but also as anti-innovation.
What about the Outliers?
Leaving aside the more general debate regarding empirically driven policy, the proposed approach of making policy decisions with the celebrated "power tools" amplifies serious issues faced by individuals who are outliers and by candidate measures at the margins.5 These concerns cannot be dismissed as side issues, since the outliers and/or margins may collectively outnumber the norm.
Who are these outliers? This considerable group includes anyone not captured under the body of the bell curve. Among the margins are learners who are classified as having a disability, learners who are gifted, and learners who have been termed the "doubly marginalized"—those who are not served by the standard educational system but who also do not qualify for special education.
Traditional or established research methods have always privileged the norm or the majority. Individuals at the margins are frequently eliminated or discounted as "noise" in large data sets. There is an implicit hierarchy of scientific evidence: at its pinnacle sits the well-controlled experiment with a large, representative sample. Although small-sample quantitative and qualitative methods have been reluctantly admitted into the academy, they are viewed with greater skepticism. The yardstick of research findings is statistical power, since it justifies our measures of probability. When measuring impact to support high-stakes decisions, we want a high degree of certainty. At the other end of the scale are fuzziness, variability, instability, and unpredictability—all hallmarks of the margins and the outliers. The outliers are deemed insignificant. Big data and learning analytics have inherited the same yardstick.
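To make the point concrete, consider a minimal sketch, written in Python with entirely hypothetical toy scores, of how a routine preprocessing step (the common 1.5 × IQR "outlier" filter) quietly discards exactly the learners at the margins before any analysis of impact begins.

```python
# A minimal sketch, with hypothetical data, of how a conventional "noise"
# filter removes marginal learners from the evidence base.
import statistics

# Hypothetical per-learner outcome scores: a tight cluster near the norm,
# plus a few learners whose results sit far outside it.
scores = [70, 71, 72, 72, 73, 73, 73, 74, 74, 74, 75, 75, 75, 76,
          31, 99, 27]  # the last three are the outliers

# The common 1.5 * IQR rule for flagging "outliers."
q1, _, q3 = statistics.quantiles(scores, n=4)
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

kept = [s for s in scores if low <= s <= high]
dropped = [s for s in scores if s < low or s > high]

print(f"analyzed: {len(kept)} learners")
print(f"discarded as 'noise': {dropped}")  # the margins vanish from the evidence
```

Nothing in such a filter is malicious; it is simply tuned to protect conclusions about the cluster under the bell curve, at the cost of erasing everyone outside it.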
Evidence to Support Funding Decisions
If evidence regarding impact levels arrived at through big data and learning analytics is used to determine spending priorities, learners at the margins and the diverse programs that support them will never pass the threshold. Outliers are by definition highly diverse, or heterogeneous. The programs or measures that are effective for these individuals are as diverse and variable as the learners themselves. Any candidate measure intended for learners at the margins will serve a comparatively small number of learners and will therefore have a comparatively small impact. As a result, specialized programs for the margins cannot compete with programs suitable for the norm.
Even using empirical evidence of impact as a gate to funding a specific initiative can be problematic when addressing outlying learners. Outcomes are often diffuse and inconsistent. The dominant research methods have never worked for outliers: it is difficult, if not impossible, to find a representative sample, let alone a sample large enough to achieve external validity.
A cogent example is research into the impact of assistive technologies in special education for students with disabilities.6 There is no lack of research in the domain. Several countries have supported the aggregation, review, and dissemination of research findings on this topic. However, these findings have almost no external validity or generalizability. Most of the research is single-subject, within-subject, or anecdotal. Accumulating enough replications to increase the statistical power takes many years, during which conditions may change significantly, threatening the longevity and relevance of the findings.
One of the benefits of the Internet to individuals at the margins is the increased opportunity to find other individuals with common needs or interests. Finding a kindred soul in a small local community may be difficult, but doing so is easier when the pool of possibilities spans the globe. Does the same advantage not apply to big data, which aggregates data points from a much larger pool than any single research study? Unfortunately, to date, the tools and algorithms have not been designed to expose and pool minority effects. Big data has inherited the biases of traditional research. Much like the web's popularity echo-chamber, which amplifies whatever is already popular, big data algorithms intensify the statistical power of the norm.7
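A small illustration, again with invented numbers and invented intervention names, makes the bias visible: when records are ranked by overall average gain, a support that matters enormously to a small group looks weak, yet pooling the very same records by learner group reveals the effect.

```python
# A hypothetical illustration of majority bias in aggregate analytics:
# the overall ranking hides an intervention that works strongly for a
# small group, while pooling by subgroup exposes it.
from collections import defaultdict
from statistics import mean

# Invented records: (learner_group, intervention, outcome_gain)
records = (
    [("typical", "standard_curriculum", g) for g in (5, 6, 5, 7, 6, 5)]
    + [("typical", "screen_reader_support", g) for g in (1, 0, 2, 1, 0, 1, 2, 0)]
    + [("low_vision", "standard_curriculum", g) for g in (0, 1)]
    + [("low_vision", "screen_reader_support", g) for g in (12, 11)]
)

# Aggregate view: rank interventions by overall average gain (norm-weighted).
overall = defaultdict(list)
for _, intervention, gain in records:
    overall[intervention].append(gain)
print({k: round(mean(v), 1) for k, v in overall.items()})
# -> {'standard_curriculum': 4.4, 'screen_reader_support': 3.0}

# Pooled view: the same records, disaggregated by learner group.
by_group = defaultdict(list)
for group, intervention, gain in records:
    by_group[(group, intervention)].append(gain)
for (group, intervention), gains in sorted(by_group.items()):
    print(group, intervention, round(mean(gains), 1))
# -> for the low_vision group, screen_reader_support shows a gain of 11.5
#    that the aggregate ranking renders invisible.
```

The point is not this particular grouping but that the algorithms must be deliberately designed to look for small, strong signals rather than averaging them away.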
The Difference
Beyond the obvious issues associated with gathering supporting evidence for initiatives required by outliers, marginalized learners are also least served by the status quo. Evidence-based governance, on the other hand, is most likely to support the status quo—the "tried and true" or "proven" measures for which it is easier to amass data.
Sharing the limelight with big data and learning analytics is the acknowledgment that we live in transformative times and that our educational system must transform in response. We are no longer living in the Industrial Age; we live in a creative/knowledge/digital/networked economy. Conformity, uniformity, and rote learning are no longer useful values. We need diverse, creative, responsive, collaborative, resourceful, and resilient learners. These values are most easily found at the margins when diverse learners are given personalized support. This is also where innovation thrives.8
Design based on metrics for the norm may be detrimental to the margins, but the reverse does not hold: design based on the margins can benefit the norm. For example, stairs exclude anyone using a wheelchair, but a curb cut in the sidewalk helps everyone get up the curb. Design that encompasses the margins tends to make the world more usable for the whole of humanity.9
At a macro level, the vicious cycle driven by exclusion presents huge risks, not just for the excluded individual but for society as a whole. These risks have been empirically documented by researchers such as Richard Wilkinson and Kate Pickett10 and recognized by the World Economic Forum, which ranks severe economic disparity and exclusion as the greatest global risks (above global warming and terrorism).11
Beyond the Mass
Economies of scale have great appeal; however, the social and environmental costs may outweigh the benefits. Through disruptive practices such as 3D printing, social networks, and digital repositories, our economies are slowly moving away from the mass—mass production, mass marketing, and mass communication. Our educational system can follow suit.
The most promising power tool for responsible educational governance and policymaking is not big data but little data, not certainty but responsive and dynamic instability. We need informed governance for education in general but also informed decisions for each unique and diverse learner. More important, we need informed, self-aware, self-governing learners. We can design our systems to enable learners to discover and refine their understanding of what works best for them in a given situation in pursuit of a given goal, bolstering meta-cognition and making sure each learner learns to learn.12 We have the opportunity to replace education for the masses with one-size-fits-one learning at comparable cost by leveraging open education, connected classrooms, and peer learning.13 Learning analytics, designed for diversity, can provide a dynamic research engine that informs this process. We can dispense with the impossible challenge of finding a representative sample to support external validity, since each learner is self-represented.
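What might that look like in practice? The sketch below, with hypothetical learners, strategies, and scores, treats each learner's own history as the sample: the recommendation is personal, and no appeal to a representative population is needed.

```python
# A hypothetical "little data" sketch: each learner's own record, not a
# population average, drives the recommendation (one-size-fits-one).
from collections import defaultdict
from statistics import mean

# Invented per-learner history: learner -> list of (strategy, outcome).
history = {
    "learner_a": [("video_lecture", 62), ("video_lecture", 60),
                  ("peer_discussion", 81), ("peer_discussion", 85)],
    "learner_b": [("video_lecture", 88), ("peer_discussion", 64),
                  ("video_lecture", 90)],
    "learner_c": [("text_with_screen_reader", 79), ("video_lecture", 42),
                  ("text_with_screen_reader", 83)],
}

def best_strategy_for(learner: str) -> str:
    """Return the strategy with the best average outcome in this learner's
    own history: a sample of one, fully representative of itself."""
    by_strategy = defaultdict(list)
    for strategy, outcome in history[learner]:
        by_strategy[strategy].append(outcome)
    return max(by_strategy, key=lambda s: mean(by_strategy[s]))

for learner in history:
    print(learner, "->", best_strategy_for(learner))
# Different learners get different answers; the analytics serve the learner's
# own meta-cognition rather than a population-level verdict.
```

Real learning analytics designed for diversity would of course be far richer, but the orientation is the same: the individual, not the aggregate, is the unit of evidence.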
Our world is not becoming less complex: the combinatory factors are only increasing. So what makes us think that technology can give us easy answers? If our research tools are working as they should, they can accurately reflect that complexity. These tools can help us navigate and leverage that complexity. In turn, doing so can help us design our education for diversity and inclusion. The margins, the locus of innovation, are not to be dismissed.
Research in inclusive open education was supported by the William and Flora Hewlett Foundation.
Notes
1. Ian Sanderson, "Evaluation, Policy Learning, and Evidence-Based Policy Making," Public Administration, vol. 80, no. 1 (Spring 2002), pp. 1–22.
2. Jeffrey B. Liebman, "Building on Recent Advances in Evidence-Based Policymaking," joint paper by Results for America and the Hamilton Project, April 2013.
3. Thomas Kalil, "Power Tools for Progress," Grantmakers for Effective Organizations Learning Conference, June 6, 2011.
4. Veronica Diaz and Shelli Fowler, "Leadership and Learning Analytics," EDUCAUSE Learning Initiative (ELI) Brief, November 2012.
5. Gerry Zarb, "On the Road to Damascus: First Steps towards Changing the Relations of Disability Research Production," Disability, Handicap, and Society, vol. 7, no. 2 (1992), pp. 125–138.
6. Sandra Alper and Sahoby Raharinirina, "Assistive Technology for Individuals with Disabilities: A Review and Synthesis of the Literature," Journal of Special Education Technology (JSET), vol. 21, no. 2 (2006).
7. J. Treviranus and S. Hockema, "The Value of the Unpopular: Counteracting the Popularity Echo-Chamber on the Web," Science and Technology for Humanity (TIC-STH) IEEE Toronto International Conference, 2009.
8. Ann Meyer and David Rose, "The Future Is in the Margins: The Role of Technology and Disability in Educational Reform," in D. H. Rose, A. Meyer, and C. Hitchcock, eds., The Universally Designed Classroom: Accessible Curriculum and Digital Technologies (Cambridge: Harvard Education Press, 2005), pp. 13–35.
9. Steve Jacobs, "Section 255 of the Telecommunications Act of 1996: Fueling the Creation of New Electronic Curbcuts," 1999, Center for an Accessible Society website.
10. Richard G. Wilkinson and Kate Pickett, The Spirit Level: Why More Equal Societies Almost Always Do Better (London: Allen Lane, 2009).
11. Katy Barnato, "Inequality Threatens 'Global Society': WEF Founder," CNBC, January 21, 2013.
12. Floe Project, Inclusive Learning Design Handbook.
13. Jutta Treviranus, "The Value of Imperfection: The Wabi-Sabi Principle in Aesthetics and Learning," in Open ED 2010 Proceedings, Barcelona, November 2010.
© 2014 Jutta Treviranus. The text of this article is licensed under the Creative Commons Attribution 4.0 International License.
EDUCAUSE Review, vol. 49, no. 1 (January/February 2014)