The Difference Between Efficacy and Evidence, and Why It Matters


When it comes to edtech, one type of research places too much emphasis on the platform and not enough on the learner's experience.

Photo: sky view of a campus quadrangle. Credit: Stephen Griffith / Shutterstock © 2018

Last May, the University of Virginia's Jefferson Education Accelerator held a first-of-its-kind event. The goal was to pair entrepreneurs with educators, researchers with venture capitalists, and philanthropists with policymakers for a candid conversation about the role of efficacy research in education technology.

The symposium represented the culmination of a yearlong collaboration through a series of working groups that considered a range of topics, including the impact of efficacy research on procurement and the variables that impact real-world outcomes. Participants shared perspectives on the role of efficacy in the development, adoption, and implementation of educational technologies. But as it turned out, the conversation focused on evidence more than efficacy. There's a difference, and it matters.

Efficacy research is intended to identify differences between treatment and control groups using a randomized controlled trial (RCT) or quasi-experimental design (QED), but it accounts for implementation fidelity only as an afterthought, which leaves it far removed from the needs and desires of users. Edtech research, as a result, tends to focus on how technology works and whether specific tools (e.g., learning management systems, courseware, mobile devices) support or enable a learning experience.

Evidence, on the other hand, is about the experience and outcomes of using those tools and the manner in which technology is implemented. While efficacy asks whether technology can work, evidence asks whether technology will work in a particular context.

Efficacy is the domain of academics and researchers. Evidence, in contrast, shifts the burden to the supply side of the edtech market: the technology vendors themselves and their platforms, philosophies, and business models. Conversations about evidence tend to acknowledge that technology is nothing more than circuit boards and code, and that the use cases and desired outcomes that technology enables should command our attention.

In February 2018, in a shift sparked by its first symposium, the Jefferson Education Accelerator announced an ambitious new initiative, the Jefferson Education Exchange, designed to capture the real-world experiences of educators in an effort to understand and document the multiplicity of variables shaping not just efficacy but also evidence. Its goal is simple: to help educators make better choices about the tools and technologies they use in the classroom. As stakeholders in its success, we applaud this shift in focus because, however well-intentioned, efficacy research places too much emphasis on the platform and not enough on the experience it creates for the learner.

Our role as edtech researchers and practitioners should be to understand the nature of digital environments, how they work, and how they differ from face-to-face experiences. The learning management system, for instance, is a space that students must learn not only to navigate but also to make meaning within. Depending on its design, a digital environment may facilitate radically different experiences and outcomes. We must take this dynamic into account and use our research to help draw distinctions between the learning environments we aspire to create and the techniques or technologies we choose to enable them.

Students may show signs of engagement simply because they are more comfortable with a platform, or because the platform itself is more enticing or easier to use. Researchers, in turn, need to do a better job of sorting out and analyzing the experiences of students across different platforms, even ranking platforms based on their ability to drive student engagement.

To help close the gap between research and practice, pioneering edtech bloggers Phil Hill and Michael Feldstein are launching The Empirical Educator Project, which promotes collaboration between researchers and vendors on projects that "advance the state of the art using the same intellectual tools of inquiry, debate, and peer review that academics apply all the time in their disciplinary work."

Their work, like that of the Jefferson Education Exchange, reflects growing demand for evidence, beyond efficacy research, in classrooms, on campuses, and among policymakers. The US Department of Education recently released "Using Evidence to Strengthen Education Investments" to guide educators and other stakeholders in identifying local needs, selecting evidence-based interventions, planning for and supporting those interventions, and examining and reflecting on how the interventions worked.

Even with such guidance, however, confusion abounds. It is therefore vital that educators, researchers, and entrepreneurs do not allow the line between evidence and efficacy to blur. Efficacy research tends to focus on edtech tools themselves, whereas evidence is needed to demonstrate technology's tangible impact on the learning experience. We would be wise to remember the difference.


Whitney Kilgore is co-founder and Chief Academic Officer of iDesign.

Brian Fleming is Executive Director of the Sandbox ColLABorative at Southern New Hampshire University.

© 2018 Whitney Kilgore and Brian Fleming. The text of this work is licensed under a Creative Commons BY 4.0 International License.