A Question of Blended Learning: Treatment Effect or Boundary Object?

Two oppositional perspectives bound the domain for research in blended learning. To make progress, the research community will need to investigate the success or failure of blended practices with much greater clarity.

Blended learning is one of the most discussed and researched instructional modalities in higher education. The topic appears on the programs of most educational technology meetings, and the Sloan Consortium (now the Online Learning Consortium) sponsored the Blended Learning Conference and Workshop for several years. The blended component was integrated into the OLC Innovate conference and also appears frequently in the fall OLC Accelerate meeting. In addition, the EDUCAUSE Learning Initiative (ELI), the AACE's E-Learn, and other educational technology meetings feature blended learning. Why does this interest show no sign of diminishing? We believe it is because blended learning enhances flexibility for students and faculty, reduces infrastructure needs, and provides viable alternatives for students who are facing academic challenges. For a variety of reasons, a post-COVID-19 higher education landscape may rely even more heavily on blended learning, so a better conceptual understanding of it is much needed.

One enduring artifact of the online learning movement has been the comparison of student learning (or its surrogate, student success) across multiple course modalities. These studies attempt to parse the variance in student knowledge attainment by course configuration, as in the now-famous online and face-to-face comparisons. The collage of studies ranges over descriptive, predictive, probabilistic, data-mining, and hypothesis-testing approaches in which course modality is treated as a treatment effect, often quantified with an effect size.

While the range of studies has remained constant, there has been a dramatic shift in how researchers analyze and interpret blended learning data. For many of us who were trained in the methods of sampling, estimation, and hypothesis testing, our world has been altered by big data with its modeling, prediction, small-pattern identification, and machine learning. As we strive to understand methods that place additional responsibility on us to make informed interpretations, we face an uncertain middle ground in which decisions must be made despite the ambiguity of our findings. This change is important in fields such as behavioral economics, game theory, artificial intelligence, philosophy, and other disciplines encompassing multiple perspectives on how we transform data into information and then into insight and, finally, into action. However, hypothesis testing maintains a strong place in our research paradigm.

Two Perspectives on Blended Learning

As popular and influential as blended learning has been, it faces several challenges, many of which converge on just what it is. The definition problem has been ubiquitous, creating multiple framing approaches—some complementary and others conflicting. For instance, consider two perspectives at the extremes of the definitional spectrum.

In 2005, Martin Oliver and Keith Trigwell stated:

The term "blended learning" is ill-defined and inconsistently used. Whilst its popularity is increasing, its clarity in not. Under any current definition, it is either incoherent or redundant as a concept. Building a tradition of research around the term becomes an impossible project, since without common conception of its meaning, there can be no coherent way of synthesizing the findings of studies, let alone developing a consistent theoretical framework with which to interpret data.1

On the other hand, Rhona Sharpe and her colleagues view blending quite differently:

Some institutions have developed their own language, definitions or typologies to describe their blended practices. We suggest that this poor definition may be a strength and part of the reason why the term is being accepted. The lack of definition allows institutions to adapt and use the term as they see fit, and to develop ownership of it.2

These oppositional perspectives bound the domain for research in blended learning. Can it be considered a treatment effect by which we can accurately index its impact, or is it a generally conceived construct with multiple interpretations?

What Is a Treatment Effect?

Frequently, colleges and universities implement blended learning initiatives to improve outcomes. If we consider the blended modality a "treatment" that improves learning, we should see an average effect across multiple studies. This average treatment effect can be measured as the gain scores that the treatment (blended learning) group achieves relative to the control or comparison group (e.g., purely classroom or purely online instruction). Here, we might ask two questions:

  • How much better do blended learning students do on individual tests or cumulative grades relative to students who don't participate in blended learning?
  • Is the learning improvement seen in blended learning an effect of the treatment?

This assumes that there is "fidelity" in the treatment across studies. Another way of saying this is that all students received the same "medicine," in the same dose, and that the effect of this standardized treatment is visible from the design of the study.
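
To make the arithmetic concrete, here is a minimal sketch of how a two-group comparison of this kind is typically computed. The gain scores below are hypothetical, invented purely for illustration; the calculation itself (a difference in mean gains, standardized as Cohen's d) is the standard one.

```python
import numpy as np

# Hypothetical gain scores (post-test minus pre-test) for two sections.
# These numbers are invented for illustration, not taken from any cited study.
blended = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 13.0, 16.0, 10.0])      # "treatment"
face_to_face = np.array([10.0, 11.0, 8.0, 12.0, 9.0, 10.0, 13.0, 9.0])   # "control"

# Average treatment effect: the difference in mean gain scores.
ate = blended.mean() - face_to_face.mean()

# Cohen's d: the same difference standardized by the pooled standard deviation,
# which is how meta-analyses place heterogeneous studies on a common scale.
n1, n2 = len(blended), len(face_to_face)
pooled_var = ((n1 - 1) * blended.var(ddof=1) +
              (n2 - 1) * face_to_face.var(ddof=1)) / (n1 + n2 - 2)
cohens_d = ate / np.sqrt(pooled_var)

print(f"ATE = {ate:.2f} points; Cohen's d = {cohens_d:.2f}")
```

The computation is trivial; the interpretive burden lies in whether "blended" named the same treatment in both groups.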

Consider the following perspective:

The term treatment effect refers to the causal effect of a binary (0-1) variable on an outcome of scientific or policy interest [and here, educational interest]. Economics examples include the effects of governmental programs and policies, such as those that subsidize training for disadvantaged workers, and the effects of individual choices on things like college attendance. The principal economic problem in the estimation of treatment effects is selection bias, which arises from the fact that the treated individuals differ from the non-treated for reasons other than the treatment status, per se.3

Therefore, a treatment must be well defined, both so that it can be replicated in other contexts and so that its effect is not muddied by different circumstances in those contexts. Ideally, this can be done through random selection and random assignment of subjects to treatment and control groups. If we have a large enough pool from which to select subjects, then we can be assured that the "active ingredients" in the treatment (blending) are the cause of the improvement in learning rather than pre-existing differences among students in knowledge or ability.
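
The quoted point about selection bias can be made concrete with a small simulation. Everything here is assumed for illustration: a hypothetical true effect of the blend, a hypothetical ability distribution, and a self-selection rule in which stronger students are more likely to opt into the blended section.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = 2.0                      # the hypothetical "active ingredient"
ability = rng.normal(50, 10, n)        # pre-existing differences among students

# Self-selection: the probability of choosing the blend rises with ability.
opted_in = rng.random(n) < 1 / (1 + np.exp(-(ability - 50) / 5))
scores = ability + true_effect * opted_in + rng.normal(0, 5, n)
naive = scores[opted_in].mean() - scores[~opted_in].mean()

# Random assignment: treatment status is independent of ability.
assigned = rng.random(n) < 0.5
scores_r = ability + true_effect * assigned + rng.normal(0, 5, n)
randomized = scores_r[assigned].mean() - scores_r[~assigned].mean()

print(f"True effect:          {true_effect:.2f}")
print(f"Self-selected groups: {naive:.2f}  (inflated by selection bias)")
print(f"Random assignment:    {randomized:.2f}")
```

The self-selected comparison attributes the stronger students' pre-existing advantage to the blend; random assignment recovers something close to the true effect.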

One issue with the blended learning research literature is that these conditions are not present in many studies. Although these assumptions are difficult to achieve, there are excellent meta-analytic studies designed to mitigate the problems of attributing learning gains to blended learning.4
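
As a rough sketch of how such meta-analyses pool results, the standard random-effects (DerSimonian-Laird) estimator combines per-study effect sizes while explicitly estimating between-study heterogeneity; the effect sizes and variances below are hypothetical placeholders, not values from the cited studies.

```python
import numpy as np

# Hypothetical standardized effect sizes (e.g., Cohen's d) and sampling
# variances from k independent blended-versus-traditional studies.
d = np.array([0.35, 0.10, 0.55, 0.20, 0.40])
v = np.array([0.02, 0.03, 0.05, 0.02, 0.04])

# Fixed-effect pooling with inverse-variance weights.
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2), which
# absorbs heterogeneity in how "the blend" was actually implemented.
q = np.sum(w * (d - d_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(d) - 1)) / c)

# Random-effects pooling: each study's variance is widened by tau^2.
w_re = 1.0 / (v + tau2)
d_re = np.sum(w_re * d) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Pooled d = {d_re:.2f} +/- {1.96 * se_re:.2f} (95% CI), tau^2 = {tau2:.3f}")
```

A large tau^2 is, in effect, the statistical fingerprint of the definitional problem discussed below: the studies may not be estimating one common treatment.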

What Is a Boundary Object?

Anders Norberg, Charles Dziuban, and Patsy Moskal address boundary objects as follows:

Blended learning has shown itself to be a problematic term. What is it that is blended? What kind of blend is it? Does blended learning seem ad-hoc, using a combination of traditional components as the blend? Would it be preferable for a blend to transcend the elements that formulated it? The current emphasis on blended teaching and learning has evolved into what Susan Leigh Star terms a "boundary object" (Bowker and Star, 1999). Those objects are ideas, things, theories or conceptions that resonate and hold together a large community of practice where each member has some intellectual or emotional investment in the idea. Interestingly, when members of this community assemble, the separate constituencies tend to differ on the object's definition and application. Boundary objects are malleable enough to satisfy the needs of the individual constituencies, but cohesive enough to hold the larger community of practice together. They tend to support what Johnson (2010) terms "liquid networks." Boundary objects are generally constructed in the larger common community, but much more precisely developed by the individual constituencies. The advantage of boundary objects is their ability to maintain the interaction among several separate communities of practice. In many respects, blended learning is a prototype boundary object, pulling together faculty members, students, administrators, instructional designers, chief information officers, librarians, evaluators and journalists in an extensive liquid network (our addition). Each one has a somewhat different definition and agenda for the concept but together they subscribe to the generalized notion of blended learning and participate in continuing developmental conversation.5

A Blend by Any Other Name

By most educators' definitions, blended learning includes face-to-face, live instruction combined with some online instructional components.6 While some colleges and universities have clear mandates for the percentage of time allowed online versus face-to-face for instruction in order to be called "blended," others are less prescriptive with their requirements. Table 1 provides a summary of face-to-face and online instruction in a number of blended situations, taken from a meta-analysis of course designs by Barbara Means and her colleagues.

Table 1. Face-to-face and blended comparison model examples

| Face-to-Face Time | Online Time |
|---|---|
| 50 minutes live, twice weekly | Tutorial assignment |
| Writing instruction | Discussion board; communication, materials, assignments |
| Traditional class time | Instruction modules |
| 50 minutes live, twice weekly | 50-minute weekly lab |
| Live instruction | Students who chose class website |
| Computer lab—pairs shared computers | Assignments via email |
| Face-to-face instruction | Six 1-hour web sessions in lab |
| First class is live to introduce class site, tutorial | Online curriculum beyond text |
| Live course section | Six 45-minute electronic field trips |
| Three-hour interactive presentation | Emailed weekly cases |
| Standard class schedule in live lab | Blackboard, digital tables |
| Live instruction | Online supplemental learning |
| Live classes regularly | Email communication with partner |
| Weekly live lecture | Discussion boards, WebCT software |
| Live instruction | Virtual experiments |
Source: Barbara Means, Yukie Toyama, Robert Murphy, and Marianne Baki, "The Effectiveness of Online and Blended Learning: A Meta-Analysis of the Empirical Literature" [https://archive.sri.com/work/publications/effectiveness-online-and-blended-learning-meta-analysis-empirical-literature], Teachers College Record 115, no. 3 (2013); Chuck Dziuban, Patsy Moskal, Andrea Hermsdorfer, Genny DeCantis, Anders Norberg, and George Bradford, "A Deconstruction of Blended Learning," presentation at the 11th annual Sloan-C Blended Learning Conference and Workshop, Denver, CO, July 2014.

The wide variety of course designs makes comparisons in the sense of a treatment difficult. While the meta-analytic articles referenced here are sophisticated, extremely well done, and prototypes for how to perform this work well, the course blending in the studies they reference is varied in the extreme. And in many cases, the courses seem to be pedagogically non-analogous. Instructors incorporated different course designs, technologies, instructional arrangements, communication protocols, time frames, and implicit contexts; as a result, the similarities in their blending dissipate very quickly with the increasing granularity of their instructional design. The fidelity of the blended learning treatment across the studies is, thus, questionable. One might ask, "Is this the same medicine treating the same issue when the active ingredient appears to be consistently different?"

In the Final Analysis

In the final analysis, there may be no final analysis, but there is also no question that blended learning reframes our thinking about education. We recognize that combining face-to-face and online modalities can enhance learning opportunities. However, designating blended learning as a treatment effect in the sense of an experimental or even quasi-experimental study seems to be a stretch. Blended learning as a category with well-defined and crisp boundaries doesn't pass the precision test. The edges are fuzzy. On this point, we agree with Oliver and Trigwell: There doesn't appear to be a consensus prototype of a blended learning course other than some combination of face-to-face and online, with the inherent ambiguity therein.

Perhaps blended learning is best considered an evolving process. Instructors change their blends during a semester, or from year to year, depending on several factors. One semester the blend works well—the next semester, much less effectively. So how does one answer the question, "Does blended learning work?" The question creates a recurring problem because of blended learning's complexity and emergent properties where the whole is more than the sum of its parts. Courses are complex systems in which intervention impacts can rarely be anticipated with any degree of accuracy or consistency. No matter what modality a course is designated, that designation is never free from mediating and confounding factors, making it problematic to attribute learning gains to the format alone. Influences from multiple environments that have little to do with blended learning impact how students learn.

Class boundaries in contemporary higher education are flexible, fluid, and in a state of constant change. Clay Shirky, in his 2009 TED talk "How Social Media Can Make History," portrays a transformation of the world's informational ecosystem as the largest increase in expressive capability in human history.7 What we see in Shirky's thinking accurately fits the transformation of the "class" in higher education. A great deal of spontaneous learning goes on outside courses, muddying any attempt to attribute learning gains to modality, and there is little we can do to assess it because the network is unpredictable. Sampling bias is real, and cause-and-effect conclusions are difficult to establish.

To make progress, the research community will need to investigate those conditions under which "the blend" is successful—and those in which it is not—with much greater clarity. Only then might we arrive at a well-defined treatment effect that we can call blended learning. Of course, the other possibility is that by being a boundary object, blended learning displays flexibility and responsiveness that will accommodate many more educational contexts.

Notes

  1. Martin Oliver and Keith Trigwell, "Can 'Blended Learning' Be Redeemed?" E-Learning and Digital Media 2, no. 1 (2005), p. 24.
  2. Rhona J. Sharpe, Greg Benfield, George Roberts, and Richard Francis, "The Undergraduate Experience of Blended E-Learning: A Review of UK Literature and Practice," The Higher Education Academy (October 2006), p. 4.
  3. Joshua D. Angrist, "Treatment Effect," in The New Palgrave Dictionary of Economics (New York: Springer, 2016), p. 1.
  4. See, e.g., Barbara Means, Yukie Toyama, Robert Murphy, and Marianne Baki, "The Effectiveness of Online and Blended Learning: A Meta-Analysis of the Empirical Literature" [https://archive.sri.com/work/publications/effectiveness-online-and-blended-learning-meta-analysis-empirical-literature], Teachers College Record 115, no. 3 (2013); and Robert M. Bernard, Eugene Borokhovski, Richard F. Schmid, Rana M. Tamim, and Philip C. Abrami, "A Meta-Analysis of Blended Learning and Technology Use in Higher Education: From the General to the Applied," Journal of Computing in Higher Education 26, no. 1 (2014).
  5. Anders Norberg, Chuck Dziuban, and Patsy D. Moskal, "A Time-Based Blended Learning Model," On The Horizon 19, no. 3 (August 2011), p. 209.
  6. See, e.g., Charles R. Graham, "Blended Learning Models," in Mehdi Khosrow-Pour, ed., Encyclopedia of Information Science and Technology, 2nd ed. (Hershey, PA: IGI Global, 2008); Charles R. Graham, "Emerging Practice and Research in Blended Learning," in Michael Grahame Moore, ed., Handbook of Distance Education, 3rd ed. (New York, NY: Routledge, 2012); and Michael C. Johnson and Charles R. Graham, "Current Status and Future Directions of Blended Learning Models," in Mehdi Khosrow-Pour, ed., Encyclopedia of Information Science and Technology, 3rd ed. (Hershey, PA: IGI Global, 2015).
  7. Clay Shirky, "How Social Media Can Make History," TED@State Conference, Washington, DC, June 2009.

Charles D. Dziuban is Director of the Research Initiative for Teaching Effectiveness and a Professor Emeritus and Inaugural Pegasus Professor at the University of Central Florida.

Peter Shea is the Associate Provost for Online Learning and a Professor in the School of Education at the State University of New York at Albany.

Patsy D. Moskal is Director of the Digital Learning Impact Evaluation and Associate Director of the Research Initiative for Teaching Effectiveness at the University of Central Florida.

© 2020 Charles D. Dziuban, Peter Shea, and Patsy D. Moskal. The text of this work is licensed under a Creative Commons BY 4.0 International License.