Like most other enterprises, academia is on a quest to leverage data to improve outputs and outcomes. At their core, academic enterprises are focused on advancing knowledge in society and transforming society through their outputs (e.g., the students they produce, the research they generate, and the interactions they cultivate with communities both local and global). Data management and analytics can significantly increase the odds that a higher education institution will deliver on its goals in an optimal manner.
Today, colleges and universities are collecting data on just about all facets of the academic ecosystem. For instance, data is being collected on students — not only regarding how they perform on a given course but regarding all aspects of student life: housing, finance, social activities (e.g., participation in student organizations, attendance at sporting events, gym membership). In addition, data is being collected on the research performance/productivity of faculty: the number of grant applications submitted, research awards received, publications and patents produced. Academic enterprises also collect an enormous amount of data on their donors and alumni. And finally, data is generated from institutional general operations (e.g., data on buildings, energy systems, human resources). Historically, colleges and universities have used this data for simple transaction-processing purposes such as invoicing, resource-allocation decisions, and budgeting. However, they are becoming much more sophisticated in their usage of data through predictive analytics.
Interest in using data more creatively (some might say, more innovatively) as a way to become more precise in how interventions are devised to improve outputs and outcomes is at an all-time high. And rightly so: precision allows academic enterprises to get near real-time situational awareness on agents (e.g., students, faculty, researcher teams) and objects (e.g., buildings, systems) that are of interest to them. Data collected can also be contextualized both temporally (i.e., against historic performance and future trends) and spatially (i.e., across units and enterprises). Precision is executed through predictive analytics in the mining of large swaths of data — or big data — to spot trends and probabilities and thus predict future behaviors. With predictive analytics, colleges and universities are able to “nudge” individuals (predominantly students) toward making better decisions and exercising rational behavior to enhance their probabilities of success.
The concept behind nudging and nudge theory centers on prompting individuals to modify their behavior in a predictable way (usually to make wiser decisions) without coercing them, forbidding actions, or changing consequences. Nudging has been around for some time but became popularized with the 2008 book Nudge: Improving Decisions about Health, Wealth, and Happiness, by Richard H. Thaler and Cass R. Sunstein. The book extolled the potential of nudging to increase the effectiveness of government by understanding the psychological or neurological biases that cause people to make choices that are not in their best interest (e.g., driving without a seatbelt, smoking, doing drugs, not studying, cheating on taxes). Later researchers followed up on this work by providing an organizational framework that categorized nudging along four dimensions (see table 1).1
Boosting Self-Control vs. Activating a Desired Behavior: assisting in follow-through with decisions vs. influencing a decision that an individual is indifferent or inattentive to

Externally Imposed vs. Self-Imposed: acts that are deemed important and voluntarily adopted vs. acts not requiring people to voluntarily seek them out

Mindful vs. Mindless: guiding toward a more controlled state that helps people follow through on acts they would like to accomplish but have trouble enacting vs. using emotion, framing, or anchoring to sway the decisions that people make

Encourage vs. Discourage: facilitating a particular behavior vs. hindering or preventing behavior that is believed to be undesirable

Table 1. Four Dimensions of Nudging
Research has shown that nudging is effective in steering individuals toward better decisions through choice architecture: the presentation of choices in different ways.2 Choice architecture doesn’t look for individuals to act more rationally; instead, it seeks to create environments that accord with rational decision-making. We frequently see (without noticing) nudging in our daily lives: electronic highway signs that say “Drive Drunk and Get Nailed” to deter drunk driving; modified food displays that bring healthier food to eye level and make junk food harder to reach; organ donation set as an opt-out policy on driver’s licenses instead of an opt-in policy.
In academia, data collected from and about students can signal how they are performing at any given time, which can be integrated with other course performance data (i.e., the courses that students are enrolled in at the same time) to see whether students either are not performing well in just one class or are having a challenging time across all (or a majority) of the courses in which they are enrolled. This information can then be shared with faculty and advisors. It can also be contextualized further if linked to other data sources such as meal habits, which can show that a student has gone from eating three meals a day on campus to just one, or number of gym visits, which may show a change in activity. In the purest sense, we could have multiple views of a student (this is not that difficult to do, since most institutions link these activities to the ID cards that students carry) with all data integrated, thereby giving us a more holistic view of the individual. Now imagine this at scale, and we have the ability to see data more comprehensively across the entire student population and to perform comparisons across all groups (e.g., freshmen versus seniors, engineering students versus business students).
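Because these records are typically keyed to the same student ID card, joining them into one holistic view is straightforward in principle. A minimal sketch, assuming hypothetical field names and toy records:

```python
from collections import defaultdict

# Hypothetical per-source records, each keyed by a student ID
# (field names and values here are illustrative, not real data).
grades = {"S001": {"MATH101": 72, "ENG102": 88}, "S002": {"MATH101": 91}}
meals_per_day = {"S001": 1, "S002": 3}   # recent average daily meal swipes
gym_visits = {"S001": 0, "S002": 4}      # visits in the past week

def build_profiles(*sources):
    """Merge several named {student_id: data} maps into one profile per student."""
    profiles = defaultdict(dict)
    for name, source in sources:
        for sid, value in source.items():
            profiles[sid][name] = value
    return dict(profiles)

profiles = build_profiles(
    ("grades", grades),
    ("meals_per_day", meals_per_day),
    ("gym_visits", gym_visits),
)
print(profiles["S001"])
```

In practice these joins would run against institutional systems of record, at the scale of an entire student population and with appropriate privacy controls.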
We then have the possibility to leverage experimentation to test the effects of various types of interventions. What happens if we have a different instructor in one class versus another? Does that improve the odds of interest in a given major? What happens if we intervene early to assist students struggling with writing or math courses? Does that increase their chance of success in the long run? We might even ask how to identify early signs of poor well-being (e.g., a student who has never used the gym facilities or one who, based on meal-plan data, is not getting the right diet). Yes, this last example is a bit unusual, but we will be able to ask these types of questions because the data is there. For each of the above questions, we could try several interventions to see what works and what does not. We can collect data, analyze it, identify significant effects, and then implement evidence-based nudging strategies.
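Checking whether an intervention's effect is statistically significant can be as simple as comparing pass rates between an intervention group and a control group. A minimal sketch using a two-proportion z-test, with made-up numbers:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two proportions (pooled SE)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: early tutoring for students struggling in math.
# Control: 120 of 200 pass (60%); intervention: 150 of 200 pass (75%).
z = two_proportion_z(120, 200, 150, 200)
print(f"z = {z:.2f}; significant at the 5% level: {abs(z) > 1.96}")
```

A significant z score alone does not establish causation; the design of the experiment matters as much as the statistic.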
But things are seldom this simple in life, and with academic institutions, things are rarely ever this simple. With nudging, several additional issues need to be considered.
Shoving and Smacking
The business and science of changing behaviors have received mixed reviews. To be clear, we should note that nudging, by itself, is generally harmless and noncoercive. However, efforts to move students toward “rational” behaviors — efforts that are often seen as a nudge — can turn questionable when they become more coercive, restrictive, or punitive. At this point, the nudge becomes a shove or a smack. A shove (a much more deliberate nudge) works by making certain desires tougher to achieve, such as increasing taxes on cigarettes or requiring numerous approval procedures before students can live off-campus. A smack works by directly restricting activities, such as banning smoking in public places or prohibiting off-campus living.
Consider the public health issue of smoking. More than fifty years after the U.S. Surgeon General’s landmark report on the health risks of smoking cigarettes, the goal of reducing the number of people who smoke is still a challenge in the United States and in most other countries as well. Up until about the 1990s, smokers could smoke just about anywhere: on airplanes, in office buildings, in restaurants, in colleges and universities, and in bathrooms. As public health officials intensified their efforts to decrease smoking, they began nudging the public, using nudges such as financial incentives or commercials showing the severe effects of smoking. Though effective in some groups, these nudges weren’t enough to change behaviors en masse. As a result, some anti-smoking groups resorted to shoves and smacks.
For instance, policymakers began shoving smokers into new behaviors by implementing higher state excise taxes on cigarettes. With financial disincentives, this practice moves away from nudging and into a paternalistic and coercive type of behavior modification. But according to the Campaign for Tobacco-Free Kids, the shoving had an effect: increasing excise taxes has helped reduce smoking, especially among children. For every 10 percent increase in the price of cigarettes, overall cigarette consumption declines 3–5 percent and consumption by kids declines 6–7 percent.3 Policymakers then went a step further and began to smack smokers, through the restriction of smoking in public places. The actual elimination of choice is coercive, even if effective.
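The cited figures amount to a price elasticity of demand of roughly -0.3 to -0.5 overall. A quick arithmetic sketch, with illustrative baseline numbers:

```python
def projected_consumption(baseline, price_increase_pct, elasticity):
    """Constant-elasticity estimate: the percentage change in consumption
    is elasticity times the percentage change in price."""
    return baseline * (1 + elasticity * price_increase_pct / 100)

# A 10% price increase at elasticity -0.4 (midpoint of the cited 3-5%
# overall decline per 10% increase), applied to a baseline of 1,000 packs:
print(projected_consumption(1000, 10, -0.4))
```

Under these assumptions, a 10 percent price increase projects to roughly a 4 percent decline in consumption, consistent with the Campaign for Tobacco-Free Kids figures quoted above.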
The differences between nudging, shoving, and smacking are extremely important in higher education as colleges and universities make their foray into the realm of predictive analytics and automation. For a number of reasons, the effect of nudging on students and academics can be the opposite of what is intended. The first, and perhaps most obvious, reason is the question of who decides what an “appropriate” intervention is. For small issues, such as students who are underperforming in class, there might be a simple solution, but for more complex problems, there aren’t always simple descriptions of the problems or simple answers. In a nonacademic example, Saurabh Bhargava and George Loewenstein discuss the complexity of deciding where to nudge regarding the problem of obesity.4 Although obesity results from the continued intake of excessive calories, the causes of this problem are rooted in issues related to food production, policy, culture, and socioeconomic factors. Food production innovations enable faster and cheaper food generation, which makes unhealthy food much cheaper to purchase (and thus appealing to low-income individuals) and enables policy decisions such as the subsidization of corn (and corn syrup). Also exacerbating the obesity issue is the cultural phenomenon of “super-sized” portions of meals and the increased physical inactivity due to video games, the Internet, and television. Given all of these contributing factors, nudging healthy behaviors can be done by suggesting the intake of fewer calories, by reducing access to unhealthy foods, and by encouraging more physical activity and smaller portion sizes. But who is to decide which of these nudges is appropriate?
A second reason that nudging can have the opposite effect on students and academics involves balancing individuals’ privacy and freedom versus knowing what is “best” for them. Students today have a much different notion of privacy than their parents had, but they could nevertheless object to their information being used in a way that could be construed as damaging to them. The choices that students make along their pathway to academic success are their own. What happens when nudging has been used and an individual still chooses not to perform? How do we deal with outliers at the low end of the spectrum when it comes to various distributions associated with performance and behavior? We might simply say that the nudge didn’t work, or we might reevaluate the nudge, but resulting actions could have quite damaging effects, such as students being placed on probation or even being asked to leave the college or university. In addition, acting on data from just one instance or just one case is going to be problematic. We need to see trends across groups and make sure our results are significant. With predictive analytics, we are trying to personalize the delivery of services and content; we thus need to consider the tradeoffs between how much generalizable knowledge we want to generate versus the specificity of knowledge that matters to a single individual or a small community or group.
Third, nudging thrives on information that creates profiles full of probabilities and trends about behaviors, but what if these trends are not indicative of other issues? Most academic enterprises pride themselves on attracting a diverse student body across dimensions of socioeconomic status, race, religion, orientation, sexuality, veteran status, and other subcultures, many of which are not captured in data. Nudging students for “rational” behaviors that may not be rational to them is an error and can be considered a misuse of information and authority. This can be particularly damning when we consider the use of automation in the nudging process.
Automation of the Academic Enterprise
The combination of automation and nudges is alluring to higher education institutions because it requires minimal human intervention. This means that there are greater possibilities for more interventions and nudges, which are likely to be much more cost- and time-effective. In retail and merchandising, for example, automated nudges alert sellers to opportunities such as adding products to avoid going out of stock and sharpening prices to increase competitiveness. In his 2015 letter to shareholders, Amazon founder and CEO Jeff Bezos noted that the company sends more than 70 million of these nudges weekly.5
The “secret sauce” of predictive analytics and automation is sophisticated algorithms. Algorithms are step-by-step rules that specify a sequence of operations, and they drive the technological innovations we know and love today. Algorithms power our mobile phone operating systems so that we can interact with our devices and get the same experience each time, they help make matches on dating apps, they assist with identity theft and fraud detection, and they allow us to get the most relevant information when we search Google.
Algorithms are what make data so valuable and useable. With automation, we can speed up the process and turn insights into action. Thus, as we collect more data, we will know precisely what works and what does not work under varying conditions. This is good news. The even better news is that this will lead to a rethinking of the academic enterprise and its educational focus and mission.
For instance, serious games enable students to learn at their own pace and to process material better than they may be able to in a classroom setting. Students gain from personalized feedback and the engagement of a gaming environment to learn concepts and to maneuver complexities associated with those concepts as they advance through the various levels of the game. In addition, the provision of rewards (e.g., badges earned) gives students not only a sense of achievement but also the ability to compare their performance and ranking with that of their peers. Education technologists will soon look at how much automation can be brought into the course delivery of what might be considered standard and structured (and even semi-structured and loosely or unstructured) content. The development of automation (artificial intelligence redefined) not only will impact how we drive our cars (the rise of autonomous vehicles) but also will shape how we think about higher education content delivery. Simply put, we can build algorithms that learn from interactions with subjects as they maneuver their learning environments. These algorithms can direct the learning sequence and also motivate students to perform at higher rates, engage groups in activity, and advance learners’ knowledge. Many of the frameworks and platforms on which electronic games are based are extensible to the education space.
Will we really need instructors to teach college and university students basic statistics? Can’t students get the content they need through a combination of online videos, pre-canned online lectures, and game-based content progression and examination? In the near term, the answer will depend on several factors, but we believe that for most students it will be “yes.” The benefit is that these students will not have to sit through 15 weeks of classes to receive content they may be able to learn in 8 weeks or even 1 week. The bad news is what will happen to those students who do not fare well on these platforms and who need human instruction. Do we charge this second group of students different fees, just as banks and airlines charge fees when a customer wants a human-driven transaction rather than an automated one? Will we offer two categories of degrees: those earned through autonomous learning environments and those that are traditional? Early on, hybrid-learning environments are likely to be embedded in traditional degrees. Over time, however, the sophistication of automation will create new business models for higher education. In fact, we have been contemplating these scenarios for a while, and for us, the bottom line is that the future does not look good for traditional academic institutions. The days when an institution’s brand or a particular degree (e.g., MBA) generates differential revenues just because courses are delivered in person, through traditional modes of instruction, are numbered. Institutions that are late adopters of the digital education innovation space — not simply repurposing traditional content and delivering it online but, rather, leveraging the digital platform and technologies to create immersive, anytime, learner-focused, and knowledge-intensive experiences — will be left in the dust.
Assuming the academic enterprise gets analytics right and learns how to conduct experiments ethically and responsibly, higher education institutions will be able to learn quite a bit to help transform the educational experience of students. Although we have focused on educational outcomes in this article, the same can be said of all other facets of running an academic enterprise (e.g., managing research and development efforts from collaborations to seed investments). Automation goes beyond students and the classroom and extends to faculty. For instance, colleges and universities have nudged faculty to adopt education technologies by directing them to make their courses more amenable to technology-driven delivery and to develop course content that is open or is designed to be replicated for mass consumption. Institutions do so by incentivizing faculty efforts.
Right now, these processes are not heavily automated, but they could be, in more nuanced ways, as academic enterprises consider how to break apart the education process as we know it today and make it more efficient and profitable. Academia is ripe for disruption, given the intensified focus on using technologies to transform how students learn and consume content and given the interest in automating experiences through gaming and artificial intelligence. Ethical issues come into play, however. Some researchers argue that algorithms are more ethical than humans because the former have limited biases, but this is not entirely true. Algorithms are designed by humans and can be programmed to capture biases or make judgments within those biases, either on purpose or accidentally. This can have serious implications, producing misdirected nudges, shoves, and smacks.
Flirting with Disaster?
Nudging opens up risks at two opposite extremes, both linked to data and how data is used. The first risk is the danger of ignoring variances in data. Valuable data elements that may impact our understanding of the underlying phenomenon and the design of the intervention — elements such as diverse information that is difficult to capture — can be overlooked. Second, at the other extreme, academia may be flirting with discrimination by using group attributes to generalize patterns across individuals who might have features connecting them to one or more categories. Algorithms pick out data points that make up a small (e.g., high school GPA, major, hometown, residence, financial aid status) or large (e.g., race, socioeconomic status, marital status, gender) portion of an individual’s experience, but should these data points become a factor in the types of nudges used?
One way to prevent this is by having a theoretical base for an intervention. The temptation with big data analytics is to mistake correlation for causation. Big data often spews out spurious correlations that, if not examined carefully, can be acted on with negative results. To establish causation, we need to conduct systematic randomized controlled trials, which will not be easy in academia. Can we justify giving special resources to one group and not to another in order to test a causal claim? Other issues are also critical, such as getting informed consent and letting participants know about the experiment.
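The multiple-comparisons problem behind those spurious correlations is easy to demonstrate: screen enough unrelated variables against an outcome, and some will look “significant” purely by chance. A small simulation sketch (all data here is random, not real student data):

```python
import random

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# An outcome and 200 entirely random "features" for 30 students:
outcome = [random.random() for _ in range(30)]
features = [[random.random() for _ in range(30)] for _ in range(200)]

# Count features whose correlation with the outcome looks "strong" by chance;
# |r| > 0.36 is roughly the 5% significance threshold for n = 30.
strong = sum(1 for f in features if abs(pearson_r(f, outcome)) > 0.36)
print(f"{strong} of 200 random features correlate at |r| > 0.36")
```

In a real analysis, one would correct for multiple comparisons (e.g., Bonferroni or false-discovery-rate adjustments) before acting on any single correlation.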
Further, understanding how to share performance data so that individuals can make their own decisions and choices is imperative. Consider the case of utility companies that have experimented with giving individuals data regarding how their consumption compares with that of their neighbors who have similar properties. These experiments have shown that individuals are more likely to consider modifying their consumption behavior when they see how neighbors are consuming a resource and that this data is more influential than other forms of data shared, such as how much money they could save on their utility bill through behavior modification. Similarly, in the academic space, we need to devise more performance measures that can be shared and that go beyond the traditional measure of GPA. In addition, experiments need to be conducted to see how the provision of information — that is, the frequency and the nature of the information provision — actually modifies behavior.
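Adapted to the academic space, the same social-comparison framing might look like the following sketch, in which the metric, message wording, and data are all hypothetical:

```python
from statistics import median

def peer_comparison_message(student_value, peer_values, metric="weekly study hours"):
    """Frame a student's metric against the peer median, social-comparison style."""
    peer_median = median(peer_values)
    if student_value >= peer_median:
        return (f"You logged {student_value} {metric} -- at or above the "
                f"median of {peer_median} for students in similar courses.")
    return (f"You logged {student_value} {metric}; students in similar "
            f"courses log a median of {peer_median}.")

# Hypothetical data: one student's study hours vs. a peer group.
print(peer_comparison_message(6, [10, 12, 8, 14, 9]))
```

As the utility experiments suggest, the frequency and framing of such messages would themselves need to be tested experimentally before being deployed.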
Another issue is the measuring of outcomes versus outputs. Academia is very good at measuring outputs: research, grades, awards. However, outcomes are harder to capture, and most colleges and universities justify their performance by relying on the arbitrary rankings conducted by third-party organizations (e.g., U.S. News & World Report) as a surrogate. To truly leverage analytics toward long-term measures that benefit students, we need to be able to measure outcomes and track them over time. This is not going to be easy or cheap to do. In addition, we will need to build a culture in which students are encouraged to share reliable data with their institutions after graduation. This again will require investments, along with a rethinking of how alumni associations interact with the academic enterprise.
Although nudging in small doses makes a difference, nudging is no panacea for all of the complex problems found in higher education. There are few studies that evaluate the overall effectiveness of nudging in changing behaviors and sustaining impact.6 Some studies even note the adverse effects of nudging.7 Like anything else in life, knowing when to use nudging — and when enough is enough — can be a challenge.
The answer is not simple. Perhaps the deepest concern lies in the definition of the problem and in who decides the direction of nudges. Nudging can easily become shoving or smacking. Obviously, the intentions behind most higher education practices are pure, but with new technologies, we need to know more about the intentions and remain vigilant so that the resulting practices don’t become abusive. The unintended consequences of automation, depersonalization, and behavioral exploitation are real. We must think critically about what is most important: the means or the end.
With the transformative nature of new capabilities, we should explore both the opportunities and the threats associated with nudging in higher education. This is especially true at a time when academic credentials beyond the high school diploma are needed to acquire entry-level jobs, when colleges and universities are experiencing retention challenges, and when funding for higher education is decreasing. Nudging, used wisely, offers a promising opportunity to redirect students’ decisions and to contribute to the success of those students facing the steepest barriers.
The views expressed in this paper are those of the authors and do not represent official viewpoints of any organization with which they are associated.
1. Kim Ly, Nina Mažar, Min Zhao, and Dilip Soman, “A Practitioner’s Guide to Nudging,” research report series, Rotman School of Management, University of Toronto, March 15, 2013.
2. Esther Duflo, Michael Kremer, and Jonathan Robinson, “Nudging Farmers to Use Fertilizer: Theory and Experimental Evidence from Kenya,” American Economic Review 101, no. 6 (October 2011); Raj Chetty et al., Active vs. Passive Decisions and Crowd-out in Retirement Savings Accounts: Evidence from Denmark, National Bureau of Economic Research, December 2013.
3. Campaign for Tobacco-Free Kids, “Raising Cigarette Taxes Reduces Smoking, Especially among Kids,” April 1, 2016.
4. Saurabh Bhargava and George Loewenstein, “Behavioral Economics and Public Policy 102: Beyond Nudging,” American Economic Review 105, no. 5 (May 2015).
5. Jeffrey P. Bezos, “To Our Shareowners.”
6. See Theresa M. Marteau et al., “Judging Nudging: Can Nudging Improve Population Health?” BMJ 342 (January 2011).
7. Benjamin R. Handel, “Adverse Selection and Inertia in Health Insurance Markets: When Nudging Hurts,” American Economic Review 103, no. 7 (December 2013); Thomas Ploug and Søren Holm, “Doctors, Patients, and Nudging in the Clinical Context: Four Views on Nudging and Informed Consent,” American Journal of Bioethics 15, no. 10 (October 2015); Vaiva Kalnikaitė, Jon Bird, and Yvonne Rogers, “Decision-Making in the Aisles: Informing, Overwhelming, or Nudging Supermarket Shoppers?” Personal and Ubiquitous Computing 17, no. 6 (July 2013).
Kevin C. Desouza (Kev.Desouza@gmail.com, http://www.kevinDesouza.net) is a Foundations Professor in the School of Public Affairs in the College of Public Service & Community Solutions at Arizona State University.
Kendra L. Smith (Kendra@KendraSmithPhD.com, http://www.KendraSmithPhD.com) is a Research Analyst at the Morrison Institute for Public Policy, a research and community service unit of the College of Public Service & Community Solutions at Arizona State University.
© 2016 Kevin C. Desouza and Kendra L. Smith. The text of this article is licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License.
EDUCAUSE Review 51, no. 5 (September/October 2016)