The Post-Learning Era in Higher Education: Human + Machine


In anticipation of an emerging environment in which technologies are cognitive partners, humanity enters into something that could be best described as post-learning.

[Image: sculpted face obscured with striated colored lines. Credit: Kkgas / Stocksy © 2020]

"We'll think nothing is happening and all of the sudden, before we know it, computers will be smarter than us."
—Douglas Hofstadter, quoted in Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (2019)

A critical question confronts those of us in the modern higher education system, particularly CIOs, faculty, and learning designers: What remains for people to learn when technology is able to outperform us, cognitively, on many basic knowledge tasks? Learning has long been viewed as the exclusive domain of biological entities. Even when statements are made that an organization or society can learn, the central point of reference remains the people in the organization or society. This assumption is starting to change, however, and the implications will be dramatic for how knowledge is created and shared and how we seek to develop the next generation.

Artificial intelligence (AI), for all its recent hype, has been a domain of research since the seminal Dartmouth Summer Research Project on Artificial Intelligence workshop in 1956. The ensuing 64 years were marked by bold proclamations, hype, disappointment, and, over the last decade, surprising advancements. AI is not a future technology with a future impact: it is here and now, present in our mobile phones and our daily lives. Newer smartphones apply AI processing to the photos they take. A simple purchase at a restaurant passes through a network of fraud-detection algorithms. AI is increasingly evident in discussions regarding the future of education. AI touches everything.

The use of algorithms to perfect a photograph or to flag a fraudulent credit card purchase doesn't threaten our concept of our own humanity. Instead, these are the direct outcomes of a data-rich world in which we rely on technology to solve problems caused by other technologies. Nervousness sets in when machines begin to exhibit capabilities that encroach on our uniquely human cognition. For example, in domains that would generally have been thought to be exclusively human just a few decades ago (e.g., image detection, language, and game play), tasks are now completed at superior levels by AI.1 Human cognition, it appears, is continually ceding its superiority to artificial cognition.

The vision of AI advancing beyond human cognitive performance was common in the earliest proclamations of what AI might offer humanity. In 1958, Herbert A. Simon and Allen Newell stated: "Within ten years a digital computer will be the world's chess champion."2 (This did happen, but not until 1997, nearly forty years later.) Meanwhile Alan Turing, in the early 1950s, suggested that computers could readily "outstrip our feeble powers" and that at "some stage therefore we should have to expect the machines to take control."3

The enthusiastic hype and overpromising remain prominent in public AI conversation today. There is no shortage of opinions, from both scientists and hypesters, about AI's long-term impact on humanity. The camps are sharply divided. One—including Bill Gates, Elon Musk, and Stephen Hawking—posits AI as worrisome and potentially species-altering or contributing to a catastrophic event. Others—such as Mark Zuckerberg, Demis Hassabis, and Ray Kurzweil—see limited downsides.4 Considering the extreme and disparate viewpoints among experts in the field, we should not expect accurate forecasts or even consensus about the longer-term development of AI and how it may intersect with, and impact, humans.

While debate remains unresolved regarding whether AI will end or augment humanity, dramatic short-term impacts on learning (and, as a result, on colleges and universities) can be anticipated. Once a machine learning model has mastered a task, though often within very bounded and domain-specific settings, it is capable of vastly outperforming humans. As Stuart J. Russell has stated: "As soon as machines can read, then a machine can basically read all the books ever written; and no human can read even a tiny fraction of all the books that have ever been written."5 An acquired capability can be rapidly scaled, a pronounced departure from learning in biological systems, where learning transfers readily to new domains but cannot be scaled in the same near-limitless way.

Clearly, we are being shaped by the machine. As Alan Kay has noted: "In normal science you're given a world and your job is to find out the rules. In computer science, you give the computer the rules and it creates the world."6 Humans are meeting AI halfway by allowing our learning and our actions to be made routine and heavily structured. Metrics drive the pedagogy. Greater use of personalized learning technologies will likely only advance our receptivity to being nudged and shaped to better fit the algorithms presented to us. With the automation and technification of all aspects of modern life, the desire to find uniquely human domains, untouched by routine and forced structure, is understandable.

While humans are wired to learn—we cannot not learn—we are in an age when our learning needs are more networked and less individual. AI is a node with growing presence in that network. Our learning peers are not exclusively human; they are also algorithms and automated agents. In anticipation of an emerging environment in which technologies are cognitive partners, humanity enters into something that could be best described as post-learning. Essentially, this is the point at which traditional learning activities that define modern education are better performed technologically and we, as educators, begin to explore a broader range of knowledge activities that are likely to remain outside of the domain of AI. These include activities such as sensemaking, wayfinding, creativity, and meaning making. The logic that underpins this assertion is as follows:

  1. Historically, humans have created institutions, such as libraries and academies, that reflect what is possible with the information technologies that are available.
  2. As information quantity increased, additional systems were developed to capture information and share it with the next generation via classification schemes (e.g., Linnaean, encyclopedias) and more institutions were created to disseminate that information (e.g., colleges and universities, corporate settings).
  3. With innovations in information generation and global connectivity, existing mechanisms for sharing the scope of human knowledge with the next generation became inadequate.
  4. In response, data science and analytics have developed to organize and gain insight into this new scope of information, building on advanced computational capabilities.
  5. While analytics have enlarged the scope of humans' ability to understand large quantities of information, a secondary and more significant trend is emerging and is beginning to overlap with human cognition: AI.
  6. AI, in learning settings, increases the sophistication of what is possible cognitively, outperforming humans in many learning tasks. This raises questions about how to balance human and artificial cognition and about which domains of human cognition can (and cannot) be duplicated by technology.
  7. If machines can outlearn humans and, increasingly, do not have the challenge of passing information on to the next generation, we need to consider our relationship with data and with learning—essentially, our entrance to a "post-learning era."
  8. In a post-learning era, educators make decisions around what is sensible for humans to do and what is sensible for technology to do. Conceivably, human knowledge work will focus on learning-adjacent activities such as sensemaking, wayfinding, meaning making, and related creative and cultural activities that remain uniquely human.

Returning to our original question: What is left for humans to learn when AI "outlearns" us? Essentially, colleges and universities move from learning to beingness (or in Ronald Barnett's words, from epistemology to ontology)7 as the key focus. Global connectivity, allowing access to an unending stream of information, is changing daily life. After the initial enthusiasm of Web 2.0 and the "read-write web" gave way to large-scale social media platforms, it quickly became clear that technology had created a context in which more technology (in this case, AI) was needed to track misinformation, security threats, and a range of challenges created by information abundance. Early indications also began to emerge regarding how technology might impact us cognitively.8 While society grapples with the role of AI in these settings, higher education institutions face a different type of existential threat as artificial cognition matches and exceeds human cognition in many areas. Re-centering curriculum and teaching practices on the skills and mindsets needed to flourish in a complex and contradictory world is an important first step toward long-term roles in which we take agency over how we are shaped and remade cognitively. The return to less-technified and more-holistic teaching and learning practices holds promise for how higher education advances society and knowledge.

Notes

  1. The Electronic Frontier Foundation tracks AI progress. See "AI Progress Measurement" (website), accessed January 17, 2020.
  2. Herbert A. Simon and Allen Newell, "Heuristic Problem Solving: The Next Advance in Operations Research," Operations Research 6, no. 1 (January-February 1958).
  3. Alan Turing, "Intelligent Machinery: A Heretical Theory," lecture given to 51 Society, Manchester, UK, 1951.
  4. Maureen Dowd, "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse," Vanity Fair, March 26, 2017.
  5. Stuart J. Russell, quoted in Martin Ford, Architects of Intelligence: The Truth about AI from the People Building It (Birmingham, UK: Packt Publishing, 2018), p. 49.
  6. Kevin Kelly with Steven Levy, "Kay + Hillis," Wired, January 1, 1994.
  7. Ronald Barnett, "Supercomplexity and the Curriculum," Studies in Higher Education 25, no. 3 (2000).
  8. Betsy Sparrow, Jenny Liu, and Daniel M. Wegner, "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips," Science 333, no. 6043 (August 5, 2011).

George Siemens is co-Director at the Centre for Change and Complexity in Learning at the University of South Australia and is Professor at the University of Texas, Arlington.

EDUCAUSE Review 55, no. 1 (2020)

© 2020 George Siemens. The text of this article is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.