5 AIs in Search of a Campus

To grasp how artificial intelligence will play out in higher education, and how we can respond strategically, we should think through the forms it might take on campus over the next few years.

How will artificial intelligence (AI) manifest in colleges and universities? In late 2019, professors research, create, critique, and teach various forms of artificial intelligence. Students, staff, and faculty increasingly encounter AI in digital devices unsupported by the campus IT department, from autonomous vehicles to software-driven computer game opponents. AI capabilities are gradually infusing the services of powerful computing enterprises such as Google, Amazon, Facebook, and Microsoft, services used by everyone in the campus community. Homegrown experiments are under way on our campuses, while vendors offer AI tools for us to purchase and implement. Our work world looks likely to be revolutionized, and we are struggling to adjust.

Clearly, we are in the early days of artificial intelligence in higher education. How will AI changes play out in the future, and how can we grapple with them strategically? Given the deep complexity and broad scale of the emerging AI industry, forecasting its growth or constructing full-fledged scenarios is difficult. In addition, the historical record of the field is marked by bursts of hype alternating with bitter failures, which should also make us cautious. Let us therefore proceed from several relatively cautious forecasts. First, artificial intelligence will experience steady and incremental growth for the next decade. It will neither blossom (or metastasize) into world-threatening, Skynet-like sentience1 nor utterly collapse. Artificial general intelligence (AGI)—that is, machines able to understand or learn any intellectual task that a human being can—will not occur in our future story's timeline. Second, academia and its IT departments will continue to face most of their current constraints: financial limitations, escalating service demands, and demographic challenges just starting to take hold. Third, the technological environment will develop along the following lines: increasing bandwidth; a continued shift to mobile computing; a mix of interfaces, including keyboard and windows as well as voice and gesture; and ever-richer multimedia as the virtual reality (VR) and augmented reality (AR) revolution proceeds.

With this context, let's consider how AI could unfold academically. Below I offer five archetypes, or categories, of the forms artificial intelligence could take in colleges and universities.

Five AI Archetypes

Tutor

This is software that instructs students on a given topic. Its basic form is familiar to most of us as a tutorial. Tutor takes a learner through a set curriculum, established by a government, professional association, or academic entity. The program assesses students' progress through objective criteria. Its precursors include the Duolingo language-learning app and Jill Watson, the AI teaching assistant created by Ashok Goel at Georgia Tech.2

The artificial intelligence tracks learners' skills and weaknesses, celebrating the former while trying multiple and repeated approaches to address the latter. Tutor follows the pedagogy of computer games in that it begins by establishing learners' current skills, then presses them onward, not too hard but also not too gently, matching Vygotsky's zone of proximal development.3 All users have a customized experience as they proceed, based on their interaction with the tool. Tutor is always available, of course, but can also proactively reach out, reminding students by email of upcoming challenges or outstanding lessons.

Advanced versions of Tutor can request access to different parts of a learner's digital environment, such as a microphone, social media accounts, or a gamer profile. This way Tutor can capture content relevant to the program's specific curriculum, such as speech in a foreign language or news reports about the topic under study. Tutor can work these into multimedia lessons, reminding the learner of their provenance.
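
To make that pacing concrete, here is a minimal sketch in Python of how a Tutor-like tool might choose the next exercise: keep a mastery estimate per skill and offer the item sitting just above the learner's current level. The class, the update rule, and the "stretch" parameter are illustrative assumptions of mine, not features of any existing product.

```python
# A hypothetical sketch of Tutor's adaptive pacing. It tracks a mastery
# estimate per skill and always offers the exercise sitting just above the
# learner's current level. All names and parameters are invented here.

from dataclasses import dataclass

@dataclass
class Exercise:
    skill: str
    difficulty: float  # 0.0 (trivial) through 1.0 (expert)

def update_mastery(mastery: float, correct: bool, rate: float = 0.1) -> float:
    """Nudge the estimate toward 1.0 after a success, toward 0.0 after a miss."""
    target = 1.0 if correct else 0.0
    return mastery + rate * (target - mastery)

def next_exercise(mastery: dict, pool: list, stretch: float = 0.15) -> Exercise:
    """Pick the exercise closest to slightly above the learner's mastery."""
    return min(pool, key=lambda ex: abs(ex.difficulty - (mastery.get(ex.skill, 0.0) + stretch)))

# A learner partway through language drills: mastery 0.4 plus stretch 0.15
# targets difficulty 0.55, so the 0.5 drill wins (challenging, not crushing).
mastery = {"vocab": 0.4}
pool = [Exercise("vocab", d / 10) for d in range(1, 10)]
print(next_exercise(mastery, pool))

# After each answer, nudge the estimate and the next pick adapts with it.
mastery["vocab"] = update_mastery(mastery["vocab"], correct=True)  # 0.4 -> 0.46
```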

Headmistress

This application helps organize an institution's learning mission. It is a single site for corralling student data as students proceed through grades, exams, performances, and more. Headmistress monitors success and failure rates, alerting students, faculty, and staff (depending on policies and campus culture) about problems to be addressed and triumphs to be recognized.

This application starts with today's student analytics projects, such as the one notably implemented by Georgia State University, and builds on them.4 Headmistress proactively sends advisors updates on students, recommending courses of action. It notifies residence life staff about the first-year students who are least likely to return for a sophomore year, so that staff can devote resources to retaining them. The AI publishes strategic reports for deans and trustees on demand.
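
As a rough illustration of those retention alerts, the sketch below trains a simple model on an invented past cohort and flags current first-years with high estimated attrition risk. The features, threshold, and data are all hypothetical; a real deployment would hinge on campus data policies.

```python
# An invented sketch of Headmistress-style retention alerts: learn from a
# past cohort, then flag current first-years least likely to return.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Past cohort features: [GPA, credits completed, campus events attended];
# label 1 = returned for sophomore year, 0 = did not.
X_past = np.array([[3.5, 15, 8], [2.1, 9, 1], [3.0, 12, 4], [1.8, 6, 0]])
y_past = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_past, y_past)

# Score current students and alert residence life about the riskiest.
current = {"Student A": [3.2, 14, 6], "Student B": [2.0, 8, 2]}
X_now = np.array(list(current.values()))
attrition_risk = 1 - model.predict_proba(X_now)[:, 1]  # P(not returning)

for name, risk in zip(current, attrition_risk):
    if risk > 0.5:  # arbitrary alert threshold
        print(f"Alert: {name}, estimated attrition risk {risk:.0%}")
```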

Manager

One of the most revolutionary ways that artificial intelligence enters our world today is by impinging on business and nonprofit administration. These fields already depend heavily on software for many functions: budgeting and payroll, communication, inventory, time management. In our future story, artificial intelligence offers assistance in many of these areas and can be deployed on campus accordingly.

Manager can be very basic software, offering autocomplete, document searching, inventory searches, staff communications (e.g., reminders, alerts), and low-level recommendations. It can go further and analyze job applicants for the best fit or assess current employees' performance. Manager can generate compensation packages, severance strategies, and succession plans based on much of the data that administrators now access.

Campus units can deploy Manager in different ways. An academic department could train the tool to seek out, hire, and manage adjunct faculty. The physical plant might run Manager to determine maintenance priorities and schedules. An education technology team may apply Manager to generate faculty outreach and relationship-management plans.
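
To suggest what the physical-plant scenario might involve, here is a toy sketch of work-order triage: score each request and keep a priority queue so crews always see the most urgent job first. The scoring weights are placeholders of my own invention.

```python
# An illustrative sketch of Manager triaging physical-plant work orders.

import heapq

def urgency(safety_risk: int, days_open: int, affected_rooms: int) -> float:
    """Higher is more urgent; the weights are arbitrary placeholders."""
    return 5 * safety_risk + 0.5 * days_open + affected_rooms

work_orders = [
    ("leaking roof, library", urgency(2, 10, 6)),     # score 21.0
    ("broken HVAC, dorm A", urgency(3, 2, 40)),       # score 56.0
    ("flickering light, office 12", urgency(0, 30, 1)),  # score 16.0
]

# heapq is a min-heap, so push negated scores to pop the most urgent first.
queue = [(-score, job) for job, score in work_orders]
heapq.heapify(queue)
while queue:
    neg_score, job = heapq.heappop(queue)
    print(f"{-neg_score:5.1f}  {job}")  # HVAC, then roof, then light
```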

Muse

During the early 21st century, it was popular to observe that AI tools could do some things but could not act creatively. Yet as functions, projects, and then platforms appeared, it became clear that software can actually generate creative work, from writing to video production and game content. At the same time, AI tools can assist humans in their own creativity.

Muse is software that suggests phrasing for writers, strokes for painters, and composition for videographers. A user can give Muse a setting or problem, let it generate many possible next steps, and then select one or two to work from. One version of Muse helps users assemble bibliographies and look for threads between articles (e.g., Meta, https://www.meta.org/). Other versions of Muse assist students in writing term papers and lab reports (e.g., Revision Assistant).

As with Tutor, Muse may draw on a given user's digital ecosystem for inputs. A photographer might train the software to monitor light in a given setting over time, helping to determine the best window for capturing images. Or Muse could trawl a writer's e-books for relevant passages or inspiration.
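
Muse's core interaction, propose several continuations and let the human choose, can be sketched in a few lines. In the toy example below, a tiny Markov chain stands in for a real generative model; the corpus and the interface are invented for illustration.

```python
# A self-contained sketch of the "propose, then let the human pick" loop,
# with a toy Markov chain playing the part of a real generative model.

import random
from collections import defaultdict

corpus = ("the river ran cold and the river ran deep "
          "the night ran long and the road ran on").split()

# Build a table of which words follow which in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def suggest(prompt_word: str, n: int = 3) -> list:
    """Offer up to n candidate next words; the writer picks one or none."""
    options = follows.get(prompt_word, [])
    return random.sample(options, k=min(n, len(options)))

print(suggest("ran"))  # e.g., ['cold', 'long', 'deep']; the human decides
```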

Danger

While artificial intelligence can offer many benefits, it can also present threats. Today's AI critics and "techlash" advocates outline many of the latter, including privacy violations, software biased by prejudiced data, and the devaluation of humanity. Danger represents these problems, not as an application designed to embody them but as a characterization of other apps.

For example, even though Headmistress is a helpful tool, some see it as a threat to human autonomy and privacy—as Danger. Muse is a way to boost creativity, but it becomes Danger when students see the tool as disabling their own imaginative impulses. Tutor becomes Danger in the eyes of faculty members when it usurps some of their pedagogical role. Manager can become Danger in many ways, such as when the software recommends human resource actions that are unpopular or inhumane, or when it errs in physical plant operations with an incorrect temperature or a missed repair, or when it generates communications that people perceive as uncanny or hilarious.5 No open-source community or business would produce an app called Danger, but that won't stop some academics from adding that label to other software.

Orientation Day for AIs

Assuming these AI archetypes or related versions appear, integrating them into campus life will present serious obstacles. They cannot arrive without vetting from campus IT, legal, and academic leaders, at a minimum.

Those in the IT department must determine how to connect such tools with campus systems. Are APIs available, or do they need to be obtained or written? Does a given AI tool behave according to open standards? How dependent will an institution be on vendor support, and how much work is required to herd multiple vendors? What are the hardware and VPN implications? How will these emerging technologies connect with other emerging technologies, such as VR and AR? Where will the requisite data live, by which policies, and under what protection? To what extent can or should IT rely on the open-source world for development and support?6 Businesses will offer services and products that try to answer these and other questions.
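
One pattern IT shops might adopt when weighing these questions is an adapter layer: wrap each vendor's AI service behind a thin, campus-owned interface so that swapping vendors, or conforming to a new standard, means rewriting one module rather than every integrated system. The sketch below is hypothetical; the vendor name, URL, and payload shape are invented.

```python
# A hypothetical adapter layer between campus systems and AI vendors.

from abc import ABC, abstractmethod

class TutorService(ABC):
    """Campus-standard interface; LMS and portal code depend only on this."""
    @abstractmethod
    def next_lesson(self, student_id: str) -> dict: ...

class VendorXTutor(TutorService):
    """Adapter for one imagined vendor's REST API."""
    BASE_URL = "https://api.vendor-x.example/v1"  # placeholder, not a real service

    def next_lesson(self, student_id: str) -> dict:
        # In production this would be an authenticated HTTP call (e.g., with
        # the `requests` library); stubbed here to keep the sketch runnable.
        return {"student": student_id, "lesson": "stub-from-vendor-x"}

# Swapping vendors means writing one new adapter, not rewiring the campus.
service: TutorService = VendorXTutor()
print(service.next_lesson("s123456"))
```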

Academic leaders face related challenges. Which departments and which individual faculty members will be most likely to try out Tutor? How much curricular and professional development support is necessary? When will students accept or resist Muse and/or Headmistress? The question of people being replaced by, or working with, artificial intelligence will inevitably arise. Should software replace or augment human instruction? How will staff work with an app in a managerial role? How will artificial intelligence play out differently in hybrid versus wholly online classes? How should academic integrity policies be modified in the face of generative software? Should Tutor focus on preexisting classes or, instead, expand current offerings? If Muse is available to enhance research, should it be managed by staff in the writing center, the faculty development center, or the library? Should they use Manager, wielding one AI archetype to organize another? How should deans and division leaders respond to faculty who see Danger on campus? What are the appropriate copyright policies for content generated by—or with—all of these applications?

Pedagogies will shift when AI archetypes manifest on campus. As Joseph E. Aoun notes, this will require a variety of professional responses.7 Meanwhile, Charles Fadel, Wayne Holmes, and Maya Bialik ask us to distinguish between instructive, constructive, and teacher-supporting strategies, each having a different mix of teaching style and support mechanism.8 If some learning occurs through Tutor, a faculty member might shift classroom and asynchronous online time to focus on other parts of the curriculum. For example, a German professor could have students use Tutor for vocabulary and basic grammar, opening up more class time for dialogue practice and cultural exploration. Art, writing, and performance faculty would recalibrate assignments when Muse is in play. Media studies, computer science, and political science courses could include exercises based on awareness of Danger.

When AIs Fail to Make the Cut

The preceding discussion has assumed such AI archetypes will appear on campus, but they could well never set digital foot upon a quad for a variety of reasons. One easy way for them to flunk out is if our current burst of AI enthusiasm withers in the face of real-world uses. IBM has already run into significant obstacles introducing Watson into the medical field and has actually stepped back.9 Campuses may never host Tutor, Headmistress, Manager, or Muse if the technology underperforms.

Privacy concerns may also ward off these AI archetypes. Or Danger might become the best-known campus AI character and spark general dislike and resistance. The tide of dissatisfaction with the digital world, including artificial intelligence and the technologies it depends on (e.g., big data, analytics, sensors), appears to be rising. Critics within academia like Shoshana Zuboff (Harvard University), Chris Gilliard (Macomb Community College), and Amy Webb (New York University) have persuasively explained how this technology could be badly abused.10 My description of AI archetypes here has shown that artificial intelligence can intrude upon our lives in ways we might not expect or accept. Academia might ultimately decide that artificial intelligence is not a technology we should engage.

Geopolitics may also delay or warp campus artificial intelligence, as the United States and China struggle for global technology supremacy. Currently we are experiencing a talent race as companies in those two nations vigorously compete for the best AI experts, a competition rendered more intense as a trade war heats up.11 The FBI has advised academia to more closely monitor Chinese researchers and students—advice that has apparently been taken to heart by at least one professional organization but has been criticized by several university presidents.12 Campus AI implementation might be constrained by such security pressures. In response, the Chinese telecommunications and electronics company Huawei decided to give generously to US academics interested in working on artificial intelligence.13 How many campuses will have to decide between national political imperatives and international opportunities? Will campus IT departments be forced to modify or shut down AI implementations already under way when/if political winds shift?

Some of these questions may seem remote from campus life in 2019. Thorny legal questions, transpacific political wrangling, and the fate of human intelligence are not often areas in which many academics take the lead. These are questions, at least in part, for national policy makers. Yet it is imperative that academics start thinking about these topics. We can influence national conversations and policy, and we should, since we can bring to bear the intellectual armature of our colleges and universities. We also need to start planning now, while the AI revolution is accelerating. We may not end up hosting Tutor, Headmistress, Manager, Muse—and Danger—at every college and university, but imagining that possibility can help us prepare for the fast-moving future.

Notes

  1. The most thoughtful treatment of that possibility to date is Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014).
  2. Hillary Lipko, "Meet Jill Watson: Georgia Tech's First AI Teaching Assistant," Georgia Tech Professional Education (blog), November 10, 2016.
  3. See James Paul Gee, What Video Games Have to Teach Us about Learning and Literacy (New York: Palgrave Macmillan, 2003).
  4. Tim Renick, "At Georgia State, We Transformed Our Grad Rates: Here's How," KIPP:Blog, April 19, 2017, http://blog.kipp.org/college/at-georgia-state-we-transformed-our-grad-rates-heres-how/.
  5. Think of "flash crashes" in automated financial trading and how disturbing that software becomes in the light of error. See David Easley, Marcos Lopez de Prado, and Maureen O'Hara, "The Microstructure of the 'Flash Crash': Flow Toxicity, Liquidity Crashes and the Probability of Informed Trading," Journal of Portfolio Management 37, no. 2 (Winter 2011).
  6. See, for example, the work of OpenAI.
  7. Joseph E. Aoun, "Artificial Intelligence and the Opportunity of Lifelong Learning," EDUCAUSE Review 54, no. 4 (Fall 2019); Joseph E. Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence (Cambridge: MIT Press, 2018).
  8. Charles Fadel, Wayne Holmes, and Maya Bialik, Artificial Intelligence in Education: Promises and Implications for Teaching and Learning (Boston: Center for Curriculum Redesign, 2019), 164.
  9. Eliza Strickland, "How IBM Watson Overpromised and Underdelivered on AI Health Care," IEEE Spectrum, April 2, 2019.
  10. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, Hachette Book Group, 2019); Chris Gilliard, "Pedagogy and the Logic of Platforms," EDUCAUSE Review 52, no. 4 (July/August 2017); Amy Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (New York: PublicAffairs, Hachette Book Group, 2019).
  11. Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (New York: Houghton Mifflin Harcourt, 2018).
  12. Emily Feng, "FBI Urges Universities to Monitor Some Chinese Students and Scholars in the U.S.," All Things Considered (NPR), June 28, 2019; Sean Keane, "Huawei Ban Revoked by Science Publisher IEEE," CNET, June 3, 2019; L. Rafael Reif, "Letter to the MIT Community: Immigration Is a Kind of Oxygen," MIT News Office, press release, June 25, 2019.
  13. Karen Hao, "Huawei Is Giving $300 Million a Year to Universities with No Strings Attached," MIT Technology Review, July 3, 2019.

Bryan Alexander is a futurist, researcher, writer, speaker, consultant, and teacher, working in the field of how technology transforms education. A senior fellow at Georgetown University, he is the author of Academia Next: The Futures of Higher Education (Baltimore, MD: Johns Hopkins University Press, 2020).

© 2019 Bryan Alexander. The text of this article is licensed under the Creative Commons Attribution 4.0 International License.

EDUCAUSE Review 54, no. 4 (Fall 2019)