Artificial Intelligence: Threat or Opportunity?


Half of Americans surveyed said they trusted higher education to build, manage, and govern artificial intelligence. We should lean into this finding.

Robotic arm throwing a paper airplane. Credit: Colin Anderson / Stocksy © 2020

The last few years have been rough for higher education. According to a 2018 Gallup poll tracking Americans' confidence in colleges and universities, higher education has seen a sharp decline in public trust in recent years, with only 48 percent of those surveyed expressing confidence in 2018, down from 57 percent in 2015.1

But statistics like these can be overstated. Americans distrust many traditional institutions these days: not only higher education but also government and the media. That distrust extends to big technology companies such as Facebook and Google. According to the Edelman Trust Barometer 2020, which tracks consumer sentiment across a range of sectors, Americans distrust—or are at least ambivalent about—the development of advanced technologies such as artificial intelligence (AI) by companies that may not be positively and responsibly shaping our future.2

Think about the fallout from Facebook's Cambridge Analytica debacle, in which millions of users' profiles were harvested without consent and used for political advertising. And consider Uber's Advanced Technologies Group, which had no official safety plan in place when one of its self-driving test cars crashed and killed a woman. These examples are frightening because they appear to be devoid of responsible leadership acting in the public's collective best interests. They leave us not knowing whom we can trust in a brave new world. There is, however, one exception. A 2019 survey from the University of Oxford's Future of Humanity Institute (FHI) asked 2,000 Americans to rate their confidence in the actors developing artificial intelligence. Half of those surveyed said they trusted higher education (and the military) more than government agencies, non-profit research collaboratives, or big technology companies to build, manage, and govern artificial intelligence.3

We should lean into this finding. It not only signifies at least a pocket of trust remaining in higher education institutions but also offers an opportunity for college/university researchers, faculty, staff, and administrators to regain lost ground and exemplify AI leadership at a time when our institutions—and our world—need us most.

In just about every sector today, leadership is increasingly digital in focus. Generally, digital leadership describes an emerging class of roles, responsibilities, and competencies needed to lead organizations in a digital world. But we should not confuse digital leaders with digital evangelists, at least not in higher education.

Digital leaders are equipped to lead in a digital world. They understand its complexity and also the dissonance and distrust that digital can create, and they help others make meaning within and out of it. Good digital leaders are virtuous and altruistic. According to Deborah Ancona, who studies digital leadership at the MIT Sloan School of Management, digital leaders are sense-makers who help others "create meaning out of the messy world."4 Their lens is digital, but their focus is human.

We need more digital leaders in higher education who are sense-makers not only for their own institutions but also for the public at large. We need leaders who are optimistic about this technology but also cautious. We need leaders who are engaged in the world of artificial intelligence—whether as researchers, subject-matter experts, educators, ethicists, or administrators in our communities and the world at large—and who are committed to building transparency and trust within the AI world.

This is something technology companies struggle to do, but it's in the DNA of higher education. Think of digital leadership as a strategy of engagement, taking the understanding of, resources for, and experiences with artificial intelligence cultivated within colleges and universities—whether through basic research, experimentation, teaching, or academic innovation—out into the world to meet its most pressing challenges. Doing so not only will quell fears but also may instill—perhaps even increase—confidence in higher education at a time when we need it most.

As artificial intelligence continues to move further into the mainstream (which it will) and as regulators struggle to govern AI research and development (which they will) and as the market continues to coalesce around big-tech companies such as Facebook and Google (which it will), higher education is uniquely poised to gain public trust once again.

Notes

  1. Jeffrey M. Jones, "Confidence in Higher Education Down Since 2015," Gallup (website), October 9, 2018.
  2. Edelman Trust Barometer 2020, "Special Report: Trust in Technology."
  3. Baobao Zhang and Allan Dafoe, "Artificial Intelligence: American Attitudes and Trends" (Center for the Governance of AI, Future of Humanity Institute, University of Oxford, January 2019).
  4. Deborah Ancona, "Five Rules for Leading in a Digital World," MIT Sloan Management Review, October 28, 2019.

Brian Fleming is Vice President for Innovation and Strategy at Southern New Hampshire University (SNHU).

EDUCAUSE Review 55, no. 2 (2020)

© 2020 Brian Fleming