When selecting AI-enabled programs for hiring, teaching, testing, or student support, higher education institutions should weigh the cost savings and efficiency of the programs against possible risks of harm in areas such as fairness, privacy, bias, and public safety.
While artificial intelligence (AI) is a potential game-changer for improving efficiency, it may also come with ethical baggage. John O'Brien, president and CEO of EDUCAUSE, defines digital ethics as "doing the right thing at the intersection of technology innovation and accepted social values."1 IT departments that want to tap into the benefits of AI can't assume that AI-enabled programs and services on the market will live up to that definition. The vendor that created the product or service may not have considered ethics during each stage of development. Or perhaps the vendor adhered to high ethical standards during development but was later acquired by a company guided by a different set of moral principles. IT departments should be ready to assess and manage ethics before, during, and after AI deployment.
Preparing and Taking Responsibility for AI Ethics
Universities are already using AI, possibly without having fully considered the risks. According to the 2020 EDUCAUSE Horizon Report Teaching and Learning Edition, AI is broadly in use across higher education campuses, embedded in "test generators, plagiarism-detection systems, accessibility products, and even common word processors and presentation products."2
AI can provide game-changing improvements by automating processes that are typically handled by humans, but the areas of risk associated with AI are broad and deep. Potential risks include "everything from AI algorithmic bias and data privacy issues to public safety concerns from autonomous machines running on AI."3 Changes that occur after deployment and are outside of the IT department's control may also increase risk. For example, a university's learning management system (LMS) may start with a robust data privacy policy, but the company that manages it may be sold to a private firm, resulting in unforeseen risks to student data.4
Ensuring Ethical AI
Determining who is responsible for ethical AI turns out to be more complicated than identifying the person who created the program. There are potentially multiple responsible parties, including the programmers, sellers, and implementers of AI-enabled products and services, and each of them must fulfill their own ethical obligations for AI to be used ethically.
The AI Programmer and the Programmer's Manager
Lynn Richmond, an associate at BTO Solicitors in Edinburgh, Scotland, writes, "Where AI is employed to mimic the process carried out by a human . . . the party who has provided the wrong data, the wrong pre-determined result, or the wrong process is likely to be liable."5 It may seem logical to hold the programmer responsible if anyone is harmed by AI. The programmer, however, was not acting alone. The programmer's manager and others who helped create the AI are also accountable.
John Kingston, a senior lecturer in cybersecurity at Nottingham Trent University in England, notes that determining accountability may include "debates [about] whether the fault lies with the programmer; the program designer; the expert who provided the knowledge; or the manager who appointed the inadequate expert, program designer, or programmer."6 With accountability spread this widely, some vendors are exploring how to track the reasoning an AI system uses so they are better prepared if a legal defense becomes necessary.7
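The article's sources do not describe a specific tracking mechanism, but the idea can be made concrete. The minimal Python sketch below, using entirely hypothetical names and file paths, shows one way a vendor might record every automated decision, with its inputs, output, model version, and a short explanation, in an append-only audit log that could later support an internal review or a legal defense.

```python
# Hypothetical sketch: wrap a scoring function so every automated decision is
# appended to an audit log with its inputs, output, model version, and a short
# explanation. Names and file paths are illustrative, not from the article.
import json
import time
from dataclasses import asdict, dataclass
from typing import Any, Callable, Dict


@dataclass
class DecisionRecord:
    timestamp: float
    model_version: str
    inputs: Dict[str, Any]
    output: Any
    explanation: str  # e.g., the rule or top features behind the decision


class AuditedModel:
    """Wraps any prediction callable and logs each decision it makes."""

    def __init__(self, predict: Callable[[Dict[str, Any]], Any],
                 model_version: str, log_path: str = "decision_audit.jsonl"):
        self._predict = predict
        self._model_version = model_version
        self._log_path = log_path

    def decide(self, inputs: Dict[str, Any], explanation: str = "") -> Any:
        output = self._predict(inputs)
        record = DecisionRecord(time.time(), self._model_version,
                                inputs, output, explanation)
        with open(self._log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(record)) + "\n")
        return output


# Example: a toy screening rule stands in for a real vendor model.
if __name__ == "__main__":
    model = AuditedModel(lambda x: x["score"] >= 70, model_version="toy-0.1")
    print(model.decide({"applicant_id": "A123", "score": 82},
                       explanation="score >= 70 threshold"))
```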
The AI Marketer and Salesperson
Once an AI program is on the market, whose job is it to ensure that it will have the intended outcome and will not just automate ethical missteps? Chris Temple, a partner in the litigation practice at Fox Rothschild, names "sellers" on the list of the many parties who could be seen as legally responsible for nonhuman decision-making if things go wrong.8
But just because sellers have a legal responsibility doesn't necessarily mean that the programs they are marketing and selling are ethical. Chrissy Kidd, a freelance technology writer for BMC Software, reminds us that "some consider 'as a service' offerings a black box—you know the input and the output, but you don't understand the inner-workings."9 That lack of understanding is a major issue for the end user: IT departments shouldn't simply trust that AI programs on the market already have ethics "built into" their design and sales processes.
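To see why the "black box" description matters in practice, consider the following Python sketch. It is purely illustrative: the endpoint, payload fields, and response shape are assumptions, not any real vendor's API. From the institution's side, all that is visible is the data sent and the score returned.

```python
# Hypothetical illustration of AI "as a service" as a black box: the caller
# sees only the request payload and the returned score. Nothing about the
# model, its training data, or its decision logic is observable from here.
import json
import urllib.request


def score_applicant(payload: dict, endpoint: str, api_key: str) -> dict:
    """POST applicant data to a (hypothetical) vendor scoring endpoint."""
    request = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g., {"score": 0.87} and nothing more


# Questions about bias, provenance, and data handling cannot be answered by
# inspecting this call; they have to be answered by the vendor and the contract.
```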
Higher Education Institutions That Implement AI
The responsibility for ethical AI is shared by many different stakeholders, from programmers and their managers to the companies that market and sell the programs—and even the higher education institutions that deploy AI products and services.
If ethical design was not used when the AI was developed, or ethical practices were not followed when it was sold, the consequences can fall on the end user. For example, new legislation in Illinois places some of the ethical responsibility associated with AI video hiring on the company purchasing the tool.10 Before purchasing an AI-enabled program, IT departments will need to determine whether ethics were carefully applied in its planning, programming, and selling.
What can IT departments do to be ready to take on this level of ethical responsibility? Here are five practical steps IT departments can take to manage the ethics of AI. These steps should be applied to AI that is already deployed on campuses as well as AI that is being considered for future use.
Five Steps IT Departments Can Take to Manage the Ethics of AI
1. Make AI ethics a priority.
   - Commit to understanding the ethical issues of AI deployment.
   - Consider ethical issues when making decisions so that AI use is more likely to lead to ethical outcomes.
2. Consider current and future AI applications across higher education functions.
   - Ask, "Where is AI already in use on our campuses, and where might it be used in the future?"
3. Create a custom AI ethics code to guide decision-making.
   - Read about important overarching ethical principles and guidelines.
   - Consult sources with specific guidance on ethical design.
   - Review articles on AI ethics from IBM, Sage, Intel, the World Economic Forum, Deloitte, and Nesta.
   - Review an example AI ethics code from the Society of Corporate Compliance and Ethics.
4. Apply the AI ethics code to existing AI deployments.
   - Use the AI ethics code to evaluate and manage the ethical risks of AI that is already in use.
5. Apply the AI ethics code to future AI deployments.
   - Use the AI ethics code to select AI-enhanced programs and plan future AI deployments to avoid bias and unintended consequences. (One illustrative way to turn such a code into a working checklist is sketched after this list.)
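Neither the steps above nor the article's sources prescribe a format for an AI ethics code, but one way to make such a code actionable is to treat it as a checklist that every AI deployment must pass. The Python sketch below is a hypothetical illustration; the criteria, names, and example deployment are invented for the example and would need to be replaced with an institution's own code.

```python
# Hypothetical sketch: an AI ethics code expressed as a review checklist that
# can be applied to any AI deployment. The criteria below are illustrative
# examples only, not a standard or a complete ethics code.
from dataclasses import dataclass, field
from typing import Dict, List

CRITERIA = [
    "Vendor documents how the model was trained and tested for bias",
    "Student and applicant data handling meets institutional privacy policy",
    "Automated decisions can be explained to the people they affect",
    "A human can review and override automated decisions",
    "Contract terms protect data if the vendor changes ownership",
]


@dataclass
class EthicsReview:
    deployment: str                        # e.g., "LMS early-alert module"
    answers: Dict[str, bool] = field(default_factory=dict)

    def record(self, criterion: str, satisfied: bool) -> None:
        self.answers[criterion] = satisfied

    def open_risks(self) -> List[str]:
        """Criteria that are unanswered or unmet and therefore need follow-up."""
        return [c for c in CRITERIA if not self.answers.get(c, False)]


# Example: review an existing deployment and list what still needs attention.
review = EthicsReview("video-interview screening tool")
review.record(CRITERIA[0], True)
review.record(CRITERIA[3], False)
for risk in review.open_risks():
    print("Unresolved:", risk)
```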
Taking responsibility for AI means considering the ethical issues at every step, and this kind of ethical thinking must be applied methodically whether an organization is designing, marketing, selling, or deploying AI solutions. When selecting AI-enabled programs for hiring, teaching, testing, or student support, institutions must intentionally weigh the cost savings and efficiency the programs bring against the risks of human harm in areas including fairness, privacy, bias, and public safety.
For more information about enhancing your skills as a higher education IT manager and leader, please visit the EDUCAUSE Review Professional Development Commons blog as well as the EDUCAUSE Career Development page.
The PD Commons blog encourages submissions. Please submit your ideas to [email protected].
Notes
1. John O'Brien, "Digital Ethics in Higher Education: 2020," EDUCAUSE Review, May 18, 2020.
2. Malcolm Brown et al., 2020 EDUCAUSE Horizon Report, Teaching and Learning Edition, research report (Louisville, CO: EDUCAUSE, 2020).
3. John McClurg, "AI Ethics Guidelines Every CIO Should Read," InformationWeek, August 7, 2019.
4. Jeffrey Young, "As Instructure Changes Ownership, Academics Worry Whether Student Data Will Be Protected," EdSurge, January 17, 2020.
5. Lynn Richmond, "Artificial Intelligence: Who's to Blame?" SCL, August 8, 2018.
6. John Kingston, "Artificial Intelligence and Legal Liability," in Research and Development in Intelligent Systems XXXIII (New York: Springer, 2016).
7. John Edwards, "AI Regulation: Has the Time Arrived?" InformationWeek, February 24, 2020.
8. Chris Temple, "AI-Driven Decision-Making May Expose Organizations to Significant Liability Risk," Corporate Compliance Insights, September 11, 2019.
9. Chrissy Kidd, "AI as a Service (AIaaS): An Introduction," Machine Learning & Big Data Blog (blog), BMC Blogs, April 25, 2018.
10. Joseph Lazzarotti, "Illinois Leads the Way on AI Regulation in the Workplace," National Law Review, October 24, 2019.
Linda Fisher Thornton is CEO of Leading in Context LLC and an Adjunct Associate Professor at the University of Richmond.
© 2020 Linda Fisher Thornton. The text of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.