Examining Digital Ethics at Seattle University

Three leaders in the Initiative in Ethics and Transformative Technologies at Seattle University talk about digital ethics and their work at the intersection of technology and ethics.

Like their counterparts at other Jesuit colleges and universities, the leaders of Seattle University have always understood the fundamental importance of ethics in a well-rounded education, even for students in the professional fields. In 2018, Microsoft provided seed funding for Seattle University to launch an initiative focusing on ethical issues related to digital technology. As the Initiative in Ethics and Transformative Technologies [https://www.seattleu.edu/ethics-and-technology/] took shape, Executive Director Michael Quinn invited CIO Christopher Van Liew to join the advisory board. Realizing that EDUCAUSE President and CEO John O'Brien shared an interest in digital ethics, Van Liew encouraged Quinn and O'Brien to meet. Their initial conversation at a local coffee shop grew into this interview, which includes two other Seattle University faculty members working in the area of tech ethics.

John O'Brien: Is digital ethics just the latest phase of a centuries-old conversation about ethics, or have digital technologies created a new universe of ethical questions?

Michael J. Quinn: At the very least, digital technologies can take classical issues and raise them to new levels. For example, our adoption of cellphones means that our locations can be tracked much more closely than ever before, giving more urgency to debates about the proper balance between public safety and personal privacy. Should governments use this information to identify who may have been exposed to COVID-19, for example? More generally, humans invent technologies to extend their power. When people get more power, they face new choices about what they should and should not do with that power. Whenever we talk about "should" and "should not," we're talking about ethics.

Jeffery Smith: I tend to agree that many of the ethical problems that we associate with new, digital technology are expressions of more general issues that we've experienced before, with older technology. But apart from the scale, scope, and precision of digital technology, the one unique problem we currently face is the extent to which humans are being replaced. Here I'm not thinking simply about robots replacing manufacturing jobs, although that's important. I'm particularly concerned with algorithms and other machine learning platforms that increasingly make decisions instead of humans. Transportation, financial services, regulatory enforcement, and the management of health care are just a few areas where our lives are directly impacted by the choices made by these autonomous systems. Why should we trust those systems? Does their rise mean that human learning and autonomy, less efficient though they may be, are diminished? Digital technology today is unique in challenging our ability to govern our own lives.

Quinn: On the other hand, it's also true that some age-old problems keep recurring. For example, some people who are engaged in the conception, development, and deployment of new technologies are not sufficiently tuned in to the ethical implications of their actions. Many people have not had any exposure to formal methods of ethical analysis, and it's easy for busy people to rationalize why they are not taking ethical considerations into account: "It's not my job." "If we don't do it, someone else will." "We can't afford to think about that right now." "It will be worth it in the end." "Who am I to say my concerns are justified?"

Colleges and universities can help solve this problem if they make it a priority. Like every other Catholic university that I know about, Seattle University requires all of its students to take at least one ethics class and encourages faculty in every discipline to bring up ethical issues in their own classes as well. We want our students to learn that there are rigorous ways of evaluating situations and making ethical decisions. For example, we want our science and engineering majors to gain an appreciation for the harm that can be caused by implementing a technology in an irresponsible way. Unfortunately, there are plenty of cautionary tales to tell, and some of them are quite recent, such as the fatal accidents related to self-driving cars.

Nathan R. Colaner: Usually, I am firmly in the camp of people who believe that all the important ethical questions were asked a few hundred years ago and thus any area of applied ethics is simply a new domain in which to apply ancient values. I think the most obvious ethical issues in digital technology point in this direction, such as concerns about groups that are being disproportionately affected by innovation and puzzles about trade-offs between safety and privacy. These issues are as old as civilization itself. But the answer to your question really depends on how broadly we want to define ethics. At the very broadest, ethics simply concerns human well-being, and many issues that we are not used to talking about are suddenly relevant. The point Jeffery raises about decision-making is probably the most important one in our current age. There are also questions around how far digital technology will change our connections to other people, to the earth, and even to reality itself. Plus there are the really mind-bending issues such as transhumanism. If these are thought of as ethical issues, we are indeed in new territory.

O'Brien: What ethical concerns keep you up at night?

Quinn: I'm concerned that some AI-powered systems, such as autonomous vehicles, are being tested in our midst without adequate governmental regulation. And this is occurring not because innovation is happening so quickly that legislation can't keep up. The problem is that politicians seem more interested in attracting new industries and high-paying jobs to their regions than in ensuring public safety. Or to put it more charitably, I believe politicians may be misjudging people's tolerance for accidents caused by machines.

I remember one conversation in which an elected official told me that if the accident rate for self-driving cars is lower than the accident rate for cars driven by people, we should allow self-driving cars on the roads. I appreciate that from a strictly utilitarian point of view, the official's perspective makes sense, but it ignores people's lack of trust in automation. Think about it: in the United States on an average day, about 100 people die in automobile accidents, and people accept that. Or at least they haven't stopped driving, and they aren't boycotting automobile manufacturers. Suppose we had a technology that enabled us to make every vehicle in the country autonomous, driven by a computer, and the death rate dropped to 50 people per day. If 50 people per day were dying in computer-driven cars, I think there would be a public outcry against the manufacturers of these vehicles, as well as against the politicians who did not regulate them better.

Smith: Related to that, my greatest concern is not with a particular technology per se but with how any technology's promise is harnessed. Will technology be seen and used as a means to serve human interests, or will it become an end in itself? When we lose sight of the fact that the value of technology is instrumental—to serve our well-being and to assist society in advancing the common good—our most cherished values could become lost from view. As I tell my students, this is at the heart of ethics. Means and ends cannot be confused. I have deep concerns that the pursuit of technology for itself will eclipse its intended social benefits, not unlike the manner in which profit has sometimes eclipsed the benefits of a healthy marketplace.

Colaner: The dystopias I most worry about involve explainable artificial intelligence (sometimes abbreviated XAI). Digital technology introduces any number of potential harms, but if we live in a world controlled by algorithms that no actual human or group of humans is able to understand or control, we will face an existential threat to our species.

O'Brien: Kentaro Toyama, the author of Geek Heresy: Rescuing Social Change from the Cult of Technology, says that far from saving humanity, technology tends to amplify human faults. Do you agree?

Smith: It certainly can. One example that comes to mind is what some call "algorithmic bias." Algorithms that decide whether someone's face matches a recorded image in a database, or algorithms that pick out which applicants are most likely to default on a loan based on their residential ZIP codes, have been shown to be racially discriminatory. This happens because those algorithms are "trained" on datasets that reflect a host of discriminatory practices from the past. So human faults—institutional and individual—get replicated in how machines make decisions about the world.
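
To illustrate the mechanism Smith describes, here is a minimal, hypothetical Python sketch. The incomes, ZIP codes, and decision rule are all invented for illustration, and no real lending system works exactly this way. A toy model "trained" on synthetic loan decisions that were historically biased by ZIP code ends up denying an applicant because of ZIP code alone, even at an identical income.

```python
# A minimal, hypothetical sketch of how historical bias gets baked into a model.
# All data and rules here are invented for illustration.
import random
from collections import defaultdict

random.seed(0)

# Synthetic "historical" loan decisions: approval depended partly on ZIP code
# (a stand-in for a proxy of a protected attribute), not just on income.
def past_decision(income, zip_code):
    threshold = 50_000 if zip_code == "A" else 70_000  # ZIP "B" held to a higher bar
    return income >= threshold

history = [(random.randint(20_000, 100_000), random.choice("AB")) for _ in range(10_000)]
labels = [past_decision(income, zip_code) for income, zip_code in history]

# "Train" the simplest possible model: record the majority historical outcome
# for each (income band, ZIP code) combination.
approved = defaultdict(int)
seen = defaultdict(int)
for (income, zip_code), outcome in zip(history, labels):
    band = (income // 10_000, zip_code)
    seen[band] += 1
    approved[band] += outcome

def model(income, zip_code):
    band = (income // 10_000, zip_code)
    return approved[band] / seen[band] > 0.5  # predict the historical majority

# Two applicants identical in every respect except ZIP code:
print(model(60_000, "A"))  # True: approved
print(model(60_000, "B"))  # False: denied; the past bias now lives in the model
```

Nothing in the training step is written to discriminate; the pattern arrives entirely through the biased historical labels, which is why auditing training data matters as much as auditing code.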

Quinn: I agree with Lord Acton: power tends to corrupt. Since technology enhances human power, technology can make accomplishing good, or evil, easier. Technology is also not value-neutral. Technology is developed to accomplish a goal, and depending upon the goal, the technology may be fundamentally beneficial or harmful. For example, I would agree that the technology behind deepfake pornographic videos certainly has amplified our ability to do selfish things that harm other humans.

Colaner: Waiting for technology to save us is certainly never a strategy to consider, although I am a bit more optimistic that we will have new tools that can help us do good things that we never imagined possible. The most important example is climate change. It is obvious by now that we should not hope that humanity will curb its habits enough to stop climate change. But digital technology—and specifically how technology is brought to market by innovative, scientifically minded businesses—could actually change things. This could include stopping or slowing climate change or even allowing humanity to thrive in the midst of a changing climate. I say this while fully aware that climate change would never have been an issue except for the advent of certain technologies, but that does not change my point.

O'Brien: The COVID-19 pandemic has redefined so much. Has it redefined ethics? What should we learn from past pandemics?

Smith: COVID-19 hasn't redefined ethics as much as it has shown how—at key moments—ethics is about crafting solutions that balance different core principles at once. This isn't easy. Should citizens' freedom be restricted in order to limit movement across communities? Can human subjects consent to being deliberately infected with the novel coronavirus for the sake of speeding up research for a vaccine? Should we allow companies to set high prices for therapeutic drugs? These issues have surfaced before with other diseases, so I think we are equipped to answer them; however, COVID-19 is a "perfect storm" given its death rate, its level of infectiousness, and the fact that we are living in such an interdependent, global economy.

Quinn: It's easy to fall into the trap of thinking we're the first generation trying to figure it all out. Studying history, literature, and philosophy is a great way to realize that people haven't changed all that much in the past 2,000 years. That's why I'm a big believer in having general education requirements to ensure that all students, even those in the professional schools, are exposed to the liberal arts. Here at Seattle University, all our undergraduate students are required to complete a general education curriculum equivalent to one full academic year of coursework. I think this is why employers tell me that they value our graduates, who have not just a strong understanding of the fundamentals of their discipline but also the ability to see the "big picture."

I've been doing some reading about the Spanish flu outbreak of 1918–19, and it's easy to find parallels with the COVID-19 pandemic. Communities had difficulty accepting the severity of the outbreak. They debated whether it was better to keep children in school, and they lifted closure orders too soon. Some churches argued they had a constitutional right to remain open, and some people made a similar argument against the mandatory wearing of facemasks. So, to answer your question: no, COVID-19 hasn't redefined ethics. It has simply highlighted the difficulty that individuals and communities have in choosing to do the right thing when "doing the right thing" means making sacrifices now with an expectation, but not a guarantee, that these sacrifices will improve the common good in the future.


Nathan R. Colaner is Senior Instructor of Management and Managing Director of the Initiative in Ethics and Transformative Technologies at Seattle University.

Michael J. Quinn is Professor of Computer Science, Dean of the College of Science and Engineering, and Executive Director of the Initiative in Ethics and Transformative Technologies at Seattle University.

Jeffery Smith is Professor of Management, Frank Shrontz Chair in Professional Ethics, and a member of the Oversight Committee for the Initiative in Ethics and Transformative Technologies at Seattle University.

John O'Brien, President and CEO of EDUCAUSE, is the author of "Digital Ethics in Higher Education: 2020," EDUCAUSE Review 55, no. 2 (2020).

© 2021 Nathan R. Colaner, Michael J. Quinn, Jeffery Smith, and John O'Brien. The text of this work is licensed under a Creative Commons BY-ND 4.0 International License.