3 Pitfalls of AI


Artificial intelligence in higher education isn't without its risks. Here are three possible trouble spots for the use of AI. You can also view a short video on the promising aspects of AI in higher education.


Elana Zeide
Associate Professor of Law
University of Nebraska

Elana Zeide: Hi, I'm Elana Zeide. I'm an Associate Professor of Law at the University of Nebraska. Here are three pitfalls of emerging artificial intelligence used in higher education.

The first is automated composition applications. You've probably seen the automated grammar and composition tools I'm talking about, such as Grammarly, Hemingway, and ProWritingAid, and these tools are also being built into many word processors. They're like souped-up spell checkers: they don't just check spelling and some grammar, but also composition, trying to detect things like passive phrasing or repeated words. In some cases these are fantastic. They can be powerful learning tools if students pay attention to the suggestions and try to incorporate that learning into their own work, but they can also become a crutch. In some cases they do more than merely copy edit. For example, there are systems that remix learning materials and write student essays. You also see students over-relying on these tools and accepting nearly every suggestion, even when that makes their work nonsensical. It's a matter of being attentive to the line between providing help and reducing students' motivation to improve their own skills.

The second pitfall is the increasing use of AI for grading and evaluation. The problem here is the famous human-in-the-loop problem: teachers over-rely on automated suggestions without sufficiently understanding, or taking the time to review, AI-generated evaluations. Part of this is the human tendency toward automation bias, the inclination to trust results simply because a computer generated them. But educators are also not always in the best position to review automated outcomes and assessments, because the results are often complicated, they're based on data that is unknown, and it's difficult to go behind the output into the analytics because of the black-box nature of AI. So it's hard for the human evaluators, the teachers, to know when something might have gone wrong, and often there's no clear visible signal that something has gone wrong. Generally, the reason people put these tools in place to begin with is that they save time, so if an educator has to go through everything again, it defeats much of the point of adopting an AI tool in the first place.

The third pitfall is something I suspect you all know about already: automated online proctoring tools. I'm specifically concerned about the artificial intelligence used in the automated monitoring systems that are supposed to detect when students cheat. Companies describe this as monitoring students and test takers, but it's more accurately described as surveillance, often involving audiovisual input, some digital input, and then AI analysis that is supposed to detect cheating. The vendors are actually explicit that they are not trying to detect cheating, that's the teacher's job; rather, they flag suspicious behaviors. However, these tools don't have sufficient evidence supporting their accuracy or efficacy. They're based on the questionable assumption that there is some normal profile of student behavior that artificial intelligence can identify, and that deviations from that baseline indicate something suspicious is happening, rather than just the foreseeable diversity of student physiology and environments. During the pandemic, this led to unacceptably high rates of false flags for innocent behavior. In many cases, schools implemented these tools without giving teachers enough training or oversight, and without giving students ways to understand what's going on, whether there is a safety net in place if they are falsely flagged for innocent behavior, or how they might challenge problems they run into with the technology. The main problems here are over-reliance on automation, minimal transparency and explainability, and in some cases insufficient evidence supporting accuracy and efficacy, both overall and across marginalized populations.

These tools really do have great promise. It's partly a matter of being aware of their limitations and not ignoring them in the ambition to try something new and the hopefulness of implementing some exciting, and exciting-sounding, tools.