AI is here to stay. How can we, as educators, accept this change and use it to help our students learn?
"Thinking begins," John Dewey suggested, "in what may fairly enough be called a forked-road situation, a situation which is ambiguous . . . . As long as our activity glides smoothly along . . . there is no call for reflection. Difficulty or obstruction in the way of reaching a belief brings us, however, to a pause."Footnote1
We—higher education faculty, staff, and leaders—used to believe that the college classroom was where thinking began, where we challenged students to think deeply and critically about complex issues with no simple solutions. We proudly embraced the fact that our courses were difficult and that they made students pause to think differently about themselves and the world.
Until now.
Over the last eighteen months, we have discovered that the traditional education model is broken. Since its release in November 2022, ChatGPT (which I use as a proxy for the current class of artificial intelligence [AI] tools built on large language models [LLMs]) has aced everything from the SAT to the first year of Harvard. Indeed, ChatGPT can instantly spit out an essay comparable to one written by an academic researcher.[2] Our students can now glide smoothly along, secure in the knowledge that they can complete, with the press of a button, just about any assignment their professors give them.
I therefore want to suggest that we are at a fundamental crossroads in higher education—our own forked-road moment. We must fundamentally rethink how teaching and learning are done in college and university classrooms if we are to make AI a catalyst rather than a replacement for deep learning.
The Cheating Apocalypse
The first and most important step in this process is to move through the stages of grief and accept that the traditional educational paradigm is over. I do not say this glibly. Since the rise of mass schooling at the turn of the twentieth century, education has been regarded as a knowledge-transfer process, with assessments (tests, essays, oral exams, etc.) demonstrating to what extent the transfer of knowledge from instructor to student had been successful. Students' performance on these assessments was assumed to reflect their proficiency.[3]
ChatGPT has made this paradigm obsolete. If students' performance is AI-generated, there is no meaningful connection between their performance and their proficiency. We can no longer deny the reality that students' submitted work may not reflect what they know (or don't know). So, grieve we must. We cannot be angry at the AI companies for launching products like ChatGPT or at our students for using them, and we cannot bargain our way out by thinking we can catch "cheaters." LLMs are "stochastic parrots" whose probabilistic outputs are effectively unique every time, which means there is no reliable way to prove that a student cheated. Indeed, OpenAI took down its own detection system last summer, and if OpenAI can't do it reliably, neither can we. But we can't sink into a deep depression, resigning ourselves to the belief that students will use ChatGPT for anything and everything.[4]
Ethan Mollick, associate professor and co-director of the Generative AI Lab at the Wharton School of the University of Pennsylvania, calls this the "homework apocalypse."[5] I believe the problem goes even deeper than that. It's not just that students see ChatGPT as an easy shortcut to doing their assignments (which they already do). And it's not just that grades, therefore, become unmoored from and unreflective of students' skills and knowledge (which they will). This moment portends that higher education will become an inauthentic spectacle, a charade of teaching and learning.[6] Why should students listen to a lecture, pore over a reading, or struggle to articulate their point of view in a paper when AI can do all of this (and much, much more) for them? Once faculty truly realize this new reality, a vicious downward spiral takes over: We will develop AI-generated assessments that students will answer with AI and which we, in turn, will grade with AI. Why bother putting in the work when students don't either?
If you think I'm being overly dramatic or have just drunk the techie Kool-Aid, you haven't been paying attention. Estimates of students' AI use range anywhere from 40 percent to 90 percent.[7] I did an informal, anonymous end-of-semester survey of my students this past spring, and 80 percent said they were using AI in some form across their college classes. Yet, they reported, only half of their professors were aware of this.
The next question, then, becomes "why?" Why are students simply offloading their work ("cheating") to AI at such high levels?
The Nature of Cheating
The word "cheating" has highly negative connotations. Cheating—whether in the stock market, at a friend's poker game, or on a final exam—seemingly tears at the very fabric of our sense of fairness and how we think society should work. However, it's slightly more complicated than that. Let me offer a brief thought experiment to explain.
Imagine I wanted to remove a huge tree in my front yard. I obviously wouldn't push it over with my bare hands; maybe I'd just get my axe and start chopping. But I'd quickly realize that this was a major project, so I'd stop and look up some tutorials on YouTube and grab my chainsaw. I could probably make progress, but soon enough, I'd likely realize that the project was just too big and complicated to do correctly on my own. (I'm not a lumberjack!) Thus, I'd hire a professional tree removal service to do the job quickly and effectively.
Now, let's change the scenario slightly.
Imagine I was a student who wanted to write a paper for a college course. I obviously wouldn't write it with my bare hands; maybe I'd just sit down at my computer and start typing. But I'd quickly realize that this was a major project, so I'd stop and look up academic articles about the topic. I could probably make progress, but soon enough, I'd likely realize that the project was just too big and complicated to do correctly on my own. (I'm not an expert in this area!) Thus, I'd hire a professional writing service to do the job quickly and effectively.
All of us, I hope, would agree that I did the smart thing in the first scenario and that I cheated in the second scenario. The question is: What's the difference?
The difference is this: In the first scenario, my goal was something extrinsic to my sense of self, something I needed to accomplish (a "product," a "performance") to get to my real goal of a cleared front yard. In the second scenario, though, my goal was something intrinsic to my sense of self, something that embodied who I was (a "process," a "proficiency").
Let me spell this out clearly. When we see a goal as just a task that must be done—something that's not important to or a part of our identity—there is no such thing as "cheating." Think about it this way: My neighbor would never claim I "cheated" by calling a tree removal service. My wife would never claim I "cheated" by ordering an Uber rather than driving us to the airport. A journal editor would never claim I "cheated" if I used Excel to figure out the correlation coefficient and standard error of data I analyzed. The tree removal service, Uber, and Excel are all just tools. The point is that using whatever tools I have at my disposal—even if I give up complete "ownership"—is a normal and acceptable way to accomplish a particular task.[8]
Dear reader, you may want to sit down for this next part.
A college course is just a particular task that many students feel they need to get through to get to their real goals (whether that goal is a career or a party). The task is extrinsic—a product, a performance, or a checklist. It's just not that important to students' sense of identity. Therefore, they use whatever tools they have at their disposal to complete the task, even if that means giving up ownership and letting ChatGPT write their assignments for them. (There is nothing new here, by the way. We have long known that 10–20 percent of students "contract" their assignments to commercial services. AI has simply supercharged the issue.[9])
However, we want students to see college as an opportunity to enhance their critical thinking skills, cultural competence, civic engagement, and so much more. We want the work to be intrinsic—a process, a proficiency. So, if students give up their ownership and let ChatGPT do their work, we call it cheating.
But this way of thinking also clarifies why we—and not our students—are the problem. Research shows that when students see their work as extrinsic and a "performance," they are "more likely to perform academic dishonesty than students who held mastery approach goals."[10]
In other words, we have never been that good at showing our students how and why college matters. We assign papers, quizzes, and end-of-semester presentations, secure in the knowledge that we know why the work matters, hoping that our students will, sooner or later, realize it as well. Until ChatGPT came along, most students had no choice but to do what we told them. For most students, there was—to use my earlier metaphor—no tree removal service they could call to get that tree out of their front yard. They had to do the hard work themselves.
Now they don't.
The Way Forward
So now what?
In one respect, we already have a partial answer. Over the last thirty years, there has been a dramatic shift from a teaching-centered to a learning-centered education model. High-impact practices, such as service learning, undergraduate research, and living-learning communities, are now common and widely embraced because they help students see the real-world connections of what they are learning and make learning personal.[11]
Therefore, I believe we must double down on a learning-centered model in the age of AI.
The first step is to fully and enthusiastically embrace AI. Many are already successfully stumbling toward strategies for leveraging AI's ability to serve as a real-time, adaptive, and ubiquitous tutor, mentor, and assistant. For example, I made ChatGPT my formal teaching assistant across all of my undergraduate classes, teaching students how to use it ethically and effectively for brainstorming, generating thesis statements, outlining, and getting feedback.[12]
The second step is to find the "jagged technological frontier" of using AI in the college classroom. I do not mean this in terms of what AI seemingly can and cannot do, as AI leaders believe that multiple technological leaps are still ahead of us.[13] I mean this in terms of what AI makes possible for us now as instructors.
Let me put it this way. If college is about helping students to think critically, then the research is clear that we need to do three specific things in the classroom: foster dialogue, engage in authentic instruction, and provide individualized mentorship. We have already moved into a learning-centered model where dialogue and authentic high-impact practices have become commonplace. Yet, and this is the key, researchers note that "mentoring may serve in a catalytic capacity" for such critical thinking.[14]
Mentorship is exactly where mass education has always fallen short. How am I supposed to mentor a room of twenty or two hundred students individually? I can't.
Until now.
ChatGPT can serve as a real-time ubiquitous tutor and mentor. My end-of-semester survey revealed that 85 percent of my students found ChatGPT to be helpful or extremely helpful. One student said, "I'm not the best writer, so I try and use ChatGPT to guide me in the right direction and help me lay out my essay." Another wrote, "I use ChatGPT to give me ideas when I struggle with what to write."
If I can't be a mentor to each one of my students, ChatGPT can.
Let me be clear that this is brand-new territory for everyone; my students and I are making this up on the fly, and it takes an incredible amount of time to figure out how to do it well. But we must do it if we want to keep alive our vision of higher education as a place where learning and thinking still occur.
Conclusion
We are indeed at a forked-road moment, one that will define whether higher education flourishes or falters. Therefore, let me return to John Dewey for one final piece of advice. Right after Dewey explained his concept of thinking and the importance of that moment of pausing, he suggested this: "In the suspense of uncertainty, we metaphorically climb a tree." In so doing, Dewey explained, we gain a "more commanding view of the situation" that helps us to think more broadly and clearly about the issue in front of us.
We might all need to climb a tree right now to better understand the situation in front of us as we attempt to rethink and reimagine higher education. Indeed, it is incumbent on all of us to make that climb as we stare at and ponder the uncertainty of the coming crossroads.
Maybe I shouldn't have called that tree removal service.
Notes
1. John Dewey, "What Is Thought," in How We Think (Boston: D.C. Heath & Co., 1910), 11.
2. Leopold Aschenbrenner, "From GPT-4 to AGI: Counting the OOMs," in Situational Awareness: The Decade Ahead (self-pub., 2024), 16; Catherine A. Gao et al., "Comparing Scientific Abstracts Generated by ChatGPT to Real Abstracts with Detectors and Blinded Human Reviewers," npj Digital Medicine 6, article no. 75 (April 2023).
3. David Tyack and Larry Cuban, Tinkering Toward Utopia: A Century of Public School Reform (Cambridge, MA: Harvard University Press, 1997); Robert Glaser, Naomi Chudowsky, and James W. Pellegrino, eds., Knowing What Students Know: The Science and Design of Educational Assessment (Washington, DC: National Academies Press, 2001); Nicholas C. Soderstrom and Robert A. Bjork, "Learning versus Performance: An Integrative Review," Perspectives on Psychological Science 10, no. 2 (March 2015): 176–199.
4. Owen Kichizo Terry, "I'm a Student. You Have No Idea How Much We're Using ChatGPT," Chronicle of Higher Education, May 12, 2023; Emily M. Bender et al., "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?," in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT Conference, 2021), 610–623; Mark Hachman, "OpenAI's ChatGPT Is Too Good for Its Own AI to Detect," PCWorld, July 26, 2023.
5. Ethan Mollick, "The Homework Apocalypse," One Useful Thing (blog), July 1, 2023.
6. Dan Sarofian-Butin, "Higher Education Must Wrestle Harder to Escape ChatGPT's Death Grip," Times Higher Education, August 15, 2023.
7. "4 in 10 College Students Are Using ChatGPT on Assignments," Intelligent, last modified February 27, 2024; "Productive Teaching Tool or Innovative Cheating," Study.com, accessed January 2023.
8. Tamara B. Murdock and Eric M. Anderman, "Motivational Perspectives on Student Cheating: Toward an Integrated Model of Academic Dishonesty," Educational Psychologist 41, no. 3 (2006): 129–145; J. Adam Carter, "Autonomy, Cognitive Offloading, and Education," Educational Theory 68, no. 6 (December 2018): 657–673.
9. Philip M. Newton, "How Common Is Commercial Contract Cheating in Higher Education and Is It Increasing? A Systematic Review," Frontiers in Education 3 (August 2018): 67.
10. Megan R. Krou et al., "Achievement Motivation and Academic Dishonesty: A Meta-Analytic Investigation," Educational Psychology Review 33, no. 2 (2021): 427–458.
11. Robert B. Barr and John Tagg, "From Teaching to Learning—A New Paradigm for Undergraduate Education," Change: The Magazine of Higher Learning 27, no. 6 (November 1995): 12–26; George D. Kuh, High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter (Washington, DC: Association of American Colleges and Universities, 2008), 28–29.
12. Ethan Mollick and Lilach Mollick, "Assigning AI: Seven Approaches for Students, with Prompts," The Wharton School Research Paper, June 21, 2023; Dan Sarofian-Butin, "ChatGPT Is My Co-Pilot," Times Higher Education, January 31, 2024.
13. Fabrizio Dell'Acqua et al., "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality" (working paper, Harvard Business School, Boston, 2023); Aschenbrenner, "From GPT-4 to AGI," 16.
14. Philip C. Abrami et al., "Strategies for Teaching Students to Think Critically: A Meta-Analysis," Review of Educational Research 85, no. 2 (June 2015): 275–314.
Dan Sarofian-Butin is a Full Professor in, and Founding Dean of, the Winston School of Education & Social Policy at Merrimack College.
© 2024 Dan Sarofian-Butin. The content of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.