Integrating Generative AI into Higher Education: Considerations


Integrating AI into higher education is not a futuristic vision but an inevitability. Colleges and universities must adapt and prepare students, faculty, and staff for their AI-infused futures.

[Image: hands on a laptop keyboard, with a translucent "AI" icon floating above the keys. Credit: Deemerwha studio / Shutterstock.com © 2023]

Generative artificial intelligence (AI) has quickly become a topic of interest and concern across many sectors of society. Government and industry are embracing generative AI. Several reports predict that AI will result in job losses, become essential to some existing jobs, and lead to the creation of new AI-related jobs. One city in Japan is using ChatGPT to help run the government, and there are already several AI applications in the health care industry. Generative AI tools could result in widespread changes to the workforce and the education sector.Footnote1

Generative AI is a particular form of machine learning that takes a set of samples as input and learns from those samples to generate new content.Footnote2 ChatGPT, developed by OpenAI, and Bard, an AI experiment by Google, are examples of generative AI tools trained on massive amounts of text data to create novel, human-like text responses.
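To make that definition concrete, the short sketch below (not from the original article) uses the open-source Hugging Face transformers library and the small GPT-2 model to generate a text continuation from a prompt. The library, model, and prompt here are illustrative choices; much larger models and different interfaces sit behind ChatGPT and Bard, but the basic pattern is the same: a model trained on text samples produces new, previously unseen text.

```python
# Illustrative sketch only: a small, freely available model (GPT-2) stands in
# for the much larger models behind tools like ChatGPT and Bard.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a text-generation pipeline backed by GPT-2, a model trained on a large
# corpus of text samples.
generator = pipeline("text-generation", model="gpt2")

# Give the model a prompt; it generates novel text rather than retrieving a
# stored passage from its training data.
result = generator(
    "Generative AI could change higher education by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```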

The introduction and adoption of generative AI may seem rapid, but the technology is not as new as it is commonly perceived to be. AI-driven tools have been part of our lives and workflows for some time, even if they are less conspicuous than ChatGPT and Bard. For example, the Associated Press has been using AI to automate stories since 2014.Footnote3 Although generative AI has been around for almost a decade, it didn't really take off until "the latter half of 2022 when the technology was put into the hands of consumers with the release of several text-to-image model services like MidJourney, Dall-E 2, Imagen, and the open-source release of Stability AI's Stable Diffusion."Footnote4 More ubiquitous examples of AI applications include autocorrect, grammar check, and suggested email replies. The underlying technologies for these tools may differ, but the results are the same for the general end user: the technology provides automated text suggestions for the user to consider.

While there are many issues surrounding generative AI, such as ethical concerns, copyright and intellectual property questions, and biases within the training data, this article will focus on the integration of generative AI into higher education teaching and learning.

Higher education institutions—recently rocked by the COVID-19 pandemic and fearing the effects of the enrollment cliff—are now faced with a new disruption: generative AI. Colleges and universities have generally been slow to adopt change. In the not-so-distant past, other technological tools have been met with consternation in the classroom setting. For example, calculators were banned from classrooms, and Wikipedia was considered an unreliable source to be avoided. And while calculators and Wikipedia may not be fully integrated into every classroom, they do not draw the same ire as they did in the past. Generative AI is different from these innovations. AI is not a device that can be banned, nor is it a source students can simply be instructed not to use, and its use cannot be discovered with crude plagiarism detection tools. This new technology will be difficult to avoid. It is already being integrated into tools that students and faculty use, such as Grammarly, Google Docs, and Microsoft Word. Springer Nature (a major academic publisher) permits authors to use generative AI as long as they acknowledge it.Footnote5 Generative AI integration may become so ubiquitous so quickly that students may not even realize the tools they use incorporate it. A recent EDUCAUSE QuickPoll survey of higher education stakeholders provides insights into the interest and perhaps inevitability of AI integration in day-to-day institutional work. Most of the respondents (83 percent) believe that "generative AI will profoundly change higher education in the next three to five years," and 65 percent believe "the use of generative AI in higher ed has more benefits than drawbacks."Footnote6

There is a duality in how AI is viewed on many college and university campuses. On the one hand, some higher education officials are eager to adopt AI tools that would assist with student recruitment and enrollment; on the other hand, many faculty and other institutional staff believe the use of generative AI is a type of cheating or a breach of academic integrity.Footnote7 What is more ethical: guiding the use of AI tools or pretending they do not exist?

Ignoring generative AI or banning its use on the academic side of higher education seems naïve and possibly misguided. Shouldn't higher education institutions be preparing graduates to work in a world where generative AI is becoming ubiquitous? In 2022, the United Nations Educational, Scientific and Cultural Organization (UNESCO) recommended that member states "work with international organizations, educational institutions, and private and non-governmental entities to provide adequate AI literacy education to the public on all levels in all countries in order to empower people and reduce the digital divides and digital access inequalities resulting from the wide adoption of AI systems."Footnote8

How will campuses integrate these new tools into their honor codes and academic work? The internet is beginning to fill with recommendations on how instructors can use ChatGPT to update their syllabi and get creative with assignments as well as stories about how students use ChatGPT in nearly every aspect of their lives.Footnote9 Thinking about how these tools can or should be used feels a bit chaotic, but rather than developing academic policies and practices from scratch, campuses should first consider using existing methods and resources. The following are just a few of the methods and resources available today:

  • When evaluating tools and technologies (whether adopting AI tools or attempting to detect their use), consider conducting a technoethical audit of the technologies under consideration. Introduced in 2019 by Daniel Krutka, Marie Heath, and K. Bret Staudt Willet, a technoethical audit is a critical evaluation of the chosen technology. Such an audit explores whether it is ethical to use the technology and what potentially unfavorable outcomes might arise from its use in schools.Footnote10 A technoethical audit is guided by questions like these:
    • How is the environment affected by this technology?
    • Is the creation, design, and use of this technology just, particularly for minoritized or vulnerable groups?
    • In what ways does this technology encourage and discourage learning?Footnote11
  • Making predictions about technology and education is tricky at best, but considering the possible futures of generative AI in education may help educators and campus leaders develop a future-oriented mindset. A recently published article by Aras Bozkurt et al. explores "the promises and pitfalls" of ChatGPT and generative AI and the possible implications of these technologies on the educational landscape.Footnote12
  • AI and Education: Guidance for Policy-Makers, published by UNESCO, includes sections on, among other topics, the use of AI for education management and delivery, learning and assessment, and empowering teachers and enhancing teaching. UNESCO also recently published a quick start guide to ChatGPT and higher education that includes an overview of how the tool works and how it might be used in higher education. The guide also includes a discussion of challenges and ethical considerations.Footnote13
  • As academic policies are revised or adopted, consider what it means for a work to be a student's own. According to the International Center for Academic Integrity, academic integrity goes beyond the basic concept of cheating to encompass six fundamental values: honesty, trust, fairness, respect, responsibility, and courage. Drawing on these values may be helpful when creating or revising policies. Another useful resource is an article series by Loleen Berdahl and Susan Bens about academic integrity. The second article in the series discusses how ChatGPT and similar technologies "raise new questions that complicate possible solutions to academic misconduct but may also offer opportunities."Footnote14
  • Consider what generative AI means for assessment. For example, Jered Borup recommends examining "intended learning outcomes and consider[ing] whether better, more authentic assessments could be used instead." Frequent low-stakes quizzes may reduce a desire to "cheat" and provide additional benefits.Footnote15 It's important to note that creating more authentic assessments is typically more labor-intensive, and incorporating this type of feedback and evaluation into courses may require class sizes, teaching loads, or the availability of grading support to be reconsidered.

Given how quickly AI is being embedded into technology tools and workplaces, integrating AI into higher education is not a futuristic vision but an inevitability. Colleges and universities must adapt and prepare students, faculty, and staff for their AI-infused futures. The considerations highlighted in this article are intended to help higher education leaders develop academic policies and practices that enhance the quality of education, improve student outcomes, and foster innovation. AI can also automate administrative tasks, freeing up valuable time for educators to focus on student engagement and critical thinking. Acknowledging AI and its uses in higher education is a crucial, pragmatic step toward equipping students with the skills they need to thrive once they leave our campuses.

Notes

  1. Jessie Yeung and Mayumi Maruyama, "As Japan’s Population Drops, One City Is Turning to ChatGPT to Help Run the Government," CNN, April 21, 2023; Bernard Marr, "Revolutionizing Healthcare: The Top 14 Uses of ChatGPT in Medicine and Wellness," Forbes, March 2, 2023; Kristen Senz, "Is AI Coming for Your Job?" Harvard Business School (website), April 26, 2023; Will D. Heaven, "ChatGPT Is Going to Change Education, Not Destroy It," MIT Technology Review, April 6, 2023. Jump back to footnote 1 in the text.
  2. Eben Carle, "Ask a Techspert: What Is Generative AI?" The Keyword (blog), Google, November 4, 2023. Jump back to footnote 2 in the text.
  3. "Leveraging AI to Advance the Power of Facts," Associated Press (website), accessed July 31, 2023. Jump back to footnote 3 in the text.
  4. Matt White, "A Brief History of Generative AI," Medium, January 7, 2023. Jump back to footnote 4 in the text.
  5. "Tools Such As ChatGPT Threaten Transparent Science; Here Are Our Ground Rules for Their Use," Nature, January 24, 2023. Jump back to footnote 5 in the text.
  6. Mark McCormack, "EDUCAUSE QuickPoll Results: Adopting and Adapting to Generative AI in Higher Ed Tech," EDUCAUSE Review, April 17, 2023. Jump back to footnote 6 in the text.
  7. Scott Jaschik, "Admissions Offices, Cautiously, Start Using AI," Inside Higher Ed, May 15, 2023; Mallory Willsea, "Embrace AI To Boost Your Enrollment Marketing Team's Productivity," Inside Higher Ed, April 27, 2023. Jump back to footnote 7 in the text.
  8. UNESCO, Recommendation on the Ethics of Artificial Intelligence (Paris: The United Nations Educational, Scientific and Cultural Organization, 2022), 33. Jump back to footnote 8 in the text.
  9. Ryan Watkins, "Update Your Course Syllabus for ChatGPT," Medium, December 18, 2022; Susan Svrluga and Hannah Natanson, "All the Unexpected Ways ChatGPT Is Infiltrating Students' Lives," The Washington Post, June 1, 2023. Jump back to footnote 9 in the text.
  10. Daniel G. Krutka, Marie K. Heath, and K. Bret Staudt Willet, "Foregrounding Technoethics: Toward Critical Perspectives in Technology and Teacher Education," Journal of Technology and Teacher Education 27, no. 4 (October 2019): 555–574; Daniel G. Krutka and Marie Heath, "Is It Ethical to Use This Technology? An Approach to Learning about Educational Technologies with Students," Civics of Technology (blog), Civics of Technology, March 18, 2022. Jump back to footnote 10 in the text.
  11. To read about the results of a discussion around some of these questions, see Marie K. Heath, et al., "Collectively Asking Technoskeptical Questions About ChatGPT," Civics of Technology (blog), Civics of Technology, April 23, 2023. Jump back to footnote 11 in the text.
  12. Aras Bozkurt, et al., "Speculative Futures on ChatGPT and Generative Artificial Intelligence (AI): A Collective Reflection from the Educational Landscape," Asian Journal of Distance Education 18, no. 1 (February 2023): 53–130. Jump back to footnote 12 in the text.
  13. Fengchun Miao, Wayne Holmes, Ronghuai Huang, and Hui Zhang, AI and Education: Guidance for Policy-Makers (Paris: United Nations Educational, Scientific and Cultural Organization, 2021); Emma Sabzalieva and Arianna Valentini, ChatGPT and Artificial Intelligence in Higher Education Quick Start Guide (Paris: United Nations Educational, Scientific and Cultural Organization, 2023). Jump back to footnote 13 in the text.
  14. International Center for Academic Integrity (website), accessed July 2, 2023; Loleen Berdahl and Susan Bens, "Academic Integrity in the Age of ChatGPT," University Affairs, June 16, 2023. Jump back to footnote 14 in the text.
  15. Jered Borup, "This Was Written by a Human: A Real Educator's Thoughts on Teaching in the Age of ChatGPT," EDUCAUSE Review, March 21, 2023; Scott Warnock, "Frequent, Low-Stakes Grading: Assessment for Communication, Confidence," Faculty Focus, April 18, 2013; Lukas K. Sotola and Marcus Crede, "Regarding Class Quizzes: A Meta-Analytic Synthesis of Studies on the Relationship Between Frequent Low-Stakes Testing and Class Performance," Educational Psychology Review 33 (2020): 407–426. Jump back to footnote 15 in the text.

Charles B. Hodges is a Professor of Instructional Technology at Georgia Southern University.

Ceren Ocak is an Assistant Professor of Leadership, Technology, and Human Development at Georgia Southern University.

© 2023 Charles B. Hodges and Ceren Ocak. The text of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.