The release of ChatGPT and similar AI tools that generate content including text, images, and audio has prompted both excitement and apprehension among leaders, faculty, students, and others in higher education.
Over her many years teaching European history, Margaret Hilliard had been generally open to incorporating new approaches to teaching, certainly more so than those colleagues who bristled at any suggestion that their teaching methods could benefit from updates, including those rooted in technology. But when ChatGPT was released, Hilliard felt unprepared and deeply unnerved. Her courses had long featured essays as a central element, not only as a way for students to demonstrate their understanding of the course material but also as a regular activity to help students improve their writing. Hilliard stressed the importance of thinking broadly about historical events as well as the necessity of being able to clearly articulate those ideas. With ChatGPT, the dynamic in her courses changed, essentially overnight. Her students were suddenly producing longer essays that were better organized, grammatically impeccable, and always on time. At the same time, though, many of the essays were conceptually flimsy, lacking the kind of broad thinking she tried to draw out of her students, and they often included outlandish factual inaccuracies.
Hilliard spoke to her colleagues in the history department, as well as faculty who taught literature and sociology, and all were seeing the same changes in student assignments. A design instructor told her that he was wrestling with how to deal with student work that he believed was being generated by image-based AI. Meanwhile, campus leaders had begun issuing statements, policy updates, and other guidance about the use of generative AI tools for academic work, but those efforts seemed always to be a step behind the technology and how it was being used.
Talking directly and candidly with her undergraduate students—and conferring at length with her graduate students and TAs—Hilliard undertook to reorient her teaching to try to preserve the value of the essay without being naïve about the reality that generative AI tools were here to stay and would only get better over time. For some assignments, she required her students to use AI to create a first draft of a short essay and then critique that draft, looking for errors, omissions, and opportunities to improve the arguments. This enabled a different kind of learning, forcing students to consider carefully the prompts they fed into the AI tools, and it helped remove the stigma of using AI in education. For other essays, she told her students to write their own drafts unaided and then use AI to generate an essay on the same topic. Students would then compare the two, seeking to understand how and why they overlapped and diverged; this process proved particularly beneficial for students whose skills with organization and sentence structure needed attention. She knew, of course, that a few students would simply use AI as a crutch and flout the honor code, but such students existed before ChatGPT. For Hilliard, it felt a bit like the Wild West of education, but she found that pulling AI tools into the learning process rather than vilifying them opened opportunities for different kinds of learning and avoided much of the panic that surrounded these new technologies.
1. What Is It?
Artificial intelligence has long operated behind the scenes in many of the technologies we use every day, but with the introduction of ChatGPT in November 2022, generative AI took the world by storm, capturing the imaginations and stoking the fears of broad swaths of users. Within its first 60 days, the platform grew by 9,900% to reach 100 million users. When the 2023 academic year began, the ChatGPT platform experienced another surge, with 180.5 million unique visitors worldwide during the month of August alone.Footnote1 Artificial intelligence came to dominate conversations and left many people asking: What is generative AI?
While the term artificial intelligence has taken on many meanings over the years, generative AI refers to systems that create content resembling something produced by humans. The best-known tools create written content, but as the technology has evolved, generative AI tools now can create text, images, sound, or even entire applications. Generative AI is increasingly integrated into various products and services—an estimated 77% of all global companies are either using AI or exploring its use in their business operations.Footnote2 In many cases, these functions are so intricately woven into the fabric of a product that users might not even be aware of them. ChatGPT and other generative AI tools are versatile and rapidly evolving, developing into multidimensional tools that are challenging norms and disrupting diverse sectors, including education. An important part of the discussion about generative AI is what it is not. It is not sentient. It lacks awareness and cannot perceive or understand human thought or emotions.Footnote3 Its operations are based purely on patterns in the data used to train the models.
2. How Does It Work?
Generative AI revolves around neural networks: complex structures inspired by the human brain, with interconnected artificial neurons that process information and learn from data. Whereas rule-based models make decisions according to rules supplied by developers, generative AI tools review large amounts of data to construct an evolving set of internal rules that guide the generation of new content. Generative AI operates by predicting the next item in a series. When generative AI writes a paragraph, for example, it constructs sentences by predicting each successive word. Over time, with lots of practice and lots of data, the system gets better at making those predictions.
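The predict-one-item-at-a-time loop described above can be sketched in a few lines of Python. The probability table here is invented purely for illustration; a real model learns billions of such statistics from its training data and predicts tokens rather than whole words, but the generation structure is the same.

```python
import random

# A toy next-word probability table (values invented for illustration).
# A real model learns statistics like these from massive training datasets.
NEXT_WORD = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a sentence by repeatedly predicting the next word."""
    words = [start]
    while words[-1] in NEXT_WORD and len(words) < max_words:
        choices = NEXT_WORD[words[-1]]
        # Sample the next word in proportion to its learned probability.
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat down"
```

Because each next word is sampled in proportion to its probability rather than chosen deterministically, repeated calls can produce different sentences, which is one reason generative AI output is generally not repeatable.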
Large language models (LLMs) are trained on massive textual datasets, including books, articles, and websites, to learn the intricacies of human language. The datasets used to train ChatGPT, for example, were developed using public information on the internet, licensed third-party content, and information supplied by platform users and human trainers. Patterns are identified within the training data through statistical analysis as the machine learning model detects recurring sequences of data, such as words, phrases, or patterns of pixel values in images, and associates them with specific outcomes or contexts. By discovering and modeling these patterns, generative AI can produce new content, ensuring consistency within the context while introducing creativity and novelty. Combining LLMs with natural language processing (NLP) models, which handle tasks such as language translation and sentiment analysis, results in content that mirrors human-generated material.
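A toy version of the pattern-detection step described above can be demonstrated by counting which words follow which in a corpus and converting the counts to probabilities. The two-sentence corpus below is a stand-in for the web-scale data real LLMs are trained on, and the word-pair statistics are the simplest possible form of the recurring sequences those models learn.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for web-scale text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word: the simplest
# form of recurring-sequence statistics in training data.
follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

# Turn raw counts into conditional probabilities: P(next | current).
model = {
    word: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
    for word, counts in follow_counts.items()
}

print(model["the"])  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```

Scaling this idea up to billions of parameters, longer contexts, and tokens instead of whole words is, very roughly, what LLM training does.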
3. Who's Doing It?
The use of generative AI is rapidly growing. Students are increasingly using generative AI tools for essays, research papers, and other written assignments. Faculty members are leveraging generative AI for creating custom learning materials, quizzes, and assessments, and they can also use generative AI to provide timely and accurate feedback to students. Librarians are using generative AI for creating text summaries or discovering insights in their collections. Instructional designers are using generative AI for rapidly iterating on created materials. For administrators, generative AI aids decision-making and communication by summarizing documents and meetings as well as creating documents. Beyond textual tools, generative AI is being used to create visual artifacts and to generate or debug computer code. Software companies are incorporating generative AI into their products, as with Adobe Photoshop's Generative Fill and Microsoft 365 Copilot.
4. Why Is It Significant?
Generative AI is suitable for a wide range of applications. There are stand-alone applications—such as chatbots, customizable agents, and virtual assistants—in which generative AI is the primary feature. In other applications such as content generators, creative tools, and everyday tools like Microsoft Word and Google Docs, generative AI is essentially invisible and just one feature among many. Part of the reason for this proliferation is that generative AI is becoming accessible to developers through cloud-based services, open-source frameworks, and a growing number of AI tools that make it easy to implement generative AI solutions.
By allowing individuals and businesses to create, design, and innovate in ways that were previously difficult or expensive, generative AI has the potential to be transformative in many ways, in various professional and educational contexts. Free (or inexpensive) tools that can quickly create content that otherwise requires human skill and time might alter the landscape of what certain jobs look like and the qualifications needed for those roles. In education, the ability for any student to generate written work that is difficult or impossible to discern as having been produced by technology can fundamentally change the kinds of activities that have long been the hallmarks of how students learn and demonstrate mastery of learning material.
5. What Are the Downsides?
The mismatch in the pace of technological development and the readiness of our social, ethical, and legal frameworks raises concerns about the responsible use of generative AI. It necessitates the development of robust guidelines, policies, and ethical considerations to ensure that this technology is harnessed for the greater good while minimizing potential risks. Training generative AI by using content from the internet means that significant amounts of material that is racist, sexist, or violent must be managed so that it doesn't infect the tool's output. Generative AI's ability to produce human-like content raises the risk that plagiarism and academic dishonesty will become more challenging to detect, necessitating the adoption of advanced detection tools or a shift in policy. This can also be addressed by fostering a culture of academic integrity and educating students about the ethical use of AI tools.
Because these tools can appear to magically provide the correct answers to questions, some users will trust the output without question, even though the tools sometimes generate false or fabricated content. This misplaced confidence creates opportunities for those who use technology to do harm, such as through deepfake images or videos. For those inclined to verify the accuracy of generative AI output, the tools do not always cite sources or explain the reasoning behind the generated content. Meanwhile, the output is difficult to cite as a source because it is not repeatable.
Generative AI tools are accepting larger amounts and more types of data, and it can be difficult to know whether the tools are retaining and using data for other purposes. This uncertainty risks exposing data that should not be shared outside of an organization, including personally identifying information and other private data. The ownership of both the data that the tools are trained on and the output is currently being litigated. Several lawsuits are pending against companies for using data without permission. Depending on how these cases are resolved, generative AI tools might be required to remove some data or exclude certain datasets, potentially causing the tools to be less accurate. Meanwhile, the output generated by AI tools has been ruled as ineligible for U.S. copyright because it is not authored by a human.Footnote4 However, a U.S. Supreme Court ruling from 1884 could provide a different argument for those who are interacting with generative AI tools as a "copilot" to create new content.Footnote5
Many of the companies developing generative AI tools have not provided information regarding how the tools are trained and how safety tests are conducted. Each company has its own set of guidelines for what kinds of content are appropriate in terms of ethical, moral, and equity considerations, but this information is often not made clear to the users of these tools. While multiple governments are developing policies and guidelines for detecting AI-generated content and rooting out algorithmic discrimination, this guidance has not yet been finalized.
Another downside is the environmental impact of these tools. The carbon footprint includes training the model, running the model, and operating data centers. It is estimated that creating GPT-3 "consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide equivalent, the equivalent of 123 gasoline-powered passenger vehicles driven for one year."Footnote6
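The quoted comparison can be sanity-checked with simple arithmetic: dividing the total emissions by the number of vehicle-years implies a per-vehicle figure close to the roughly 4.6 metric tons of CO2 commonly estimated for a typical passenger car per year. The variable names below are ours.

```python
# Figures quoted above for training GPT-3.
total_co2e_tons = 552  # tons of CO2-equivalent emitted
vehicle_years = 123    # gasoline-powered vehicles driven for one year

# Implied annual footprint per vehicle.
per_vehicle = total_co2e_tons / vehicle_years
print(f"{per_vehicle:.2f} tons CO2e per vehicle-year")  # 4.49 tons CO2e per vehicle-year
```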
6. Where Is It Going?
In response to privacy concerns, organizations will begin to purchase or develop their own generative AI tools that do not share data beyond the organization. A college or university could purchase a tool for its faculty, staff, and students that has access to the institution's data but does not share the data beyond the institution or with certain user groups. As generative AI tools are provided with larger, cleaner datasets, larger LLMs will become better at more tasks. Some of these tasks include providing image-to-text services that could be used to make online courses more accessible by describing images used in the course, better online captions for audio and video, and language translations that are more accurate and localized. Several vendors have already integrated generative AI tools into their products to provide students with tutoring and support in areas such as coaching through math problems and the writing process.
Generative AI tools are emerging that will help faculty curate content for their courses. Some of these tools will build a learning module based on the course learning objectives and other prompts. Other tools can be used to develop assessment activities that are linked to the course content and learning objectives. Future developments may include tools that monitor students' performance and suggest adjustments to course content and assessments.
Another expectation is that generative AI tools will be further integrated into many everyday applications, including learning management systems, common productivity applications such as document and spreadsheet tools, and image editors. It will become difficult to use these common applications without AI tools providing suggestions and guidance.
As the workforce changes, how we prepare students will also need to change. We will need to teach students how AI in their fields will impact their work. This includes determining the source of data and understanding how algorithms are currently being used. Then we will need to teach students how to work with AI so that they can use these tools as a "copilot" rather than being replaced by them.
7. What Are the Implications for Higher Education?
Generative AI promises to give educators the ability to provide deeper learning and a more personalized experience along with reduced workload, but this will require focus on what and how we want students to learn and how they can obtain the tools to thrive in an environment in which AI is ubiquitous. The ability to rapidly generate many versions of material from different viewpoints allows students and faculty to explore ideas in novel ways. For example, an image could be synthesized around a given concept based upon a student's understanding. This image could be compared with images created by other students to discuss what commonalities and differences are present. This capability is a strength of generative AI, giving all students, not just those with artistic skills, the ability to communicate concepts visually. It is incumbent on institutions to promote AI literacy among students and faculty and to provide guidance on how to use generative AI tools effectively and ethically, which can improve the experience of students and faculty alike and facilitate learning.
For students, feedback is critical to success. Generative AI can assist with many of the techniques of effective feedback, two of which are that feedback should be specific and timely.Footnote7 Faculty can use generative AI to rapidly craft specific feedback for students. Faculty can also leverage AI to teach students how to evaluate their own work, which involves them more deeply in the process of learning.
Institutions need to evaluate how students are being taught and evaluated in light of a tool that can generate content. What is the role that an essay plays in demonstrating mastery, and what is an appropriate role for generative AI in creating that essay? What is the role of homework? What constitutes ethical use of generative AI, and how much uniformity does the institution want when it comes to the use of generative AI tools? Significant investments in professional development and support will be needed to help faculty and administrators effectively integrate generative AI into teaching and administrative processes. Meanwhile, the absence of clear policies and guidelines regarding the use of AI in higher education could lead to inconsistencies in its implementation. Educational institutions should develop comprehensive strategies addressing areas including ethical considerations, data privacy, and intellectual property rights to ensure responsible and equitable use of generative AI.
EDUCAUSE 7 Things You Should Know About… is a long-running series covering emergent and influential technologies and practices in higher education. All publications in the series are available in the EDUCAUSE library.
EDUCAUSE 7 Things You Should Know About… is a trademark of EDUCAUSE.
1. Graham Duggan, "We've Been Using AI for Years — We Just Didn't Know It," CBC Docs, September 19, 2023 (last updated September 28, 2023); "107 Up-to-Date ChatGPT Statistics & User Numbers," Nerdynav, November 27, 2023; Anna Tong, "Exclusive: ChatGPT Traffic Slips Again for Third Month in a Row," Reuters, September 7, 2023.
2. Anthony Cardillo, "How Many Companies Use AI?" Exploding Topics, July 24, 2023.
3. Elizabeth Finkel, "If AI Becomes Conscious, How Will We Know?" Science, August 22, 2023.
4. See "Second Request for Reconsideration for Refusal to Register Théâtre D'opéra Spatial," Copyright Review Board, September 5, 2023.
5. See "Burrow-Giles Lithographic Co. v. Sarony," March 17, 1884, Legal Information Institute, Cornell Law School.
6. The Conversation US and Kate Saenko, "A Computer Scientist Breaks Down Generative AI's Hefty Carbon Footprint," Scientific American, May 25, 2023.
7. Marianne Stenger, "5 Research-Based Tips for Providing Students with Meaningful Feedback," Edutopia, August 6, 2014.
Heather Brown is Instructional Designer at Tidewater Community College.
Steven Crawford is the District Director for the Maricopa Center for Learning and Innovation at Maricopa County Community College District.
Kate Miffitt is Director for Innovation at California State University, Office of the Chancellor.
Tracy Mendolia-Moore is Manager of 3D Education Technology Innovations at Western University of Health Sciences.
Dean Nevins is Executive Director of Information Technology at Santa Barbara City College.
Joshua Weiss is Director of Digital Learning Solutions at Stanford Graduate School of Education.
© 2023 EDUCAUSE. The text of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.