Generative Artificial Intelligence and Education: A Brief Ethical Reflection on Autonomy

Given the widespread impacts of generative AI, looking at this technology through the lens of autonomy can help equip students for the workplaces of the present and of the future, while ensuring academic integrity for both students and instructors.

[Image: a robot hand typing on a laptop and a human hand writing on a piece of paper. Credit: Vectorium / Shutterstock.com © 2024]

What was a novel technology in the fall of 2022 is now a central concern in education. What should we think about generative artificial intelligence (AI)? Should student use of AI be encouraged or discouraged? Should we prepare students for a workplace that is quickly adopting AI? Should we teach students to take a measured approach, using tools to augment, but not replace, human capacity? Within the framework of applied ethics, which addresses the practical considerations of real-world situations, we adopt a rather unconventional approach to considering technological effects on humans. Applied ethics has been used widely in bioethics, particularly through the four principles of autonomy, non-maleficence, beneficence, and justice.[1] For the purposes of this article, we focus on the first of these principles, autonomy—the notion that people are "self-governing agents."[2] Specifically, we explore issues related to informed decision-making, disclosed authorship, freedom of human creativity, and freedom of rejection, all of which are critical to the intersection of higher education and AI.

Many excellent uses of AI will advance human knowledge. These uses can be as multifaceted as performing tasks that would take enormous amounts of human time or combining information in novel ways to develop new theories of knowledge (for example, in exploring the human genome). Some researchers are beginning to think through the implications of considering AI and autonomy together, but that thinking may not keep pace with AI development and societal adoption.[3] We therefore propose a set of questions for considering autonomy in relation to student AI use, questions that are helpful today and adaptable in the near future (see table 1 below).

Situating Higher Education and AI Use

While the use of generative AI is new, particularly at the societal, non-specialist level, the motivation for adopting tools that make everyday life easier is as old as humanity itself: humans have long created inventions intended to lighten their work. In this next great leap of technological capability, AI poses new kinds of questions for higher education and the workplace because "AI is a branch of computer science concerned with building smart machines that are capable of performing and even outperforming human jobs."[4] Much has changed since the advent of AI in the 1950s, as the technology has proliferated across many sectors and disciplines. "Advancements in artificial intelligence have led to the development of ChatGPT, a revolutionary technology that generates human-like responses to natural language prompts."[5]

The AI found in ChatGPT and in many commonly used apps may increase student motivation and engagement because it lets students work with tools they already use daily; however, its advent has autonomy implications for students and their instructors. For now, we can think of AI as an external tool that is either consulted by an end user on a computer or embedded as an add-on to a mobile app. This very limited mode of use, however, is likely to change rapidly.

Student Use of AI: Issues of Production and Consumption

AI use is multidirectional: students use the technology both as consumers of educational content and as producers of learning artifacts. Students interact with content differently when they produce it than when they consume it, and distinguishing between these actions is important for reasons related to autonomy.

Students who enroll in an online, hybrid, or face-to-face course generally interact with a learning management system (LMS). As consumers of information, students engage with educational content that was traditionally developed by content experts and/or the instructors themselves. Now, however, content—including written and audiovisual materials—might be generated using AI. If students are viewing educational content that was produced partly or entirely by AI, do they have a right to know? AI also changes the dynamic of the learning experience: Do students understand the logic and underlying mechanisms of how AI content is generated—that is, do they have enough foundational knowledge to understand how something is generated by a large language model (LLM)? One key issue related to autonomy is teaching students how to recognize bias, which can enter any content. If students are unaware of how or from where content is generated, they may lack the skills needed to question what they are consuming.

Conversely, students demonstrate their learning through the production of academic artifacts. AI can be used to produce credit-bearing academic work, such as essays and discussion board postings.[6] This means AI allows students to circumvent the conventional production of learning artifacts. And while an instructor might use tools of detection (such as the AI score in Turnitin) or deterrence (such as assignments produced on paper while students are in a classroom), educational evidence and assessment may have to be rethought alongside the ethical implications of the (mis)use of AI. This is an issue of autonomy because the process of education has relied heavily on what students have learned and can apply at the end of a course, program, or degree. Given the ability to enter a prompt into AI and receive a paper in a matter of minutes, will students take on the same accountability for learning? A recent study suggests that academic misconduct is on the rise due to a number of social factors.[7]

Student plagiarism and the use of technology to bypass the learning process are nothing new, and the motivations are known: "The variables found to be of the greatest importance by the students as causes of plagiarism relate to time management issues and social pressure, in addition to the lack of clarity and incomplete policies that regulate plagiarization."[8] The problem with student use of AI, particularly as it pertains to autonomy, is the surrender of agency to the outputs of AI. When pressed for time or for performance, students might choose, as a quick fix, to cede their agency as free learners to what AI offers. Institutional policies thus find themselves in an era in which they must adapt quickly and comprehensively to changing conditions. Plagiarism, even as it was understood just a few years ago, has entirely different nuances today. For example, prior to the development of generative AI, the following definition and moral principle were widely accepted and understood: "Students who submit under their own name papers that were actually composed by someone else are cheating. This is a moral issue that obviously belongs in the category of academic integrity."[9] Today, however, the question of authorship—and what or who constitutes an author—is rapidly changing. When producing academic outputs for credit-bearing courses, is there an obligation to credit AI?

The question of authorship might begin with a notion of ownership, but today authorship blurs between entering parameters (or a prompt) into AI and then taking ownership of what it produces. Who is the author—the human who provides the parameters or the AI that generates the content? Traditional plagiarism detectors simply identify which content was copied from another source and which was composed by the student. With the advent of generative AI, however, detection relies on an algorithmic probability that the examined text was composed using AI. Complicated further by writing aids and text suggestions, the line between human authorship and machine-generated text is increasingly blurry. The same is true of how students view "their" work. As AI proliferates, what belongs to the student and what is generated by AI will become increasingly difficult to differentiate. For example, if a student provides the AI with search and execution parameters and then refines the outputs into a desired product, does the ensuing text belong to the student or to the AI?

This should be taken a step further. Ethically, the current state of generative AI muddles the line between acceptable use of technology and academic cheating. As Tanveer, Hassan, and Bhaumik indicate, "Education policy implementation for AI is still in its adolescence, but it is likely to increase rapidly over the next decade."[10] We suggest that students should be fully aware of when AI should not be used, when AI can be used with citation, and when AI can be used to fulfill part or all of an assignment. When AI is not properly used or cited, institutions can rightly allege cheating.

Why might institutions eschew, or even obscure, clear policies toward AI use? The decline in enrollments means that colleges and universities are competing for fewer students.[11] The implications for academic integrity from the use or misuse of AI might compel institutions to impose the same stringent consequences as they do for plagiarism, which could result in academic probation and the unenrollment of already scarce revenue-producing students. Further complications arise when AI detection wrongly flags the use of writing-enhancement or correction tools. In a recent incident, a college student's work was flagged as an "AI violation" for the use of Grammarly, which later led to the loss of a scholarship.[12] Situations like this might explain why some colleges and universities are discontinuing the use of AI detectors, with some indicating that such use violates FERPA because students' sensitive information might be exposed to outside sources.

Students should learn how to use AI because many of their future jobs will require knowledge of it. Yet students should also be able to perform on their own the tasks they have learned. AI can be a valuable tool in many fields of knowledge, but using AI to bypass writing essays and doing the hard work of learning should be discouraged. Students should be taught which uses of AI are appropriate and which are not.

Instructor Use of AI: Issues of Assessment and Content

The issues related to instructor autonomy and AI use are not simply the other side of the coin. Many of the issues for students and instructors are quite similar, particularly when it comes to consumption and production. What differentiates them, however, is a sense of leadership: Do instructors have an additional responsibility to disclose their own AI use and to make an effort to detect AI in student work where it is not permitted?

Instructors who use AI to compose course content or assessment materials face a disclosure question similar to the one students face, involving the same issues of disclosed authorship and informed decision-making. Should students have the right to refuse course content generated with AI? As AI becomes more sophisticated, for example, it can provide feedback on student work; will instructors have a responsibility to inform their students of this use of AI?

When instructors evaluate student coursework for academic credit, they have a basic duty to assess the work for plagiarism. To carry out this duty, many institutions equip instructors with plagiarism detectors, which, until the dawn of freely available generative AI, were essentially straightforward. Now, what one instructor might consider plagiarism via AI, another might consider simply part of the assignment. Academic freedom over course content is thus taking on nuances that did not exist until recently. If colleges and universities provide plagiarism- and AI-detection software, are instructors also given the freedom to specify what qualifies as acceptable use, rather than a blanket acceptance or rejection of AI? Further, may faculty use differentiated assessment when students are allowed or encouraged to use AI for one portion of an assignment but must produce their own work in another?

Students should be taught how to use technology, including generative AI, because they will likely use it in the future workplace. Failing to do so may impair students' abilities to compete for jobs after graduation. Some domains now seek recent graduates and interns who have been educated in the uses of AI. For example, investment firms now want students who have been trained in the uses of AI for finance decisions.[13] Researchers describe the phenomenon colorfully: "AI-empowered finance and economy has been a sexy and increasingly critical area."[14] Other domains, such as healthcare and software engineering, may also have compelling reasons to hire individuals who have experience using AI.

Though the adoption of AI, with its gains in efficiency, cost savings, productivity, and output, will very likely cause serious workforce reductions in the coming years, students find themselves at colleges and universities that have vague or nonexistent policies toward AI. This paradox can cause conceptual confusion among students and instructors alike because there is disagreement about when and where AI should be used. Such policy confusion can, in turn, raise issues of academic freedom over content. For example, some institutions or programs might strongly encourage or mandate the use of AI, or the teaching of AI use, in some or all classes. If so, do faculty have the right to reject the curricular use of AI without administrative interference or retribution?

The issues of instructor use of AI, freedom over content and the assessment of student work, and best practices for equipping students for a world of rapidly changing work are far from settled. Institutions should begin drafting their responses to AI in clear language and in ways that help students understand why AI is permitted or prohibited in varying contexts.

Table 1. Autonomy, AI, and Higher Education: Some Questions for Discussion

Informed Decision-Making

Definition: The ability to make decisions based on accurate and relevant information.

Application to AI in higher education: Students and instructors know when use of AI is permitted and not permitted in the production of academic work.

- Student producing credit-bearing content with AI: If using a technological tool, app, or website, does the student know what, if any, of the outputs are AI-driven? Is the student aware of the different types of generative AI, including generative tools, grammar-correcting tools, and other writing aids?
- Student viewing course materials produced by AI: Is the student aware that portions of the instructional content, feedback mechanisms, or communication are AI-produced?
- Instructor evaluating student academic work produced by AI: Does the instructor have access to tools to distinguish AI-generated content from non-AI content? Has the student cited the generative AI tool used to produce credit-bearing work?
- Instructor producing course content with AI: Does the instructor indicate what portions of the content are AI-produced?

Disclosed Authorship

Definition: Consumer awareness of authorship without misrepresentation or impersonation.

Application to AI in higher education: Authorship of AI-generated content and human-produced content is clearly denoted.

- Student producing credit-bearing content with AI: Does the student know or have access to the various inputs from the AI model to assess for bias? If the instructor has assigned work to be completed with the aid of AI, is the student free to use little or no AI to complete the assignment?
- Student viewing course materials produced by AI: Is the student exposed to the cultivation of expertise and nuanced views or to derivative information gleaned from sources that may or may not be academically vetted?
- Instructor evaluating student academic work produced by AI: Is the instructor led to believe that the content is generated by the student and not AI? Has the student claimed authorship of materials that were generated partially or fully by AI?
- Instructor producing course content with AI: Are students led to believe the content is generated by the instructor and not AI? Has the instructor claimed authorship and expertise of materials that were generated partially or fully by AI?

Freedom of Human Creativity

Definition: The freedom to produce without interference.

Application to AI in higher education: AI is given second priority to human production and consumption of academic work. The purpose of AI is as a secondary tool to human life, not a means to produce better and more accurate future AI products.

- Student producing credit-bearing content with AI: Is the student able to produce similar content without the aid of generative AI?
- Student viewing course materials produced by AI: Is the student able to achieve better or more advanced outcomes with the use of AI-generated content? What does the student do with such content after learning it? How is the AI-generated content further refined for better human learning?
- Instructor evaluating student academic work produced by AI: Is the instructor able to assess AI-generated and student-generated outputs with differentiated assessment? Are separate grades possible for different forms of production?
- Instructor producing course content with AI: Is the instructor able to produce similar content without the aid of generative AI?

Freedom of Rejection

Definition: The freedom to reject information or outcomes in favor of other alternatives.

Application to AI in higher education: AI can be abandoned, rejected, or delayed in the production of academic work.

- Student producing credit-bearing content with AI: Does the student have control over the AI outputs to reject some of the content?
- Student viewing course materials produced by AI: If the student believed the content would be developed and delivered by the instructor and not AI, is the student able to petition the school for a partial or full refund?
- Instructor evaluating student academic work produced by AI: Is the instructor able to reject an assignment and compel the student to reproduce the assignment without AI? Is the instructor subject to onerous bureaucratic forms to allege cheating if AI was not permitted in an assignment?
- Instructor producing course content with AI: If a school stipulates the use of AI in the curriculum, is the instructor able to reject curricular use of AI without administrative retribution?


Autonomy and the Academic Use of AI: A Valuable Tool or Deception?

We are at the very beginning of a rapid technological transformation caused by generative AI, a transformation that will affect most, if not all, areas of human life, including higher education.[15] AI, like any technology, can be used consciously, responsibly, and thoughtfully. Untold leaps forward will be possible with AI. In particular, we hope that AI will drive a conversation about how human time is valued: AI might be used to automate processes, searches, and mundane chores, opening up possibilities for human flourishing.

With issues of autonomy, particularly self-governance, the motivations for using AI should shape how institutions, students, and instructors regard its use. For coming inventions, human flourishing, and scientific breakthroughs, the motivation to adopt AI should be to use it as a tool to better our lives. However, if AI is used to circumvent learning, such as by automatically generating an essay that a student simply does not want to write, the motivation is quite different: it is deception. Students are motivated to plagiarize for varied reasons. By the same token, a range of reasons might motivate students to use AI when directed not to do so, but if the motivation is to deceive, it is an academic problem.

The principle of autonomy stresses that we should be free agents who govern ourselves and make our own choices. The principle applies to AI in higher education because it raises serious questions about how, when, and whether AI should be used in varying contexts. Although we have only begun asking questions related to autonomy, and many more remain, we hope this article serves as a starting place for considering the uses of AI in higher education.

Notes

  1. Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics (New York: Oxford University Press, 1994).
  2. Sarah Buss and Andrea Westlund, "Personal Autonomy," Stanford Encyclopedia of Philosophy (2018).
  3. Wanshu Niu, Wuke Zhang, Chuanxia Zhang, and Xiaofeng Chen, "The Role of Artificial Intelligence Autonomy in Higher Education: A Uses and Gratification Perspective," Sustainability 16, no. 3 (2024): 1276.
  4. Muhammad Tanveer, Shafiqul Hassan, and Amiya Bhaumik, "Academic Policy Regarding Sustainability and Artificial Intelligence (AI)," Sustainability 12, no. 22 (2020): 9435.
  5. Dinesh Kalla, Nathan Smith, Sivaraju Kuraku, and Fnu Samaah, "Study and Analysis of ChatGPT and Its Impact on Different Fields of Study," International Journal of Innovative Science and Research Technology 8, no. 3 (2023): 828.
  6. Heather Brown, Steven Crawford, Kate Miffitt, Tracy Mendolia-Moore, Dean Nevins, and Joshua Weiss, "7 Things You Should Know About Generative AI," EDUCAUSE Review, December 6, 2023.
  7. Alicia McIntire, Isaac Calvert, and Jessica Ashcraft, "Pressure to Plagiarize and the Choice to Cheat: Toward a Pragmatic Reframing of the Ethics of Academic Integrity," Education Sciences 14, no. 3 (February 2024).
  8. Hanaa A. Elshafei and Tamanna M. Jahangir, "Factors Affecting Plagiarism among Students at Jazan University," Bulletin of the National Research Centre 44, no. 1 (December 2020): 4.
  9. Sandra Jamieson and Rebecca Moore Howard, "Rethinking the Relationship between Plagiarism and Academic Integrity," Revue Internationale des Technologies en Pédagogie Universitaire 16, no. 2 (January 2019): 83.
  10. Tanveer et al., "Academic Policy Regarding Sustainability and Artificial Intelligence (AI)."
  11. Jessica Blake, "Doubts About Value Are Deterring College Enrollment," Inside Higher Ed, March 13, 2024.
  12. Jeanette Settembre, "College Student Put on Academic Probation for Using Grammarly: 'AI Violation'," New York Post, February 21, 2024.
  13. Paige McGlauflin and Joseph Abrams, "Wall Street's Next Leaders Are Embracing AI and Bullish About Their Financial Future, According to Goldman Sachs's Summer Intern Survey," Fortune, September 25, 2023.
  14. Longbing Cao, "AI in Finance: A Review," SSRN, August 6, 2020.
  15. Brown et al., "7 Things You Should Know About Generative AI."

Viktoria A. Strunk is Core Faculty in the Department of Teaching & Learning at American College of Education.

James E. Willis, III, is Assistant Professor of Practice for Religion at the University of Indianapolis.

© 2025 Viktoria A. Strunk and James E. Willis, III. The content of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.