Particularly for new technologies that disrupt long-standing practices and cultural beliefs, the work of carefully and intentionally developing effective policies can pay significant dividends.
Policy creation might not be exciting, but it is essential, especially when something new arrives on the scene and an organization does not have an easy or clear means of modifying existing policies to accommodate the new development. One such arrival happened in late November 2022 with generative AI, and a year later, many colleges and universities are still scrambling to figure out an AI policy that works at the institutional, departmental, and course levels.[1]
New technologies tend to follow a familiar hype cycle, particularly in higher education.[2] However, the possibilities, problems, and paradigms that generative AI tools such as ChatGPT, Google's Bard, and Microsoft's AI-powered Bing represent are many, and they can touch every part of the institution and its stakeholders.
Higher education focuses on building knowledge and investing in the written word, whether it's scholarly research or demonstrations of learning such as essays, research papers, theses, or dissertations. The underlying assumption is that such work could not be significantly faked, aside from paying another individual (as with student paper mills) or taking the work of others without attribution. Generative AI upends much of that baseline and will in fact challenge many institutions' academic integrity policies, because those policies often prohibit only copying from websites or acquiring papers from other individuals. Generative AI will also introduce other ethical and procedural considerations throughout the institution.
In this article, we provide guidance and recommendations for approaching the development of institutional policy around using generative AI tools in higher education. We aim to help leaders in higher education institutions work more effectively to establish a pathway to a policy that includes various stakeholders and ultimately reduces the risk that members of the institution will take policy matters into their own hands, resulting in problems for the entire institution.[3]
This piece evolved from our institutional roles (instructional design, writing support, and research and library support), our individual work at our respective institutions, and a half-day workshop on generative AI policy development that we facilitated for the NERCOMP 2023 Annual Conference. The workshop included a student panel, followed by focused conversations around several topics related to generative AI: ethics and plagiarism, bias, instructional technology, opportunities, and threats. For the final part of the workshop, we facilitated a mini design sprint for policy development.
The Need for Policy
The number of ways that generative AI tools can be used is part of what has drawn so much attention to them. In higher education, generative AI has been used for press releases about students' deaths, for graduation speeches, and as a tutor, to name a few examples.[4] Given the nature of knowledge work throughout higher education, institutions need to determine where generative AI tools are appropriate and where they pose ethical or legal challenges.
One of the most pressing areas for guidance is the discourse around plagiarism and how to detect it in an era of AI-generated content. Institutions are under pressure (whether organic or manufactured) to respond in some manner to the rise of generative AI as a widely available tool. The first wave of attention centered on plagiarism concerns and worker-replacement fears ("Students will never write original papers again" and "Robots are replacing everyone"). For those in higher education, the drumbeat of fearmongering about plagiarism has succeeded in capturing faculty and administrative attention.
Early on, some assumed that generative AI work submitted by students could be identified with generative AI plagiarism checkers. Turnitin's documentation states, "Our AI Writing detector's false positive rate is less than 1% for documents with 20% or more AI writing (our tests showed that in cases where we detect less than 20% of AI writing in a document, there is a higher incidence of false positives)."[5] However, after Turnitin's AI detector was used in practice across 38 million papers, the risk of false negatives and false positives proved higher than claimed.[6] Turnitin and other companies claim greater odds of correct detection than reality seems to bear out. Additionally, further research is revealing biases in what AI-plagiarism detection tools flag as AI generated, including an increased likelihood of inaccurately flagging multilingual learners' work as having been generated by AI.[7] Beyond the problem of false accusations, this environment creates an untenable situation for students, who must somehow defend themselves against a machine that cannot show its work and offers only a statistical guess. Additionally, students are using tools such as QuillBot, which paraphrases text and substitutes synonyms, to circumvent AI detection tools.
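The scale is what makes even a "small" error rate untenable. The back-of-the-envelope sketch below uses the vendor-stated 1% false positive rate and the reported 38 million papers from above; the share of papers containing no AI writing is our illustrative assumption, not a reported figure:

```python
# Back-of-the-envelope estimate of false accusations from an AI detector.
# The false positive rate is the vendor's own stated upper bound; the share
# of fully human-written papers is an illustrative assumption.

papers_reviewed = 38_000_000    # scale at which the detector has been used
false_positive_rate = 0.01      # vendor-claimed upper bound on false positives
share_fully_human = 0.90        # assumption: 90% of papers contain no AI text

human_papers = papers_reviewed * share_fully_human
expected_false_flags = human_papers * false_positive_rate

print(f"Fully human-written papers: {human_papers:,.0f}")
print(f"Papers wrongly flagged as AI: {expected_false_flags:,.0f}")
# Under these assumptions, roughly 342,000 students could face
# unfounded accusations of using AI.
```

Even under the vendor's most optimistic error rate, hundreds of thousands of students could face unfounded accusations, which is part of what makes a detection-centered policy so risky.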
As original student writing becomes increasingly difficult to verify and confirm, a policy built around AI detection might be outdated just as it is implemented. What is needed instead is an agile committee, representative of diverse campus needs, that can review these issues on an ongoing basis and craft a more useful policy, one that both protects students and chooses the right tools for navigating technological change.
This is but one issue that higher education needs to navigate when crafting policy around generative AI. Other issues include but are not limited to the following:
- The role of generative AI in visual and textual outputs of the institution, be they for marketing, for social media, or in reports
- How and where faculty can use generative AI in the creation of course content, assignments, and feedback or assessments
- Addressing the embedded biases of the data and outputs[8]
- The impact and challenge of information literacy[9]
- The environmental impact of generative AI, including greenhouse gases[10] and water usage[11]
- The impact on the workers[12] needed to run generative AI[13]
Starting with the End
In many areas of life, including instructional design, an ideal place to start is with the results—what you want the outcome to be or do. Codify what success looks like for you, your collaborators, your stakeholders, and the final institutional arbiters; doing so ensures that no one's time feels wasted and makes progress visible toward what might otherwise remain an abstract notion of achievement. As AI continues to develop at a rapid pace, it is important to pause and do enough research to formulate the questions you ultimately seek to answer. Below are some questions to get you started in thinking about this policy:
- Whom is the policy going to focus on? Students, faculty, staff, administration, third-party vendors, contractors, etc.?
- Can the same policy apply across the institution, or will different policies be needed for different parts of the organization?
- Will the policy stand on its own, or will there be room for adjustments (for example, will students encounter variations depending on whether instructors—under the notion of academic freedom—want to encourage or discourage certain uses of generative AI for the purposes of teaching)?
- What can or will be the implications of violating the policy?
- What methods of accountability can be built into the policy, given that generative AI text can be difficult to detect?
- Will there be differences between institutionally affiliated generative tools and those that are available to anyone?
Perhaps you are still in the fact-finding and idea-generating stage; if so, you might need to dedicate time specifically to deciding what will constitute success for this effort. You may decide that success for your group means developing and writing down measurable goals. Ultimately, all involved should agree on the specific, measurable outcomes that must be met for your group's work to be complete.
It is natural to feel a sense of urgency to take action, and it might be tempting to rush through this step. Although this process will not necessarily be easy, it will help carve out exactly what you are going to do in this phase. One of the trickiest parts about generative AI is that it has continued to change and shift over the past 12 months and will likely continue to do so, especially as other forms of generative AI (image, audio, video, slides, etc.) become increasingly available. Therefore, no matter what your goals are, it is equally important to name and anticipate early on a mechanism for revisiting, adjusting, and updating the policy. Think in terms of not getting it perfect the first time through but having a process to iterate.
Identifying Stakeholders
Generative AI has the potential to touch every part of the institution. Because it is difficult to imagine an area it won't reach, it's useful to think about all the institutional stakeholders who will need guidance around using these new tools. Start by considering all the user groups and their iterations at your institution. One approach is to conduct an audit of your campus; if your institution has an organization chart, that is a great foundational document. Some people and groups will likely not be represented, so some detective work will come into play. This is a great opportunity to break down institutional silos and examine your own invisible biases about the roles others play on your campus.
Identify non-classroom student- and faculty-facing roles, which could include instructional designers, writing specialists, librarians, and academic support services. Then move on to people whose work shapes the campus experience as a whole. This could include staff working in IT (e.g., information security, academic computing, or web services), institutional communications, student life, and even alumni relations. Be mindful that just because a group is not readily visible to you does not mean that it will not be affected by, or have an impact on, your policies. We are ourselves a good example of coming together to collaborate on AI policy: the diversity of our professional perspectives has not only surfaced different viewpoints but also created opportunities to truly understand the nuances of just how many ways AI is affecting our institutions.
An important early step is to involve your organization's office for accommodations—that is, the person or people who oversee the institution's support for people with learning, cognitive, and ability differences. Historically, and again with generative AI, faculty and institutions often look to ban a given technology without genuinely considering why or how it might benefit students with disabilities or fit into a larger strategy of universal design for learning. Maintaining awareness of ADA compliance and of the ways generative AI can improve learning can happen only through intentional conversation with accommodation services on your campus.[14]
Consider, too, your contingent communities, such as part-time educators (adjuncts) and part-time staff. How are they included in the community in general? Are they part of the same communication patterns as full-time employees, such as being included in campus-wide emails? Will you need to set aside time to engage them in discussion separately, and will they need more training and professional development as you enact the policies you create? Look for allies who can serve as conduits to get people involved. For instance, adjuncts are often on campus for limited amounts of time during the day or might work elsewhere during typical business hours. When you are scheduling conversations, be mindful that you will need to create opportunities for those who work nontraditional schedules to be involved.
Generative AI raises questions for the whole institution, well beyond how it will be treated and used within classrooms. The administrative side of the campus will be affected as well. Vanderbilt University, for example, issued an apology after it was discovered that a 300-word email (sent from the university's Peabody Office of Equity, Diversity, and Inclusion regarding the February 13, 2023, mass shooting at Michigan State University) was composed using AI.[15]
Some questions to consider for other entities at your institution may include the following:
- Will the institution's upper management consider using generative AI to surveil employees' work, assess efficiency, or generate employee evaluations?
- Should your policymaking efforts account for this scale of institutional use?
- How will AI be addressed in human resources, especially the recruiting and hiring process?
- In what ways will the community outside the campus be impacted by your policies?
- Do you have community partnerships in which you offer students an opportunity to put their theoretical learning into practice? Do you have an obligation to educate your students on the uses of generative AI as part of that collaboration?
- Will your institution be viewed as being a policy and position leader on the subject of generative AI?
- What will be reasonable and equitable means of challenging outputs by generative AI?
Finally, one of the most visible and yet frequently overlooked communities is the students. The value of engaging students in determining this policy cannot be overstated; crafting a policy without their input could result in one that feels out of touch and irrelevant to them. At the NERCOMP workshop, the highlight was the student panel, where students demonstrated their own deep and sophisticated thinking about generative AI and its roles in their lives. In particular, the conversation should extend beyond plagiarism to include student perspectives on how other institutional areas (e.g., faculty, marketing, communications) should be using it. Many students are aware of and interested in how generative AI is currently being employed in a wide variety of industries. Students might be reading about the way Paul McCartney used generative AI to restore John Lennon's voice to create a "new" Beatles track,[16] or they may be interested in how generative AI is being used to dramatically improve recycling programs.[17] These interests might reflect general curiosity (to be encouraged!), but they might also tie into students' future employment goals.
The consequences of not including a diverse array of groups in these conversations far outweigh any inconvenience you may experience in finding times and ways to gather input throughout the policy development process. Our workshop included people from many of the groups mentioned above, and a key takeaway was just how much we learned from each other. Many of us realized that what seems obviously important to someone in one role may not even occur to someone working in a different position.
Models for Developing Policies
Depending on how an institution is structured and how much its leaders want to include different groups within the organization, any of several models can be deployed to develop policies around generative AI. The following options are useful approaches and, to some degree, can be mixed and matched to meet the institution's needs and structures.
Task Force Model
Put out a call to action to form an inclusive policy task force representing all aspects of the institution. Alternatively, a smaller, nimbler team could engage with different stakeholders across the institution, creating a template policy for generative AI and adjusting it for each area in conversation and collaboration with the relevant stakeholders.
For instance, the policy for students and faculty is likely to look different from that of the marketing department, but realistically all of these groups should have AI policies. This might seem self-evident for students, but there are important questions to consider even for marketing, such as whether using generative AI image tools to represent students would adhere to the institution's ethos or marketing ethics. Those are the kinds of questions that are relevant to marketing but that students and faculty may not need to be involved in.
Governance Model
Some institutions have governance models in which faculty and staff play pivotal roles in the development and creation of institutional policy, particularly policies that directly affect the classroom and students. This can be a useful model for gathering a range of voices throughout the organization while maintaining a clear pathway and record (e.g., committee notes) of how decisions are made and implemented. Yet these processes may struggle to move at a pace that yields timely decisions in a changing environment. Given the rate of change in generative AI, committees might craft policies based on assumptions that are no longer valid.
Design Sprint Model
Commonly used in IT and project management, the design sprint is another effective model. We used it successfully in our NERCOMP 2023 workshop, and its steps can be adapted for institutional or departmental use. An effective design sprint includes six steps: Understand, Define, Sketch, Decide, Prototype, and Validate.[18] This format of developing a policy based on feedback from the community is more democratic in nature, allowing staff, students, and faculty to chime in with their ideas, thoughts, and recommendations.
- Understand: The first step is to understand how generative AI is currently being used (or isn't) at your institution, the concerns of the community, and the ideal state for AI use. This can be achieved through a listening tour, lightning talks, surveys, or structured conversations.
- Define: Next, the policy team should review all the information gathered in the first step to define their main goals and desired outcomes that reflect the needs of the community.
- Sketch: Each individual on the policy committee should sketch out their own draft policy that meets the definitions outlined previously. The larger committee can then review all the drafts and narrow down the ideas of each into a finalized Solution Sketch.[19]
- Decide: The policy committee will then review all Solution Sketches to decide which version they want to use to move forward with the process.
- Prototype: The prototype can be considered the first draft of the policy to be shared with the community.
- Validate: Return to the community members from the first phase of the design sprint, gather feedback on your policy draft, and make relevant edits. This last stage of the design sprint process involves crafting the final version of the policy to be shared with your institution.
Consultant Model
Some institutions may seek consultants to research, discuss, and implement new policy based on developing industry standards arising across academia.[20] The value of a consultant is that an external voice with relevant knowledge and experience can help folks think differently about the challenge. However, the opposite can also happen: folks may dismiss the consultant because of the costs involved or out of concern that an outside entity might not understand the institutional context.
Exemplar Model
Another option is to follow the lead of institutions that are publishing their own policies. For instance, Lance Eaton and his students have proposed and posted policies for College Unbound.[21] Starting with one of these policies, the institution can then review, share for input, and adapt as needed. This can save a lot of time in terms of coming up with the initial approach, but it will need to be tailored to the specific needs of the institution and its population. If the policy creation team is reluctant to borrow from another institution, they could use a generative AI tool to draft the initial policy and workshop it across the institution. We used ChatGPT to generate several examples of policies to help get you started crafting your own.
Recommendations for Gathering Voices
To create the clearest and most effective policy, all stakeholders' voices must be heard. Because recruiting volunteers for their insights on a draft policy can sound tedious, consider using both formal and informal routes to get their assistance. It is also important to convey how each stakeholder may be affected by the policy and to emphasize that their experiences and observations are highly valuable to the process. Asking for help drafting a policy document can feel overwhelming to many; these approaches might make the work more engaging:
- Connect with people in mini roundtable discussions or small-group conversations.
- Find help through social media—use polling features to generate feedback and locate allies.
- Have one-on-one conversations with the people you see as your greatest supporters and greatest challengers in this process. Diverse perspectives are necessary to have a policy that works for everyone.
- Allow for enough time in discussions. It can be easy to rush through things, but these discussions will include deep topics and issues that need time to be fully processed.
- Provide collaborative documents for participants to add text, and use the comment feature for questions or additional thoughts.
Institutional knowledge is both an advantage and a disadvantage in this process. Some stakeholders will be obvious, and some people will quickly volunteer. Others may be more challenging to reach because they are uninterested or don't see the relevance to their work. These are important people to bring into the conversation, rather than making assumptions about their thoughts or possible contributions.
Framework for Generative AI Policy Creation
To help institutions in their policy development processes, below are the sections that a generative AI policy should address. Each institution should craft its own policy based on the needs of its community.
- Policy Audience: Whom is this policy for? Is it for the entire institution, faculty, students, staff, departments, third-party vendors, or others?
- Policy Timeline: What is the timeline for implementation? Should that timeline include a review and update cycle after initial implementation?
- Policy Tools: What counts as AI for this policy? Is it focused on all AI or only generative AI? Is it focused on all generative AI or just text-generating AI?
- Academic Integrity Guidelines: If the policy is related to student or faculty work, what are the integrity requirements to make sure academic integrity is upheld?
- Acceptable Use: If AI usage is acceptable, are there any limitations on the amount of usage (e.g., a certain percentage must be individually generated in certain contexts) or purposes for which generative AI may be used?
- Transparency: What practices are in place for communicating generative AI usage throughout the institution?
- Security and Legal Considerations: What concerns need to be addressed concerning privacy, intellectual property, and proprietary knowledge around using external or enterprise generative AI tools? Does endorsing the use of generative AI conflict with any laws such as the General Data Protection Regulation?
- Ethical Considerations: What concerns or responsibilities does the institution have explicitly or implicitly within its mission that conflict with the environmental, human-exploitation, and bias issues related to generative AI?
- Institutional Resources: Which areas of the institution will be committed to supporting, responding to, and implementing uses of generative AI?
- Processes for Policy Violation: If the policy is violated, what are the steps for identifying and addressing it?
Conclusion
Institutional policy development can often be a methodical process that both informs and better unifies an institution's approach to a particular challenge. And yes, it can be tedious. Yet many institutions are unprepared for technological developments, and the delays in catching up have a deep impact on students, faculty, and staff (e.g., the reactive response to the pandemic). Although the generative AI cat is out of the bag, there is still an opportunity to meaningfully guide—through a collaborative effort—how it can best be used by all stakeholders. Such an approach will result in a better learning and working environment for students, faculty, and staff. Taking an iterative approach to AI policy might feel unnatural for those who operate best with exhaustive policy documents, but it just might be the key to successfully navigating this new technological reality.
Notes
1. Susan D'Agostino, "GPT-4 Is Here. But Most Faculty Lack AI Policies," Inside Higher Ed, March 21, 2023.
2. Daniel E. O'Leary, "Gartner's Hype Cycle and Information System Research Issues," International Journal of Accounting Information Systems 9, no. 4 (December 1, 2008): 240–52; Jackie Wiles, "What's New in Artificial Intelligence from the 2022 Gartner Hype Cycle," Gartner, September 15, 2022.
3. Miles Klee, "Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers," Rolling Stone, May 17, 2023.
4. Sam Levine, "Vanderbilt Apologizes for Using ChatGPT in Email on Michigan Shooting," The Guardian, February 22, 2023; Josh Moody, "The ChatGPT Commencement Address," Inside Higher Ed, June 29, 2023; Lauren Coffey, "Harvard Taps AI to Help Teach Computer Science Course," Inside Higher Ed, June 30, 2023.
5. David Adamson, "New Research: Turnitin's AI Detector Shows No Statistically Significant Bias against English Language Learners," Turnitin Support Center, October 26, 2023.
6. Geoffrey A. Fowler, "Detecting AI May Be Impossible. That's a Big Problem for Teachers," Washington Post, June 2, 2023.
7. Andrew Myers, "AI-Detectors Biased Against Non-Native English Writers," Stanford HAI, May 15, 2023.
8. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21 (New York: Association for Computing Machinery, 2021), 610–23.
9. Celeste Kidd and Abeba Birhane, "How AI Can Distort Human Beliefs," Science, June 22, 2023.
10. Bernard Marr, "Green Intelligence: Why Data and AI Must Become More Sustainable," Forbes, March 22, 2023.
11. David Danelski, "AI Programs Consume Large Volumes of Scarce Water," UC Riverside News, April 28, 2023.
12. Karen Hao and Deepa Seetharaman, "Cleaning Up ChatGPT Takes Heavy Toll on Human Workers," Wall Street Journal, July 24, 2023.
13. Billy Perrigo, "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," Time, January 18, 2023.
14. Jürgen Rudolph, Samson Tan, and Shannon Tan, "ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?" Journal of Applied Learning and Teaching 6, no. 1 (January 24, 2023).
15. Levine, "Vanderbilt Apologizes for Using ChatGPT in Email on Michigan Shooting."
16. Nadia Khomami, "AI Used to Create New and Final Beatles Song, Says Paul McCartney," The Guardian, June 13, 2023.
17. Kayla Vasarhelyi, "AI Robotics in Recycling," Environmental Center, University of Colorado Boulder, April 6, 2022.
18. See "Design Sprint Methodology."
19. Ibid.
20. Kevin R. McClure, "Arbiters of Effectiveness and Efficiency: The Frames and Strategies of Management Consulting Firms in US Higher Education Reform," Journal of Higher Education Policy & Management 39, no. 5 (October 2017): 575–89.
21. See "Proposal of Usage Guidelines for AI Generative Tools at CU."
Esther Brandon is Manager of Learning Design & Technology Adoption at Harvard Medical School.
Lance Eaton is Director of Faculty Development & Innovation at College Unbound.
Dana Gavin is Director of the Writing Center at Dutchess Community College.
Allison Papini is Assistant Director/Manager of Research & Instruction Services at Bryant University.
© 2023 Esther Brandon, Lance Eaton, Dana Gavin, and Allison Papini. The text of this work is licensed under a Creative Commons BY-NC-SA 4.0 International License.