Given the potential ramifications of artificial intelligence (AI) diffusion on matters of diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tool selection and use.
The use of artificial intelligence (AI) tools in higher education is fraught with tension. Ever since OpenAI launched the public version of ChatGPT, generative AI has been compared to COVID-19 for its rapid onset and disruptive impact on society, to the calculator for its potential to change teaching, learning, writing, and searching practices, and to fire for its potential to be either a net asset or a net liability to society. Between the hype about its promise and the panic about its impact on academic integrity and knowledge production, many higher education professionals were caught in a wait-and-see moment in 2023. Will policies govern the use of AI in teaching and learning? How will faculty and staff develop their AI literacy and skill sets, especially as generative AI tools evolve? How should or shouldn't generative AI be used at colleges and universities?
People generally recognize that AI products currently perpetuate the biases inherent in the content used to train them. In August 2023, The London Interdisciplinary School released a video essay about the ways in which AI image generators, such as Midjourney and DALL-E, perpetuate harmful representational biases in images generated for roles such as nurses, drug dealers, CEOs, and even terrorists. Similarly, the Brookings Institution published a series of tests that uncovered a political bias in ChatGPT output.Footnote1 When given a prompt that includes a reference to a cultural heritage, ChatGPT can include harmful stereotypes in its output. GPT detectors have also been found to perpetuate linguistic bias. In 2023, researchers from Stanford University reported that GPT detectors misclassified essays written by non-native English speakers as AI-generated significantly more often than essays written by native English speakers. The Modern Language Association (MLA) and Conference on College Composition and Communication (CCCC) Task Force published a working paper that placed concerns about bias in AI-generated written content in an academic context, acknowledging that the topic will continue to evolve and that more guidance is needed.Footnote2
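At its core, the Stanford finding rests on a simple disaggregation: compare how often human-written work is falsely flagged as AI-generated, broken out by writer group. The minimal sketch below illustrates that arithmetic with invented records; the group labels and flag values are hypothetical for illustration and are not drawn from the study's data.

```python
# Minimal sketch: disaggregating a detector's false-positive rate by writer
# group. All records here are hypothetical; a real audit would use actual
# detector output on essays with known human authorship.
from collections import defaultdict

# Each record: (writer_group, flagged_as_ai) for an essay known to be human-written.
results = [
    ("native", False), ("native", False), ("native", True),
    ("non_native", True), ("non_native", True), ("non_native", False),
]

flagged = defaultdict(int)
totals = defaultdict(int)
for group, is_flagged in results:
    totals[group] += 1
    flagged[group] += int(is_flagged)

for group in totals:
    # False-positive rate: share of human work wrongly flagged as AI-generated.
    fpr = flagged[group] / totals[group]
    print(f"{group}: false-positive rate = {fpr:.0%}")
```

A large gap between the groups' false-positive rates, rather than the overall accuracy number a vendor reports, is what signals linguistic bias.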
At the same time, many academic technology providers have updated their products to include AI, regardless of whether their educational partners have asked for such additions. For example, Turnitin deployed its AI detection feature in the spring of 2023, requiring users to turn off the feature rather than opt in to it. At its 2023 annual user conference, Instructure, the company behind the Canvas learning management system, announced a suite of AI features in its experimentation lab and on its immediate roadmap. And, in early August 2023, Zoom updated its terms of service to include language suggesting that user data would be used to train its AI features. Customer response was "swift and angry."Footnote3 Within days, Zoom clarified that it would not use customer data without first obtaining consent. The company currently promises that it "does not use any customer audio, video, chat, screen sharing, attachments, or other . . . customer content (such as poll results, whiteboard, and reactions) to train Zoom's or its third-party artificial intelligence models."Footnote4
Regardless of whether institutions deliberately adopt AI-native tools as part of their academic technology infrastructure, AI features are making their way into the tools that higher education institutions already use. And decisions about AI tools are no longer made only by academic technology administrators, learning designers, and teaching faculty: the widespread availability of AI in various products means that individuals can make context-based decisions about how and when to use these tools. The result is something of a conundrum. Generative AI is now pervasive, whether it's wanted or not. Many students are using it, whether instructors want them to or not. Using generative AI perpetuates representational biases and raises questions about authenticity and privacy, but avoiding it places administrators, faculty, staff, and students at risk of being deficient in AI skills and literacies.Footnote5
At its core, the current challenge surrounding AI is the widespread availability of AI models that don't require procurement and are designed for general use. Given the potential ramifications of AI diffusion on diversity, equity, inclusion, and accessibility, now is the time for higher education institutions to adopt culturally aware, analytical decision-making processes, policies, and practices around AI tool selection and use.Footnote6 One way to navigate this complex landscape is by adopting an individualized orientation toward emerging technologies such as generative AI and large language models (LLMs), an orientation that entails a deep, reflective examination of the arguments for and against integrating these technologies. Such introspection can help higher education institutions ensure that AI implementations align with their values and goals, ultimately fostering a more inclusive and equitable environment for all stakeholders.
Bias Is Human-Generated
Though anthropomorphizing AI technologies is widely critiqued, people's relationships with AI often mirror the relationships they experience with one another throughout their lifetimes. Observing just how closely AI interactions echo human interactions is fascinating.
In addition to the known biases that are perpetuated by AI technologies, another type of bias emerges from human attitudes toward AI. This bias revolves around how people perceive and judge AI capabilities and applications.Footnote7 For instance, AI enables individuals to be, in effect, in two places simultaneously. Consider a scenario in which a faculty member is scheduled for a mandatory meeting and a prearranged training session at the same time. An AI meeting assistant can bridge this gap by enabling the faculty member to attend both events virtually, capturing the crucial aspects of the meeting while participating in the training. However, the societal valuation of in-person experiences as "superior" reveals an underlying bias. The assumption that in-person engagement inherently "adds value" can inadvertently overshadow the benefits and efficiencies that AI offers, fostering an unconscious bias against fully embracing AI-powered alternatives.
Reflection and Guidance
As interactions with AI impact various aspects of life, from personal tasks to professional endeavors, understanding how to navigate the complexity of AI is essential. AI algorithms influence decisions, shape experiences, and even affect societal dynamics. Now more than ever, it is critical to pause and engage in thoughtful introspection. A self-reflective process allows higher education leaders to navigate the complex capabilities and limitations of AI, enabling decision-makers to harness its potential while safeguarding against its unintended consequences. For example, Amazon built an AI tool to help with its hiring process but scrapped it after discovering that the tool was biased against female applicants.Footnote8 Embedding AI responsibly requires ethical consideration and self-awareness. Applied responsibly, AI tools can enhance learners' lives while preserving personal and professional values and human connections. The sections that follow explore the pivotal role of personal reflection in shaping a beneficial rapport with AI, fostering a future where technology and humanity coexist harmoniously.
Application, Benefits, and Content (ABC) Reflection Questions
The reflection questions that follow are designed to help higher education leaders navigate and foster a healthy relationship with AI as it becomes an increasingly pervasive part of life. Some of the questions invite personal reflection, while others consider the larger organizational and educational context. All of the questions apply a culturally aware perspective rather than a traditional edtech adoption perspective (though the traditional perspective is another useful lens through which to evaluate this moment of rapid AI integration).
Application
- Has the AI application been thoroughly assessed for potential biases?
- When weighed against the accessibility and efficiency benefits, are the biases in the AI application acceptable or concerning?
- Are these biases causing harm or inequity in the AI's outputs or recommendations? (One minimal way to quantify this is sketched after this list.)
- Are adequate steps being taken to mitigate biases and ensure fairness in the output of the AI?
- Might marginalized or underrepresented groups be disproportionately affected by any biases in the AI application?
- Does the AI application contribute to a more equitable and inclusive environment, or does it inadvertently reinforce existing biases?
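One way to move the harm and inequity questions above from impressions to evidence is to compute an adverse impact ratio: the rate at which each group receives a favorable AI outcome, divided by the rate for the most-favored group. The sketch below shows the arithmetic with hypothetical group names and rates; the 0.8 threshold reflects the "four-fifths rule" convention from US employment contexts and is a screening heuristic, not a guarantee of fairness.

```python
# Minimal sketch: adverse (disparate) impact ratio across groups.
# Group names and rates are hypothetical; a real review would use the
# institution's own disaggregated outcome data.

favorable_rates = {
    "group_a": 0.62,  # share of group receiving the favorable AI outcome
    "group_b": 0.45,
    "group_c": 0.58,
}

# Compare each group against the most-favored group.
reference = max(favorable_rates.values())
for group, rate in favorable_rates.items():
    ratio = rate / reference
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule as a red flag
    print(f"{group}: impact ratio = {ratio:.2f} ({flag})")
```

A ratio below the threshold does not prove harm on its own, but it tells decision-makers exactly where the mitigation questions in this list need an answer.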
Benefits versus Biases
- How critical is the AI software to current needs, and does it significantly enhance individual productivity or help users meet their intended goals?
- Has the AI software been thoroughly evaluated for potential biases and for its impact on decision-making?
- Could alternative AI solutions achieve similar benefits while minimizing biases?
- What are the ethical implications of using AI software that may contribute to biases or negative impacts on certain groups or individuals?
- Does the application of the AI software align with the leader's personal values and ethical standards, or should the leader's beliefs about AI and biases be reassessed?
- Does the application of the AI software align with the stated values and ethical standards of the institution?
Content and Context
- In considering the use of AI for production tasks, have specific processes or tasks been identified that can be automated to increase efficiency?
- Is the AI tool suitable for providing accurate explanations and insights in areas that might be challenging to grasp?
- Is the output of the AI reliable and trustworthy in terms of ensuring the quality of the final product or result?
- Are there scenarios in which AI can help optimize resource allocation, such as scheduling, resource utilization, or cost management?
- How can AI be leveraged to provide personalized recommendations for equitable skill development and continuous improvement?
Developing an AI Approach
Many higher education leaders have expressed apprehension about adopting and leveraging AI at their institutions. One reason for this is that little support is available to help decision-makers develop options for integrating AI tools.Footnote9 Many pathways exist. For example, leaders might take a receptive approach, meaning they are willing to integrate AI but have concerns about its uses and long-term outcomes. Or, they might take a proactive approach, thinking more broadly about the role AI may play at their institutions. Further, the decision to adopt a fixed approach or a flexible approach can significantly influence the success of integrating AI into various domains. While a fixed strategy includes well-defined parameters, a more flexible approach accommodates change. Each avenue offers distinct benefits. To make informed decisions about AI in education, leaders can begin by focusing on one specific area where AI could drive transformation. The following sets of questions can help leaders as they consider each approach.
General Considerations
- How can the institution provide comprehensive education and training programs to empower individuals with a solid understanding of AI concepts and technologies?
- How can the institution promote collaboration and interdisciplinary learning among individuals with diverse backgrounds and expertise?
- What specific courses, workshops, or resources can be offered to develop AI proficiency among people in various roles and industries?
- What measures can be implemented to ensure that individuals understand and prioritize ethical considerations when working with AI technologies?
- What mechanisms can be put in place to support ongoing skill and knowledge development as AI advances?
Receptive Approach
- How can the institution creatively consider a wide range of perspectives and input when developing or utilizing AI tools to ensure they promote inclusivity and equity?
- How might the institution gather and leverage user feedback to continuously enhance AI solutions?
- In what imaginative ways can the institution cultivate an adaptable mindset that encourages the seamless integration of new AI technologies and workflows?
- How can the institution foster a culture that embraces change and experimentation with AI (without resistance or hesitation)?
Proactive Approach
- How might the institution proactively identify and seize opportunities to innovate and adapt AI strategies in anticipation of the ever-evolving AI landscape?
- What innovative approaches can the institution adopt to gather and analyze data on emerging AI trends to facilitate proactive decision-making in response to evolving AI capabilities?
- What proactive measures can the institution take to nurture and develop skills that are challenging for AI to replicate, such as critical thinking, creativity, and emotional intelligence?
- What creative approaches can the institution explore to harness the dynamic influence of AI to enhance education?
Fixed Approach
- Have leaders clearly defined the purpose and objectives of integrating AI in their chosen areas?
- What specific outcomes do leaders aim to achieve through the AI implementation?
- Have leaders developed a comprehensive plan that outlines the role of AI, workflow integrations, data requirements, and expected outcomes?
- Has a roadmap with milestones been laid out for the AI integration?
- Have the necessary steps been taken to ensure that the institution's data is clean, relevant, and easily accessible for the AI integration?
- What measures have been implemented to enhance the accuracy and effectiveness of AI through data preparation (e.g., identifying who is most likely to use the AI and in what fashion)? (A minimal data-preparation sketch follows this list.)
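For decision-makers who want a concrete picture of what "clean, relevant, and easily accessible" data preparation can look like in practice, the sketch below uses pandas on a hypothetical institutional extract; the file name, column names, and cleanup rules are assumptions, not a prescribed pipeline.

```python
# Minimal data-preparation sketch, assuming a hypothetical institutional
# extract named enrollment_snapshot.csv with the columns shown below.
import pandas as pd

df = pd.read_csv("enrollment_snapshot.csv")

# Clean: drop exact duplicates and normalize obvious inconsistencies.
df = df.drop_duplicates()
df["term"] = df["term"].str.strip().str.upper()  # e.g., " fall24 " -> "FALL24"

# Relevant: keep only the columns the AI integration actually needs,
# leaving sensitive fields out of the pipeline by default.
needed = ["student_id", "term", "course_id", "grade"]
df = df[needed]

# Accessible: report gaps so data owners can fix them at the source.
missing = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:\n", missing)
```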
Flexible Approach
- Has a flexible framework been established that can adapt to changes as AI technology evolves?
- How can leaders ensure that the framework can quickly adjust to emerging AI trends?
- Can leaders describe the process for testing AI solutions, gathering feedback, and making incremental adjustments to improve outcomes over time?
- Can leaders explain how they have prepared to adjust their approach to ensure that the AI tool aligns with their values and is used ethically?
- Can leaders explain how the institution is prepared to adjust its approach to AI to encourage ethical use?
- How does the commitment to continuous learning influence the AI strategy?
Conclusion
By thoughtfully considering an approach and focusing on one specific area for AI integration, decision-makers can navigate the complexities of AI adoption intentionally. Whether opting for a well-defined, fixed strategy or a dynamic, flexible approach, aligning AI implementations with objectives and adapting as AI technology evolves are key. While the hype surrounding generative AI and LLMs may subside into another AI winter, in the current moment, leaders must understand and recognize how AI is being embedded into education, engage in discourse about generative AI, and offer informed guidance.Footnote10 Leaders can make informed decisions about AI technologies by thinking reflectively, asking critical questions, and applying a culturally aware perspective.
The adoption and application of AI in higher education has long-lasting implications. For college and university leaders, the first step is deciding how AI will be integrated into the institutional culture. Decision-makers must connect with various stakeholders, including faculty, students, DEI departments, technology departments, and teaching and learning centers, to explore the possibilities of AI and its long-term position at the institution. Decision-makers can use the questions in this article as points of departure for improving the culture around AI at their institutions.
Notes
- The London Interdisciplinary School, "How AI Image Generators Make Bias Worse," YouTube video, 8:16, August 1, 2023; Jeremy Baum and John Villasenor, "The Politics of AI: ChatGPT and Political Bias," Brookings (website), May 8, 2023. Jump back to footnote 1 in the text.
- Weixin Liang et al., "GPT Detectors Are Biased Against Non-Native English Writers," Patterns 4, no. 7 (July 2023): 1–4; Antonio Byrd et al., "MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations," (working paper, Modern Language Association of America and Conference on College Composition and Communication, July 2023). Jump back to footnote 2 in the text.
- Suzanne Smalley, "Zoom Amends Terms of Service After Pushback on Using Calls to Train AI Models," The Record, Recorded Future News, August 8, 2023. Jump back to footnote 3 in the text.
- "Turnitin's AI Writing Detection Available Now," Turnitin (website), 2024; Instructure, "Instructure Reveals Major Product Innovations across Four Strategic Areas," news release, July 27, 2023; Smita Hashim, "How Zoom's Terms of Service and Practices Apply to AI Features," Zoom Blog, August 27, 2023; Smalley, "Zoom Amends Terms of Service," August 8, 2023; Adam Janofsky, "Zoom Revises Terms Again to Say It Does Not Use Customer Data to Train AI Models," The Record, Recorded Future News, August 15, 2023. Jump back to footnote 4 in the text.
- Elizabeth Costa and David Halpern, The Behavioural Science of Online Harm and Manipulation, and What to Do About It, research report (London, UK: The Behavioural Insights Team, 2019). Jump back to footnote 5 in the text.
- Kyootai Lee and Kailash Joshi, "Understanding the Role of Cultural Context and User Interaction in Artificial Intelligence Based Systems," Journal of Global Information Technology Management 23, no. 3 (July 2020): 171–175; Octavio Kulesz, Culture, Platforms and Machines: The Impact of Artificial Intelligence on the Diversity of Cultural Expressions, research report (Paris, France: Intergovernmental Committee for the Protection and Promotion of the Diversity of Cultural Expressions, November 2018). Jump back to footnote 6 in the text.
- Amitai Etzioni and Oren Etzioni, "Incorporating Ethics into Artificial Intelligence," The Journal of Ethics 21 (March 2017): 403–418. Jump back to footnote 7 in the text.
- "Amazon Scraps a Secret A.I. Recruiting Tool that Showed Bias Against Women," CNBC, October 10, 2018. Jump back to footnote 8 in the text.
- Milan Dordevic, "How Artificial Intelligence Can Improve Organizational Decision Making," Forbes, August 23, 2022. Jump back to footnote 9 in the text.
- "AI Winter: The Highs and Lows of Artificial Intelligence," History of Data Science (blog), Daitaiku September 1, 2021; Charles Hodges and Ceren Ocak, "Integrating Generative AI into Higher Education: Considerations," EDUCAUSE Review, August 30, 2023; Lance Eaton and Stan Waddell, "10 Ways Technology Leaders Can Step Up and In to the Generative AI Discussion in Higher Ed," EDUCAUSE Review, October 3, 2023. Jump back to footnote 10 in the text.
Courtney Plotts is Founder at Neuroculture.
Lorna Gonzalez is the Director of Digital Learning at California State University Channel Islands.
© 2024 Courtney Plotts and Lorna Gonzalez. The content of this work is licensed under a Creative Commons BY-SA 4.0 International License.