AI Procurement in Higher Education: Benefits and Risks of Emerging Tools


As artificial intelligence becomes embedded in the technology ecosystem, clear guidelines and practices for selecting and implementing AI products and features, and for working with vendors, are vital for ensuring alignment with institutional goals and culture.


An AI Snapshot

Lila Peterson is reviewing tomorrow's calendar before leaving for home. Lila is the CIO of Middlevale College, and her work days are filled with meetings and myriad projects to keep track of. Between looking for a new ERP solution and helping the president and provost develop an AI strategy, she's excited about helping Middlevale navigate enormous changes. She looks up to see Hector Chavira, the chief information security officer, peering into her office. "Do you have a minute?" he asks. "I need your advice. And your help."

The minute turns into over an hour. Hector is worried about all the interest in AI among faculty, staff, and students. He's heard that a lot of people are starting to use generative AI and other AI products in their work, but very few are coming to his office to review the products to ensure they comply with Middlevale's cybersecurity and privacy requirements. He checked with the Office of Procurement, and they want more guidance on whether AI products need additional reviews. Hector is especially concerned about faculty and staff providing institutional or student data to AI products they're using. Without any agreements in place, those data could potentially be released to users in other organizations.

Hector has also learned that the latest update to Middlevale's productivity suite includes new AI capabilities. The vendor never informed the college, however, and Hector considers it risky not to know exactly what the productivity tools might be doing with Middlevale's data and interactions. It turns out that Lila has been wondering whether Middlevale's procurement policies and processes are sufficient for reviewing technologies that include AI capabilities. Hector and Lila are proud of the data governance they have in place, and they think they have a pretty good relationship with Procurement. They decide, though, to ask a couple of their staff to research what other institutions are doing and whether any frameworks or tools focused on AI-related risks are available for Middlevale to consider.

Lila is also worried about costs. A college committee looking into how AI can transform teaching and learning is interested in having Middlevale invest in some new tools the faculty think could improve coursework and curriculum design and give students some great experience with AI, preparing them to use it after they graduate. The problem is that the cost to license those tools for all students and faculty is prohibitive. At Lila's previous institution, a prestigious R1, she found it easy to negotiate special deals because vendors loved being able to promote working with her university. But Middlevale just isn't a draw for solution providers, and she's not sure what to do.

In the rapidly evolving landscape of higher education, colleges and universities face complex challenges that require transformative solutions. Artificial intelligence (AI) and machine learning are emerging as powerful tools that can help institutional leaders address those challenges. To help provide guidance at the intersection of higher education leadership and AI, the American Council on Education (ACE) partnered with EDUCAUSE to interview 12 chief information officers, other IT leaders, and procurement professionals in October and November 2024. In addition, findings from recent EDUCAUSE research on AI in higher education provide quantitative detail and context for AI procurement.Footnote1 This article explores how colleges and universities can leverage the potential of AI, focusing on how the acquisition of AI tools has impacted procurement processes. We provide an overview of AI and procurement, potential applications of AI in higher education, and perspectives to consider as institutions navigate acquiring a variety of technology and AI products from different vendors. As AI becomes increasingly integrated into academic, administrative, and operational activities, leaders must address the risks that AI products may pose and ensure that AI tools align with institutional values, priorities, and governance frameworks.

Extending Technology Procurement to AI

Technology procurement today is a complex and highly collaborative activity. The IT department (including cybersecurity), procurement office, general counsel, privacy office, and additional stakeholders such as enterprise risk management all contribute to assessing and selecting technologies. The consultation and review required to ensure that decisions conform with the institution's requirements and compliance commitments can be frustratingly slow. Technology procurement can be highly centralized or highly decentralized. Decentralized procurement may enable nimbler decision-making, but it introduces risk to the institution unless all areas conform to institutional guidelines and policies. Even so, individual faculty and staff can and do still bypass institutional processes to procure technology products and services, often using institution-provisioned or personal credit cards.

AI procurement refers to the processes and considerations involved in acquiring tools, systems, and services that incorporate AI. The scope of AI procurement includes purchasing new AI-enabled tools, screening existing software for added AI capabilities, partnering with AI companies to develop custom solutions using institutional data, engaging consultants and other third parties that may use AI during the engagement, and even developing proprietary AI models. Throughout this article, the term "AI products" refers to all of these activities.

The stakes are high. As one CIO we interviewed put it, "AI is an accelerant. It's like gasoline in terms of growth, but it's also gasoline in terms of some of these risks."

Many institutions are currently making few or no changes to procurement processes to account for AI products. Even though existing processes might largely work for AI product purchases, the incorporation of AI in products amplifies certain existing risks and can introduce entirely new considerations for procurement decisions. How, or even whether, a product uses AI is not always obvious. Some institutions have updated their procurement process to better understand how AI is used in products. Potential questions to incorporate into the process include whether the solution provider hosts its own AI models, uses responses derived from AI, or uses the institution's data in any way to improve AI models. Additional questions arise for AI products that do use the institution's data (including students' data): Are results reserved for the institution, or might they be released to other organizations and users? Does the product anonymize data, and if so, how? The interplay among institutional data, users, and an AI model also raises questions about ownership of the model, and those questions may be contentious and complex.

Some of the "black box" issues with analytics and algorithm-driven products also apply to AI. Bias, ethics, and transparency should all be considered. What might be new with AI is that it can be difficult to actually know whether a product or service is using AI.

Similarities and Differences across Institutions

Fully half (50%) of institutions are increasing access to AI tools, 38% are adapting generative AI models for internal use, and 37% are funding faculty, staff, or student licenses for AI-powered tools. Fewer institutions are creating technology infrastructure to run generative AI models on-premises or as infrastructure as a service (18%) and/or using application programming interfaces (APIs) to connect institutional data to generative AI applications (16%).

At some institutions, managing the risk of acquiring AI products highlights existing gaps in procurement processes. Those gaps could include poor coordination between IT and procurement or procurement processes that pay insufficient attention to technology- and data-related risks. The sheer volume of institutional data at play and the number of cloud-based applications that institutional constituents can easily download and use on their own already exceed the capacity of procurement staff and others responsible for pre-purchase reviews.

Institutional size influences experiences buying AI products. The people we interviewed perceived that vendors privilege larger institutions, to make more money from single contracts, and elite institutions, to burnish product reputations. Smaller institutions might have difficulty engaging solution providers at all or negotiating an affordable pricing model for enterprise-wide use. One workaround is to use APIs to connect institutional data to commercial or open-source generative AI applications.
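As a sketch of that API workaround: the snippet below shows how an institution might wrap a provider's generative AI API behind a thin institutional layer, so usage can be metered per call rather than licensed per seat and reviewers can see exactly what data leaves campus. The endpoint URL, model name, and payload shape are assumptions modeled on common chat-completions APIs, not any specific vendor's contract or product.

```python
import json
import urllib.request

# Hypothetical institutional wrapper around a chat-completions-style API.
# Per-call pricing can be far more affordable for a small institution
# than enterprise-wide seat licenses.
API_URL = "https://api.example-provider.com/v1/chat/completions"  # assumption
API_KEY = "REPLACE_WITH_INSTITUTIONAL_KEY"

def build_request(question: str, course_context: str) -> dict:
    """Assemble the JSON payload, keeping the institutional data sent
    to the vendor minimal and explicit so it can be reviewed."""
    return {
        "model": "example-model",  # placeholder model name
        "messages": [
            {"role": "system",
             "content": "You are a course-design assistant. "
                        "Context: " + course_context},
            {"role": "user", "content": question},
        ],
    }

def ask(question: str, course_context: str) -> str:
    """Send one metered request and return the model's reply text."""
    payload = build_request(question, course_context)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the institution controls `build_request`, cybersecurity and privacy reviewers can audit precisely which fields are transmitted, which is harder to verify with a vendor's packaged end-user tool.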

Mid-sized institutions have challenges of their own. Often such institutions are complex enough to need large applications with sophisticated features but are too small to have sufficient resources to configure and manage such applications. Public institutions might be required to adopt statewide technology and procurement policies and processes, which can slow down the procurement process. Moreover, those statewide policies might be more or less stringent than a particular institution's risk strategy and innovation goals.

Broad Repercussions of AI on Campus

AI could prove to be the most transformative technology higher education has yet adopted, particularly generative AI (which creates new content) and agentic AI (which makes decisions and acts on behalf of users). Moving beyond automation, analysis, efficiency, and making sense of data and information, today's AI is capable of generating novel content and solutions. AI's potential is powerful, strategic, and ambiguous, fueling enthusiasm about how it could augment faculty and staff but also arousing anxiety that it might be seen as a replacement for them.

Investments in AI should advance institutional strategy and goals, as well as business strategy. For major AI investments, leaders should require a value proposition that outlines costs, risks, benefits, and return on investment. That is not easy to do now because AI's value proposition remains immature and difficult to measure. Many institutions (39%) are beginning cautiously, focusing on short-term pilot projects to gain experience with AI, learning from peers making more expansive investments, and waiting for the market to mature.

Institutions with effective IT and data governance in place are at an advantage. Those governance processes can be applied to AI projects to ensure such investments advance institutional strategy and comport with institutional risk management, enterprise architecture, and relevant policies. Incorporating AI procurement into existing policies and governance can accelerate decision-making and avoid investments that won't fit the institution. As leaders gain experience with AI, they can adapt IT and data governance as needed. More than one-third (38%) of institutions are currently implementing or improving data governance processes to support AI use cases.

Policies also need to be adapted. Use of AI can introduce privacy and ethical issues. Some institutions are developing policies to emphasize the ethical use of AI and student and employee privacy when AI is used for any official business or product released by the institution. These policies can be supported by guidelines that explain and outline an AI product's features and where the product may and may not be used. Almost half (49%) of institutions are implementing or improving data privacy policies or guidelines.

AI procurement also spotlights the limitations of existing cybersecurity policies and guidelines, and 44% of institutions are implementing or improving cybersecurity policies or guidelines to address the use of AI products. Only 9% of survey respondents reported that their institution's cybersecurity and privacy policies sufficiently address AI-related risks to the institution, with a plurality (42%) reporting that their policies are only "somewhat" adequate. Specific areas of concern include data security, end-user behavior, and the data collected by third-party tools.

Cautions, Risks, and Considerations

Not surprisingly for such a new and transformative technology, AI procurement comes with several significant downsides, the foremost of which might be risk. Much about AI is still poorly understood, and that obscurity makes it difficult for leaders to confidently assess the risks of adopting AI and to decide whether and how to approach it. Liability issues are unclear, particularly when adopting autonomous AI: products or features that use AI to make decisions and perform tasks without human intervention.

AI products' lack of transparency is a major risk. Solution providers might not be disclosing the AI within their products or alerting customers when AI capabilities are added. In some cases, solution providers offer product owners the choice of turning AI components on, giving decision-makers the opportunity to assess risks. Issues to explore include learning whether institutional data will be subject to machine learning and whether and how the vendor will use the institution's data to improve the vendor's AI model or just the institution's version of it.

Simply understanding how AI is being used within a product is another aspect of transparency. If sales teams use jargon or hyperbole in their pitches (which would not be new to AI), institutional staff can find it difficult to understand the context in which AI is actually being used within the product. "Beware of bright shiny objects," one of our interviewees said.

Pricing and contract terms are also opaque. Many vendors don't publish pricing for licensing software, enabling them to negotiate different prices, terms, and conditions—and confidentiality agreements—with each institutional customer. This can prevent institutions from sharing information that could be helpful for other purchasers. This, too, is not unique to AI, but it is another obstacle to making good decisions about AI investments.

As climate change worsens, the need to adopt sustainable energy policies and practices becomes more urgent—14% of survey respondents reported that mitigating the impact of AI computing on the environment is an element of their institution's AI strategy. AI is notoriously energy intensive, and yet the magnitude and growth of AI's energy footprint is difficult to estimate accurately because much of the environmental impact is due to the need to build and expand data centers to run AI.Footnote2 This uncertainty can make it difficult for an institution with sustainability goals to understand whether and how an AI investment could affect those goals.

Decision-makers are also looking for assurances from solution providers that the outputs of an AI product will have consistently high quality and accuracy, assurances that may be very difficult to prove and that vendors may be very reluctant to offer.

AI product decisions need to balance potential risks against opportunities for innovation. AI uses data, and data are subject to privacy and cybersecurity regulations. Decision-makers need to know that an AI product is using data appropriately and that it doesn't cause the institution to inadvertently violate laws and regulations, including the Family Educational Rights and Privacy Act (FERPA), the Health Insurance Portability and Accountability Act (HIPAA), and the National Institute of Standards and Technology recommended requirements for protecting the confidentiality of controlled unclassified information (NIST 800-171). At the same time, AI might be critical to advancing institutional innovation or productivity, so a stringent risk-management approach could work against strategic progress.

Of course, funding and financial sustainability also need to be considered in AI procurement decisions. "Stuff needs staff," as the saying goes: in addition to incorporating AI skills into existing role descriptions, institutions will likely need additional staff, particularly for ambitious AI initiatives.

Funding AI investments is a major challenge. Although 46% of survey respondents are implementing AI-focused initiatives to improve institutional use of AI, only 19% are budgeting for the anticipated costs associated with long-term AI use. Only 2% of respondents said that their institution is using new sources to fund new AI-related costs, and about one-third of executive leaders (34%) thought their institution has probably underestimated AI-related costs. Cost-sharing is an option: 45% of executive leaders in the survey reported partnering with external sources (government agencies, donors, corporations, foundations, alumni, or other institutions) to share AI investment costs.

Don't expect today's purchases to be long-term choices. The people we interviewed stressed that AI technologies, practices, and use cases are evolving too rapidly to have confidence in the stability of the current market space or institutional needs and opportunities. As one interviewee said, "Whatever we do with AI, we need to look at it as a very iterative process and more of a marathon than a sprint." Decision-makers may want to avoid long-term (e.g., three or more years) contracts and instead choose a solution for a shorter period of time, learn from the experience, and then consider a larger investment based on the knowledge gained. One interviewee pointed out that some large institutions with significant AI ambitions are choosing to partner with multiple solution providers as a way to spread the risk.

Some institutions simply can't afford to license AI products for enterprise use (including students). For AI to scale throughout higher education and to be available to students, product license costs will need to decrease. An alternative might be to use free tools, which, although more limited, are getting more sophisticated.

An Astonishing Rate of Change

AI is going places, and higher education's use and expectations of AI products will only increase. Several new developments are particularly exciting for higher education:

  • AI agents (interactive chatbots or helper applications)
  • Small language models (streamlined AI models specialized for specific tasks)
  • AI reasoning models focused on math, physics, and logic, which simulate reasoning by breaking complex problems into a succession of easier problems and by pivoting to a new approach when the original approach isn't successful.Footnote3
  • Uses of AI to accelerate discovery in the natural sciences

Institutional AI support teams need to help demystify AI for faculty, staff, and students. Misconceptions and assumptions about what AI does and doesn't do are widespread. Cultivating a better understanding of AI is an important step in safely integrating the technology into education, research, and administration. Greater AI literacy will probably increase demand for AI tools, but it will also improve AI procurement decisions.

The market and technology are moving very fast, creating many ongoing unknowns. Those unknowns are not so much about procuring AI products but about AI's capabilities and how people want to use these tools. It's still early days for AI as a technology and for higher education's adoption of it. Because of this uncertainty and churn, the regulation of, legal framework for, and ethical and responsible use of AI remain immature. All of these aspects should continue to evolve until AI has become more stable and commonplace, as happened with the internet and cloud computing.

As noted above, existing and effective institutional IT and data governance can help institutions make better AI procurement decisions. Gartner, a leading technology research and advisory firm, advocates for developing AI governance operating models, policies, and controls that are connected to but apart from other governance processes. They explain, "AI is difficult to govern because enterprises must meet the demands for safety and value under conditions that involve complexity, ambiguity and rapid technology evolution."Footnote4 Gartner estimates that AI governance is still emerging and projects it will mature in the next two to five years.Footnote5

What would help institutions make better AI technology decisions? External standards or frameworks, and product evaluations and recommendations. Our interviewees asked for a framework with a standard checklist of AI procurement considerations covering security, privacy, enterprise architecture, data integration, identity, accuracy, equity, and sustainability. Several interviewees mentioned the EDUCAUSE Higher Education Community Vendor Assessment Tool (HECVAT) as an example of a tool that is helping improve technology procurement decisions. The HECVAT is a spreadsheet that crosswalks functional and technical cybersecurity and privacy compliance requirements across multiple regulatory frameworks. Institutions ask prospective solution providers to complete a HECVAT for a product and then use the completed HECVAT to assess the extent to which the product conforms to the institution's cybersecurity and privacy policies and practices. The new HECVAT 4 has added a set of AI-specific questions that draw on six emerging AI frameworks and can be refined as AI regulations and standards evolve.Footnote6

Our discussions also uncovered a desire for products that better fit higher education. Academic advising, for example, is an area of particular importance to helping students succeed. Deeper collaborations between solution providers and institutional decision makers could help vendors develop better products.

Considerations for College Presidents and Campus Leaders

As colleges and universities continue to adopt AI and educational technologies from vendors of all sizes, college presidents and other campus leaders need to consider the full range of benefits and risks of AI in relation to their own institution's needs. To help higher education executives fully understand the benefits, considerations, and risks in procuring new AI tools, we asked interviewees what guidance and further information they wanted regarding AI and procurement, including what they wanted to share directly with their campus president and what help they wanted from education associations such as ACE.

  • Campus presidents focus on supporting initiatives that help their institutions achieve their goals, while also understanding risks. Higher education administrators, including leaders in academic and career services and enrollment management, chief information officers, and procurement professionals, can support their institution's leadership by providing clear portrayals of the benefits and risks of AI, as well as by sharing what other institutions are doing. One interviewee emphasized this by explaining that their institution's president would appreciate being able to understand other presidents' perspectives, their use cases, and what they see as efficient AI implementation across different categories.

  • In addition to understanding the positives and drawbacks of implementing new AI tools, campus presidents can benefit from understanding differences in how to govern AI at institutions. Presidents and other campus executives may want to ask the following when meeting with other institutions: How is this AI governance framework different from the one at my institution? Are there any gaps? Is there anything we should be doing differently?

  • Multiple chief information and IT officers mentioned that vendors' lack of pricing and contract transparency is by design, saying that this approach keeps institutions from sharing information that could be helpful. Presidents and other leaders should encourage partnerships between institutions, especially in sharing product recommendations, contract terms, prices, and other vendor details.

  • College presidents need to understand the resources, capacities, and infrastructure necessary to implement particular AI solutions. This involves considering perspectives from multiple campus stakeholder groups—including, but not limited to, leaders in technology, curriculum and instructional design, student affairs, and general counsel—and, more importantly, engaging those who will be using the tools.

Planning for the Road Ahead

Technology as groundbreaking as AI has the potential to disrupt or fundamentally transform higher education. AI might become an institutional differentiator, providing some colleges and universities a competitive advantage. Decisions about whether to acquire AI, when to do so, and which specific products to select might be more consequential than other technology decisions, and yet much of the information needed to make good decisions is unknown—and, perhaps to a certain extent, unknowable—due to the "black box" nature of AI models.

To optimize AI product procurement, decision-makers will need to have good governance in place, an AI and technology strategy that is driven by institutional strategy, the ability to balance risk with innovation, a flexible funding model, and sources of expertise about AI. AI investments will be expensive and risky.

Higher education leaders should consider how to collectively assess the efficacy of AI, develop detailed risk assessments and mitigation strategies, and advocate as an industry for effective and safe AI products. This is also a time to pursue collaborations in and among systems, consortia, and other networks with institutions eager to move in a shared direction. The alternative may be a widening institutional digital divide that leaves many smaller, under-resourced colleges and universities permanently behind.Footnote7

Acknowledgments

This article would not have been possible without the contributions from the leaders we interviewed or the support from our colleagues at the American Council on Education and EDUCAUSE. From the American Council on Education, we thank Hironao Okahana, Vice President and Executive Director of Education Futures Lab; and Liz Howard, Content Strategy and Operations Specialist. From EDUCAUSE, we thank Keturah Young, Program Manager of Communities; Mark McCormack, Senior Director of Research and Insights; and Jenay Robert, Senior Researcher.


Notes

  1. Jenay Robert and Mark McCormack, 2025 EDUCAUSE AI Landscape Study: Into the Digital Divide, research report (Boulder, CO: EDUCAUSE, February 2025). The 788 respondents included faculty (17%), as well as technology and teaching and learning leaders (39%) and staff (36%) at a variety of U.S. (86%) and international institutions who completed a survey in November 2024.
  2. Casey Crownhart, "AI Is an Energy Hog. This Is What It Means for Climate Change," MIT Technology Review, May 23, 2024.
  3. James O'Donnell, "Why OpenAI's New Model Is Such a Big Deal," MIT Technology Review, September 17, 2024.
  4. Svetlana Sicular, Peter Krensky, and Saul Judah, Artificial Intelligence Requires an Extended Governance Framework, Gartner, April 4, 2024.
  5. Afraz Jaffri and Haritha Khandabattu, Hype Cycle for Artificial Intelligence, 2024, Gartner, June 17, 2024.
  6. The six frameworks in the AI part of the HECVAT are the National Institute of Standards and Technology Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1), Open Worldwide Application Security Project (OWASP), Mitre Labs Adversarial Threat Landscape for Artificial-Intelligence Systems (MITRE ATLAS), Re-usable Automation Framework (RAFT), Cybersecurity and Infrastructure Security Agency (CISA), and National Institute of Standards and Technology Security and Privacy Controls for Information Systems and Organizations (NIST 800-53).
  7. Robert and McCormack, 2025 EDUCAUSE AI Landscape Study: Into the Digital Divide.

Susan Grajek is Vice President for Partnerships, Communities and Research at EDUCAUSE.

Kathe Pelletier is Senior Director of Community Programs at EDUCAUSE.

Austin Freeman is Research Associate at the American Council on Education.

© 2025 EDUCAUSE and the American Council on Education. The content of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.