SPONSORED CONTENT: Lenovo

Mastering the AI Moment in Higher Education


We're in a period of unprecedented AI evolution and innovation. Opportunities abound in higher education, but data security is paramount.


Getting artificial intelligence (AI) right means keeping data well managed and secure. But so much of what makes AI magical makes information security especially difficult. So, how can decision-makers move forward with confidence? It will require changes in both technology and best practices. Protecting the opportunity requires securing the data.

Maximizing AI Rewards While Managing Information Risks

Higher education campuses have always brought together complex systems built around technology, people, and their data. Early on, this work was straightforward. Most databases were single-purpose, and most work wasn't done in real time, relying instead on "batch mode" processes.

But digital transformation has always been about data transformation. The growth of new AI and generative AI (GenAI) applications is no different. It has driven, and been driven by, an increase in real-time data volume and variety, all of which must be kept secure and compliant.

Mastering the AI moment in higher education requires vision and commitment, starting with securing the data that brings AI to life.

Understanding the Stakes

Chronicling a Moment that Contains Multitudes

Academic and research institutions built the technology and theoretical foundations that made the current "artificial intelligence" moment possible. But nearly seventy years after Dartmouth's John McCarthy first coined that term in a research proposal, we're in a period of unprecedented AI evolution and innovation.[1] So, what's next?

In the near term, three primary categories of AI use cases are emerging in higher education, with benefits for both the users and the systems that keep courses and campuses moving.

Elevating Human and System Intelligence

First, we're already seeing AI transform the student experience with new tools for personalized learning. With AI, coursework can dynamically adapt to the student's needs, enabling faculty to offer new support resources.[2] These efforts not only increase the chances of student success but also extend the reach of overworked faculty and staff.

But AI is not just changing how students and faculty work today. New skills and competencies will be required to succeed in an AI-shaped future, and both students and staff expect higher education to deliver. It's easy to see AI leadership becoming a critical factor in student and staff retention going forward.

AI also has the potential to dramatically modernize the physical and digital systems behind today's institutions. Key opportunities include automating systems management to increase resource utilization and extending data-assisted decision-making to new domains, both of which will be powerful force multipliers.

While higher education has been shaped by waves of successive technology innovations, the AI moment is demonstrably different. As private-sector adoption accelerates with each new tool, how prepared is higher education?

Taking the Temperature of the Moment

AI has been a top-of-mind concern for higher education leaders for more than a year. Even as tool and technology options quickly multiply, institutions are balancing exploration with concerns about risk. Several large surveys have probed leaders about their current posture, with some revealing results.

Lean In, Ready or Not

Bringing AI out of the research labs and into the real world has been transformational. But the tension inside institutions couldn't be stronger as leaders prepare for a change that's already happening.

A global survey of college and university students revealed that over 86 percent of students are already using AI in their work. But when higher education administrators, faculty, and trustees were asked about the AI readiness of their institutions, a striking 77 percent said they weren't ready for the coming change from GenAI.[3]

Security and Compliance Worries

According to the 2024 EDUCAUSE AI Landscape Study, concerns about data quality and governance have overtaken traditional IT worries like implementation and integration as the top challenges in AI adoption.[4]

In fact, data security was the top worry of over half (55 percent) of respondents. The four answers that followed were also about quality and security: ethical governance, impact of biases, quality of data, and compliance with federal regulations.[5]

Slow and Ongoing Data Preparation Progress

Fundamentally, AI readiness starts with data readiness. While the hard work of managing tomorrow's AI data pipelines will be ongoing, getting started is critical. Unfortunately, only about one in three institutions (33 percent) has either begun or completed its efforts, and a quarter has plans to start.[6]

Getting AI Right

The hard work of securing and managing AI data must happen in the face of near-total opacity around the technology and nonstop uncertainty about where rules and expectations are headed next.

LLMs: A Magical Black Box

It's impossible to secure what can't be observed. Institutions enabling AI inside their IT environments still find large language models (LLMs) largely opaque. This means security experts can protect every other technology touchpoint involved and still not engineer all the risk out of the process.

We already know the private and public sectors are facing a significant AI talent gap, making security even harder.[7] As programs grow to support multiple models and platforms, this opacity and complexity will only intensify.

AI's Dark and Risky Corners

While institutions are carefully considering an AI road map, users aren't nearly so patient. As with shadow IT, where individual teams buy and operate technology solutions outside centralized control, shadow AI accelerates both discovery and risk across the institution.[8]

Student use of AI outside institutional policy conflicts with rules around scholarship and research ethics. Departments running independent AI programs may be placing system data or intellectual property at risk. This lack of coordination is also very expensive, potentially jeopardizing all AI efforts.[9]

Quickly Adapting and Evolving Regulations

Higher education is already seeing multiple compliance frameworks overlap, and the data-focused growth of AI is turning up the pressure. Compliance with federal regulations, including HIPAA and FERPA, was a top concern cited in the EDUCAUSE AI Landscape Study.[10]

So far, top-down regulatory guidance, such as the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, is mostly advisory. However, that order does call for specific student protections, and some U.S. states are considering their own rules.[11]

Luckily, we know that even with these obstacles, institutions are already finding ways to transform their mission and business models with AI and GenAI. If success starts with mastering data, how are these AI leaders getting ahead of the challenge?

Getting Data Security (and AI) Right: 4 Steps

American-Canadian writer William Gibson famously said, "The future is already here—it's just not very evenly distributed." Inside higher education, some institutions are boldly leading, while others are cautiously lagging. So, how are the current AI "winners" working to keep efforts secure?

Collaborate Around Principles and Best Practices

The most important AI conversations aren't about technology itself but about adjacent, deeper issues such as security, privacy, and the impact of bias. While these are complex dilemmas, addressing them early will make AI advocacy and enablement easier going forward.

  • Student, staff, and faculty outcomes must all be represented in the institutional vision and strategy.
  • Internal risk, technology, legal, and operations must work together to craft compliance and security controls.

As we at Lenovo have worked to embed AI inside our business, we've relied on collaboration to stay aligned with principles that inform the processes and best practices we use when working inside the new hybrid AI paradigm.

  • We're driven by responsible AI principles that govern the entire solution design, manufacturing, and deployment process.
  • This lets us stand behind an AI policy that makes important promises to customers and partners about how we use AI in the solutions we sell.
  • We're also committed to ongoing advocacy for responsible AI, working with industry groups and NGOs to help create alignment around responsible AI for all.

Unsurprisingly, many of the AI conversations we have with customers don't start with technology but with these same fundamentals. Our partners at Intel are already deeply involved in both AI and higher education and are leading some of these same conversations.[12]

Design for Hybrid AI

As AI accelerates, higher education ecosystems will grow in two ways: continued academic and commercial innovation will deliver new foundational AI models and agents, and institutions will develop (or refine) supplemental models, agents, and applications. At Lenovo, we call this mix of foundational models and customized agents and applications hybrid AI.

  • Multiple AI systems will draw from shared data sources as both inputs and outputs—a lot of information will be created and moved.
  • The interconnectedness of AI—security, governance, availability, and economics—means everything must be managed in collaboration inside the hybrid environment.

This means nearly every hardware endpoint will eventually manage AI workloads. In collaboration with technology leaders like Intel, Lenovo designers are developing the next generation of AI-ready devices across the entire ecosystem—from desktops to data centers to the cloud.

As Intel CEO Pat Gelsinger said, "Lenovo and Intel are co-engineering the AI PC, delivering AI experiences, and enabling a strong AI ecosystem."[13] These devices raise the bar with enterprise-grade security, manageability, and optimized performance to meet the new demands of AI, ensuring faster response, higher throughput, and exceptional power efficiency.

Find and Use Benchmarks

The opacity of LLMs and other AI platforms makes risk assessment difficult, starting with the selection process. Fortunately, vendors and the higher education community have made progress on consistent assessment frameworks that ask the right questions about potential solutions.

  1. Is the tool appropriate for the task?
  2. Can we get the performance we need and want?
  3. Most importantly, can we secure information coming in and out of the model?[14]

Build Sandboxes to Enable Maximum Discovery with Minimum Risk

The EDUCAUSE AI Landscape Study found that 41 percent of higher education leaders characterized current AI policies and procedures as "extremely or somewhat permissive."[15] Depending on how AI is deployed inside the institution, this is either good or bad news for security teams.

Where users rely on off-the-shelf AI tools, this "sandbox" can be an acceptable use policy defining approved tools and versions. Many schools grant access only after students complete training.

Where users are building new models or optimizing existing ones, more sophisticated controls are required. The ideal platform gives users maximum tool freedom while securing the data flowing in and out. The Harvard AI Sandbox strikes this balance with development and production environments that are consistently protected.

The Answer to the AI Question: Secure Data

The AI moment is packed with nervous excitement and lingering uncertainty. What will teaching and learning look like in ten years? How will changes in the larger world drive transformation inside higher education? How can we continue to support student success?

Every institution must carefully plot its own path forward, identifying where AI can best add value and then building a strategy around that. But even in the face of so many unique opportunities, consistency—in collaboration, evaluation, and implementation—will take institutions the farthest in their AI journey. It all starts with the data.

EDUCAUSE Mission Partners

EDUCAUSE Mission Partners collaborate deeply with EDUCAUSE staff and community members on key areas of higher education and technology to help strengthen collaboration and evolve the higher ed technology market. Learn more about Lenovo, a 2024 EDUCAUSE Mission Partner, and how they're partnering with EDUCAUSE to support your evolving technology needs.

Notes

  1. "Artificial Intelligence Coined at Dartmouth," Dartmouth (website), n.d., accessed September 27, 2024.
  2. EAB, From Caution to Curiosity: Higher Ed Success Staff Weigh in on AI's Role in Student Success, survey report (EAB, 2024).
  3. Digital Education Council Global AI Student Survey 2024, research report (Digital Education Council, 2024); Digital Learning Pulse Survey, research report (Cengage, 2023).
  4. Jenay Robert, 2024 EDUCAUSE AI Landscape Study, research report (EDUCAUSE, 2024).
  5. Ibid.
  6. Ibid.
  7. Alexandria Ng, "Businesses Are Counting on AI, But Skilled Labor Is Lacking, Survey Finds," EdWeek Market Brief, March 12, 2024.
  8. Mary K. Pratt, "10 Ways to Prevent Shadow AI Disaster," CIO, July 8, 2024.
  9. Emma Keen, "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025," Gartner, July 29, 2024.
  10. Robert, AI Landscape Study.
  11. Joseph R. Biden Jr., "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," October 30, 2023; "Artificial Intelligence 2024 Legislation," National Conference of State Legislatures, September 9, 2024.
  12. "Technologies to Enable Artificial Intelligence (AI) in Higher Education," Intel, n.d., accessed September 27, 2024.
  13. "Lenovo Tech World Shanghai: Achieving 'Smarter AI for All' with Hybrid Artificial Intelligence," Lenovo StoryHub, April 24, 2024.
  14. Robert, AI Landscape Study.

© 2024 Lenovo.