Three Corporate Voices: Artificial Intelligence and Machine Learning


The EDUCAUSE Enterprise IT Program asked three corporate members to share their advice about the ways AI and machine learning fit with higher education, as well as ethical considerations when using these technologies.

[Figure: two human figures in front of a computer monitor, one with a large red light bulb and the other with a magnifying glass]
Credit: VectorMine / Shutterstock.com © 2020

The EDUCAUSE Enterprise IT Program publishes materials focused on the program's five themes of analytics, governance and relationship management, technology strategy, IT cost and value, and business process management. At many institutions, analytics and business intelligence are a current concern. We therefore set out to gather expert insights into how institutions might better view administrative systems data as a strategic institutional asset that can help them

  • answer important organizational questions,
  • assess progress on institutional goals, and
  • improve their institution's ability to make information-based decisions.

We reached out to three corporate members—HP, Oracle, and Signal Vine—and asked leaders at each company for their guidance, advice, and ideas to help institutions better understand how artificial intelligence and machine learning are being incorporated into higher education and what the future might hold for these technologies. Their answers to some key questions follow.

Uses of AI/ML in Higher Education

What higher education challenges do artificial intelligence (AI) and/or machine learning best address, and what implications or possible outcomes do you see from their use in higher education?

Jeff Chen: AI and machine learning can deliver great value in higher education when a large amount of data can be captured in context (e.g., audio/video lecture recordings) and can then be analyzed to produce insights without an overreliance on personally identifiable information (PII). A great example is Penn State Teaching and Learning Technology Group's pilot project Spectrum (more information available at Webinar: Funding AI to Support Data Empowered Learning [https://grantsofficeevents.webex.com/grantsofficeevents/lsr.php?RCID=4f4d78b407db477baacd92ddff78c6b6]). This project analyzes audio recordings of lectures using Google's natural language processing model, BERT, to extract valuable insights for instructors without compromising student privacy. The tool tracks the coverage of learning objectives for each lecture, measures lecture time spent on administrative communications, and identifies portions of each lecture with Q&A interactivity. Further, when students replay the recording of a lecture, the tool can help them visually find portions of the lecture where concepts of interest were discussed, using concepts—instead of exact words or phrases spoken in the lecture—as search terms. Tools such as this help instructors increase their effectiveness and efficiency without impinging on student privacy.
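
Chen's description of concept-based search can be grounded in a few lines of code. The sketch below is a minimal illustration, assuming the open-source sentence-transformers library and a small BERT-family encoder; Spectrum's actual pipeline is not public, so the segments, timestamps, and model choice here are hypothetical. It ranks transcript segments by semantic similarity to a concept query rather than by exact keyword match.

```python
# Minimal sketch: concept-based search over lecture transcript segments.
# Assumes the `sentence-transformers` package; the Spectrum project's
# actual pipeline and model are not public, so names are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small BERT-family encoder

# Hypothetical transcript segments with their start times (seconds).
segments = [
    (0,    "Today we will review the syllabus and the exam schedule."),
    (310,  "Plate tectonics explains how continents drift over geologic time."),
    (1250, "Any questions about the homework before we continue?"),
]

seg_embeddings = model.encode([text for _, text in segments])

def find_concept(query: str, top_k: int = 2):
    """Return the segments most semantically similar to a concept query."""
    q = model.encode(query)
    scores = util.cos_sim(q, seg_embeddings)[0]
    ranked = sorted(zip(scores.tolist(), segments), reverse=True)[:top_k]
    return [(start, text, round(score, 3)) for score, (start, text) in ranked]

# A student can search by concept, not by the exact words spoken:
print(find_concept("continental drift"))
```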

Nicole Engelbert: The current conversation around AI and ML in higher education typically centers on applying these technologies to routine or highly predictable transactions with a well-bounded set of potential outcomes. A charming chatbot, often branded as the college mascot, offers to help students complete their application, select campus housing, or register for class. It delivers a more personalized, on-demand service for students and scales efficiently for institutions.

However, these technologies are growing more sophisticated, at an accelerating pace. Chatbots are being relegated to the dustbin in favor of true digital assistants capable of identifying anomalies and recommending potential corrective actions. For example, a digital assistant could recognize that a first-year Classics major who registers for "Geology of the Adirondacks" might not be able to graduate in four years and would suggest—at the point of registration—that "Homer's Iliad" might be a better choice.

Undoubtedly, the proactive capabilities in this example have the potential to influence better registration choices. It is important, however, for colleges and universities to recognize the need to shift their perception of AI and ML from a tool facilitating discrete transactions to a "persona" delivering institutional services. As digital assistants begin to manage the high-volume, time-dependent tasks of advisement, such as course selection during registration, the role of human academic advisors evolves to focus on other questions—in the example above, questions such as whether the student might actually prefer a degree in geology.

Brian Kathman: It's well known that today's higher education professionals are busier than ever. The COVID-19 pandemic has only added to the already overwhelming workloads of higher ed professionals. Now more than ever, faculty and staff are doing more with less funding and fewer resources.

At the same time, today's students are changing. As consumers, they expect their favorite brands to offer them a deeply personalized experience.1 This demand can be challenging to meet with so many students and so little time. This is where AI can truly save the day.

AI, and the data that students willingly give their institutions in exchange for personalized experiences, should work in tandem with higher education professionals to enhance the student experience. AI enables higher ed to meet students' demands for personalization by turning data into action. For example, when a student communication platform is powered by AI, the result is personalized, targeted outreach. This meets the demands of today's students without adding to staff workloads. When AI works with data, the outcomes can be remarkable.

What AI/ML Don't Do Well

What areas of higher education are AI and/or machine learning ill-suited to address and why?

Chen: AI and machine learning would not yield reliable results when the data used to train the model is not representative of the population for whom the predictions or decisions are targeted. If data quality and fair representation cannot be assured, high-stakes applications such as admissions and financial aid decisions should not depend on machine learning predictions alone. In addition, a model's predictive power degrades over time due to changes in the environment in which it operates. This is called "model drift." If a machine learning solution cannot be constantly calibrated and validated to minimize model drift, problems can arise.
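
To make "model drift" concrete, here is a minimal sketch of one widely used drift check, the Population Stability Index (PSI), which compares a feature's distribution at training time against its live distribution. The data, the thresholds, and the choice of PSI are illustrative assumptions, not part of Chen's answer.

```python
# A minimal drift check: the Population Stability Index (PSI) compares a
# feature's training-time distribution against its live distribution.
# Thresholds below are conventional rules of thumb, not from the article.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of one feature."""
    # Bin edges come from the training-time (expected) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the fractions to avoid log(0) and division by zero.
    e_frac, a_frac = np.clip(e_frac, 1e-6, None), np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)  # feature at training time
live_scores = rng.normal(0.4, 1.0, 5000)   # same feature in production

value = psi(train_scores, live_scores)
# Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 recalibrate.
print(f"PSI = {value:.3f}")
```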

Engelbert: The increasing sophistication of AI and ML makes it unlikely that any area of higher education would be unable to benefit from these technologies. That said, technology is no substitute for human judgment or empathy—this is not a story of labor arbitrage, with technology replacing the financial aid officer, the faculty member, or the controller. Instead, AI and ML will extend and enhance the capabilities of these individuals, and others, so that they are more effective and impactful in their roles.

Using triggers and intelligent routing, the financial aid system will streamline the application, packaging, and disbursement processes, freeing the financial aid officer to spend more time applying professional judgment to ambiguous cases. Leveraging AI and ML, the learning management system will identify alternative content to help a potentially at-risk student master key learning objectives, so that the instructor is able to structure the class as a symposium rather than a lecture. Using a virtual assistant, the financials system will code, approve, and pay routine expenses automatically, enabling the controller to focus on strategic management instead of chasing down tardy submissions. All of these capabilities are available and in use at institutions today. The question is whether higher education will use AI and ML solely for efficiency gains or will instead choose to transform itself.

Kathman: I think the most important takeaway regarding AI is that it will never fully replace humans, especially not in higher education. Even if institutions choose to use AI to help staff communicate with their students, at the end of the day, students want to know that real humans are there to support them. While AI can certainly step in and answer simple questions, such as when spring registration begins, the second that students have deeply personal issues, such as food or housing insecurities, they want to know a human is ready to help them.

This partnership is key to the successful implementation of AI in higher ed.2 AI should never be used as a way to replace staff members. Rather, it should be used to support staff members as they juggle caseloads and workloads that constantly grow. AI is simply not prepared to show students the empathy and concern that they need to succeed.

The Ethics of AI/ML

What, if any, ethical issues or concerns do you see with incorporating AI and/or machine learning into higher education?

Chen: Education is a key determinant of future success and, for most students, a major financial investment. If an educational or administrative process incorporates AI or machine learning, the institution should be open about the reasons for such use, explaining how the algorithms work, their inherent limitations, and why using AI/ML delivers a better result. When AI/ML solutions are deployed in critical decision loops, humans should be able to review and override the decisions. An AI/ML solution in production should aspire to continuous monitoring and maintenance because the state of the art for machine learning is constantly advancing and machine-learning models drift over time. Machine learning models are not explicitly programmed as a series of logical decisions, as traditional software is; instead, they rely heavily on the quality of the training and validation data sets. If your institution believes the data used to train a model has inherent bias, the institution should be transparent, document such shortcomings, and work to correct them.
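
Chen's point about humans reviewing and overriding decisions is often implemented as confidence-based routing. The sketch below is a minimal, hypothetical illustration of that pattern: confident model outputs are applied automatically (and logged), while low-confidence outputs are queued for a human. All names and the threshold value are assumptions.

```python
# Minimal sketch of a human-in-the-loop gate: low-confidence model outputs
# are routed to a reviewer instead of being acted on automatically.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    label: str          # model's recommendation, e.g. "award" / "deny"
    confidence: float   # model's probability for that label

REVIEW_THRESHOLD = 0.90  # assumption: tune per application and risk level

def route(decision: Decision) -> str:
    """Auto-apply only confident decisions; queue the rest for a human."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-apply (still logged and reversible)"
    return "queue for human review and possible override"

print(route(Decision("A-1001", "award", 0.97)))
print(route(Decision("A-1002", "deny", 0.71)))
```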

It is also important to establish a clear data privacy policy, with a mechanism for students to make an informed choice to opt in, rather than opt out, whenever possible.

Engelbert: On its own, technology is neither good nor bad. It is simply a tool, not unlike a hammer. And like a hammer, it can be dangerous when used inappropriately or for nefarious purposes—I learned this the hard way as a child after handing my three-year-old brother the hammer! AI and ML are no different. At the most basic level, they work by identifying patterns in large volumes of data. Consequently, the unique risk of these technologies is their ability to re-create existing or historic patterns with blinding efficiency and scale.

While abhorrent, it is not difficult to imagine a situation in which a bias exists toward advising Pell-eligible students to enroll in less academically demanding courses. We might instinctively know that this is occurring, but the advisement can be so nuanced that the bias is difficult to identify in the data through traditional analytical techniques. Because AI and ML use more powerful techniques, they re-create the bias, perhaps harnessing the virtual assistant to nudge lower-income students subtly into medical assisting rather than premedical courses. It is therefore crucial for institutions to flush out potential subconscious biases before leveraging AI and ML, and to remain vigilant after implementation.
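
One simple way to begin the audit Engelbert recommends is to compare recommendation rates across groups. The sketch below, with entirely hypothetical data and column names, measures how often an assistant recommends the more demanding track for Pell-eligible students versus others.

```python
# Minimal sketch of a recommendation-rate audit across groups.
# The data frame, its values, and the column names are hypothetical.
import pandas as pd

recs = pd.DataFrame({
    "pell_eligible": [True, True, True, False, False, False, True, False],
    "recommended_track": ["assisting", "assisting", "premed", "premed",
                          "premed", "assisting", "assisting", "premed"],
})

# Rate of "premed" (more demanding) recommendations by group.
rate = (recs.assign(premed=recs.recommended_track.eq("premed"))
            .groupby("pell_eligible")["premed"].mean())
print(rate)
# A large, persistent gap between groups is a signal to investigate the
# training data and recommendation logic before scaling the assistant up.
```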

Kathman: I believe the biggest ethical issues surrounding the implementation of AI actually stem from the student data that powers its use. The collection and safeguarding of this data should be of the highest ethical concern to institutions. Students are willing to trust their colleges and universities with their personal data, so it's crucial to collect the required data the right way, once, and to use it ethically and responsibly. Otherwise, students will lose the trust they have in their institutions, and unfortunately, that trust can be hard to regain once lost.

Moving AI/ML Forward

What additional recommendations or advice do you have for institutions working to increase their AI or machine learning maturity?

Chen: Although machine learning can achieve high performance for narrowly defined tasks, on the whole it is relatively brittle. Institutions should absolutely experiment with machine learning but should be realistic about the goals they set and be prepared to iterate rapidly and course-correct. It is essential to have a plan for recognizing errors caused by AI/ML and remedying them. The old saying about "garbage in, garbage out" applies to machine learning just as it does to traditional software. With machine learning, however, debugging the model can be even more challenging because of the large number of learned parameters. Having the right hardware and software tools for developing and interpreting machine learning systems is very important. Institutions might consider keeping compute resources close to the data to reduce latency and simplify data security. Especially in the development phase, it is advantageous to have an edge-compute system with powerful CPUs and GPUs to enable faster iterations of data cleansing, exploratory data analysis, model architecture tweaking, hyperparameter optimization, model training, and validation.
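
As one concrete instance of the iterate-and-validate loop Chen describes, the sketch below uses scikit-learn's grid search to combine hyperparameter optimization with cross-validated model training and validation. The dataset and model are placeholders, and the parallelism setting (n_jobs=-1) stands in for the local CPU/GPU horsepower he mentions.

```python
# Minimal sketch of a rapid iteration loop: hyperparameter search with
# cross-validated scoring. The dataset and model are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
    cv=5,        # cross-validation guards against overfitting one split
    n_jobs=-1,   # parallelize; local CPUs/GPUs speed up each iteration
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```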

Engelbert: While we can debate if and when the singularity will occur, the fact remains that today there is no Skynet (however much we love Keanu Reeves), and institutions remain the force behind how, when, and for what purpose they use AI and ML. Consequently, getting the most value out of these technologies depends on a high level of intentionality. The foundation of any AI and ML strategy is a deep understanding of, and broad consensus around, the answers to questions such as what best practice looks like in advisement, how the student journey should unfold, and when to make an academic intervention. Without these answers, the technology becomes a solution looking for a problem.

I would also recommend that institutions do their due diligence when selecting a solution provider. While AI and ML are now widely used, particularly in the consumer market, the technology continues to evolve rapidly, and we are still in the early days of understanding its impact. Institutions must be confident not only that their provider has an exceptional solution but also that the provider will be a true partner. That partnership includes communicating highly technical information in ways a general audience can understand, providing direction and insight on best practices, and offering full transparency on how the provider uses and secures data.

Kathman: I know I've reiterated this idea quite a bit, but my main advice is for institutions to choose AI-powered services that complement the work being done by higher education staff. While comparatively mundane tasks, such as answering students' frequently asked questions, can be handed off to AI, it's worth stating again that student-facing staff members cannot be fully replaced by AI. If an AI technology promises to reduce the number of staff members rather than alleviate existing staff members' workloads, institutions are likely to face backlash from students who need interaction with a human to succeed. The importance of a partnership rather than a replacement cannot be overstated.

Additional Resources

Chen: Access related resources from HP at Data Science and ZCentral.

Engelbert: For a collection of resources related to the topics discussed in this article, see the Oracle AI and Data Science Blog.

Kathman: We have many resources on the topic of AI in higher education on Signal Vine's website. Notably, we published an ebook, Humanizing AI in Higher Education, which gives higher education leaders ideas on the best uses for AI in their field. A few months ago, we invited Diana Oblinger, President Emeritus of EDUCAUSE, and Matthew Etchison, CIO of Ivy Tech Community College, to join us in a discussion on how humans and AI can coexist to revolutionize and improve higher education. You can watch a recording of the webinar, How AI Is Transforming, Innovating, and Simplifying Higher Education.

Notes

  1. "College Students and Their Data: What They Expect of Institutions," Ellucian, 2016.
  2. Brian Kathman, "AI And Higher Ed Pros Are Partners, Not Competitors," Forbes, December 5, 2019.

Andrew Clark is Enterprise IT Program Manager at EDUCAUSE.

Jeff Chen is Head of Research Partnerships, WW Education, PS Strategy & Solution, at HP Inc.

Nicole Engelbert is Vice President of Higher Ed Development at Oracle.

Brian Kathman is CEO of Signal Vine.

© 2020 Andrew Clark, Jeffrey Chen, Nicole Engelbert, and Brian Kathman.