Prediction is very difficult, especially if it is about the future.
—Niels Bohr

Let us put our minds together and see what life we can make for our children.
—Sitting Bull
Dawn breaks on a gorgeous morning in summer 2025. Peter has been waiting for this for months. It's New Student Orientation Day at My University, Peter's customized version of four formerly independent colleges that functionally merged several years earlier. He readies himself quickly and by 8:30 a.m. carefully opens the box he recently received from Student Life. He takes out the virtual reality (VR) visor, adjusts it, and touches the switch. Instantly, he is on campus, being welcomed by Perpetua, his personal orientation leader. VR has made it possible for each student to have a fully individualized campus that is populated with other students (also in VR mode) who may be physically located anywhere but whose lived experience is in the VR university. Perpetua escorts Peter to the orientation session, where she introduces him to several other students. All of them will get to know each other well, since they will be enrolled in the same set of courses and will participate in many other activities together. Peter's small group joins other students for the program. Skits and presentations covering the usual new-student-orientation topics give the newcomers interactive virtual roles to play. After the session, Peter and his friends are told to check their Personal Learning Space for important updates. Peter discovers that while he was orienting, the smart registration and student information system had identified him through a visor-based retinal scan, analyzed his academic file, completed his registration, and filed all his course materials in his personal cloud storage. Bidding Perpetua and his new friends goodbye, Peter leaves the VR campus and goes about his day.
Two days later, Peter is ready to attend class. He finds that the learning experience is equally immersive. Through state-of-the-art real presence, each student interacts with the other learners and the faculty member. It's as if Peter's living room has turned into a holodeck. Course content is fully immersive, so he is virtually present during the building of the pyramids, the Battle of Gettysburg, or Martin Luther King Jr.'s "I Have a Dream" speech. In a real sense, the classroom has become his green screen. Peter's competency evaluations will also take advantage of VR when possible, so they may entail running virtual what-if scenarios of alternative outcomes to famous historical events, making careful real-time comparisons of artifacts that reside in different museums, or performing with other musicians in virtual orchestras at a level of sophistication that the composer Eric Whitacre can only dream of today. Peter often attends class while standing or sitting on a mat that gives him full range of motion to physically engage in the learning by walking around each virtual set.
The experience of Peter's faculty in 2025 is equally transformed. Georgia Tech's nearly decade-old experiment with a robotic teaching assistant based on the IBM Watson platform has morphed considerably. Only the most diagnostic data about each student is transmitted to faculty on dashboards organized by course. Artificial intelligence (AI) systems handle the routine interactions. Monitoring student achievement is now more akin to monitoring key telemetry in an intensive care unit or a jumbo jet: underlying smart analytic systems process the data and report only a handful of key essentials. Faculty can adjust the level of difficulty of the content in real time based on how quickly students master the material. (One can almost imagine an individualized Star Trek Kobayashi Maru test within each discipline or course.) Faculty concentrate their individualized interactions with students on a combination of teacher/tutor, mentor, and case-manager communications.
Farfetched? Impossible to achieve in the next eight years? Maybe. But consider that computer-based teaching assistants were already in limited use in 2016. The New York Times distributed cardboard VR viewers as a way to immerse its readers into a storytelling experience. My vision of such rapid innovation, far from being the result of some blue-sky process gone wild, is grounded in experiences we have already had and approaches already adopted. Let me explain.
For the most part, what we have today in our emergent digital learning environment is roughly a digital analogue of what we have always had, only more of it: (more) original source material and a core text or equivalent; (more) samples of student work; a (login-based) attendance monitor; (threaded) discussions of key points; and so on. This is not all that surprising, given that the underlying design must maintain contact with processes and approaches that faculty already know and do well. Furthermore, any radical design or process change must first have a bona fide proof of concept before moving to substantive qualitative change. We must show that we can turn formerly discrete pieces of tangible data into a digital, sometimes real-time data flow and that the transformed approach is as reliable and valid as the approach we want to retire.
For example, all of us can relate to how we moved, in financial record-keeping, from paper processes to our present reconceptualized approach. First, we duplicated the by-hand processes and forms into a digital format, including all the checking and rechecking by people. Only when we had convinced ourselves that a digital world was possible, that it was reliable, and that the output was valid did we rethink the process of what needed to be done and for what purpose. Only then did we redesign the approval process, for instance, to include human checking only when required by best practice under audit standards. At that point, true reconceptualization (reengineering, disruption) occurred.
That's where we are now with the student experience. In 2017, we are deep into the proof-of-concept stage. We are either on the verge of true transformation (if you are a believer) or on the edge of the abyss (if you are not). Now that we have put the proof of concept through repeated testing in numerous learning platforms and content delivery modes, have shown that it holds up in student data analytics and early warning systems, and have confirmed that it handles all forms of electronic content, let the alchemy begin. Let us spin the 2017 question—"Can we create the tools for (and can we trust) a true virtual learning environment?"—into a 2025 question: "What do we do with the gold that results?"
To me, the most important "2025 gold" that will be created from the "2017 base elements" is the use of technology to create fully immersive learning environments. This will entail moving from a still largely passive online learning environment in 2017 (i.e., students log in, watch captured videos, write notes, take quizzes and exams, upload papers, participate in threaded and generally asynchronous discussions) to an environment in which student-faculty and student-student interaction occurs for all students at any time due to the AI-based platforms underlying the learning environment. Each student has his/her personal teaching assistant to provide coaching and intervention, along with other support functions. Joint projects are completed in immersive simulated labs that provide true sensory feedback (again, think flight or surgical simulators). Neuroscience research and centuries of pedagogical explorations clearly indicate that interactive methods in general, and those that add the body back into the process whenever appropriate, result in better learning.
At present we remain constrained, even trapped, by the need to stay close to traditional approaches. Intellectually, we understand the need to bend the innovation curve that implements cognitive neuroscience and related research into a nearly vertical climb, but we are nervous about doing so because of pointed and determined resistance from accreditors, evaluation systems, and internal constituencies. Collectively, our behavior is at times reminiscent of toddlers who want to head off on their own and explore but who stop every few feet and turn to check in with an adult to make sure that things are still okay. It takes us a while to get comfortable heading off into the greater unknown, even when we know we must.
The sooner we let imagination become, as the Walt Disney Company would put it, imagineering, the better off we will be. Immersive learning will surpass active learning, which in its day surpassed passive learning in effectiveness. Campus leaders should support bold, visionary efforts at creating new learning models. Imagineered innovation will not always work perfectly, and sometimes it will not work at all. Niels Bohr’s point about prediction is well taken. But vastly improved student learning is at stake. Using our base elements of today, we can make gold and improve the lives of our children by 2025. That would make Sitting Bull proud.
John Cavanaugh is President and CEO of the Consortium of Universities of the Washington Metropolitan Area. Previously, he served as Chancellor of the Pennsylvania State System of Higher Education, President of the University of West Florida, and Provost and Vice Chancellor for Academic Affairs at the University of North Carolina at Wilmington.
© 2017 John Cavanaugh. The text of this article is licensed under the Creative Commons Attribution 4.0 International License.
EDUCAUSE Review 52, no. 1 (January/February 2017)