Artificial Intelligence + Real Faculty = Future of Teaching (?)


Two months ago I sat at a conference (the ELI Annual Meeting 2017) with several hundred teaching- and learning-focused higher ed professionals, listening to Satya Nitta, director of the Cognitive Sciences and Education Technology research department at IBM Research, talk about IBM's next-generation supercomputer, Watson.

In the course of illustrating Watson's uses, Nitta described how one university employed Watson as a teaching assistant for a writing course. The students did not know that one of their TAs was a computer; they only knew that "Jill Watson" was giving them feedback on their writing. I was skeptical. I pride myself on being a decent writer, and I know that the craft is difficult to learn well. During his presentation, Dr. Nitta shared samples of the feedback 'Jill' gave the students, and I have to admit I was impressed. It was cogent, good-quality feedback of the kind I might have given a student, and it would certainly help them become better writers if they followed the advice. Predictably, this touched a nerve with the faculty in the room. It's one thing to talk about jobs being replaced by computers; it's quite another when the job in question is your own. I think everyone left the session amazed and a little unsettled.

And well we should be. Artificial intelligence (AI) might be another tool we can use to press down on all three sides of the iron triangle at once, achieving access at scale, affordability at scale, and, by the looks of it, quality at scale. Think about what that might mean for emerging countries without significant educational infrastructure. Think about what it might mean for students of the lower and ever-shrinking middle classes in this country. Usually when we push down on two sides of the iron triangle, the third side resists: there is no shortage of wonderful methods to increase access or affordability, but they usually present formidable challenges to quality. So the hope raised by AI is meaningful.

At the same time, the fear AI raises is also palpable. What does it mean for teaching assistants and adjuncts? To most of them, it looks a lot like a pink slip. What does it mean for learning? Yes, the quality of the feedback was good, but there is something about writing — the part about capturing the human experience and expressing it in uniquely beautiful and moving ways — that surely a computer just can't know. And in fact, Nitta said as much. The key to meeting this fear, in his mind, is to be very clear about Watson as a tool with limits, and to imagine the teaching of the future as a field where the uniquely human elements required for learning are paramount.

I think Nitta is right. Daniel Pink, in his first book, A Whole New Mind, talked about the unstoppable trend of outsourcing jobs to cheaper countries via technology and the impacts of those decisions. His conclusion was that we should focus ourselves on the right side of the brain — the source for our imagination, creativity, ideas and emotions. These cannot be outsourced to someone cheaper, whether that someone is in another country or inside the lab at IBM.

What would teaching grounded in imagination, creativity, ideas and emotions look like? If Jill can teach the structural part of writing, teachers could focus on higher-order concepts — the power of writing, for example, and how it gives voice to communities and ideas that can shape the trajectory of society. This is the highest use of our faculty. It might enable faculty to hold longer office hours because they don't have to work all night grading papers. It might mean that TAs get to learn the craft of teaching higher-order concepts because they, too, can engage students in small-group conversations rather than just reviewing test answers. And the need for research will remain, if not increase, to add to the corpus of Watson's knowledge. (There are many points for human input in the process of training up Watson [https://www.youtube.com/watch?v=_Xcmh1LQB9I], not least what goes in and what stays out of the algorithm — a hugely important subject. But let's save that for the next post, on Cathy O'Neil's book, Weapons of Math Destruction.)

In the end, though, it can't be humans versus machines. It will have to be humans with machines; this trend, like outsourcing, is unstoppable. So let's plug into the right side of our minds and imagine how that might look.


Holly Morris does higher ed strategy/innovation consulting and leader/team development work in Seattle, Washington.