Opening the Black Box of Adaptivity

Adaptive courseware has raised both the hope of personalized learning at scale and the fear that the systems will replace faculty in the learning process. While some proponents have talked about the promise of “a robot tutor in the sky,” others like Candice Thille at Stanford University have spoken out about how we may be making a “dangerous mistake by letting companies take the lead in shaping the learning-analytics market.”

Neither hype nor hysteria is helpful during this critical phase of development in this nascent industry. Buyers are eager to learn what works and what doesn’t, and suppliers are searching for markets to test their ideas and innovations.

What professors and providers need is a common framework to organize the discussions about the most effective and efficient ways to create useful adaptive courseware.

In basic terms, the goal of adaptive courseware is to provide the right lesson to the right student at the right time. This deceptively simple goal is devilishly difficult to achieve. For instance, if a student gets an assessment question wrong, is their mistake due to a lack of attention to the instructional materials provided in the lesson, or a deeper misunderstanding of a core concept covered in a previous section? Systems that can answer complex (and often ill-defined) questions of this nature can take years or even decades to perfect.

Where is the best place to start when faced with rapidly changing products and confusing marketing materials? Having worked with a half dozen systems while at ASU, I have developed a framework to help me make sense of what’s happening inside the ‘black box’ of adaptive learning systems.

It is guided by two questions: What is adapting to the student? What is guiding the adaptation?

What is adapting to the student?

In the systems we have worked with, the ‘moving parts’ that are adapting are the Lesson Sequence and Content Selection. Adapting the lesson sequence to each student is a way to recognize their different knowledge levels and respond to their unique learning needs. For example, when a student takes an initial math assessment in Khan Academy, the system uses that information to create a personalized path through the course. Some learners may require extensive remedial assistance, while others may demonstrate mastery of a significant portion of the curriculum before the first day of class. Adapting to this reality provides a powerful way to help each student progress through the content in an effective and efficient manner.
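
To make that concrete, here is a rough sketch in Python of how an initial assessment could drive a personalized lesson sequence. The threshold, unit names and scores are invented for illustration; no vendor’s actual logic is this simple.

```python
# A minimal sketch (not any vendor's actual logic) of how an initial
# assessment could drive a personalized lesson sequence: units the
# student has already mastered are skipped, the rest are kept in order.

MASTERY_THRESHOLD = 0.8  # assumed cut-off for "already knows this"

def build_lesson_sequence(course_units, initial_scores):
    """Return the ordered list of units a student still needs to study.

    course_units   -- units in the instructor's default order
    initial_scores -- {unit: score between 0 and 1} from the placement test
    """
    sequence = []
    for unit in course_units:
        score = initial_scores.get(unit, 0.0)  # unassessed units are kept
        if score < MASTERY_THRESHOLD:
            sequence.append(unit)
    return sequence

units = ["counting", "fractions", "linear equations", "quadratics"]
scores = {"counting": 0.95, "fractions": 0.60, "linear equations": 0.40}
print(build_lesson_sequence(units, scores))
# ['fractions', 'linear equations', 'quadratics']
```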

Content selection is an adaptive sub-process that occurs within a lesson. In commercial applications, search engines like Google recommend different content to us depending on the information they have gathered about us. Adaptive courseware can use similar techniques to recommend different instructional resources to students depending on various factors. If students perform better on an assessment after reading a text in comparison to watching a video, then adaptive systems can track that data and use it to inform future recommendations. This learning loop of content selection, utilization and evaluation can be repeated millions of times in large-scale adaptive courseware like McGraw Hill’s ALEKS system to identify patterns that help determine what instructional resources are working best for the students.
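
Here is a toy version of that learning loop, again with invented data: the system records how students perform after using each type of resource and recommends the format with the best observed results. Real systems weigh many more factors, but the feedback loop looks roughly like this.

```python
# A toy version of the content-selection loop: record how students perform
# after using each resource format, then recommend the format with the best
# observed average score.

from collections import defaultdict

performance = defaultdict(list)  # format -> list of post-assessment scores

def record_outcome(resource_format, score):
    """Store the assessment score earned after studying a given format."""
    performance[resource_format].append(score)

def recommend_format(default="text"):
    """Pick the format with the highest average score seen so far."""
    if not performance:
        return default
    return max(performance, key=lambda f: sum(performance[f]) / len(performance[f]))

record_outcome("video", 0.70)
record_outcome("text", 0.85)
record_outcome("text", 0.80)
print(recommend_format())  # 'text'
```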

What is guiding the adaptation?

This second question goes to the heart of what is happening inside the ‘black box’ of adaptive courseware. I have identified four techniques that vendors have used to decide which lesson sequence and content selection is most effective.

  • Algorithm (analytics) – recommendations
  • Assessment – rapid remediation
  • Association – decision tree
  • Agency – student chooses

Algorithms have drawn the most attention (both positive and negative) from people working with adaptive systems. Approaches such as item response theory and knowledge space theory are well documented. Those are often used in combination with proprietary algorithms to analyze data on a student’s activities, assess their learning, match that with a predictive model, and make recommendations on what they should do next.
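
As a concrete illustration, the simplest published form of item response theory (the one-parameter, or Rasch, model) estimates the probability of a correct answer from the gap between a student’s ability and an item’s difficulty. The sketch below shows only that textbook formula, not any vendor’s proprietary algorithm, and the numbers are invented.

```python
# Item response theory in its simplest (Rasch / one-parameter) form:
# the probability that a student answers an item correctly depends on the
# gap between student ability and item difficulty.

import math

def p_correct(ability, difficulty):
    """Rasch model: P(correct) = 1 / (1 + e^-(ability - difficulty))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A student slightly above the item's difficulty is more likely than not
# to get it right; a much harder item drops that probability sharply.
print(round(p_correct(ability=0.5, difficulty=0.0), 2))  # ~0.62
print(round(p_correct(ability=0.5, difficulty=2.0), 2))  # ~0.18
```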

Fields like marketing have pushed the frontiers of this technique, while education is developing more slowly due to the difficulty of compiling large, integrated data sources that can be used to identify the beneficial learning patterns. Of the adaptive systems I have worked with, only the math courseware products (MyMathLab, ALEKS, Khan Academy) have demonstrated the ability to collect, analyze and apply algorithmic logic models. Each of those products has millions of users and well-defined concept charts, which make the adaptive analyses and recommendations possible. In other domains where we have worked with adaptive courseware, such as biology, psychology and history, there is no agreement among faculty on the relationships between concepts, nor have we achieved the scale necessary to have sufficient data to model student behaviors. That will most likely change as those products mature and reach larger groups of students.

In all cases, it is critical to remember that algorithms are not value-neutral. They are the products of people who have biases and blind spots that can affect the logic behind them. There are many stories from other fields, such as the social media algorithms at Facebook and elsewhere that were designed to manipulate users’ behavior. We should all be concerned. Educational systems must be held to a higher standard of transparency and trustworthiness. While there is no clear answer on how to do that at this time, perhaps one of our higher education associations such as the EDUCAUSE Learning Initiative can facilitate a dialog with the parties involved in this debate to develop proposals for safeguarding our students.

All the excitement about algorithms overshadows the fact that assessment-based adaptivity is a much more common technique in systems we have explored. This is a simple way to provide rapid remediation. A student gets a formative assessment question wrong and is immediately presented with instructional resources to learn the correct answer.

While that recommendation could be based on an algorithm, in many cases the relationships between questions and remedial resources are currently hardwired by the courseware developers to ensure accuracy. Some might consider this technique unworthy of the moniker ‘adaptive,’ but I’m just describing what the vendors are doing (and marketing) as adaptive courseware.

Personally, I consider this to be a first step on the road to algorithmic adaptivity. It’s a form of prototyping that can help students right now. Even some learning management systems (LMS) have seen the value of developing features such as adaptive release. Before a system has sufficient data to identify learning patterns, a subject matter expert provides the crucial knowledge about the relationship between lesson sequence and content selection that will help students learn.
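
In code, the hardwired approach can be as simple as a lookup table that a subject matter expert maintains. The question IDs and resource names below are made up; the point is that the mapping is authored by a person, not learned from data.

```python
# A hedged sketch of assessment-based remediation: the mapping from a
# missed question to remedial resources is hardwired by a subject matter
# expert rather than learned from data.

REMEDIATION_MAP = {
    "q_photosynthesis_1": ["reading: light reactions", "video: chloroplast tour"],
    "q_photosynthesis_2": ["reading: Calvin cycle"],
}

def remediate(question_id, answered_correctly):
    """Return the remedial resources to show immediately after a miss."""
    if answered_correctly:
        return []
    return REMEDIATION_MAP.get(question_id, ["review the full lesson"])

print(remediate("q_photosynthesis_1", answered_correctly=False))
# ['reading: light reactions', 'video: chloroplast tour']
```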

What really matters is that it is working for the students: we see improvements in both preparation for class discussions and test results, which indicate that assessment adaptivity can help students learn efficiently and effectively.

Association is the unheralded component of adaptive courseware that is crucial to every system. Before you can apply an algorithm, you currently have to have a concept chart (knowledge map, domain model, etc.) that identifies associations between the lessons you want students to learn. You can think of it as a syllabus on steroids. All of the connections between prerequisite and corequisite concepts must be documented to create a referential framework for the adaptive analyses.

A good example of the product of this process can be seen in the Khan Academy Knowledge Map. You can trace the math domain from counting with small numbers to advanced concepts like differential equations. The caveat here is that math is one of the few fields where there is some agreement on the sequence of these lessons. In other disciplines, there is no such agreement. In fact, diversity in the approaches to organizing knowledge in some subjects is often a strength, as faculty experiment with how to help students learn complex (and often rapidly changing) scientific concepts.
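
Under the hood, a concept chart like that can be represented as a directed graph of prerequisite relationships, from which a valid lesson order can be derived. The concepts below are illustrative and are not Khan Academy’s actual map.

```python
# One common way to represent a concept chart: a directed graph of
# prerequisite edges, from which any valid lesson ordering can be derived.

from graphlib import TopologicalSorter  # Python 3.9+

# Each concept maps to the set of concepts that must be learned first.
concept_chart = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "algebra": {"multiplication"},
    "calculus": {"algebra"},
    "differential equations": {"calculus"},
}

print(list(TopologicalSorter(concept_chart).static_order()))
# ['counting', 'addition', 'multiplication', 'algebra', 'calculus',
#  'differential equations']
```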

That is why the concept chart is so important. If the underlying relationships are not clearly defined in a way that faculty can view, verify and possibly modify, then there is a risk that the courseware will be counterproductive for students. We had this experience in an introductory biology course for non-majors. The first instructor designing the adaptive system had taught the course for 30 years, and stated that the best way to learn biology was to structure the concepts from macro to micro (from biomes to DNA). We built that concept chart and ran the course for a year. Then a second instructor was put in charge and insisted that the best way to learn it was from micro to macro. This necessitated a complete restructuring of the courseware to ensure the students received the right lesson at the right time in the right order. This is why adaptive association matters, and open access to the concept chart should be part of the discussion with every adaptive courseware vendor you consider.

Finally, we get to agency. This is a way to incorporate students into the discussion of how the courseware should adapt to them. Rather than assuming that subject matter experts know it all or that algorithms can predict it all, we ask students if they think they learned the lesson before we assess whether they did. Agency challenges the students to use their metacognitive skills to decide whether they need more instructional resources to learn a concept. “Knowing what you (don’t) know” is the kind of higher order thinking skill our students will need to succeed in the 21st century workplace.

This is a relatively new but powerful addition to the portfolio of adaptive techniques. Systems such as McGraw Hill LearnSmart and CogBooks have been early adopters of the approach. At the end of every lesson, students are asked how well they understood the concept. If they say they get it, then they move on. If they don’t, then additional resources are provided. Of course, their opinion must be validated by an assessment. That allows the systems (and faculty) to use this metadata to determine the best way to help the student learn.

The information gathered from agency is useful in two ways: it can help instructors identify students who lack confidence in their understanding, and it can help them identify lessons that students think are not adequately explaining the concept. For example, if one student is consistently asking for more resources and doing poorly on the assessments, then the faculty member might want to offer help to that individual. On the other hand, if every student is asking for more assistance on a lesson, then that lesson might need to be improved for everyone. In both cases, agency gathers the voice of the student in a way that supports their learning and is highly valuable in helping faculty continuously improve their courses.
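
A simple sketch shows how that agency metadata might be used: cross-tabulate each student’s self-reported confidence with their assessment result, flag individuals who lacked both confidence and success, and flag lessons where most students asked for more help. The records and threshold below are invented for illustration.

```python
# A hedged sketch of using agency data: compare self-reported confidence
# with assessment results to flag students who need outreach and lessons
# that may need revision.

records = [
    # (student, lesson, said_they_got_it, passed_assessment)
    ("ana",  "cells", False, False),
    ("ben",  "cells", False, True),
    ("cara", "cells", False, False),
    ("ana",  "dna",   True,  True),
]

def students_needing_outreach(records):
    """Students who both lacked confidence and failed the assessment."""
    return sorted({s for s, _, confident, passed in records
                   if not confident and not passed})

def lessons_needing_revision(records, threshold=0.5):
    """Lessons where more than `threshold` of responses asked for help."""
    asked, total = {}, {}
    for _, lesson, confident, _ in records:
        total[lesson] = total.get(lesson, 0) + 1
        asked[lesson] = asked.get(lesson, 0) + (0 if confident else 1)
    return [l for l in total if asked[l] / total[l] > threshold]

print(students_needing_outreach(records))   # ['ana', 'cara']
print(lessons_needing_revision(records))    # ['cells']
```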

Final Observations

The ‘black box’ of adaptive courseware does not have to be magical or menacing. It is simply another tool for faculty and students to improve learning. However, like any tool, it can be wielded for good or bad purposes. It is incumbent upon both providers and professors to join the discussion about how to ensure adaptive courseware is being used in an ethical and effective way.

The four techniques used inside the box provide us with one way to parse out different aspects of adaptive courseware and organize productive discussions. For example, rather than focusing only on debating the ethics of algorithms, what if we started with a discussion of ways to help faculty collaboratively identify key associations between concepts? Adaptive systems can be integral to providing an open process where faculty can present, debate and refine their ideas for the concept charts for each discipline.

The goal should not be to create a standardized model of knowledge in each field. Rather, it should be to democratize this discussion, which has historically been controlled by publishers and a small group of authors. If professors in Asia, Latin America and elsewhere have new ideas for concept charting or lesson sequencing, then they should have a place to present those ideas for peer review. The need for this type of information exchange as a foundation for adaptive courseware should encourage us all to collaborate in those discussions and apply the rigor of a ‘research approach’ to teaching and learning.

While artificial intelligence and machine learning might seem like they are a long way off in education, every day on the way to work in Scottsdale, Arizona, I see Uber testing its self-driving vehicles on the city streets. Inside is an engineer who evaluates the adaptive driving systems (and possibly takes the wheel), making sure they are performing as expected. Though teaching and driving are different processes, we should expect professors to be involved in the design of the adaptive systems and be in a position to ‘take the wheel’ to ensure students are learning. Anything less would be an abdication of our responsibility to the students who will use adaptive courseware in the future.


Dale Johnson is Adaptive Program Manager at Arizona State University