Exploring Meaningful Measures of Accountability


Assessing the quality of a college degree and accounting for its value are hot topics in US higher education policy. Both the introduction of the EQUIP program and the recent proposed rule for federal loan forgiveness demonstrate that the government wants more assurance from higher education that we are doing what we say we are doing.

What's the problem? According to one proposed bill, a quality education is only as good as its measurable student outcomes. Quality and value clearly matter, yet the accreditation process can be too focused on inputs over outputs.

Over the last year, the two of us have had numerous conversations about accountability in education. We are each driving assessment work in distinct spaces at Davidson College: one focused on accreditation, the other on innovation. Where the former emphasizes outcomes-based assessment, the latter rejects defining outcomes, by design.

Meaningful measures of accountability in education, we believe, lie somewhere in the middle.

Like any organization, higher ed institutions need to be accountable for what is and isn't working for our students. Students deserve to know how we are being thoughtful and intentional in supporting their learning and growth. In our experience, educators aren't resistant to accountability itself, but rather to the rigid constraints and top-down manner in which it can be designed and implemented.

Some faculty argue passionately that much intellectual growth in the classroom simply defies measurement. There are domains of learning that we cannot capture through conventional learning outcomes, and some of the best learning happens in open, emergent exploration.

We agree. But while we cannot measure all learning, we can measure some of it. Achievement on specific learning outcomes is only ever a partial measure of student growth, but that doesn't mean we should abandon assessment altogether or reject it as an infringement on academic freedom.

If outcomes-based assessment doesn't account for other domains we value, is there a method that can?

Recognizing that digital technologies are changing the ways we learn (http://www.elearnspace.org/Articles/connectivism.htm), Davidson College is providing the R&D time and space for students and faculty to experiment with new models of education: models that intentionally question our structural assumptions about learning and explore how digital technologies can help us do what we currently cannot.

R&D is a space for experimentation in uncertainty. When you cannot predict what will work, you embrace openness in order to learn from failures. There is no checklist or recipe to follow. Innovation cannot be measured by predetermined outcomes because, by definition, the outcomes are unknown. What's needed, according to Dave Snowden, are "ways of measuring success without knowing in advance what that success may be."

How do we account for innovation?

Accounting for anything is fundamentally about aligning your assessment appropriately to the domain you are measuring. Drawing from the Cynefin framework, our focus is on understanding learning within the complex domain.

[Figure: Assessing complexity]

Assessing complexity is essentially managing for emergence within flexible constraints. A promising digital tool for managing innovations, called Sensemaker, combines complexity theory with narrative research and has been used in industry with interesting results.

Sensemaker is a 'probe-sense-respond' methodology that looks at what is and isn't working at the system level. The tool collects stories from R&D experiments and learning environments more broadly, while empowering the storyteller, rather than a professor or administrator looking for predetermined values, to interpret their meaning. These stories reveal what might be termed the weaker signals within the noise. The resulting mix of statistical and narrative data will help us manage innovations in real time.
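To give a flavor of how self-signified stories can surface those weaker signals, here is a minimal sketch in Python. This is not Sensemaker's actual software or data model; the story records, the signifier scale, and the outlier threshold below are all hypothetical, invented for illustration.

    # Illustrative sketch only -- not Sensemaker's real software or data model.
    # Each record pairs a micro-narrative with a score the storyteller assigned
    # on a hypothetical signifier scale:
    # 0.0 = "felt prescribed" ... 1.0 = "felt emergent".
    from collections import defaultdict
    from statistics import mean

    stories = [
        {"course": "BIO101", "score": 0.15, "text": "I followed the lab manual step by step."},
        {"course": "BIO101", "score": 0.20, "text": "The rubric told me exactly what to write."},
        {"course": "DIG350", "score": 0.85, "text": "We invented our own research question."},
        {"course": "DIG350", "score": 0.90, "text": "My prototype failed, and that taught me the most."},
        {"course": "DIG350", "score": 0.30, "text": "The final deliverable was fixed in advance."},
    ]

    # Statistical layer: aggregate the self-signified scores per course.
    by_course = defaultdict(list)
    for story in stories:
        by_course[story["course"]].append(story)

    for course, group in by_course.items():
        avg = mean(s["score"] for s in group)
        print(f"{course}: mean emergence score {avg:.2f} across {len(group)} stories")
        # Narrative layer: surface outlier stories -- candidate weak signals
        # that a human reader, not the algorithm, should interpret.
        for s in group:
            if abs(s["score"] - avg) > 0.3:
                print(f"  weak signal? {s['text']!r} (score {s['score']})")

The design choice worth noticing is that the statistics only direct attention; interpretation stays with the narratives and the people who told them.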

How can innovation accounting inform accountability in higher ed?

Our work in accreditation and R&D has given us a fresh perspective on accountability. We know our students benefit from clear assignments and well-defined outcomes where appropriate. We also know that other domains of learning mirror the innovation process and require spaces for emergence, where the outcomes are unknown. Bridging these methods, we will pilot the Sensemaker methodology across the institution, in select courses, programs, and institutional structures, to address these questions:

  • How might we experiment with new forms of assessment that complement outcomes and add a layer of data for understanding open, emergent learning?
  • How might new forms of assessment help institutions adapt to uncertainties in the changing education landscape, and design the optimal system structures that amplify open, emergent learning?

If we define the value of a college degree as only what we can measure in outcomes, then we risk designing the student experience for the domains of learning that can be measured, to the exclusion of everything else.

If, however, we look at higher education as a complex human system, we may come closer to designing a framework for accountability that aligns appropriately to the system being measured. Ideally, the measures gained from statistical and narrative data at the systems level help us be accountable at every level in the ways we find most meaningful.

We will continue to blog openly about our research on our R&D site. If you’d like to follow our progress, join us here!


Kristen Eshleman is Director of Digital Learning Research & Design (DLRD) at Davidson College. DLRD is an R&D initiative focused on the design and research of experiments that explore new models of a liberal arts education in the digital age. DLRD provides a safe-to-fail space where risk-taking is encouraged and design-based research informs Davidson's digital strategy.

Shireen Campbell is Professor and Chair of English at Davidson College and co-director of the Writing Center. With a strong focus on the scholarship of teaching and learning, she has been working with departments and programs to hone outcomes assessment practices on campus as part of Davidson's reaccreditation process.