Key Takeaways
- Almost since the learning management system was established as a product category, there have been those who have chafed at its boundaries and constraints.
- If the transformation to an NGDLE implies a disaggregation of the "services formerly known as the LMS" in order to effect their more flexible recomposition, factors other than technical feasibility and pedagogical desirability come into play.
- Participation in a diverse community, coupled with the natural affordances for customization provided by open-source software, creates rich soil for innovation and problem solving.
In the re-imagined Battlestar Galactica, humanity's cyborg adversaries, the Cylons, have an often-repeated line: "All of this has happened before, and it will all happen again." To longtime participants in the conversations around a next generation digital learning environment (NGDLE), the phrase might sound oddly appropriate. Almost since the learning management system (LMS) was established as a product category, there have been those who have chafed at its boundaries and constraints. Past attempts to create more flexible environments — whether those at SUNY just over a decade ago, as outlined by Michael Feldstein in the July/August 2017 EDUCAUSE Review,1 or JISC's E-Learning Framework and E-Framework — however influential, rarely got past the "ideas" stage. Why was this, and what has changed? In 10 years' time, will this conversation need to "all happen again"?
Earlier initiatives failed to gain traction for several reasons. At an institutional level, the lack of a clearly articulated transition path from the LMS to a potentially more flexible, component-based successor was undoubtedly a significant factor. In more general terms, we simply had far less experience with component-based architectures and with the open application programming interfaces (APIs) that allow systems and components to interconnect and communicate. At the time, the apparent need to rip out existing enterprise systems wholesale and replace them, coupled with technical challenges — particularly of orchestrating composite services, or "making the pieces work together" — proved daunting.
So what has changed? Principally, higher education has 10 more years of experience and is finding different routes to the same — or a similar — destination. The experience of working with practically implementable APIs has addressed part of the "making the pieces work together" problem. The emergence of common specifications, such as IMS Global Learning Tools Interoperability (LTI), has made it easier to make applications, tools, and services work together in standard — and therefore more easily repeatable — ways. The LTI effort has focused on practical steps to connect individual, and often specialist, tools to larger environments such as the LMS, and potentially to connect one LMS to another.
At the risk of digression, it is important to recognize two points about LTI before proceeding. Firstly, LTI is a specification, and a very useful one, but it isn't magic. A mention of LTI and a wave of the hands disguises the serious effort required to perform any integration using the specification, let alone an elegant or rich one. (Making LTI-enabled integration significantly simpler is one of the objectives of the Apereo Tsugi project.) Secondly, LTI is a specification with a dual nature. LTI allows the launch of, and communication with, an external tool or service from within an LMS by helping establish a trust relationship between applications. Think of this as tool consumption. It also allows the LMS to offer such a tool or service externally to another environment. Think of that as tool provisioning. Most LMSs implement LTI in one direction only — consumption — a point worth noting when evaluating any LMS.
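To make the consumption half concrete, the sketch below shows roughly what an LMS does when it launches an external tool via LTI 1.1: it signs a form of launch parameters with a key and secret shared with the tool — the trust relationship — and POSTs the result to the tool's launch URL. The tool URL, key, secret, and identifiers here are illustrative placeholders, not drawn from any particular LMS, and the oauthlib library is just one way to produce the OAuth 1.0a signature.

```python
# Minimal sketch of an LTI 1.1 "basic launch" as performed by a tool consumer
# (an LMS). URL, key, secret, and IDs are illustrative placeholders; the trust
# relationship rests on the shared secret used to sign the form POST.
from urllib.parse import urlencode

from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

TOOL_LAUNCH_URL = "https://tool.example.edu/lti/launch"  # hypothetical tool

params = {
    # Required LTI 1.1 launch parameters
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course42-week3-quiz",
    # Context the LMS passes so the tool can personalize the session
    "user_id": "student-001",
    "roles": "Learner",
    "context_id": "course42",
}

# For LTI 1.1 launches, the OAuth 1.0a signature travels in the form body.
client = Client(
    "example-consumer-key",
    client_secret="example-shared-secret",
    signature_type=SIGNATURE_TYPE_BODY,
)
uri, headers, body = client.sign(
    TOOL_LAUNCH_URL,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# In a real LMS, `body` would be auto-submitted to the tool from the user's
# browser; the tool verifies the signature using the same shared secret.
print(body)
```

Tool provisioning is the mirror image: the environment exposes a launch endpoint of its own and verifies incoming signatures against the shared secret, rather than generating them.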
LTI's full capability and direction suggest a potential scenario for the future of the LMS: the gradual transformation of the LMS into a hub for orchestrating tools and services and enabling their connection to other enterprise systems. The flexibility this implies, including the ability to support an ecosystem of discipline-specific tools, is highly congruent with the emerging vision of an NGDLE. The ELI workshops that fed the initial NGDLE paper suggest that this is an idea that has found its time — or, at the very least, one with a significant level of interest and buy-in from the instructional technology community.
Yet if the transformation to an NGDLE implies a disaggregation of the "services formerly known as the LMS" in order to effect their more flexible recomposition, factors other than technical feasibility and pedagogical desirability come into play. The LMS doesn't exist outside a marketplace, and that marketplace offers not only a rich variety of software but also software built around very different development and sustainability paradigms, including those built around open-source licenses. Software license types carry a number of implications for sustainability and to a considerable degree determine the economic models and organizational forms that sustain the development and maintenance of the software itself. Socio-economic factors are therefore of particular significance to the NGDLE conversation, but they are all too often inadequately represented, or are reduced to a simplistic model of counting new LMS adoptions — a model that is unsustainable unless an infinitely expandable market is assumed.
Software has a tendency to aggregate features over time. For software produced under a commercial/proprietary license, these features develop as "unique selling points" that are ultimately about selling more software licenses than a competitor, and they tend to be tightly bundled with existing software for that reason. This drive for feature parity or supremacy tends to lead such software in one direction only: toward what might be termed software inflation. From time to time, tactical unbundling may occur, particularly if it represents an opportunity to increase licensing revenue. Such unbundling usually occurs at the level of a marketable "suite" of products rather than at the level of the more granular, composable, and broadly interoperable components or services envisaged by an NGDLE.
So how might the tendency toward tightly bundled aggregation play out for the "LMS-as-a-hub" scenario? One variant of the scenario might be termed "Pike and Minnows." In this variant the LMS remains a relatively monolithic bundle of services, surrounded by smaller, more agile specialist tools and services that can be "plugged in" via LTI to fill niches the LMS doesn't currently fill. The problem with this variant is that the tendency to aggregate functionality will at some point drive the Pike to eat an LTI-enabled Minnow, either to enhance functionality or to deny that functionality to competitors. At that point — probably in the name of "richer integration" — the Minnow's LTI capability might well be "augmented" in proprietary ways that make it unavailable elsewhere.
A tendency toward tightly bundled feature aggregation is not, of course, unique to commercial-proprietary software,2 but it does appear to be a pervasive aspect of the software marketplace. Understanding why requires examining not only the licensing regime a piece of software operates under but also the impact of that regime on the way the software is produced and maintained.
Alternative Approaches
So where might we look to balance this tendency toward the monolith? What forces might act to help "keep the market honest"? It's perhaps instructive in this context to note Conway's Law. First articulated in 1968, it states: "Any organization that designs a system … will inevitably produce a design whose structure is a copy of the organization's communication structure."3 In assessing this "mirroring hypothesis" in 2008, researchers at Harvard Business School commented, "In all of the pairs we examine, the product developed by the loosely-coupled organization is significantly more modular than the product from the tightly-coupled organization."4
The open-source world contains many different organizational models and communication structures supporting sustainability and governance. Linux and Apache, for example, represent opposite poles of governance. Linux remains highly hierarchical, ultimately focused around a pyramidal structure with one individual at the apex, while Apache has a more collegiate, distributed, and meritocratic distribution of authority. No judgment on the quality of the software is implied here, by the way. The needs of an operating system differ from those of more forward-facing applications, and Apache has over 350 software projects producing applications.
Assuming a community of sustainable size, Conway's Law suggests that a diverse and distributed community is more likely to produce modular and extensible software. But software produced by a diverse and distributed community — particularly one large enough to sustain the development of major systems — carries a greater overhead than software produced by several people in the same building. Nonprofit foundations such as the Apache and Apereo Foundations exist to reduce that overhead by providing a set of core services for a number of software projects or communities, obviating the need for those communities to provide — or duplicate — those services themselves.
Co-creation of open-source software by a diverse and distributed community has other key qualities. Eric von Hippel has written extensively on the relationship between customization and innovation, describing open-source communities as "horizontal innovation networks."5 Not every user of open-source software will choose to customize it, but for those who do, customization can provide an on-ramp to innovation. For example, when NYU first deployed Sakai, faculty dissatisfaction exceeded 24 percent. By listening to a faculty committee and working with the community to add requested features while improving usability, NYU significantly improved satisfaction: dissatisfaction dropped to 18 percent in one year and to 12 percent in the second. With software that is not open source, the university is stuck with the features the vendor chooses to deliver — sometimes you get what you request, but more often you don't. Having control over the source code has vastly improved the teaching and learning experience for faculty and students at NYU and given technologists confidence that they can innovate and deliver a satisfying experience.
The NYU example offers a vision not simply of collaborative co-creation with like-minded partners outside the institution, but of partnership, engagement, and co-creation with internal constituents to collaboratively solve problems and innovate. Open-source development allows for technology to be done with instructors and learners rather than to them. Together with software modularity, and the fact that open-source software is the best guarantor of open standards, this is a critical element in developing an NGDLE within an institution and ensuring a sense of ownership by its constituency.
In terms of feature aggregation, collaborative communities are clearly not oblivious to the market, or separate from it, but they tend to operate around a very different set of drivers than a single commercial entity with customers. "Features" are identified initially at an institutional level, clustered and discussed at a community level, and created or refined by a mixture of shared and hired human talent. Co-creation is disintermediated and more likely to meet institutional need than the alternative of "make a feature request, and if enough customers make a similar request, we'll think about it." The challenge facing open-source communities centers on aggregating resources to meet development objectives. A commercial/proprietary vendor borrows funding, setting that borrowed funding against potential future revenue; an open-source community pools present capacity to create a sustainable future.
By its nature an NGDLE is not something that can simply be purchased. Academic institutions need to own an NGDLE as it develops and shape it to their institutional context. Open-source software, with its rich affordances for innovation and range of support options, can play a significant role in this shaping. Participation in a diverse community, coupled with the natural affordances for customization provided by open-source software, creates rich soil for innovation and problem solving.
Notes
1. Michael Feldstein, "What Is the Next Generation?" EDUCAUSE Review, July/August 2017.
2. Open-source alternatives exist in the marketplace, but they are not always quite what they seem. The success of open-source software has led to the phenomenon known as "openwashing" — labeling a product as open source, but with a proprietary "catch" of one sort or another. This might involve a commercial entity effectively operating a dual license strategy, with an open-source public version and a cloud offering built around that version but with an added "secret sauce" (or perhaps, "secret source") to enable cloud hosting. Despite the "open" label, a commercial entity using this model tends to work in exactly the same way as a traditional commercial-proprietary vendor.
3. Melvin E. Conway, "How Do Committees Invent?" Datamation, April 1968. Harvard Business School provided significant empirical evidence to support Conway's hypothesis as recently as 2008. See Alan D. MacCormack, John Rusnak, and Carliss Y. Baldwin, "Exploring the Duality between Product and Organizational Architectures: A Test of the Mirroring Hypothesis," Working Knowledge, March 27, 2008.
4. MacCormack, Rusnak, and Baldwin, "Exploring the Duality."
5. Eric von Hippel, Democratizing Innovation (MIT Press, 2005).
David Ackerman is chair of the Apereo Foundation Board.
Ian Dolphin is executive director of Apereo.
© 2017 David Ackerman and Ian Dolphin. The text of this article is licensed under Creative Commons BY-NC-ND 4.0.