Making the Sausage; or, How CDS Establishes Maturity Index Validity

EDUCAUSE is continuously improving its Core Data Service maturity indices to ensure that they contain valuable, useful, and believable data for use by higher education institutions.

The EDUCAUSE Core Data Service (CDS) is a multifaceted tool used by hundreds of higher education institutions each year to make decisions about IT spending, staffing, and service provision. The EDUCAUSE community relies on CDS to drive strategic, operational, and learning-related technology services.

CDS includes a set of maturity indices across a variety of IT service areas, including digital learning, disaster recovery, business continuity, analytics, and student success technologies. Each maturity index consists of twenty to thirty items. These items are organized into groups that make up dimensions, which represent the main functions or tasks necessary for providing a given IT service. Scores for each dimension are mapped to a rubric, providing CDS users with context for their own score and for what could help improve the score in a specific dimension.
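
As a rough illustration of how such an index can be structured, the sketch below rolls a handful of item responses up into a dimension score and maps that score to a rubric band. The item names, the simple-mean rollup, and the cut points are all hypothetical; this is not the actual CDS scoring methodology.

```python
from statistics import mean

# Hypothetical item responses for one dimension, on a 1-5 scale.
# The item names and the simple-mean rollup are illustrative only.
responses = {"advising_q1": 4, "advising_q2": 3, "advising_q3": 5}

dimension_score = mean(responses.values())

# Illustrative rubric: map the dimension score to a maturity band.
if dimension_score >= 4.0:
    band = "Optimized"
elif dimension_score >= 3.0:
    band = "Established"
else:
    band = "Developing"

print(f"Dimension score: {dimension_score:.2f} -> {band}")
```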

The maturity indices allow institutions to evaluate their digital capabilities, benchmark their level of maturity in a given technology area against that of their peers, and track their progress toward becoming a mature provider of that service.

EDUCAUSE often receives questions about CDS in general and about the maturity indices in particular. The English-to-psychometrician¹ translations of these questions are shown in table 1.

Table 1. An English-to-Psychometrician Translation of Questions about CDS Maturity Indices

| What Users Say | What Researchers Hear | What This Means |
| --- | --- | --- |
| Why should we believe these items really measure maturity in this service area? | Do you have evidence to establish construct validity? | Construct validity is the degree to which a survey measures what it claims to be measuring. |
| Why should we believe these items represent these dimensions? | Do you have evidence to establish content validity? | Content validity is the degree to which a measure (i.e., a dimension in the maturity index) represents all facets of that construct. |
| Why should we believe this survey is measuring maturity in this service area? | Does this instrument (maturity index) have evidence of face validity? | Face validity is the degree to which a procedure, such as a survey, appears effective in terms of its stated goals. |

To answer these questions and confirm validity, EDUCAUSE researchers decided to start with a single maturity index: Student Success Technologies (SST).

The SST maturity index is meant to help institutions evaluate the extent to which they are prepared to provide the technologies and services that student success initiatives require.

The 2017 and 2018 SST maturity indices had five dimensions comprising twenty-nine items:

  • Process and Decision-Making: The extent to which student advising, support services, and opt-out options, as well as the use of analytics to effect improvement and predictively inform student success initiatives, are provided and maintained by the institution (five items)
  • Defined Outcomes: The extent to which student success goals, metrics, and measures are clearly documented, understood, and transparent and are aligned to the measures and institutional-level student success outcomes developed by the institution (four items)
  • Leadership Support: The extent to which senior leadership is committed to the success of student success initiatives and technologies; the extent to which the two areas are aligned; the extent to which institutional leaders are aligned; and the extent to which funding is secured and continuously reviewed in support of student success initiatives and technologies (five items)
  • Collaboration and Culture: The extent to which collaboration around student success goals, metrics, and measures is encouraged and fostered by the institution; the extent to which students, staff, faculty, and senior leaders are included in discussions around the use and collection of student success data and analytics; and the extent to which multiple stakeholders across the institution are continuously included in discussions related to the retention and support of ongoing student success initiatives and technologies (eight items)
  • Technology: The extent to which technology, tools, data sharing, and training related to supporting the use of student success technologies, including advising and alignment of institutional-level goals and initiatives, are provided (seven items)

In the spring of 2017, EDUCAUSE led a group of SST subject matter experts (SMEs) through a content-development process to evaluate the existing SST maturity index and make any needed revisions. With SME feedback in hand, researchers began to consider changes that included removing irrelevant content, rewriting confusing or poorly worded items, rearranging items within dimensions, and even revising the dimensions themselves. During this process, EDUCAUSE also updated the rubrics for interpreting the scores. When changes to a survey instrument are under consideration, it is important to weigh the value of a revision against the cost of losing the ability to compare items over time once the wording of an item or dimension has been altered.

Using the updated SST maturity index, researchers conducted field tests and cognitive interviews with SMEs (in which respondents shared their thoughts on the survey in real time) before releasing the index later that summer. The field-test responses and interviews provided the evidence needed to establish face validity and helped confirm construct validity and content validity as well.

Next, using data collected through the 2017 CDS annual survey (administered from July to November 2017), EDUCAUSE took three steps to analyze and evaluate the results of the revisions to the SST maturity index. The goal of the analysis outlined in the steps below was to establish evidence that the maturity index measures what it purports to measure (construct validity) and that the items grouped together within a dimension are actually related to that dimension and to each other (content validity).

Step 1. The first step was to compute Cronbach's alpha, a statistic that estimates the internal consistency of the maturity index. Cronbach's alpha ranges from 0 to 1, with higher values signaling a close relationship among the items; in other words, it indicates how closely related a set of individual items are as a group. Here, the construct is SST maturity. Cronbach's alpha for the 2017 SST maturity index was 0.95 (25 items and n=414 responses), which, combined with the evidence of face validity, is an excellent indication that the items on this index all measure SST maturity.
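
For readers who want to run this kind of check on their own survey data, here is a minimal sketch of computing Cronbach's alpha in Python with pandas. The file name and item columns are placeholders; any DataFrame whose columns are item responses would work.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are survey items.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    items = items.dropna()                      # listwise deletion of incomplete responses
    k = items.shape[1]                          # number of items
    item_variances = items.var(axis=0, ddof=1)  # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with a hypothetical file of item responses (e.g., 1-5 Likert columns):
# df = pd.read_csv("sst_responses.csv")
# print(round(cronbach_alpha(df), 2))
```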

Step 2. The next step was to employ a statistical method called Exploratory Factor Analysis (EFA) to identify the underlying relationships between the survey items. This analysis reveals the factors/dimensions (known as latent constructs) underlying a battery of items and sets the stage for the next step.
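
Exploratory analyses like this can be run with standard open-source tools. Below is a sketch using the Python factor_analyzer package; the file name, the number of factors, and the rotation method are assumptions for illustration, not details of the EDUCAUSE analysis.

```python
# A minimal EFA sketch using the factor_analyzer package
# (pip install factor-analyzer). All inputs here are illustrative.
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("sst_responses.csv").dropna()  # hypothetical: one column per survey item

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")  # oblique rotation: factors may correlate
fa.fit(df)

# Loadings show how strongly each item relates to each latent factor;
# items that load together suggest a shared dimension.
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
print(loadings.round(2))
```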

Step 3. The third step was to perform Confirmatory Factor Analysis (CFA), a procedure that evaluates the adequacy of how items have been assigned to dimensions. Statistically, this can be evaluated by testing whether the data fit a hypothesized model of measurement. Researchers used CFA to test the hypothesis that a relationship exists between the observed variables (survey items) and their underlying latent constructs (dimensions). The analysis showed that the model for SST maturity is on the right track but that there was some room for improvement in terms of moving items and dimensions around.
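
One way to run this kind of check outside commercial software is with the Python semopy package. The sketch below is illustrative: the item and dimension names are made up, and the model specification stands in for the actual SST item-to-dimension mapping.

```python
# A minimal CFA sketch using the semopy package (pip install semopy).
# Dimension and item names are placeholders for the real SST model.
import pandas as pd
import semopy

df = pd.read_csv("sst_responses.csv").dropna()  # hypothetical: one column per survey item

# lavaan-style measurement model: each dimension "=~" its assigned items.
model_spec = """
Leadership =~ lead_q1 + lead_q2 + lead_q3
Technology =~ tech_q1 + tech_q2 + tech_q3
"""

model = semopy.Model(model_spec)
model.fit(df)

# Fit indices such as CFI and RMSEA summarize how well the
# hypothesized item-to-dimension assignments match the data.
print(semopy.calc_stats(model)[["CFI", "RMSEA"]])
```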

The results from steps 2 and 3 allowed an examination of the nature of the relationships among the items on the survey, identification of items belonging to the wrong dimension (and where they should go instead), and discovery of items that are redundant and should be removed (who doesn't appreciate a shorter survey?). The results also led to a disclaimer, and it's not trivial: the model fit for this analysis was significantly improved when the analysis was limited to institutions with more than 4,000 FTE. This implies that the CDS maturity indices likely model larger institutions more accurately than smaller ones. These results were confirmed when the analysis was re-run on 2018 SST data. Findings like this go a long way in helping EDUCAUSE tailor the maturity indices to account for different types of institutions.
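
A subgroup check like the one described above can be sketched by refitting the same measurement model on a filtered sample. Continuing the hypothetical CFA sketch, the snippet below assumes the DataFrame also carries an "fte" column, which is an invented field, not an actual CDS variable name.

```python
# Refit the same model on institutions with more than 4,000 FTE and
# compare fit indices against the full sample.
large = df[df["fte"] > 4000]

model_full = semopy.Model(model_spec)
model_full.fit(df.drop(columns=["fte"]))

model_large = semopy.Model(model_spec)
model_large.fit(large.drop(columns=["fte"]))

print(semopy.calc_stats(model_full)[["CFI", "RMSEA"]])   # fit on all institutions
print(semopy.calc_stats(model_large)[["CFI", "RMSEA"]])  # fit on large institutions only
```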

As a result of the three-step analysis, several items were shifted around, and two dimensions were found to be so highly related that they were combined to form a larger dimension. While the statistical results pointed to these revisions, content should always trump statistics if they conflict. It turned out that according to the SMEs' content review, the changes made a lot of sense too. The 2019 rubric has also been updated to reflect these changes.

The revised 2019 SST maturity index has four dimensions (rather than five), comprising twenty-nine slightly revised items (one item was deleted and one was created):

  • Student Services (four items)
  • Defined Outcomes (four items)
  • Leadership and Culture (eleven items)
  • Technology and Systems (ten items)

So although the length of the SST maturity index did not change for 2019, its accuracy should be much improved.

EDUCAUSE will continue to revise and improve its CDS maturity indices to help institutions feel confident in the quality of their data as well as the data they benchmark against. This cyclical process, using input from SMEs and quantitative analysis, helps to ensure that the conclusions higher education institutions draw from CDS data are valuable, useful, and believable. This is just one of the ways CDS can help to make data goals become a reality.

Participate in the EDUCAUSE Core Data Service (CDS) to gain access to these and other important higher education IT benchmarks. For more information and analysis about higher education IT research and data, please visit the EDUCAUSE Review Data Bytes blog as well as the EDUCAUSE Center for Analysis and Research.

Note

  1. A psychometrician is someone who practices the science of educational and psychological measurement—or, in other words, testing.

Leslie Pearlman is Senior Director of Psychometrics and Data Analysis at Smarter Balanced Assessment Consortium at UC Santa Cruz. Previously she was Senior Psychometrician and Researcher at EDUCAUSE, contributing to the Core Data Service.

© 2019 Leslie Pearlman. The text of this work is licensed under a Creative Commons BY-NC-ND 4.0 International License.