Establishing a Quality Review for Online Courses

A formal review of online courses measures their quality in key areas and reveals what changes, if any, are needed for improvement.

Since the inception of online learning in the 1990s, innovative technology and pedagogy have broadened access to higher education. Many colleges and universities remain concerned about the quality of online educational programs, however, especially compared with face-to-face delivery. Quality concerns often surface in discussions of teaching effectiveness, faculty-to-student ratios, attrition rates, student satisfaction, and the institutional resources invested in online delivery.1 Distance or online education programs must develop and maintain quality educational options to compete successfully with conventional academic offerings—institutions cannot maintain a competitive edge on the novelty of the online delivery format alone. The quality of online programs lies at the heart of the effort to attract more learners to online learning and to provide them with education of comparable—if not better—quality than they would get by attending classes on campus.

A quality educational program begins with the development of quality courses. This article describes a pilot project conducted by the Centre for Teaching and Educational Technologies (CTET) at Royal Roads University (RRU) to define quality for online courses and to create a review practice that ensures continuous improvements based on reliable data. The pilot project endeavored to answer the following questions:

1. How does CTET ensure that online courses meet certain quality standards?
2. How does CTET ensure that course quality is maintained through multiple course revisions, changes in the curriculum, and personnel shifts?

This article focuses on the process of establishing and conducting a quality review based on a proposed framework for examining all aspects of the quality of an online course. The pilot project, its instruments, and its review procedures were established as an internal exercise to address those aspects of quality directly under the control of CTET. The pilot serves as a good case study to explore the feasibility of a formal quality review of online courses and how the review results can be used to improve course quality.

Background

RRU was established in 1995 in Victoria, British Columbia, Canada, to provide graduate professional degrees to adult learners. The university uses a blended learning model, where learners participate in short-term residencies on campus and take online courses between residencies. In the first year, students spend three to four weeks on campus and nine to ten months taking online courses. Students in the second year follow the same schedule but add a practicum or research project.

The curriculum is based on learning outcomes. The philosophy behind this approach is to ensure that RRU degree programs are “applied” and provide practical experience to meet the needs of career adult learners.

CTET is the central department for course design and production at RRU. Instructional designers and Web developers work with instructors in the academic units to design and develop online components for residencies and online courses. The center hosts all the courses on an in-house, Web-based learning platform, and center staff advise on the use of other educational technologies. This puts CTET in a unique position to oversee the quality of courses in concert with the academic unit responsible for curriculum and teaching. The large number of courses makes keeping design elements consistent and ensuring quality a challenge.

Defining Quality Issues

Research into online and Web-based learning has probed quality issues from several perspectives. For example, effective online learning is described in theories such as situated cognition, cognitive flexibility theory, and Web-based instruction.2 The seven principles of good teaching, which received significant attention in the late 1980s, were adapted by Chickering and Ehrmann3 for online course design and delivery in the 1990s. These theories and principles became pivotal guidelines for academics and course designers.

In addition to pedagogy-oriented research, another trend emerged: quality assurance became critical not only at the course level but also at the programmatic and institutional levels.4 The Institute for Higher Education Policy published the report “Quality on the Line: Benchmarks for Success in Internet-Based Distance Education” in 2000,5 and in 2002 California State University, Chico, published its “Rubric for Online Instruction,” providing guidelines to identify exemplary online courses.6 Barker7 and Herrington et al.8 also published reports on e-learning standards. All these publications include criteria in one or more of the following areas:

  • Institutional support
  • Course development and instructional design
  • Teaching and learning
  • Course structure and resources
  • Student and faculty support
  • Evaluation and assessment
  • Use of technology
  • E-learning products and services

Note that these criteria incorporate the theories on effective online learning and expand the guidelines beyond pedagogical issues to redefine quality as learning and service experiences.

Despite these efforts in defining and examining quality issues concerning online courses, a systematic, formative methodology to measure and ensure quality is lacking. The most common tools for gauging quality are surveys and course evaluations in which instructors, learners, or sometimes administrators provide their perceptions, opinions, or experiences. Data collected from surveys or course evaluations only touch on some aspects of a course’s quality—mostly issues related to teaching and learning, such as how an instructor performs in class or how the learning experience affected learners. Often, aspects not obvious to faculty or learners are ignored, such as instructional design, course development, and the use of technology. Defining the quality of an online course therefore requires a comprehensive framework that identifies these issues, appropriate guidelines, and an instrument and method for measuring the hidden aspects of quality.

Online Course Quality Framework

The following framework illustrates comprehensive coverage of online course quality from RRU’s perspective and serves as a blueprint for addressing quality issues systematically. The framework consists of six independent but interconnected components; missing any one of them leaves the overview of quality issues in online courses incomplete. (See Figure 1.)


Curriculum design deals with the content, which dictates the learning outcomes for an educational program. The specified learning outcomes are in turn incorporated into the program’s courses. Grounding courses in content-driven outcomes serves as the foundation for quality because it addresses the interests and needs of learners. RRU developed a Curriculum Quality Assurance Policy in 2004. This policy sets out specific criteria and a process by which a program curriculum is designed, reviewed, and approved. Academic units ensure the curriculum meets quality standards for content and learning outcomes.

Instructional design deals with the connection among learning outcomes, course activities, teaching strategies, and the use of media and technology. The highly collaborative working relationship between instructional designers (from CTET) and instructors (from academic units) ensures shared responsibility for sound instructional design for a course.

Web design is important because learners interact with content, the instructor, and other learners through the interface of the course Web site. Web design deals with usability issues, especially those that affect learning; thus Web design and instructional design must mesh in the development of an online course. Poor Web design can frustrate learners and hinder their progress. CTET, which produces all the online courses on the RRU learning platform, ensures that quality standards in Web design are met.

Teaching and facilitation is the art of carrying out the curriculum and instructional design plan. It encompasses the instructor’s knowledge and skill in guiding learning, and it occupies the forefront of quality issues because it directly affects learning experiences. In other words, quality in teaching and facilitation determines how well an instructor helps learners learn. At RRU, academic programs use interim formative surveys and final course evaluations to help assess the quality of teaching and facilitation.

Learning experience constitutes another dimension of quality, as learners are the ultimate beneficiaries of the desired learning outcomes. Learning experiences are closely tied to teaching and facilitation, but other factors come into play: learners’ prerequisite knowledge, learning styles and preferences, and the dynamics of a learning community. Quality courses aim to foster a positive learning experience. Again, the interim survey and final course evaluation assess learning experience.

Course presentation is the final component of the quality framework, covering presentation of the course materials in a professional manner. Specifically, course presentation addresses functionality, consistency (for example, in font size and layout), grammar, and look and feel of the course. Too often it is neglected. Glitches in course presentation can create the perception of general poor quality that overshadows the quality achievements in specific areas. CTET serves as the final checkpoint during course production and therefore guarantees quality in course presentation.

An underlying concept in this framework is the distinction between the components outlined in the framework and the processes used to carry out the tasks that achieve quality. Components describe “what”; processes describe “how.” For example, the instructional design component assesses the structure of a learning activity, such as an online debate. The process for arriving at the design of an online debate can be collaborative (an instructional designer and an instructor working together) or unilateral (an instructor conceiving the design alone).

The course presentation component offers another example. One of the course presentation criteria states that course materials should be free of grammatical errors. The process to achieve this quality standard can be to place the onus solely upon the course writer or to have an editor proofread the materials at predefined stages of development.

This distinction is important when it comes to creating and implementing quality standards. Separating the “what” issues from the “how” issues provides clarity and ensures the measures we take to achieve quality standards indeed address the issues in the appropriate manner. In general, components can be evaluated using a set of criteria in a formal review. The pilot project initiative addressed the components, not processes.

The Pilot Project

CTET launched the Online Course Quality Pilot Project in 2004. With this project, data about the quality of an online course can be quantified, compiled, and analyzed.

Rationale and Objectives

Consistent with CTET’s mandate and functions, the pilot project addressed online course quality in three of the six components described in the quality framework (instructional design, Web design, and course presentation). The project excluded quality issues associated with processes, as those issues may be better dealt with in future projects.

Specifically, this pilot project aimed to achieve the following objectives:

  • Establish quality standards for assessing online courses with respect to instructional design, Web design, and course presentation.
  • Review a representative sample of online courses.
  • Identify strengths and weaknesses of the courses.
  • Make recommendations for improvements.

In addition, the review process itself was under investigation because the procedures for conducting a review needed testing. CTET expected the pilot project to determine the usefulness and feasibility of a quality review for future implementation as a regular practice.

Quality Review Team

CTET assembled a team to tackle the project: an instructional designer, a Web/multimedia developer, and an editor. Each of us brought specific expertise to researching and drafting the quality standards, as well as to evaluating the courses. The team approach was essential, not only because each quality component required particular expertise but also because the pilot process would have implications for large-scale implementation within CTET.

Creating Quality Standards

The review team created standards that articulate criteria in instructional design, Web design, and course presentation. We consulted several sources of external literature and internal documentation, including the quality guidelines mentioned earlier, research-based usability guidelines,9 and the CTET “Style Guide for Online Course Materials.”

It is important to understand that we wrote the standards within RRU’s context: outcome-based learning and a collaborative approach to course design and production. Other institutions could follow the same steps to create standards appropriate for their own use. The following sample of instructional design criteria reflects the standards we want to achieve and the alignment of these criteria with RRU’s teaching philosophy:

  • Course learning outcomes and competencies align with the program’s outcomes and competencies.
  • Course learning outcomes and competencies use clear assessment criteria.
  • Instructional strategies used in activities and assignments align with the stated learning outcomes.
  • Performance expectations regarding participation in online discussion are clear.
  • Authentic activities are used—real-life tasks that allow learners to apply both prior and newly acquired knowledge and skills.
  • Authentic assessment is used—grades are based on the authentic activities required of learners. The instructor’s role and methods of providing feedback are clearly indicated.
  • The number of activities and assignments is appropriate, so the workload is reasonable.
  • Asynchronous discussions are structured appropriately to maximize learning in the course activities.
  • Selected readings and resources reflect and fit the subject and course learning outcomes.
  • Other technological tools are incorporated appropriately based on the content and outcomes of the course.

As mentioned, RRU uses a Web-based learning management system (LMS), and Web design criteria are part of the quality standards. It is worth noting, however, that our LMS imposes limitations in terms of the navigation features and the look and feel. The criteria we created in this pilot project address the design elements permitted within the confines of the LMS. Figure 2 illustrates the presentation of course content in our system.


Sample Web design criteria include:

  • Course navigation menus are organized in proper sequence.
  • Fly-down menus are meaningful and relevant.
  • Fly-down menus balance the number of menu items against the number of levels.
  • Long scrolling is minimized or aided by anchor links.
  • All pages are formatted to prevent horizontal scrolling.
  • Links are descriptive and labels are consistent with the destination headings and content.
  • Interactive multimedia items are designed to maximize user control (for example, control over play and stop for a video clip).

Course presentation criteria speak to the professional presentation of course content. These criteria are used to evaluate both the print course package and the course Web site. We established the following sample criteria in this pilot project:

  • Course materials are free of typos and grammatical errors.
  • Language use and flow are consistent throughout the course (for example, verb tense, first versus third person, sentence structure).
  • Online readings and resources are properly linked.
  • Links to external readings and online resources open in new windows.
  • Bullet lists are consistently formatted.
  • Dates in the schedule correspond to dates in descriptions and assessment grids.
  • Fonts (style, color, and size): content fonts are consistent throughout the course; headings are consistent; and heading fonts identify the level of heading appropriately and include no underlining.

The biggest challenge in controlling course presentation quality is the short turnaround time for editorial scrutiny, combined with the multiple stakeholders involved in creating the final course product. Editing and proofreading often occur at the end of the development and production cycle, so time is always tight. Course authors (instructors), instructional designers, and content reviewers all write and edit the course content. Then the instructional designer, media specialist, and Web developer work on the course Web site, either transferring existing content or creating new content. Inconsistency inevitably creeps in. A focused review against course presentation standards is therefore especially important for CTET to ensure that the end product is error-free and professionally presented.
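
Some of these presentation criteria lend themselves to automated checking. As a purely illustrative sketch (not a tool the pilot used), a short script could flag external links that fail to open in new windows; the internal host name below is hypothetical:

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    INTERNAL_HOST = "learn.royalroads.ca"  # hypothetical LMS host name

    class LinkAuditor(HTMLParser):
        """Collects external links that do not open in a new window."""

        def __init__(self):
            super().__init__()
            self.problems = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            host = urlparse(attrs.get("href", "")).netloc
            # An external link without target="_blank" violates the criterion.
            if host and host != INTERNAL_HOST and attrs.get("target") != "_blank":
                self.problems.append(attrs["href"])

    auditor = LinkAuditor()
    auditor.feed('<a href="http://usability.gov/guidelines/">Guidelines</a>'
                 '<a href="unit1.htm">Unit 1</a>')
    print(auditor.problems)  # ['http://usability.gov/guidelines/']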

Sampling and Reviewing Procedures

Establishing the sampling and reviewing procedure for the pilot project took place in two stages. First, we chose three online courses to evaluate in order to validate the instrument—that is, the quality standards and the rating system. We collected feedback from the entire CTET team (three additional instructional designers and three additional Web developers took part in this phase) to fine-tune the criteria statements and the rating scale. The scale was defined as:

  • 1 = unsatisfactory—needs significant improvements
  • 2 = somewhat satisfactory—needs targeted improvements
  • 3 = satisfactory—discretionary improvement possible
  • 4 = very satisfactory—no improvement needed

During the trial evaluation of the three courses, the three of us rated each course against all the criteria and then wrote comments, both to justify the scores assigned and to supplement the quantitative ratings with a qualitative measure. The qualitative comments proved useful in making meaningful recommendations for improvements. They also served to identify courses with innovative and exemplary design elements.
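
For concreteness, each reviewer judgment can be pictured as a record pairing a numeric score with the comment that justifies it. The sketch below is purely illustrative; the field names and the course code are hypothetical, not the pilot’s actual data format:

    from dataclasses import dataclass

    # Labels for the four-point scale used in the pilot.
    SCALE = {
        1: "unsatisfactory—needs significant improvements",
        2: "somewhat satisfactory—needs targeted improvements",
        3: "satisfactory—discretionary improvement possible",
        4: "very satisfactory—no improvement needed",
    }

    @dataclass
    class CriterionRating:
        course: str     # course identifier (hypothetical)
        component: str  # instructional design, Web design, or course presentation
        criterion: str  # the criterion statement being rated
        score: int      # 1-4, per SCALE above
        comment: str    # qualitative justification for the score

    rating = CriterionRating(
        course="LT511",
        component="course presentation",
        criterion="Course materials are free of typos and grammatical errors.",
        score=2,
        comment="Several typos in Unit 3; schedule dates inconsistent.",
    )
    print(SCALE[rating.score])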

The second stage involved rating an additional 15 courses from a pool of courses that ran from September to December 2003. At the time of the pilot, these courses were recent enough to provide an accurate snapshot of current course quality. We sampled at least one course from each of the academic programs. The 18 courses selected for the quality review represented approximately 25 percent of all courses offered during that four-month period.

Although the courses were selected randomly, consideration was given to planned course revisions. Courses cancelled for the following year or undergoing substantial curriculum change were not included in the pilot review.

Results

After debating how to treat the data—what would be meaningful and useful in describing the quality of the online courses—we decided that averages and frequencies seemed best. Averages provide an overview of the quality of a course in relation to other courses and in relation to the “ideal” course (a course achieving a score of 4), while frequencies give more detail of the strengths and weaknesses of individual areas within the courses.

Figure 3 compiles the averages in the 18 courses for the three components we evaluated: instructional design, Web design, and course presentation. Overall, courses scoring higher than 3 meet the quality standards, while those averaging below 3 do not. We felt it necessary to calculate separate averages to determine whether a course falls short in a given area. For example, a course might score very well in instructional design but need significant improvement in course presentation.
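
A minimal sketch of the calculation, using invented criterion scores for a single course, shows how per-component averages compare against the threshold:

    from statistics import mean

    # Invented criterion scores for one course, keyed by quality component.
    scores = {
        "instructional design": [4, 3, 4, 3, 4, 3, 4, 4, 3, 4],
        "Web design": [3, 4, 3, 3, 4, 3, 3],
        "course presentation": [2, 3, 2, 3, 2, 2, 3],
    }

    for component, ratings in scores.items():
        avg = mean(ratings)
        # Averages above 3 meet the quality standard; those below 3 do not.
        verdict = "meets" if avg > 3 else "does not meet"
        print(f"{component}: {avg:.2f} ({verdict} the standard)")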


The averages show that the majority of courses meet instructional design quality standards, although two of 18 courses (11 percent) need improvement. All courses meet the quality standards for Web design. Seven of 18 courses (39 percent) do not meet the quality standards for course presentation. Figure 3 demonstrates that course presentation quality has the greatest variation among the courses and also the greatest room for improvement.

We also examined the frequencies on the four-point rating scale to reveal the strengths and weaknesses of the courses. Each course was counted once, under the lowest score (1 through 3) it received on at least two criteria. A course with two or more criteria scored 1—needing significant improvements—was therefore counted under score 1 and excluded from the frequencies for scores 2, 3, and 4. The remaining courses, those dominated by scores of 4, can be identified as exemplary.
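
To make this counting rule concrete, the following sketch (again with invented scores) assigns each course to the lowest score, 1 through 3, at which it received at least two criterion ratings; a course with no such cluster counts as exemplary:

    # One bucket per course: the lowest score (1-3) appearing on at
    # least two criteria, or 4 (exemplary) if no such cluster exists.
    def classify(ratings):
        for level in (1, 2, 3):
            if ratings.count(level) >= 2:
                return level
        return 4

    # Invented per-course criterion scores for one quality component.
    courses = {
        "Course A": [1, 2, 1, 3, 4],  # two 1s: significant improvement
        "Course B": [3, 2, 4, 2, 4],  # no 1s, two 2s: targeted improvement
        "Course C": [4, 4, 3, 4, 3],  # two 3s: discretionary improvement
        "Course D": [4, 4, 4, 4, 3],  # no cluster below 4: exemplary
    }

    frequencies = {1: 0, 2: 0, 3: 0, 4: 0}
    for ratings in courses.values():
        frequencies[classify(ratings)] += 1

    print(frequencies)  # {1: 1, 2: 1, 3: 1, 4: 1}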

Figure 4 indicates that the courses standing to benefit most from CTET’s improvement efforts are the 44 percent requiring significant improvement and the 18 percent needing targeted improvement in course presentation. In instructional design, 11 percent of courses need significant improvement and 50 percent need targeted improvement. In Web design, 56 percent need targeted improvement.


Compared to the averages, which only give a broad overview of course quality, these numbers give us a better sense of how to prioritize improving course quality and how CTET can best allocate resources to implement improvements. For example, a glance at the averages seems to indicate that all 18 courses could benefit from at least discretionary improvements for instructional design. The frequency calculation, however, indicates that 33 percent (6 of 18 courses) were well designed—their scores reached CTET’s threshold for the exemplary quality standard. The qualitative data for those courses indicated exemplary

  • use of authentic case studies accompanied by video resources to meet multiple learning styles,
  • selection of Web resources to help learners pursue further study, and
  • pairing of learners in dyads to practice their coaching skills.

Discussion and Recommendations

The quality standards and review process established and tested in the pilot project complemented and advanced CTET’s mandate to provide high-quality courses.

Benefits

The criteria serve as the benchmark for effective course design. Having a set of criteria statements for reviewing purposes can also affect the development of a course. These criteria can be shared with the instructor, and the design team can use the standards as a checklist at an early stage of the course development (when learning outcomes and the design blueprint are discussed). This strategy has the potential to improve course quality dramatically and consume fewer institutional resources over time, as courses will need less revision to correct weaknesses.

Reviewing courses is a fruitful exercise. The pilot project rating assesses the elements of instructional design, Web design, and course presentation, which are often addressed at different stages of the development cycle. Moreover, the results from the review provide data and insight (qualitative comments) for making decisions about revision. Emphasis on improvement is the most important outcome of the review, so the review must offer clear indications of possible improvements to the courses. The four-point rating system employed in the pilot project serves this purpose well:

  • An element that scores 1 or 2 suggests must-do fixes.
  • A score of 3 leaves flexibility in decision making.
  • A score of 4 helps identify exemplary design elements.

Separating the three quality components also helps pinpoint a problem area. A course can excel in instructional design, for example, but fail in Web design or course presentation. Such indications of inconsistent quality can trigger a full and detailed examination, which may or may not involve consultations with instructors or program areas. Regardless, an instructor’s input is instrumental in course revisions to instructional design.

Once the review has identified the strengths and weaknesses of an online course, the instructional designers can share the results with the appropriate instructors to best arrive at a decision about course revision. Conventionally, instructors receive feedback from learners through course evaluations. CTET’s quality review is another source of information to inform decisions on possible course improvements.

Limitations

A quality review can only check the static design and presentation of a course, not the processes behind it. Quality issues rooted in processes (for instance, inaccurate importation of learner accounts or incorrect dates for an assignment drop box) are not covered. Properly designed, however, a quality review can still drive process improvement by tracing weaknesses in the quality components back to the processes that produced them.

Another limitation is the scope of the pilot, which covered only three of the six components in the quality framework. The other three—curriculum design, teaching and facilitation, and learning experience—are just as vital. Furthermore, correlating quality outcomes with processes, and the review’s results with learner/instructor evaluations and curriculum review, would paint a more complete picture of online course quality.

Even though the nature of the review is formative, making improvements to online courses may still need to be negotiated with an instructor (or whoever has a stake in the course design). Mutual agreement is needed before changes to a course can take place. In short, the review provides a great deal of information, but the reality of collaboration among CTET, instructors, and programs will determine the actual difference the review makes on quality.

Time and Effort Required

One of the pilot project’s goals was to explore the feasibility of quality review as a regular internal practice for CTET, so we kept records of the time and effort needed to review a course. On average, each team member spent two hours rating a course plus an additional hour entering and compiling the data. Each course therefore took an estimated nine hours of staff time to review (three reviewers at three hours each).

A team approach to conducting the quality review is essential because expertise in instructional design, Web design, and course presentation are needed to formulate the standards and carry out the course evaluation. In the future, the quality standards may be customized to meet the needs of the academic programs and better align discipline-specific requirements with RRU philosophy.

A Better Model?

This pilot project was a one-time effort. Organizing and conducting a quality review as a regular practice at CTET remains an exercise for the imagination. Several factors must be taken into account before choosing the optimal way to implement a quality review:

  • A review takes time—about nine hours of staff effort per course. From CTET’s perspective, it becomes a question of resource allocation and workflow configuration. Additional time and effort will be needed to follow up with instructors and academic departments to make decisions about course revisions and improvements.
  • The goal of improving the quality of online courses has to take priority. The communication channel with key stakeholders who hold the decision-making power in curriculum design and program delivery must be effective.
  • The course revision process should adopt an approach and strategy that best uses the resources of CTET and the academic departments.

A one-time periodic review of online courses concentrates energy and creates synergy through teamwork, even though team members have to juggle the project with their regular work. An annual review might prove unmanageable for a three-person team unless part of their time were formally dedicated to the task. A more frequent review, say semiannual or quarterly, might disrupt the reviewers’ workload beyond recovery without appropriate staffing adjustments. Furthermore, a periodic review might sacrifice effectiveness by missing the optimal timing to communicate with key stakeholders: the ideal time to share review results with a program director or an instructor is before they decide what course revisions are necessary for the next offering of the course.

An alternative is to integrate the review procedures into the course production cycle. Individual courses could be reviewed upon completion of production, and the data compiled at the optimal time to report results to stakeholders—while they still have time to revise before the course launches or is offered again. This integrated approach does not resolve the reviewers’ workload issue, however, and it can add logistical complexity to the course development process. For example, crosschecking courses requires training for everyone involved, to produce consistent and reliable data, and a greater degree of coordination. With proper advance planning, however, these problems are not insurmountable.

Conclusion

CTET’s quality review pilot lasted five months from conception to fruition. It accomplished its stated goals: establishing and testing quality standards and reviewing procedures, and including recommendations for course revision in the review results. Most important, reflecting on the pilot distilled the review team’s experience into a vision of a large-scale implementation of quality review that will benefit the university as well as the instructors and learners it serves.

More work and challenges lie ahead before quality review of online courses becomes a regular practice at CTET. At least now we have a glimpse of those challenges and an idea of how to address them. Other institutions facing the same challenges in implementing a formal review of course quality can learn from the lessons of this pilot project to ensure an efficient, effective approach.

Endnotes
1. J. Bourne and J. C. Moore, “Elements of Quality Online Education,” The Sloan Consortium, 2004, <http://www.sloan-c.org/publications/index.asp> (accessed March 6, 2006).
2. J. S. Brown, A. Collins, and P. Duguid, “Situated Cognition and the Culture of Learning,” Educational Researcher, Vol. 18, No. 1, 1989, pp. 32–42; R. J. Spiro and J. C. Jehng, “Cognitive Flexibility and Hypertext: Theory and Technology for the Nonlinear and Multidimensional Traversal of Complex Subject Matter,” in D. Nix and R. J. Spiro, eds., Cognition, Education, and Multimedia: Exploring Ideas in High Technology (Hillsdale, N.J.: Lawrence Erlbaum Associates, 1990), pp. 163–205; and B. H. Khan, Web-based Instruction (Englewood Cliffs, N.J.: Educational Technology Publications Inc., 1997).
3. A. W. Chickering and S. C. Ehrmann, “Implementing the Seven Principles: Technology as Lever,” 1996, <http://www.tltgroup.org/programs/seven.html> (accessed March 6, 2006).
4. A. F. Rovai, “A Practical Framework for Evaluating Online Distance Education Programs,” The Internet and Higher Education, Vol. 6, No. 2, 2003, pp. 109–124.
5. The Institute for Higher Education Policy, “Quality on the Line: Benchmarks for Success in Internet-Based Distance Education,” April 2000, <http://www.ihep.com/Pubs/PDF/Quality.pdf> (accessed March 6, 2006).
6. California State University, Chico, “Rubric for Online Instruction,” 2002, <http://www.csuchico.edu/tlp/onlineLearning/rubric/rubric.pdf> (accessed March 6, 2006).
7. K. C. Barker, “Canadian Recommended E-Learning Guidelines (CanREGs),” January 2002, <http://www.col.org/newsrelease/CanREGs%20Eng.pdf> (accessed March 6, 2006).
8. A. Herrington et al., “Quality Guidelines for Online Courses: The Development of an Instrument to Audit Online Units,” in Meeting at the Crossroads: Proceedings of the Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education, 2001, ED 467943 in the ERIC database.
9. See the Web page Research-Based Web Design & Usability Guidelines, posted by the National Cancer Institute, <http://usability.gov/guidelines/> (accessed March 1, 2006).
Tracy Chao ([email protected]) is an instructional designer, Tami Saj is a learning support associate, and Felicity Tessier is a Web developer in the Centre for Teaching and Educational Technologies at Royal Roads University in Victoria, British Columbia, Canada.