Leveraging Feedback Experiences in Online Learning


Feedback is a powerful construct in the design of quality online learning, and quantified dimensions of learners’ feedback experiences can be leveraged to improve effectiveness, increase efficiency, and maintain appeal in online courses.


Providing high-quality, affordable, and enjoyable online learning opportunities for adults has never been more competitive or important. As more learners turn to technology-assisted methods of gaining competencies, the availability, affordability, and appeal of online learning continue to be major considerations of instructional designers, faculty, and academic leaders. Furthermore, in the employment marketplace, the need for lifelong access to learning opportunities continues to increase. Academic leaders in postsecondary institutions are squeezed amid the competing priorities of learners, accreditors, federal financial aid policy makers, and employers.

In response to this competitive landscape, it is more important than ever that instructional designers are able to balance the three elements of instructional design—effectiveness, efficiency, and appeal.1 Effective designs provide evidence of learning outcome mastery, efficient designs lead to quick results, and appealing designs are enjoyed by the learner. One challenge of addressing these tensions is that most higher education institutions lack the kinds of metrics (lead measures) that would allow leaders to make timely decisions related to curriculum and instruction. Often, by the time academic leaders participate in a program review in which data are analyzed and decisions are made, many students have graduated from their programs and begun sharing impressions (good or bad) with colleagues and friends.

What if real-time data could provide proactive insight into a student's experiences? What if mentors and leaders could monitor experiences and intervene to remedy negative learning situations? Are there online-learner behaviors (rather than algorithms, which can be biased) that mentors and leaders could observe as early-warning signs of problems with the effectiveness of the instruction, the efficiency of the design, or the overall appeal of the courses? What are the key ingredients of learning, and could they be measured and monitored on a large scale? Feedback is a powerful construct in the design of effective instruction, so it seems logical that feedback-delivery technology could be leveraged to increase efficiency by delivering immediate feedback, improve quality by delivering accurate feedback, and maintain appeal by being user-friendly. Many of these points of data are at least partially tracked by today's learning management systems (LMSs) and adaptive learning courseware technologies.

One Research Study

This hypothesis was tested in a correlational study in which I compared the feedback experiences of learners, first, with their achievement on standardized exams and, second, with their satisfaction as reported on end-of-course surveys. To evaluate the learners' feedback experiences, I gathered data from the last three courses they took before completing their academic degree programs. I wanted to learn about the cumulative effects on a student who received, for example, great feedback from Professor A but less effective feedback from Professors B and C. At the same time, would learners who simultaneously had three great experiences with feedback be more likely to learn and enjoy their learning?

The students in this study were adult online learners at Indiana Wesleyan University. The average age was 34, and they represented demographics that closely mirror the general college-going population of the Midwest (i.e., gender, race/ethnicity, veteran/active military, and self-reported disability). The sample included more than 200 students and was evenly distributed among students in associate's, bachelor's, and master's degree programs in business.

Studies produced since 2010 tend to view feedback as a process rather than a concept. A new paradigm of education recognizes that learner-centered instruction (more active, more authentic, and more socially connected) often hinges on designing effective feedback processes. Yet, many questions remain. For example, is the ideal formula for feedback simply that it is timely and relevant? If that is the case, artificially intelligent feedback delivery systems should be universally more effective than humans in situations where it is possible to program accurate feedback delivery for 100 percent of a student's attempts. Are there instructional design situations where human feedback is more effective than programmed feedback? My research begins to address some of these questions by examining data from one slice of the online learning experience.

The research question guiding my work is this: Are there correlations between learner achievement, learner satisfaction, and several selected dimensions of feedback? If certain feedback experiences are followed by high learner achievement and satisfaction, it might be possible to measure feedback experiences—holistic evaluations of several dimensions of feedback—as one proxy for a high-quality learning experience. The dimensions of feedback that were quantified for this study are listed and defined in table 1. Imagine that each of these dimensions is quantified and reported (using artificial intelligence and/or machine learning) for each online course offered at an institution. It would be immediately evident where instructional designers, faculty, and administrators should be targeting time and energy. Before we start using these data as indicators of quality in online learning, though, do we have evidence that they are effective proxies?

Table 1. Four dimensions of feedback2

Timeliness: The number of responses/grades on submitted assignments provided within seven days of submission, and the number of learners’ questions answered within 24–48 hours. The assumption is that timely feedback is best.

Frequency: The number of feedback interactions between the learner and peers and between the learner and the instructor. The assumption is that higher frequencies of feedback are best.

Distribution: The extent to which feedback interactions are dispersed across the weeks in a course. The assumption is that distributed interactions are best (as opposed to massed feedback right before a major exam).

Individualized and Content-Specific: The degree to which feedback is specific to the individual learner’s goals, strengths, needs, or questions. This feedback either provides the learner with next steps to correct misunderstandings or prompts the learner to extend their learning in some new and novel way. The assumption is that individualized and content-specific feedback is best.
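
To make these dimensions concrete, here is a minimal sketch of how they might be quantified from an LMS export. The record layout, the field names (submitted, responded, week, ic), and the scoring choices are assumptions for illustration rather than the instruments used in the study; the "ic" flag stands in for whatever human or automated judgment marks a comment as individualized and content-specific.

```python
from datetime import datetime
from statistics import pstdev

# Hypothetical export: one record per feedback interaction in one course.
# "ic" marks comments judged individualized and content-specific.
feedback_events = [
    {"submitted": "2020-01-06", "responded": "2020-01-09", "week": 1, "ic": True},
    {"submitted": "2020-01-13", "responded": "2020-01-22", "week": 2, "ic": False},
    {"submitted": "2020-01-20", "responded": "2020-01-24", "week": 3, "ic": True},
    {"submitted": "2020-02-03", "responded": "2020-02-05", "week": 5, "ic": True},
]

def days_between(start, end):
    fmt = "%Y-%m-%d"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).days

def feedback_dimensions(events, weeks_in_course=6):
    lags = [days_between(e["submitted"], e["responded"]) for e in events]
    timeliness = sum(lag <= 7 for lag in lags) / len(events)  # share returned within 7 days
    frequency = len(events)                                   # total feedback interactions
    weekly_counts = [sum(e["week"] == w for e in events)
                     for w in range(1, weeks_in_course + 1)]
    distribution = pstdev(weekly_counts)                      # lower = more evenly spread
    ic_ratio = sum(e["ic"] for e in events) / len(events)     # share of I_C feedback
    return {"timeliness": timeliness, "frequency": frequency,
            "distribution": distribution, "individualized": ic_ratio}

print(feedback_dimensions(feedback_events))
```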

 

Results

The results of this correlational study indicated that learners who received more individualized and content-specific (I_C) feedback per assignment scored higher on a standardized exam (not an exam created or graded by the instructor) and were also more likely to be satisfied with their learning experiences. Pearson correlation tests showed that higher quantities of I_C feedback on assignments correlated with higher scores on the end-of-program standardized exam.

Statistical analysis indicated that there were significant positive correlations between the amount of I_C feedback that learners received and learners' responses to end-of-course (EOC) survey questions. Learners who experienced higher ratios of individualized and content-specific feedback rated both the instruction and the curriculum higher on EOC surveys than students who experienced fewer instances of I_C feedback. Stated another way, students' satisfaction ratings accurately reflected the quality and timeliness of the feedback provided by the instructor. Although there were findings related to the other dimensions as well, in this article I address only the I_C dimension of feedback.
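
For readers who want to see the shape of such an analysis, the following sketch shows how Pearson correlations like those described above can be computed with standard tooling. This is not the study's actual code, and the paired values are invented; it simply assumes one row per learner containing the rate of I_C feedback per assignment, a standardized exam score, and an EOC satisfaction rating.

```python
from scipy.stats import pearsonr

# Hypothetical paired values for a handful of learners (illustration only).
ic_feedback_per_assignment = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
exam_scores = [142, 150, 149, 158, 161, 166, 170]
eoc_satisfaction = [3.1, 3.4, 3.3, 3.9, 4.2, 4.4, 4.6]

# Pearson r and p-value for each pairing with I_C feedback.
r_exam, p_exam = pearsonr(ic_feedback_per_assignment, exam_scores)
r_eoc, p_eoc = pearsonr(ic_feedback_per_assignment, eoc_satisfaction)

print(f"I_C feedback vs. exam score: r = {r_exam:.2f}, p = {p_exam:.3f}")
print(f"I_C feedback vs. EOC rating: r = {r_eoc:.2f}, p = {p_eoc:.3f}")
```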

Course Design Recommendations

A growing body of research exists around the premise that online environments require some element of social presence, humanization, and critical interaction among people for effective learning to occur. An asynchronous course can easily become a "robotic set of instructions" rather than a "dynamic learning environment."3 Without social interaction and/or internal reflective dialogue, one might question how learning could occur at all. Intentionality around the ways learners receive and use feedback is essential in learner-centered instructional design. The primary barrier to high-quality feedback experiences is the time that it takes for human instructors to provide that feedback, coupled with doubt about the extent to which learners engage with the feedback. Both of these issues can be ameliorated, to some extent, through careful course design.

Special emphasis should be placed on providing opportunities for learners to receive individualized and content-specific feedback from their instructors. The most common challenge associated with delivering this kind of feedback is that it is time-consuming for the instructor. Meanwhile, some question the degree to which students read or use the feedback. Instructional designers and faculty should leverage automation, peer feedback, and adaptive learning technology to overcome these challenges. Just as the design of a building can facilitate or hinder interaction, the design of a course can do the same. Here are some design tips to increase the probability for success.

  • Structure the course so that there are opportunities for instructors and peers to provide formative feedback several weeks before final projects/papers are due. Require that students use peer and instructor feedback in their final submissions, and put the burden on the learners to describe how they incorporated that feedback into their final products (or identify which feedback they chose to ignore and why).
  • Identify key time frames in the course when instructors will be heavily engaged in providing written or video feedback that is individualized and moves the learning forward. During these time frames, plan activities that learners can complete independently. These are prime design locations for adaptive learning technology, group work, or e-learning modules.
  • Create a bank of content-specific feedback comments that instructors can use for common issues and errors. If multiple instructors teach the same master course, create a shared document where they can add to the one resource. Tools such as TurnItIn's Grademark provide comment banks that can be shared across teams of instructors. (A minimal sketch of such a comment bank appears after this list.)
  • If end-of-course survey evaluations are low, implement strategies to provide feedback that directly connects to learners as individuals. Keep a running record of a few salient details regarding each student you teach and work them into your feedback (e.g., in a liberal arts course, "I can see how you will apply your explanation of critical thinking in your future career as a nurse").
  • If you teach and grade papers in a professional discipline, provide feedback related to the course and program learning outcomes, and focus less on grammar and language usage. As a former English teacher, I find this one tough, but students are hungry for a better understanding of the discipline they chose to study (and not the rules of correct comma usage). If written communication is a key outcome for your professional discipline, consider requiring the use of robust grammar-checking software or a writing center.
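
As promised above, here is a minimal sketch of a structured comment bank. The issue labels and comment text are invented for illustration; a real bank would grow out of the errors and misconceptions instructors actually see in a given course and would still be personalized before sending.

```python
# Hypothetical shared comment bank: reusable, content-specific feedback
# starters keyed by common issues, filled in with learner-specific details.
COMMENT_BANK = {
    "unsupported_claim": (
        "Your claim about {topic} needs evidence. Which source from this "
        "week's readings could you cite to support it?"
    ),
    "missing_application": (
        "Strong summary of {topic}. As a next step, explain how you would "
        "apply this in your own workplace or career goals."
    ),
    "misconception": (
        "There is a common misconception here about {topic}. Revisit the "
        "module overview and revise this paragraph before the final draft."
    ),
}

def build_comment(issue, **details):
    """Return a comment starter for a known issue, filled with specifics."""
    return COMMENT_BANK[issue].format(**details)

print(build_comment("unsupported_claim", topic="servant leadership"))
```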

Speaking of providing content-rich feedback, I adapted the following list of feedback starters from Making Thinking Visible.4 It might prove useful for instructors striving to provide individualized and content-related comments on student work:

  • "Can you identify any patterns from your classmates' submissions for this project?" or "I'm noticing a pattern in your work…."
  • "What generalization could you make from the specific examples you provided?" or "Thanks for supporting your generalization about ____ with ____."
  • "What alternative possibilities could there be to explain the result you identified?" or "A more relevant alternative would be…."
  • "From the evidence you presented, what is the most convincing piece?" or "Try supporting your argument with more reliable evidence."
  • "What is your plan to continue to grow in this area?" or "A protocol is a series of steps you follow to improve consistency. What protocol could you develop for yourself to improve your writing skills?"
  • "What are the primary knowledge claims you are presenting?" or "There might be an implicit bias or assumption that is impacting your thinking here."
  • "From what you have identified, what are the priorities and how do you know?" or "From what you wrote, it appears that your conditions for knowing something are ____. Is that accurate?"

If the prevailing instructional theories of the 21st century continue to espouse socially constructed and connectivist learning, instructional designers and faculty will need to become more adept at capitalizing on the efficiencies that technology offers so that humans can do what we do best: connect with one another.

Monitoring Quality

This article began with the conundrum of measuring and monitoring online course quality. Most current practices are lag measures at best and inconsequential at worst. If it is true that a high-quality feedback experience leads to high achievement and satisfaction, and if it is also true that an online student's feedback experience can be quantified in real time using LMS data, it follows that instructional leaders and educational technology providers could create dashboards using learning analytics that would, in real time, monitor and report the dimensions of feedback as a proxy for quality online learning. Dragan Gašević, Shane Dawson, and George Siemens rightly remind us that "learning analytics are about learning," and yet the analytics in the hands of most educators today have precious little connection to actual learning.5
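
As a small illustration of what such a dashboard might surface, the sketch below flags courses whose feedback metrics fall outside simple thresholds. The course records, field names, and threshold values are assumptions made for the sake of the example; a production dashboard would compute these metrics continuously from live LMS data.

```python
# Hypothetical per-course metrics, as they might be aggregated nightly from LMS data.
course_metrics = [
    {"course": "BUS-210-01", "avg_days_to_feedback": 4.2, "ic_comments_per_assignment": 2.8},
    {"course": "BUS-210-02", "avg_days_to_feedback": 11.5, "ic_comments_per_assignment": 0.9},
    {"course": "MGT-330-01", "avg_days_to_feedback": 6.8, "ic_comments_per_assignment": 1.6},
]

# Assumed thresholds: feedback within seven days, at least one I_C comment per assignment.
MAX_DAYS = 7
MIN_IC_RATIO = 1.0

def flag_courses(metrics):
    """Return courses whose feedback experience falls below the thresholds."""
    flagged = []
    for m in metrics:
        reasons = []
        if m["avg_days_to_feedback"] > MAX_DAYS:
            reasons.append(f"slow feedback ({m['avg_days_to_feedback']:.1f} days on average)")
        if m["ic_comments_per_assignment"] < MIN_IC_RATIO:
            reasons.append(f"few I_C comments ({m['ic_comments_per_assignment']:.1f} per assignment)")
        if reasons:
            flagged.append((m["course"], reasons))
    return flagged

for course, reasons in flag_courses(course_metrics):
    print(course, "->", "; ".join(reasons))
```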

Step into the shoes of a policy maker, philanthropist, grant maker, or accreditor for a moment. All of them need methods to compare one online learning program with another for the purpose of granting funds or shaping policy that protects students from bad actors in the marketplace. What if the definition of a learner's feedback experience were so widely accepted that any entity could ask of an online learning provider, "Could you provide data on your learners' feedback experiences for each degree level and program? Could you also provide these data disaggregated by race/ethnicity, gender, and first-generation categorizations?" Side by side, both consumers and leaders would be able to see that Program A provides every student with feedback on assignments, on average, in less than six days. Program B, on the other hand, has a twenty-day average between a student's assignment submission date and a grade posting in the gradebook. Program A shows a ratio of 3:1 for individualized and content-specific comments per assignment submitted. Program B shows a ratio of 1:1 for comments to assignments. Program A shows feedback interactions that are well distributed throughout the courses, while Program B shows three times more interactions in the final two weeks of each course than in any other week. Clearly, decision makers would have a rationale for investing in Program A over Program B. Accreditors would have meaningful direction to provide to the leaders of Program B—actionable steps they could take to improve the online learner's experience.

As educational technologies proliferate and automation becomes more affordable and reliable, educators, researchers, and developers need to work together closely to create a new reality that puts meaningful data into the hands of educational leaders. It is time for LMS data collection and reporting to evolve beyond page views, logins, and time on the page. Today's online learners are relying on us to create better methods for capturing and then increasing quality in online learning, and improving their experiences with feedback is an ideal place to start.

Notes

  1. Peter C. Honebein and Cass H. Honebein, "Effectiveness, Efficiency, and Appeal: Pick Any Two? The Influence of Learning Domains and Learning Outcomes on Designer Judgments of Useful Instructional Methods," Educational Technology Research and Development 63 (August 2015).
  2. Table is adapted from Erin A. Crisp and Curtis J. Bonk, "Defining the Learner Feedback Experience," TechTrends 62, no. 6 (2018).
  3. Marianne C. Bickle and Ryan Rucker, "Student-to-Student Interaction: Humanizing the Online Classroom Using Technology and Group Assignments," Quarterly Review of Distance Education 19, no. 1 (2018).
  4. Ron Ritchhart, Mark Church, and Karin Morrison, Making Thinking Visible: How to Promote Engagement, Understanding, and Independence for All Learners, 1st ed. (San Francisco, CA: Jossey-Bass, 2011).
  5. Dragan Gašević, Shane Dawson, and George Siemens, "Let's Not Forget: Learning Analytics Are About Learning," TechTrends 59, no. 1 (2015).

Erin Crisp is Associate Vice President of Innovation at Indiana Wesleyan University.

© 2020 Erin Crisp. The text of this work is licensed under a Creative Commons BY-SA 4.0 International License.