Getting Better Feedback on Course Evaluations

Course evaluations are an invaluable source of information, particularly when your learners are professionals in their field. If your learners aren’t happy, that’s a problem.

Of course, the data is only useful if (a) you get a good response rate, and (b) you get helpful responses. A low response rate makes it hard to know how generalizable the data is; the risk is non-response bias: the fewer learners who respond, the greater the chance that the ones who do respond are not representative of the whole group.
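For readers who like to see that idea in numbers, here is a minimal, purely illustrative Python sketch. The ratings, population size, and response rates below are made up, not our data. It shows two things: estimates based on low response rates bounce around much more, and if responding is correlated with opinion, the average is skewed no matter how many forms come back.

```python
import random

# Hypothetical illustration: a population of 500 learners whose "true"
# course ratings cluster around 4 on a 5-point scale.
random.seed(1)
population = [min(5, max(1, round(random.gauss(4.0, 0.8)))) for _ in range(500)]
true_mean = sum(population) / len(population)

def sample_mean(ratings, response_rate):
    """Average rating among a random subset who happen to respond."""
    k = max(1, int(len(ratings) * response_rate))
    return sum(random.sample(ratings, k)) / k

# Lower response rates produce noisier estimates of the true average...
for rate in (0.45, 0.10):
    estimates = [sample_mean(population, rate) for _ in range(1000)]
    spread = max(estimates) - min(estimates)
    print(f"response rate {rate:.0%}: estimates range over {spread:.2f} points")

# ...and if responding is correlated with opinion (say, mostly unhappy
# learners bother to reply), the estimate is biased regardless of volume.
unhappy_responders = sorted(population)[:50]
print(f"true mean {true_mean:.2f} vs. unhappy-only mean "
      f"{sum(unhappy_responders) / 50:.2f}")
```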

And a lack of thoughtful responses, the kind that tell you what should change in a course and what should stay, is a problem in a couple of ways. One, being told your course was awful isn't helpful if learners don't tell you why (though knowing your course was terrible is better than not knowing; at least you can go back and start asking questions). Two, learners in a corporation have a responsibility to help improve courses for future learners (not to mention self-interest: the better the course is, the easier it will be to work with and manage subsequent cohorts).

Also, critical reflection on the course is good for learning.

At McGladrey, we’ve historically had high response rates to our anonymous electronic end-of-course evaluations, typically in the forty to fifty percent range. My read of the course evaluation literature is that this puts us in the range where significant non-response bias is unlikely. I attribute our rates to a couple of factors. One, we try to keep the evaluation from being onerous. We’ve whittled the number of questions down to the ones that really matter to us, and I generally resist proposals to add new questions.

Two, and probably more importantly, I believe high response rates reflect learners' belief that we are listening and that their feedback makes a difference.

In the past couple of years, though, response rates have started to drift downward, which concerns me. There are a number of possible culprits. It might be that our professionals are busier than they have ever been. It could be evaluation burnout.

It could be that as we've evolved our course evaluations, we are making learners work harder than they want to. A couple of years ago (right about the time our rates started to fall), we decided to kick off every evaluation with two qualitative questions: “What did you find to be the most effective, most valuable aspect of the course?” and “What did you find to be the least effective, least valuable aspect of the course?” Questions like these are high gain from my perspective, but they take more work than checking boxes.

I hope the falling response rates are not from a belief by learners that we are not listening.

In recent years we have been delivering more credits via internal conferences. One suspect for the falling response rates is burnout from getting a bunch of evaluations all in the same week. However, the response rate for conference-based courses is the same as the overall response rate, which suggests conferences are not the problem.

One thing I do know is that I don’t want to raise response rates at the cost of response quality. A couple of years ago we tried a little experiment. We took a conference that was offered multiple times in quick succession. At one of the offerings we handed out and collected paper evaluations instead of sending electronic evaluations. Naturally, response rates were very high. I asked our experienced dedicated faculty whether, with so many more responses, they learned anything more than they did with fewer electronic responses. They didn’t. We drove up response rates, and created more work for ourselves, without learning anything new.

We did another experiment with a webcast just a couple of weeks ago, this one an attempt to raise the quality of responses. To half the participants we sent the usual evaluation. To the other half we sent a modified evaluation in which we replaced weaker open-ended questions with stronger ones, in an effort to provoke more thoughtful responses. For instance, we replaced the question “Any further comments about this instructor you’d like to share” with the question “Support your rating for this instructor. In other words, provide at least one example of something the instructor did especially well, to ensure he or she keeps doing it. And provide specific performance feedback for something he or she can work on.” The hypothesis was that people just need a little push to provide thoughtful, actionable feedback.

Unfortunately, we found no significant difference in the quality of the feedback. It’s possible that a particularly low response rate masked any effects (I have a new hypothesis: that response rates for Friday afternoon webcasts are significantly lower than for webcasts at other times of the week). I hope to try again with a larger course soon.
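If we do want to test that Friday-afternoon hypothesis properly, the comparison is just two response rates, i.e. two proportions. Here is a hedged sketch of a standard two-proportion z-test; the counts are made up for illustration and do not come from our actual evaluations.

```python
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: 18 of 120 invitees responded to a Friday-afternoon
# webcast vs. 52 of 130 invitees for midweek offerings.
p_fri, p_mid, z, p = two_proportion_ztest(18, 120, 52, 130)
print(f"Friday {p_fri:.0%} vs. midweek {p_mid:.0%}: z = {z:.2f}, p = {p:.4f}")
```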

I also have to accept that at some level, no matter how I ask the questions or what expectations I set, I may not get the level of feedback I’m hoping for. It’s a pulse check, not the complete story, and there are other levels of evaluation that tell different parts of the story.
