At my firm we take our instructional materials (participant guides and leader guides) very seriously. For most courses, we take the developer’s slide deck and notes and process them into PDFs with form fields for taking notes, background technical notes and links, embedded attachments, password-protected answer keys, and more. We have the processing of materials down to a science, so we can produce them efficiently (an hour, on average, for every hour of content), and they add fantastic value to courses. They don’t get left behind the way the old paper binders used to; they are easy to store and access; the embedded notes cut down on the pressure for participants to furiously take accurate notes; they give instructors a place to be verbose instead of cramming every word they can onto every slide; and they project a professionalism that enhances the credibility of the instructors.*
That said, SME developers are busy people and don’t always get their materials in on time for processing, which leaves a few paper-based participant guides in our courses. These courses also generally don’t get a formatting pass, so they tend to be plagued with tiny type, inconsistent formatting, and so forth.
A colleague of mine, a little frustrated at developers who miss deadlines, was wondering if the relatively poor quality of the materials has a material impact on participant satisfaction, so she had one of her people run the numbers against our course evaluation database for one of her conferences.
I love that she did this. Experimentation and data collection are the heart and soul of a design mentality.
What she found was only a very slight satisfaction advantage for courses whose developers turned materials in on time and which therefore received electronic participant guides and formatted slide decks.
I didn’t run the numbers through a stats program because even if the results were statistically significant, it would be hard to argue that they were large enough for stakeholders to find them practically significant (for example, the difference in satisfaction for courses where developers met their deadlines was only 0.05: 4.31 versus 4.26 on a five-point scale).
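For what it’s worth, you can put a gap like that in effect-size terms without a stats package. In the sketch below, the only real numbers are the two means from the evaluation data; the standard deviation and the 0.2 “small effect” threshold (Cohen’s conventional cutoff) are assumptions I’ve supplied for illustration:

```python
# Hedged sketch: only the two means are real; everything else is assumed.
mean_on_time = 4.31   # courses whose developers met deadlines
mean_late = 4.26      # courses with paper guides and no formatting pass

# Assumed spread for ratings on a five-point scale (not from the real data).
assumed_sd = 0.6

# Cohen's d: difference in means expressed in standard-deviation units.
cohens_d = (mean_on_time - mean_late) / assumed_sd
print(f"Cohen's d ~= {cohens_d:.2f}")

# By Cohen's conventions, d below 0.2 doesn't even register as a "small" effect.
print("practically significant?", cohens_d >= 0.2)
```

Under any plausible spread for five-point ratings, a 0.05 difference in means lands well below the smallest conventional effect size, which is the "practically significant" point in a nutshell.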
I still think having high-quality participant materials positively impacts learners, but I’m not surprised they don’t have a material effect on overall satisfaction, at least in this small retroactive study. I’d expect the effects of the quality of the instructor and the relevance and design of the content to overwhelm the quantitative impact of materials on measured satisfaction.
That’s not to say good materials aren’t a good investment (heck, compared to what we used to spend on printing paper materials, investing in electronic materials is a bargain). But it’s also true that if you need to measurably move the bar on participant satisfaction, investing in instructor training or course design is the best bet.
Or class size. While I was looking at my colleague’s dataset, I noticed that the top 10 courses at her conference in terms of satisfaction had an average class size of 38. The bottom 10 courses had an average class size of 64. Something I’ll definitely have to look closer at.
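When I do look closer, the first pass will probably be a simple correlation between class size and satisfaction across all the courses, not just the top and bottom ten. The numbers below are entirely made up to show the shape of the check; only the general direction (bigger classes, lower scores) echoes what I saw in the dataset:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (class size, mean satisfaction) pairs -- illustration only.
courses = [(25, 4.6), (30, 4.5), (38, 4.4), (45, 4.3),
           (55, 4.2), (64, 4.0), (70, 4.1), (80, 3.9)]

sizes = [c[0] for c in courses]
scores = [c[1] for c in courses]

r = pearson_r(sizes, scores)
print(f"Pearson r = {r:.2f}")  # strongly negative for this made-up sample
```

Of course, a correlation like this wouldn’t prove that big classes cause lower satisfaction; popular topics may simply be scheduled into bigger rooms, or marquee instructors may draw smaller seminar-style sessions. But it would tell me whether the pattern holds beyond the extremes.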
* When we first proposed going paperless for materials, the main concern we heard was that the invitation to use computers during courses is also an invitation to multitask. This objection has faded over time. We still have instructors who opt out of including detailed background notes in participant guides, but we also have a number of instructors who take great advantage of it.