I recently presented a webinar on 4 Ways to Jumpstart Your e-Learning Designs in 2014. In it I presented a fresh look at the concept of Instructional Interactivity and its necessary components (Context, Challenge, Activity, and Feedback) for designing effective e-learning interactions. (For more detail you can download the white paper Creating e-Learning that Makes a Difference.) We had great turnout and interest, but we weren't able to respond to some really important points participants raised, so I thought I'd answer them here.
How do you address the fact that the interactive sample does not really ensure mastery? The user can keep trying until they get it right. Many of my clients are concerned with compliance and want to know when a student fails or struggles with a task. – Sherri
Many people confuse having a score with mastery. The kind of tests that your clients are familiar with don't really measure mastery; rather, they measure memory, test-taking ability, and oftentimes, chance. There are countless reasons a learner might earn a score on a standard test that have nothing to do with his/her mastery of the desired skills. When you build interactions that require performance in a realistic context, you move much closer to being able to assess the level of mastery achieved. With interactions, you can record exactly the parts of the performance where failure is occurring; the number of attempts and the steps that cause the most difficulty give you useful information about where a learner is struggling.
Also, what probably wasn’t clear from the brief demo is that when mastery is desired, these interactions are usually presented at least twice. The first time the learner is free to repeat and make mistakes and ask for help as much as needed. Then to prove mastery, they must do the task (or a similar one) from start to finish without asking for help or exceeding a set number of missteps. In this way, these interactions actually provide the organization with a far more reliable indicator of mastery than traditional multiple-choice tests.
Could you compare the value of narration vs. the use of visual conversation windows? It seems that laying down & synching audio is time consuming. – James
This is probably one of the hardest choices to make, simply because both evidence and experience are inconclusive. One basic impediment is the obvious one you’ve stated: it’s a lot of trouble, in terms of effort, technology, maintenance, and expense. To undertake narration, one would like to be confident that there will be a corresponding benefit. Let’s weigh the positives and negatives:
- On the positive side, some learners express a preference for audio, but it is unclear whether audio uniformly enhances comprehension.
- On the negative side, audio greatly reduces learner control, which I think is essential for engagement. With audio, the sequence and pace are prescribed for the learner, which greatly reduces adaptability to individual needs and preferences. Also, narration often forces a linearity of content presentation that conflicts with the demands of interactivity. In most cases in my design experience, the negatives outweigh the advantages.
So when building a "conversation-like interaction", our designs generally allow the user to choose between written speech options without supporting audio. A notable exception is when the quality of oral communication conveys more than just content, that is, when the underlying emotional components are critical to the challenge. In those cases it may be absolutely essential to use voice narration.
And finally, this issue should not be confused with accessibility. Providing access to interactive multimedia for the visually impaired is a far more complicated task than simply adding voice-over narration to interactions designed for general usage.
What advice do you have for pushing back on Subject Matter Experts (SMEs) when it comes to reducing content for valuable outcomes? — Steven
This is one of the most common problems I hear about: SMEs continually insist on including too much content. There is no immediate solution; a lot depends on your specific environment, your relationship with your SMEs, and your ability to work cooperatively with them. But here are some best practices that will improve the situation over time:
- Work with your SME as a partner rather than an adversary. Your content meetings should not be a one-way information dump. Work together on the analysis, and include the SME in frequent and early reviews of interactivity. Their role should be more than proofreading your writing.
- Do not work from existing course materials as the assumed basis for your course content. Existing materials are too content-centric. Put the manuals aside for later use as a reference. In your conversation with the SME, insist that the course be defined by specific performance objectives. Answer the question, "What do you expect learners to be able to DO when they complete this course that they couldn't do before?" Get formal signoff and agreement on these performance objectives before you even begin selecting "content", then use those objectives as your knife to cut unnecessary content.
- Start by designing interactions, not by writing content. That in itself will make it very clear how much of the traditional “content” becomes irrelevant and can be omitted.
Thanks to Sherri, James, and Steven for submitting these questions. I suspect that these issues are ones that nearly every e-learning designer has struggled with at some point.
The most important thing to remember is that you are creating an experience for your learners, so make your design decisions accordingly. Many of the tasks that designers hold as essential are more about what you do as a designer than actually what will challenge the learner.
Have other e-learning design questions for Ethan? Submit them in the comments section below!