The use of narration and audio in e-learning is a question facing most instructional designers, but one about which I have yet to find much that is conclusive in the literature; and what we do know definitively is also what we could have guessed without much research. (For example, it's been well documented that listening to word-for-word narration of on-screen text is ineffective. It's also manifestly obvious to anyone who has ever suffered through it.) People tend to be surprised that many of the best e-learning modules Allen Interactions builds don't include narration. This isn't because narration is bad, but because the decision to use narration, or not, comes from balancing many sometimes-conflicting considerations.
Because the issue seems to be at the forefront of many designers' thinking, I thought I'd take this opportunity to reflect on some of the common questions about audio narration that come my way. My comments are based on practical experience as well as knowledge of the research; they are not beyond question, but I hope they are at least a useful starting point for discussion of the issues.
- Don’t you need narration to accommodate “Learning Styles”—auditory vs visual learners?
It seems to me a gross oversimplification to suggest that merely providing the same content to be either read or listened to addresses these styles. There's plenty of debate about whether these styles are meaningful divisions at all, but in any case the use of one's senses is not exclusive. Even a so-called visual learner can process auditory information and vice versa (excepting, of course, actual sensory impairments). A visual learner would most likely benefit more from supporting images and animations than from simply processing the printed word visually. (To the extent that a learning-styles discussion can be helpful, I think trying to accommodate the differing needs of linear vs. abstract learners, rather than sensory channel, is more likely to suggest something useful.)
- Doesn’t narration create a more engaging learning environment?
It's possible, but in general I'd say it usually does the reverse. Listening to boring content is just as unbearable as reading it, and perhaps more so. Narration removes control of pace from the user, making the experience even more passive and frustrating. It also removes, or makes difficult, other important aspects of information processing. It places a much greater burden on short-term memory, since there's no convenient way to review narrated content beyond simply replaying the audio. Our comprehension strategies for reading are far more sophisticated than that: rereading specific words or phrases, speeding up and slowing down in highly personal, context-dependent patterns that are impossible to reproduce with narration. Beyond all this, the visual component of working on a computer is immense; even with narration, much design effort has to go into the visual elements. Unfortunately, many designers use narration almost as a justification for doing nothing visually, and staring at a visually dull screen, no matter what is happening in the audio, is not going to be very engrossing for the learner.
- But doesn’t narration add a warm, emotional element to the learning environment?
Again, it's possible, but not automatic. In general, I find a disembodied voice offers little useful emotion. Over-emotive narration tends to backfire: I dismiss as patently false a voice that applies extreme, affected inflection to content that doesn't warrant it. If the content itself has an emotional element, effective narration might heighten its significance, but it can't elevate content that was written simply as objective documentation.
- But doesn’t Section 508 require narration?
Section 508 (part of the Rehabilitation Act, not the ADA) requires that work done with Federal funding have modes of operation that don't require vision, visual acuity, hearing, speech, or fine motor control. If this applies to your work, you do need to provide a way for content to be spoken; but this can be accomplished through various "reading" utilities to which you supply a transcript, rather than by implementing a mechanism that affects all users negatively. It doesn't mean that the exact same program must do all of this simultaneously. What it suggests, I think, is that you create different versions of your e-learning to accommodate the targeted impairments; I've yet to come across a single online lesson that adequately addresses these issues without a severe degradation in instructional effectiveness. Usually, two modes suffice: a full-media version with graphics, high interactivity, and, if narration is used, optional closed captioning or an available transcript; and a low-media version that eliminates the graphics (or at least integrates audio descriptions of graphical images), provides automated voice-reading technology if not actual recorded narration, and minimizes gestures that require fine motor skill (such as drag-and-drop, which generally also requires vision).
Overall, my internal guideline is that you should be able to articulate a clear instructional reason before you invest in narration. (And "Our standard specifies narration," "Our templates are built that way," and "What will it hurt?" are not valid instructional reasons.) Once you have a good instructional justification, the technical impediments are usually manageable.