No One Reads Online
by Linda Rening, instructional strategist
Life is full of disillusionment. I thought I had gotten through all of it the day I realized my mom didn’t really have eyes in the back of her head, and the day I learned firsthand that real children are neither as cute nor as well behaved as those on television.
I was wrong. There was a lot more disillusionment ahead.
Since coming to Allen Interactions almost two years ago, I’ve had to face a few other facts. These have been even harder to accept than the realities about moms and my own darling children. Just to give you a sense of the depth of my naiveté, let me tell you what I used to believe:
- Learners want to know what they will learn in an e-learning course, so all e-learning courses need to start with learning objectives. (Not true.)
- Learners appreciate learning objectives that start with verbs indicating observable behavior, like: Explain, Define, Demonstrate, etc. (Wrong.)
- If they get a question wrong in an online test, learners want to know what the right answer was and they always read the explanations. (Nope – not even close.)
- People pay attention when I am conducting a webinar. (This is actually true for the first 7 minutes, then they are checking email, ordering holiday presents online, or feeding the cat.)
- It would be really fun to have a T-Rex in an e-learning course. (The less said about that the better.)
But the most difficult thing I’ve had to accept is this: no one reads e-learning courses. All the elegant prose, clever characterizations, and cogent explanations I’ve written over the years? The sad truth is that they amounted to writing exercises, conducted pretty much for myself and the subject matter experts (SMEs) with whom I’ve worked.
At best, learners tolerated my verbiage; at worst, I annoyed them as much as most people who design e-learning annoy them.
How do I know? We’ve watched how people behave in user testing and interviewed countless learners about their experiences with online learning. We’ve asked them what works for them and what doesn’t. What we’ve heard is that mostly people do not read what is on screen.
Not to be argumentative, but I believe anyone who designs or develops e-learning already knows that. Evidence to support my assertion? Read on.
Think about your own behavior online. What do you do? The answer is found in the name of the delivery mechanism. We get online via a browser because that’s exactly what most of us do online: we browse. I used to think the term “browser” was clever marketing; now that my disillusionment is more complete, I know the term is an accurate description of online behavior.
What was the last thing you read online? This morning, I wanted to confirm that the plural of memorandum is truly memoranda. (It is, unless one uses the shortened versions, memo and memos, respectively.) I actually read two full paragraphs on Wikipedia, after first scanning those paragraphs to see if I could just pick out the information I wanted. Since I couldn’t, I sighed, and read the paragraphs.
What do you read online? Actually, I can tell you the answer to that question. You read what you are interested in. Not what someone else thinks you should be interested in, like product specs or sexual harassment policies, but what you care about personally.
You read what you are interested in and, further, only when you are interested in it. Next week, if someone assigned me an e-learning course on the plural forms of nouns like memorandum, datum, curriculum, etc., I wouldn’t spend nearly as much time and energy on the topic.
So, we have a couple of clues about online behavior: it’s called a browser for a reason and none of us read unless we want to.
I think there is more evidence suggesting we know learners don’t read what we write. Think of all of the tactics we’ve used to make them read:
- Recording voice-over audio that reads every word on the screen to them
- Disabling the “next” button until every question has been answered – or even answered correctly
- Delaying activation of the “next” button for 3 seconds so learners have to read (I wouldn’t make that up)
- Videotaping the Vice President of Something telling learners how important the information is and imploring them to learn it
What have we accomplished with those shenanigans? We’ve annoyed our already besieged and weary learners a little more.
So, what’s the answer? Very simply, the answer is to create learning that matters to learners.
Dr. Michael Allen, Ethan Edwards, and others at Allen Interactions have written extensively on the topic of how to create learning that matters. We won’t go through all of it again here, but I do want to offer you a couple of reminders:
Follow the structure of Context-Challenge-Activity-Feedback (CCAF):
- Know your learners and identify desired behavior change
- Create learning that takes place in a true-to-life context and offers real-to-life challenges
- Structure feedback that follows the natural consequences of a choice the learner makes
- Let learners try something on their own and make mistakes. That will create motivation to learn the right procedure or the correct answer.
- Start with the challenge, and present information upon learner request or as part of the feedback
- Use humor, surprise, curiosity, sensory input, and the like judiciously, but do use them. They serve to heighten interest and keep learners engaged.
The reality I have had to face is this: If you want to write elegant prose, keep a journal. If you want people to learn, stop talking and start creating learning that matters.
Dr. Linda Rening is studio executive for one of Allen Interactions’ Minneapolis/St. Paul-based studios. When not coping with disillusionment, she works hard to enable clients to reach their business goals by helping design learning experiences that matter.