Saturday, September 12, 2015

10 ways to make badass INTROs in online learning (& ditch dull learning objectives)

So many online learning programmes don’t start well. They’re often dull, overlong or, worse, a boring list of learning objectives. We have to get over the idea that we’re putting textbooks on screen. This is the web, folks, and the rule is: you have two seconds to impress. Attention is a necessary condition for learning, so your job is to raise attention and curiosity, not bore learners into submission.

1. First impressions matter
First impressions matter, so they say, but in online learning they really do matter. Ebbinghaus showed us, back in 1885, that memory has a tendency towards ‘primacy and recency’, a bias in which the first and last things are retained and recalled better than what is presented in-between. So pay attention to the intro. It is the door to the learning experience and they should want to push it open. Make it relevant and memorable.

2. Titles
Too many courses have titles that seem designed to turn you off before you’ve even started. A great title will catch attention, intrigue, give an idea of the content and even set the tone or voice of the learning experience. Write a list of titles: one-word titles, two-word titles, three-word titles, ‘Why...’, ‘How to...’. Is there a concrete image that can be used? How about a play on words: rather than ‘Use of gamification in mobile learning’ try ‘Game of Phones’. Pick a title that excites. That’s what movie makers do and it’s good practice. Rather than ‘Learning technologies 101’ try ‘From Gutenberg to Zuckerberg’. Be more imaginative. Question: did the title of this article, or the word ‘badass’, get you here in the first place? You can also do a little A/B testing to get this right - it’s what tech and ad companies do as a matter of course.

3. Ditch learning objectives
Straight out of behaviourism, this practice lingers on and on in courses. In online courses, avoid this edu-speak and focus on an opening that gets the learner’s attention. Attention is important in learning and it is counter-productive to bore learners with a list of dull objectives. For more on this see ‘7 reasons to kill up-front learning objectives’.

4. Avoid padding
Subject matter experts, perhaps because they’re used to writing textbooks or manuals, have the unerring habit of writing over-long pieces called ‘Introduction to…’ or ‘The history of…’ or ‘Background to…’ at the start of courses. This is rarely either necessary or desirable. Attention is your currency, don’t devalue it by turning your subject into a snoozefest. Look at how great books and movies grab you with their opening lines or scenes.

5. Focus on just one thing
Nothing raises attention and curiosity more than a surprise. Most great movie openings do this. They start wide open, then bang, focus in on a close-up or one great scene. Great courses start with these surprises: a great quote, shocking statistic, compelling image, poignant question or conundrum. Think long and hard about your singular intro, as it sets the scene for the whole course. Great movies have great opening sequences. Check out the opening sequence in one of The Hangover films.

6. Keep it short
There’s nothing worse than an interminable legal warning, disclaimer, video, overlong animation or boring text introduction to a course. Your learners may have come to the course with high expectations, or with low ones. Either way, it is your job to grab and excite them, and that is rarely achieved with long opening sequences. Make it count but cut to the chase. If you do have to include this stuff, make it optional, like terms and conditions, behind a button.

7. Interactive
Online learning is interactive, so don’t be afraid to start with participation. Try a question, a common misconception, something that wakes the learner up and raises attention. A good ‘hinge’ question can work well. On the other hand, whatever you do, don’t start with a long learning styles quiz (learning styles don’t exist and it will be a waste of time) or some fatuous Myers-Briggs nonsense. More on the Ponzi scheme that is Myers-Briggs here. Be bold.

8. Humour
It doesn’t always work, but when it does, humour can do exactly what you want: raise a smile and, if relevant, make a great opening point. You can do a lot worse than raise a smile at the start of a learning experience. I once made a programme for maintenance engineers where I deliberately made the screen go blank. Every engineer in the world leans forward to check the power supply, then the lead connections. I then switched the programme back on and said, "That's what customers feel like when their service goes down...." It raised a laugh or two.

9. Skip on return
It can be annoying to see the same intro time after time. If the user returns to a course or module across many sessions, allow them to skip the intro or remove it altogether.

10. Movies and TV
Watch the openings of movies and TV shows, ignoring the fact that they have to carry credits. Do pay attention to the way they use smart titles, pose questions, make you think about what you’re about to see, or show a fascinating clip that you’ll see later. They want to grab you before you switch to another channel. Plagiarism is a form of flattery. Here’s a list of the Top 25 film openings.

Conclusion
To get off to a good start, attention should be your aim, not showboating with overlong sequences or dull objectives. There’s no silver bullet here, as each course needs its own unique introduction. Hopefully, these ten ideas provide some sort of stimulus when you’re faced with that blank piece of paper.

Other related pieces…..
10 bloody good reasons for using much-maligned text in online learning http://bit.ly/1KnJB2c 
10 essential online learning writing tips & psychology behind them http://bit.ly/1JnUo6J 
10 stupid mistakes in design of Multiple Choice question http://bit.ly/1JvMNCf 
10 essential points on use of (recall not recognition) open-response questions http://bit.ly/1PPjIXb 
10 sound pieces of advice on use of AUDIO in online learning http://bit.ly/1MccsXJ 
10 rules on how to create great GRAPHICS in online learning http://bit.ly/1iguKL4

Friday, September 11, 2015

Hattie: Visible learning - the naked teacher - a primer

Almost every educational intervention has a positive effect, to a degree, but what matters is selecting those evidence-based interventions that work well and can make a real difference. Over 15 years, John Hattie synthesised more than 800 meta-analyses, covering over 50,000 studies, categorising them and assessing their impact on student achievement. If, he thought, we could determine the effect sizes of interventions on student achievement, we could recommend good and better teaching practice. The end result was a table of interventions that could be used to guide policy and practice in schools. Hattie sums up his central idea of bringing teaching and learning to the surface, through research, as Visible Learning, the title of his first book.
Visible Learning
Visible learning is the idea that teachers and, importantly, students should make things ‘visible’. The goal is to enhance the role of teachers through the evaluation of their own teaching. Hattie thinks that visible learning and teaching occur when teachers see learning through the eyes of students and when students, to a degree, become their own teachers. He is part of the tradition that wants a more evidence-based approach to both the profession and the process. All of this is expressed in his three books: Visible Learning, Visible Learning for Teachers and Visible Learning into Action.

He attempts to identify and rank teaching and learning interventions by impact and abhors the current culture of teaching: “We have a profession where everything goes…where we close the door and don’t let anyone discuss what we do behind that door”. He backs this up with evidence about what teachers actually talk about in staff rooms and training sessions. What they talk about, when recorded and measured, are the kids, the curriculum and a culture of complaint around the politics of the school. “It’s a profession that doesn’t talk about teaching”, he claims. The evidence? When measured: “One minute a month they talked about teaching!”
Effect sizes
In Visible Learning Hattie hung his hat on ‘effect size’. Every teaching intervention has an effect, but not all effects are equal in terms of impact. Some interventions may have small effect sizes, and applying them carries an opportunity cost: you lose the chance to apply other, more successful interventions. He recommends focusing on interventions with an effect size greater than 0.4.
The resulting ranking is a fascinating list, and what’s just as interesting are the things Hattie regards as low impact, by which he means wasteful. These lie at the bottom and include many of the sacred cows Hattie sees as distractions in the education debate: school leaders, class sizes, homework (in primary schools), extra-curricular activities, gender, ability grouping, open learning spaces, summer holidays, welfare policies and television.
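For context, the effect sizes Hattie aggregates are, broadly, standardised mean differences (Cohen’s d): the gain produced by an intervention divided by the spread of scores. A typical form, comparing an intervention group with a control group, is:

$$d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{SD_{\text{pooled}}}$$

On this scale, 0.4 is Hattie’s ‘hinge point’, the threshold above which he considers an intervention worth prioritising.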
Teacher training
He is highly critical of teacher training, claiming that it gets bogged down in largely irrelevant debates and is largely led by opinion, not research. This is important, as he sees a renewed focus on teacher practice as the best way to improve student attainment. It’s not that teachers need more time; they need to do things differently. In short, he thinks that the teacher training and CPD debate is upside down, with its focus on things at the bottom of his table that have relatively small effects, especially structural issues in schools. He wants teachers to change in response to what has been shown to work best. He is highly critical of allowing teachers to be largely ‘autonomous’, closing the door on their classrooms and often trying out untested and personal ‘bandwagon’ techniques. He recommends far more collaboration and sharing.
Criticism
Black and Wiliam criticised the work for failing to recognise that effect size is influenced by the range of achievement in the population. He has also been widely criticised for presenting probabilities as being negative, a mathematical impossibility. Hattie defends his use of the first but admits he made an error in the calculation of the second. Hattie also excluded student background and social context from his research, yet many assert that these trump many of the effects Hattie puts forward as good interventions. On the whole, the criticism centres around the idea that the issues are more complex than Hattie asserts and that effect sizes are not nearly enough, in terms of evidence, to lead to the sort of policy decisions that are put forward on the basis of this one book. There is another issue around sensitivity to instruction: none of the meta-analyses control for differences in the sensitivity to instruction of the different outcome measures, an effect that has long been known to be significant and difficult to assess.
Conclusion
Hattie attempts to raise the bar for the teaching profession by seeing teaching practice as something that needs to be informed by research and evidence. His point is that continuous improvement must be sought, based on what we can show works, rather than on autonomy, tradition or existing practice. We can criticise his effect sizes and data, but the recommended principle is sound and remains intact.
Bibliography
Hattie, John A. (2008). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement.
Hattie, John A. (2011). Visible Learning for Teachers: Maximizing Impact on Learning.
Hattie, John A., Masters, D. & Birch, K. (2015). Visible Learning into Action: International Case Studies of Impact.

Wednesday, September 09, 2015

10 sound pieces of advice on use of AUDIO in online learning

Of all the media in online learning, audio is the one most likely to be catastrophic. If the learner is not bombarded with beeps, buzzes and bings, they’re assaulted with corny and needless background music, or subjected to tinny, echo-ridden, variable-volume voiceovers from people with all the charisma of a soggy lettuce. Having built a sound studio just for online learning, and done many dozens of voiceovers, here are a few tips to avoid the most obvious blunders.
1. Quality matters
In The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places, Byron Reeves and Clifford Nass show that users are more sensitive to the quality of audio than to that of video. The quality of audio production matters more than the quality of video images. This is because our eyes cope well with twilight, different light levels, distance and so on. Our ears, however, expect high-quality audio, as that’s the norm in real dialogue. Record quality audio. If you don’t, the learner will sound you out as an amateur.
2. DON’T use background music
Background music adds nothing to the learning experience. In fact, it can inhibit learning. Moreno and Mayer (2000) did an experiment with and without background music, showing that (even with low-level, barely discernible music) the same learning experience was 20-67% better in the sequence without background music. As a postscript, the Mozart Effect is nonsense.
3. DON’T use sound effects
Moreno and Mayer (2001) found that extraneous sounds damaged learning. Unnecessary sounds create unnecessary cognitive load and distract from, rather than increase, learning. This also applies to sounds, such as beeps or applause, that reinforce right and wrong answers. These may be appropriate in games or for young children, but not for most online learning. Ear candy is as bad as eye candy.
4. DON’T do voice and text
The same words in both text and accompanying audio narration can damage learning. Mayer and Clark argue, from their own research, for the use of ‘audio and graphics’ - without screen text. According to Clark and Mayer (2003), ‘audio or text on their own’ are better than ‘text and audio together’. This is confirmed by another study by Kalyuga, Chandler and Sweller (1999) where the group with audio scored 64% better than the group with both text and audio. They claim that one or other is redundant and will overload the visual and aural channels.
5. DON’T double up for accessibility
It’s tempting to add audio for everything to meet accessibility standards, but ruining a good learning programme for the vast majority, to meet the needs of a small minority, is a mistake. There are other ways of handling accessibility for blind users.
6. DO get levels right
It’s easy to record audio. Cameras do it, even smartphones do it, but it’s less easy to keep the quality and volume levels the same across different recordings made at different times, without extensive post-production. Users are very sensitive to sudden drops and rises in volume, and if levels vary you run the risk of seeming amateurish and losing the respect of the learner.
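If you’re stitching together clips recorded at different times, a little scripted post-production can even out the average levels. Here’s a minimal sketch using the pydub library (my choice of tool, not a requirement; the folder names and the -16 dBFS target are just illustrative):

```python
# Sketch: even out average loudness across a folder of voiceover clips.
# Assumes pydub (pip install pydub) and ffmpeg are installed.
from pathlib import Path
from pydub import AudioSegment

TARGET_DBFS = -16.0  # illustrative target level; pick one and stick to it

def normalise_clip(path: Path, out_dir: Path) -> None:
    clip = AudioSegment.from_file(str(path))
    gain = TARGET_DBFS - clip.dBFS           # how far this clip is off the target
    clip.apply_gain(gain).export(str(out_dir / path.name), format="wav")

if __name__ == "__main__":
    out = Path("normalised")
    out.mkdir(exist_ok=True)
    for wav in Path("voiceovers").glob("*.wav"):
        normalise_clip(wav, out)
```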
7. DO use professional voices
Tempting though it may be to use the subject-matter expert, or someone in the team, to record the voiceovers, don’t do it. Or, at the very least, choose someone with a strong voice who can sound genuine and enthusiastic in a real recording. You may save yourself a considerable amount of time and money by using professionals.
8. DO buy a good mike
A good microphone is a sound investment; it’s all too easy to rely on the in-built camera mike or cheap iPhone accessories. A good mike will make all the difference, and remember: the quality of audio is more important than the quality of video in learning.
9. Consider podcasts
Consider podcasts, especially for mobile learning but also where you want the learner to use their imagination. Radio has been doing this well for a long time. Think about the interview format or, as in the ‘In our time’ podcasts, one interviewer and three or four experts. Listening is a far more common skill than reading.
10. DON’T assume that audio is good learning
As I outlined in my 10 reasons for using text in online learning, we can read much faster than we can listen. Audio sometimes introduces a rather ponderous pace to online learning, especially for adult and experienced learners. It is also difficult and expensive to update, as you need the same voice, same levels and same recording environment. It can, at times, seem almost patronising. So think long and hard before spending all that time and money on audio - it’s difficult to do well.

Friday, September 04, 2015

10 essential points on use of (recall not recognition) open-response questions

Multiple-choice questions are, essentially, a test of recognition. They do not elicit full recall from memory. That’s not to say they are not useful, but it does mean they are of limited use. In practice, it is active recall that really matters in knowledge and skills, not recognition. So why not move up the assessment ladder and consider open response?
1. Tests recall not recognition
Open-response test items ask for recall of the actual text. This is very different from the recognition that multiple choice questions demand. It’s a step up in terms of competence.  We can distinguish between four levels of learning:
Familiar – knew but can’t now remember
Recognised – correctly recognise answer in a multiple choice question
Recalled – recall with effort but without help and takes time
Automatic – immediate, effortless, high-performance recall
It is quick and ‘Automatic’ recall that is the goal of high-performance learning and expert ability.
2. Reinforces learning
Open response takes cognitive effort, and the very act of recalling knowledge reinforces that knowledge in memory. Active recall, pulling something out of memory rather than just recognising something from a list, improves future performance, something we have known for a century (Gates 1917). The act of active recall develops and strengthens memory. It also improves the process of recall in ways that passive study – reading, listening and watching – does not. So retrieval, in itself, prevents memory loss (Bjork 1975), and the more we recall, the more recallable memories become.
3. Accept synonyms
You may accept the word ‘synonym’ as the correct answer, but should you not also accept ‘substitute’ or ‘replacement’? Always consider correct synonyms as correct answers, unless you’re asking for that one specific word (see the sketch after the next point).
4. Accept common misspellings
Unless you are also testing for spelling, accept common misspellings, especially those involving double letters, silent letters, capitalisation and other common mistakes. You may also want to accept both British and American spellings. Consider typos too, especially transposition errors, i.e. when someone accidentally types two letters the wrong way round (a common typing error). Ntoe htat trnaspoesd lettres aer usaully tpying errosr adn hte maennig remians intcat.
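Here’s a minimal sketch of how points 3 and 4 might be handled in code; the accepted answer set and the 0.8 similarity cut-off are illustrative assumptions, not fixed rules. It normalises the input, accepts any listed synonym, and uses a similarity ratio to forgive small misspellings and transposed letters:

```python
# Sketch: forgiving open-response checker covering synonyms and near-misses.
from difflib import SequenceMatcher

ACCEPTED = {"synonym", "substitute", "replacement"}   # illustrative answer set
TYPO_THRESHOLD = 0.8                                  # illustrative cut-off

def is_correct(response: str) -> bool:
    answer = response.strip().lower()
    if answer in ACCEPTED:                            # exact match or accepted synonym
        return True
    # Forgive small misspellings and transposed letters: accept anything
    # close enough to one of the accepted words.
    return any(
        SequenceMatcher(None, answer, target).ratio() >= TYPO_THRESHOLD
        for target in ACCEPTED
    )

print(is_correct("replacment"))   # True - one missing letter
print(is_correct("banana"))       # False
```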
5. Feedback on common misconceptions
For example, in a medical test where the learner has typed in ‘virus’ as opposed to ‘bacterium’, it may be useful to point out that this is a common error in diagnosis, and that it seriously affects the recommended treatment.
6. Provide letter count
Some like to leave the field blank or use a continuous line where the answer is to be typed. Others like to provide small dashes, so that the learner knows what’s expected in terms of letter count. It depends on how tough you want to be. In formative assessment, I prefer dashes (see the sketch under the next point).
A _____ is a small infectious agent that replicates only inside the living cells of other organisms.
A ----- is a small infectious agent that replicates only inside the living cells of other organisms.
7. First letter hint
First-letter hints avoid wild guessing, but be aware that you’re no longer testing full, automatic recall.
A v---- is a small infectious agent that replicates only inside the living cells of other organisms.
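Both prompt styles (points 6 and 7) are easy to generate from the answer itself. A minimal sketch, where the ‘virus’ sentence is just the example from above:

```python
# Sketch: build letter-count and first-letter prompts from the answer word.
def letter_count_prompt(answer: str) -> str:
    return "-" * len(answer)                      # e.g. "-----" for "virus"

def first_letter_prompt(answer: str) -> str:
    return answer[0] + "-" * (len(answer) - 1)    # e.g. "v----"

template = ("A {} is a small infectious agent that replicates "
            "only inside the living cells of other organisms.")
print(template.format(letter_count_prompt("virus")))
print(template.format(first_letter_prompt("virus")))
```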
8. Meaningful hints
You may want to provide meaningful semantic hints, as open input can be a challenge. Moving the learner meaningfully towards typing in the right answer can reinforce learning. If the answer is ‘virus’ and the learner has typed in ‘germ’, ‘disease’ or ‘infection’, you may want to clarify their mistake. Open response is often useful for this tight form of assessment around conceptual knowledge, for example knowing the difference between a molecule, a mixture and a compound in chemistry.
9. Timed responses
To up the stakes, and distinguish between recall and automatic recall, add a timer. Automatic recall can be tested by putting a time limit on the answer.
10. Provide a get out
Whatever check and feedback technique(s) you use, make sure the learner has a chance, eventually, to get out of an endless loop of guessing. Given that there are millions of possible inputs for even relatively short words, don’t trap them into guessing and trying forever. Either provide letter-by-letter reveals or eventually give enough information for them to get the right answer, or the answer itself. Note that it may still be important for the learner to type that correct answer in, as this is an important act of reinforcement.
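A minimal sketch pulling points 9 and 10 together; the three-attempt limit and 20-second window are arbitrary assumptions. Each wrong attempt reveals one more letter, and when attempts or time run out the learner is shown the answer but still asked to type it in once:

```python
# Sketch: timed, progressively revealed open-response question with a way out.
import time

ANSWER = "virus"          # illustrative answer
MAX_ATTEMPTS = 3          # arbitrary attempt limit
TIME_LIMIT = 20.0         # arbitrary time limit in seconds

def ask() -> None:
    start = time.monotonic()
    for attempt in range(MAX_ATTEMPTS):
        hint = ANSWER[:attempt] + "-" * (len(ANSWER) - attempt)
        guess = input(f"Your answer ({hint}): ").strip().lower()
        if time.monotonic() - start > TIME_LIMIT:
            print("Out of time.")
            break
        if guess == ANSWER:
            print("Correct.")
            return
        print("Not quite - one more letter revealed.")
    # The get-out: show the answer, but still ask for it to be typed in.
    print(f"The answer was '{ANSWER}'.")
    while input("Type it in to continue: ").strip().lower() != ANSWER:
        pass

if __name__ == "__main__":
    ask()
```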
Conclusion
I have spent some time building an open response tool that creates e-learning on the fly. You simply put in your text (sentence, paragraph, document, paper, company policy, textbook - whatever) and it automatically creates e-learning with formative, open-input assessment. It also links out to Wikipedia for additional information and does the same, creating e-learning on the fly. If you want to know more about this automatic creation of e-learning tool, contact me here.
Bibliography
Bjork, R.A. (1975). Retrieval as a memory modifier: an interpretation of negative recency and related phenomena. In R.L. Solso (Ed.), Information Processing and Cognition (pp. 123-124). Hillsdale, NJ: Erlbaum.

Gates, A.I. (1917). Recitation as a factor in memorizing. Archives of Psychology, 6(40).