Thursday, December 13, 2018

Learning Experience Systems – just more click-through online learning?

I have this image in my lounge. He's skating – a clergyman skating – as we so often do when we think we're learning: just skating over the surface. For all the talk of Learning Experience Systems and 'engagement', if all you serve up are flat media experiences, no matter how short or micro, with click-through multiple choice or, worse, drag and drop, you'll have thin learning. Simply plopping the word 'Experience' into the middle of the old LMS terms is to rebadge, not rethink, unless we reflect on what those 'experiences' should be. All experience is learning, but some experiences – the effortful ones – are far more effective than others.
As Mayer showed, this does not mean making things media rich; media rich is not mind rich. Media richness often inhibits learning by adding unnecessary cognitive load.
Nor does it simply mean delivering flat resources. The same goes for some types of explicit gamification, where the Pavlovian rewards become ends in themselves and inhibit learning. Good gamification does, in fact, induce deep thought – collecting coins, leaderboards and other ephemera do not, as the gains are short-lived.
The way to make such systems work is to focus on effortful 'learning' experiences, not just media production. We know that what counts are desirable difficulties and effortful, deliberate practice.
Engagement
Engagement does not mean learning. I can be wholly engaged, as I often am, in all sorts of activities – walking, having a laugh in the pub, watching a movie, attending a basketball game – but I'm learning little. Engagement so often means that edutainment stuff – all 'tainment and no 'edu'. The self-perception of engagement is, in fact, often a poor predictor of learning. As Bjork repeatedly says, on the back of decades of research from Roediger, Karpicke, Huelser, Metcalfe and many others, “we have a flawed model of how we learn and remember”.
We tend to think that we learn just by reading, hearing and watching, when, in fact, it is other, more effortful, more sophisticated practices that result in far more powerful learning. Engagement, fun, learner surveys and happy sheets have been shown to be poor measures of what we actually learn, and very far from optimal learning strategies.
Ask Traci Sitzmann, who has done the research – Sitzmann (2008). Her meta-analysis, covering 68,245 trainees across 354 research reports, attempted to answer two questions:
Do satisfied students learn more than dissatisfied students? After controlling for pre-training knowledge, reactions accounted for only 2% of the variance in factual knowledge, 5% of the variance in skill-based knowledge and 0% of the variance in training transfer. The answer is clearly no!
Are self-assessments of knowledge accurate? Self-assessment is only moderately related to learning; it captures motivation and satisfaction, not actual knowledge levels.
Her conclusion, based on years of research (I spoke to her and she is adamant), is that self-assessments should NOT be included in course evaluations and should NOT be used as a substitute for objective learning measures.
Open learning
It's the effort to 'call to mind' that makes learning work. Even when you read, it's the mind reflecting, making links and calling up related thoughts that makes the experience a learning experience. This is especially true in online learning: the open mind is what makes us learn, and open response is therefore what makes online learning really work.
You start with whatever learning resource, in whatever medium, you have: text (PDF, paper, book…), text and graphics (PowerPoint…), audio (podcast) or video. By all means read the text, go through the PowerPoint, listen to the podcast or watch the video. It's what comes next that matters.
With WildFire, in addition to creating online learning in minutes not months, we have developed open input by learners, interpreted semantically by AI. You literally get a question and a blank box into which you can type whatever you want. This is what happens in real life – not selecting items from multiple-choice lists. Note that you are not encouraged simply to retype what you read, saw or heard. The point, hence the question, is to think, reflect, retrieve and recall what you think you know.
Here’s an example, a definition of learning…
What is learning?
Learning is a lasting change in a person’s knowledge or behaviour as a result of experiences of some kind.
Next screen….

You are asked to tell us what you think learning is. It's not easy and people take several attempts. That's the point. You are, cognitively, digging deep, retrieving what you know and having a go. As long as you get the main points – that it is a lasting change in behaviour or knowledge through experiences – you're home and dry. As the AI does a semantic analysis, it accepts variations on words, synonyms and different word orders. You can't cut and paste, and when you are shown the definition again, whatever part you got right is highlighted.
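To make the idea concrete, here is a minimal, purely illustrative sketch of how open-response checking against key concepts could work. To be clear, this is my own toy, not WildFire's actual method, which is proprietary; the synonym map and every name in it are hypothetical.

# A toy version of open-response marking – NOT WildFire's real
# implementation. Each key concept in the model answer has a set of
# acceptable variants; the learner's free text is scored on how many
# concepts it covers, regardless of word order or exact wording.
SYNONYMS = {
    "lasting": {"lasting", "enduring", "permanent", "long-term"},
    "change": {"change", "alteration", "shift"},
    "knowledge": {"knowledge", "knows", "knowing", "understanding"},
    "behaviour": {"behaviour", "behavior", "behave", "behaves"},
    "experience": {"experience", "experiences", "practice"},
}

def tokens(text):
    # Lower-case the answer and strip punctuation, returning a bag of words.
    return {w.strip(".,;:!?'\"()") for w in text.lower().split()}

def score_answer(answer):
    # Return the proportion of key concepts covered and which ones matched.
    words = tokens(answer)
    hits = [c for c, variants in SYNONYMS.items() if words & variants]
    return len(hits) / len(SYNONYMS), hits

coverage, hits = score_answer(
    "A permanent shift in what someone knows or how they behave, "
    "caused by experience."
)
print(f"Coverage: {coverage:.0%}, matched: {hits}")  # Coverage: 100%

A production system would use stemming, embeddings or a fuller semantic model rather than a hand-built synonym map, but the principle is the same: score free recall against key concepts rather than exact strings, then highlight the parts the learner got right.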
It’s a refreshing experience in online learning, as it is so easy to click through media and multiple-choice questions thinking you have learnt. Bjork called this the ‘illusion of learning’ and it’s remarkably common. Learners are easily fooled into thinking they have mastered something when they have not.
This fundamental principle, established in research by Bjork and many others, is why we have built open learning into WildFire.
Conclusion
Engagement is not a bad thing, but it is not a necessary, and certainly not a sufficient, condition for learning. LXP theory lacks – well – theory and research. We know a lot about how people learn, and an excessive focus on surface experience does not help. All experience leads to some learning, but that is not the point: some experiences are better than others, and what those experiences should be is rarely understood by learners. What matters is effortful learning, not ice skating across the surface, having fun but not actually learning much – that is click-through learning.
Bibliography
Alliger, G. M. et al. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
Sitzmann, T. and Johnson, S. K. (2012). When is ignorance bliss? The effects of inaccurate self-assessments of knowledge on learning and attrition. Organizational Behavior and Human Decision Processes, 117, 192–207.
Sitzmann, T., Ely, K., Brown, K. G., & Bauer, K. (2010). Self-assessment of knowledge: A cognitive learning or affective measure? Academy of Management Learning and Education, 9, 169-191.
Brown, K. G., Sitzmann, T., & Bauer, K. N. (2010). Self-assessment one more time: With gratitude and an eye toward the future. Academy of Management Learning and Education, 9, 348-352.
Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K. and Zimmerman, R. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

Wednesday, December 12, 2018

Learning is not a circus and teachers are not clowns - the OEB debate

‘All learning should be fun’ was the motion at the OEB Big Debate. No one is against a bit of fun, but as an imperative for ALL learning it's an odd, almost ridiculous, claim. And sure enough there were some odd arguments. Elliot Masie, the purveyor of mirth, started with his usual appeal to the audience: “Let me give you another word for fun – HA HA (that's two words, Elliot, but let's not quibble)… turn to your neighbour and say that without one letter.” Some, like my neighbour, were genuinely puzzled. ‘HAH?’ he said. I think it's ‘AHA’, says I. Geddit? Oh dear. Elliot wants learning to be like Broadway. I saw him a few weeks before, showing some eightball dance routine as a method for police training.
To be fair, Benjamin Doxtdator was more considered, with his arguments about subversion in education and his point that those who design learning were debating what was good while the learners themselves were missing. But this was to miss the point. In deciding what treatments to give patients, one must appeal to research to show what works, not rely on the testimonies of patients.
Research matters
What was fun was to watch anecdote and frankly ‘funless’ arguments put to the sword by research. Patti Shank urged us to read Bjork and to consider the need for effort. Desirable difficulties matter, and she killed the opposition with the slow drip of research. I suddenly noticed that the audience was not laughing but attentive, listening, making the effort to understand and reflect, not just react. That's what most learning (other than kindergarten play) is and should be. Patti Shank talked sense – research matters. Engagement and fun are proxies, and the research shows that effort trumps fun every time. Learners may like ‘fun’, but research shows that learners are often deluded about learning strategies. What matters in the end is mastery – not just the feeling that you have mastered something, but actual mastery.
On Twitter and during the audience questions, there were those who simply misread the motion, forgetting the word ‘all’. Some mistook fun for other concepts, such as attention, or being engrossed, gripped or immersed in a task. I have read literally thousands of books in my life and rarely chortled while reading them. Athletes learn intensely in their sports and barely register a titter. Learning requires attention, focus and effort, not a good giggle. Only those who think that ‘happy sheets’ are a true indicator of learning adhere to the nonsense that all learning should be fun. Others made non sequiturs, claiming that those who disagree that all learning should be fun must think that all learning should be dull and boring. Just because I don't think that all clothes should be pink doesn't mean I believe they should all be black! It's not that motivation, some fun and the affective side of learning don't matter, just that it is pointless motivating people to embark on learning experiences if they don't actually learn. This is not a false dichotomy between fun and learning; it is the recognition that there are optimal learning strategies.
It is this obsession that led to the excesses of gamification, with its battery of Pavlovian techniques, which mostly distract from the effort needed to learn and retain. It is what has led to online learning being click-through – largely the presentation of text, graphics and video, with little in the way of effortful learning apart from multiple-choice options. This is why open-input, effortful learning tools like WildFire result in much higher levels of retention. When designers focus relentlessly on fun they, more often than not, destroy learning. There is perhaps no greater sin than presenting adults with hundreds of screens of cartoons, speech bubbles and endless clicking, in the name of ‘fun’.
A touch of humour certainly helps raise attention but learning is not stand-up comedy. In fact, we famously forget most jokes, as they don’t fit into existing knowledge schemas. Fun can be the occasional cherry on the cake but never the whole cake.
Conclusion
‘Fun’, funnily enough, is a rather sad word – naive and paltry, it diminishes and demeans learning – and I came away from this debate with a heavy heart. There's an emptiness at the heart of the learning game: a refusal to accept that we know a lot about learning, that research matters. The purveyors of fun, and those who think it's all about ‘engagement’, are serving up the sort of nonsense that creates superficial, click-through online learning. This is the dark, hollow world that lies behind the purveyors of mirth. Learning is not a circus and teachers are not clowns.

Tuesday, December 04, 2018

What one intensively-researched principle in learning is like tossing a grenade into common practice?

Research has given us one principle that is like tossing a grenade into common practice – interleaving. It’s counterintuitive and, if the research is right, basically contradicts almost everything we actually practice in learning.
The breakthrough research was Shea & Morgan (1979), who had students learn either in blocks or through randomised tasks. Randomised learning appeared to result in better long-term retention. This experiment was repeated by Simon & Bjork (2001), but this time they also asked the learners, at the end of the activities, how they thought they would perform on day two. Most thought that the blocked practice would be better for them. They were wrong. Current performance is almost always a poor indicator of later performance.
Interleaving in many contexts
Writing the same letter time after time is not as effective as mixing the letter practice up: HHHHHHHHIIIIIIIIJJJJJJJJ is not as good as HIJHIJHIJHIJHIJHIJHIJHIJ. This is also true of conceptual and verbal skills. Rohrer & Taylor (2007) showed that maths problems are better interleaved: although it feels as though blocked practice is better, interleaving was three times better! The result in this paper was so shocking that the editors of three major journals rejected it on first reading. The effect size was so great that it was hard to believe – so hard to believe that few teachers even do it.
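To make the distinction concrete, here is a short sketch – my own illustration, not code from any of the cited studies – showing that blocked, interleaved and randomised schedules contain exactly the same practice items and differ only in ordering:

import random

# Three practice schedules for the same three 'skills'. The items and
# repetition counts are arbitrary; only the ordering differs.
SKILLS = ["H", "I", "J"]
REPS = 8

def blocked(skills, reps):
    # All repetitions of one skill before moving to the next: HH..II..JJ..
    return [s for s in skills for _ in range(reps)]

def interleaved(skills, reps):
    # Cycle through the skills on every repetition: HIJHIJHIJ...
    return [s for _ in range(reps) for s in skills]

def randomised(skills, reps):
    # Shea & Morgan-style random ordering of the same items.
    schedule = blocked(skills, reps)
    random.shuffle(schedule)
    return schedule

print("Blocked:    ", "".join(blocked(SKILLS, REPS)))
print("Interleaved:", "".join(interleaved(SKILLS, REPS)))
print("Randomised: ", "".join(randomised(SKILLS, REPS)))

The point the research makes is that the second and third orderings, which feel harder at the time, produce the better long-term retention.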
Interleaving in unrelated topics
Rohrer, Dedrick & Stershic (2015) took this a stage further, using unrelated topics in maths to compare blocked with interleaved practice. Interleaving produced better performance in both the short and the long term (30 days). William Emeny, a teacher in England, showed that interleaving is actually done by many teachers, but only in the run-up to exams – which, he showed, was where most of the actual learning was taking place.
Interleaving in inferences
What about learning general skills from exposure to examples, like reading X-rays or inferring a painter's style from many paintings by specific painters? Kornell & Bjork (2008) did the painter test: 12 paintings by each of 6 artists, after which learners were shown 48 new paintings. The results showed that interleaving was twice as effective as blocked training. The effect has been replicated in the identification of butterflies, birds, objects, voices, statistics and other domains. Once again, learners were asked what sort of instruction they thought was best. They got it wrong. In young children (3-year-olds), Vlach et al. (2008) showed that learning interleaved with play produced better performance.
So why does interleaving work?
Interleaving works because you are highlighting the ‘differences’ between things, and those relationships matter in your own mind. Blocking feels more fluent, while interleaving feels confusing, yet it is interleaving that sharpens comparisons. Another problem is that learners get years and years of blocking in school. They are actually taught bad habits, and that prevents new, fresh habits from forming or even being tried.
Conclusion
This is a strange thing. Interleaving, as opposed to blocked learning, feels wrong – disjointed, almost chaotic – yet it is much more effective. It flies in the face of our intuitions, yet it is significantly more efficient as a learning strategy. How often do we see interleaving in classrooms, homework or online learning? Hardly ever. More worryingly, we're so obsessed with ‘student’ evaluations and perceptions that we can't see the wood for the trees. We demand student engagement, not learning, and encourage the idea that learning is easy when it is not. When it comes to teaching, we're slow learners.