Sunday, October 31, 2021

HyFlex, Hybrid, Fusion or Blended learning - lots of names, few know what it is....

Lots of terms are flying around, coined by arrivistes, for what has been discussed for decades - blended learning. We now have HyFlex, Hybrid or Fusion Learning. Who cares? The problem is that few know what it is - most fall into simple dualisms.


Don't get me wrong, I agree with this approach and have designed many such blends, even a mathematical tool that determines optimal blends. It has the promise to shake us out of the ‘classroom/lecture-obsessed’ straitjacket into a fully developed, new paradigm, where online, social, informal and many other forms of learning could be considered and implemented. This needs an analytic approach to developing and designing blended learning solutions. So what happened?

1. Muddled by metaphor
It all got muddled by metaphor. Blended learning started to fail when it got bogged down by banal metaphors. I've heard them all - fusion, hyflex, hybrid, blended... I've heard it described as cocktails and alloys. Within the ‘food metaphor’ we got courses, recipes, buffet learning, tapas learning, smorgasbord, fast food versus gourmet. The problem with metaphor-driven blended learning is who is to say that your metaphor is any better than mine? I’ve even seen the 'fruit blender' metaphor, trying to explain the concept in terms of a fruit smoothie! Let me put forward my own food metaphor. What do you get when you blend things in a metaphoric mixer, without due care and attention to needs, taste and palate? Blended baloney. That is often what we get with models as metaphors - dull, tasteless sausage meat. Blended LEARNING is not a metaphor.

2. Blended bandage
Blended learning (whatever you want to call it) was really just the learning world coping with the onslaught of new ways of teaching and learning. The more recent terms Hyflex, Hybrid and Fusion were the learning world coping with the onslaught of Covid. It is an adaptive response to what is happening to the learning world as the real world changes around it. By real world I don't just mean Covid, I mean changes in attitudes, learner expectations, demographics, politics, but above all massive and rapid change in technology. Blended learning, as a concept, allowed the system to absorb all of this at a sensible pace, as it was a useful bridge between the new and the old. However, seeing it as some sort of bandage or compromise can quickly disable the idea, as it can lead not to fresh thinking but to a defence of the old with a few new, adjunct ideas added on.

3. Blended learning is not blended TEACHING
Blended Learning also turned out to be the very opposite of Blended Learning theory, namely Blended TEACHING. Teachers/lecturers/trainers simply sliced and diced existing ‘teaching’ practices and added a few online extras. Attempts at defining, describing and prescribing blended learning were crude, involving the usual suspects (lectures/classroom plus e-learning). It merely regurgitated existing 'teaching' methods. Blended LEARNING is not Blended TEACHING.

4. Velcro learning
Dozens of definitions of blended learning then float around, most of them muddle-headed, as they are simple delivery dualisms:

   Blend of classroom and e-learning
   Blend of face-to-face and e-learning

This ‘velcro’ approach to blended learning simply took the old classroom paradigm and added an online dimension. It was an attempt to simply use the definition to carry on doing what you did before with some extras. The problem with a definition that fixes a delivery mechanism in advance of the blended design e.g. classroom or ‘f2f’ is that you’ve already given up on rational design. We see this in the Zoom + model rapidly adopted during Covid.

5. Broad dualisms
A slightly better approach was to broadly define the world of learning into two inclusive categories:

   Blend of online and offline
   Blend of synchronous and asynchronous
   Blend of formal and informal

The problem with these definitions is that they are looser but still posit broad components that may not be needed in an optimal blend. These definitions are simply too general, in that they merely divide the universe into two sets. However, the real issue with all of these definitions is that they are really definitions of blended INSTRUCTION not blended learning. We need to look at the concept from a broader learning perspective, with definitions that rise above ‘instruction’ to concepts that encompass context.

6. Flipped classroom
This is just one species of blended learning and a rather simplistic version. Again, however, the focus is on blended ‘teaching’ not ‘learning’. It’s yet another fixed dualistic formula. The concept is primarily about switching the focus of teaching away from exposition towards more Socratic f2f methods. It served a purpose in proposing a radical rethink but still fits the old lecture/classroom/f2f v online dualistic mindset.

7. 70:20:10
This is a more sophisticated version of blended learning in that it emerges from theory and studies that show how people actually learn in practice, as opposed to supply-side models of teaching. Around 70% of learning comes from experience, experiment and reflection, 20% from working with others and 10% from planned learning solutions and reading. It’s common in organizational learning and is proposed and explained in superb detail in ‘702010 Towards 100% Performance’ by Arets, Jennings and Heijnen. Now we’re getting there, but again these percentages apply more to workplace learning than education. It’s a great shift away from traditional, flawed mindsets about how people learn but needs further work to be useful across the entire learning landscape. Blended learning has certainly taken root but it has no defined shape, theory, methodology or best practice. You can call anything a blended solution.

8. Sophisticated
All of the above are either metaphors, simplistic dualisms, or subsets of blended learning. Don't mistake the phrase for an analytic theory. Blended learning is so often used as a platitude. It is an old mindset that smothers the idea before it has had the chance to breathe. What happened to analysis? Blended learning abandoned careful thought and analysis, the consideration of the very many methods of learning delivery, sensitivity to context and culture, and a matching to resources and budget. It also needs to include scalability, updatability and several other variables. What it led to were primitive, dualistic 'classroom and e-learning' mixes. It never got beyond vague 'velcro' models, where bits and bobs were stuck together (now that's a metaphor). You need to work towards an 'optimal' blend.

9. Analytic
Truly analytic Blended Learning is not a back of an envelope exercise. It needs a careful analytic process, where the learners, type of learning, organisational culture and available resources need to be matched with the methods of delivery. It has INPUTS, decision making and OUTPUTS. Until we see 'Blended learning' as a sophisticated analytic process for determining optimal blends, we'll be stuck in this vague, qualitative world, where the phrase is just an excuse for old practices. Your blend may have no lecture or no classroom components. It may have no online components. But most will be an optimal blend where good teaching and learning theory is applied, alongside analysis of what needs to be taught, who you are teaching and the resources for delivery. We have designed a tool that does precisely this.
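As a toy illustration of that INPUTS, decision making and OUTPUTS process (not the tool mentioned above - the criteria, weights and method attributes here are purely illustrative assumptions), a blend can be chosen by scoring candidate delivery methods against the analysed needs:

```python
# Toy sketch of an analytic blending process: score candidate delivery
# methods against weighted needs derived from analysis, then output the
# best-fitting subset as the 'optimal' blend. All numbers are invented.

def score_method(method, needs):
    """Weighted match between one delivery method and the analysed needs."""
    return sum(method[criterion] * weight for criterion, weight in needs.items())

def optimal_blend(methods, needs, top_n=3):
    """Rank methods by fit to the needs; return the names of the top few."""
    ranked = sorted(methods, key=lambda m: score_method(m, needs), reverse=True)
    return [m["name"] for m in ranked[:top_n]]

methods = [
    {"name": "classroom",           "scalability": 0.2, "cost_efficiency": 0.3, "social": 0.9},
    {"name": "e-learning",          "scalability": 0.9, "cost_efficiency": 0.8, "social": 0.2},
    {"name": "coaching",            "scalability": 0.1, "cost_efficiency": 0.2, "social": 0.8},
    {"name": "performance_support", "scalability": 0.8, "cost_efficiency": 0.9, "social": 0.1},
]

# Analysis OUTPUT as weights: a large, distributed, budget-constrained audience.
needs = {"scalability": 0.5, "cost_efficiency": 0.4, "social": 0.1}

print(optimal_blend(methods, needs, top_n=2))  # -> ['e-learning', 'performance_support']
```

The point of the sketch is only that the blend falls out of the analysis; nothing about classrooms or lectures is assumed in advance.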

10. ’Veil of ignorance’
In practice, to do blended learning, one has to apply what is called the ’veil of ignorance’, an idea that goes back to Kant, Locke, Rousseau and, more recently, John Rawls. You have to go through a thought experiment and imagine your course, workshop, whatever, as having NO pre-set components. Now do some detailed analysis on what type of outcome you want from this in terms of your ‘learning’. Only then, having rid yourself of personal preconceptions and institutional forms of delivery, can you really start to rebuild your course/learning experience. So you start with an analysis of the learning and learners, then take into consideration your resources envelope, with a full cost analysis. Also include long-term sustainability issues such as updatability and maintenance. To construct a blended learning experience you have to deconstruct your natural bias to do what you or your institution have always done and reconstruct the learning experience from scratch.

Tuesday, October 26, 2021

Learning Experience Design

My last book was ‘AI for learning’ in which I explained the massive role that AI has already played in learning and how it will shape the learning landscape of the future. In that book I explained how learning experience designers will have to upskill to deal with the new world of AI, data, learning analytics and the complexities of new AI interfaces such as voice, AI mediated and adaptive content. The technology is about to become much smarter and much more complex. Learning designers will have to understand how AI will shape interfaces and content.

This new book, Learning Experience Design, sees learning design as grounded in learning theory and evidence, so that appropriate experiences are selected, then professionally designed. This moves us from the old to the new, seeing experiences as more than just flat pieces of media but a whole world of learning experiences that motivate and result in lasting change to long-term memory. 

The book is about the sheer range of possible learning experiences, as well as what media and learning theory lies behind their use. 

Chapter 1 looks at what Learning eXperience Design is through each of those three words – ‘Learning’, ‘eXperience’ and ‘Design’.

Chapter 2 looks at who LXDs are, where they come from and what they do. It digs into the design process and the practical challenges LXDs have to face.

Chapter 3 covers the learning theory behind LXD and looks at emotion, attention and motivation, what you need to know to design learning experiences. Not all experiences are optimal learning experiences and without a knowledge of the science of learning, it is too easy to design the illusion of learning.

Chapters 4 to 7 deal with interfaces, text, graphics, audio, video and animation. Learning Experience Design needs knowledge of the media and technology through which learning takes place.

Chapter 8 shifts gear into engagement, questions and feedback. Good learning experiences need to push into effortful learning.

Chapters 9 to 12 takes effortful learning to another level through scenarios, simulations, AR, VR, games, gamification and social learning.

Chapters 13 and 14 bring everything together through practice, transfer, workflow learning, curation and data.

It has a ton of learning experiences, based on evidence-based research and practice, as well as lots of DOs and DON'Ts.

Available here on Amazon.

Also some recent podcasts...

The Curious Case of Benjamin Bloom! bit.ly/3FEdYk8
Pragmatists & practice bit.ly/3aAK5mk
Behaviourists bit.ly/3iUnINE
Cognitivists bit.ly/3oUxOBK

7 ways AI & Data are transforming learning

Tesla passed $1 trillion market cap today so it is now worth more than Pfizer, AstraZeneca, GSK, ExxonMobil, BP, and IBM combined. The only companies now worth more than Tesla are Apple, Microsoft, Google and Amazon. Their common denominator is that their underlying tech is now AI. Europe is falling behind, as we'd rather regulate than innovate.

Those who claim to ‘know’ where AI is going, and how fast, are being constantly challenged. 

So where is it going in learning? Well, the main area of focus is NLP (Natural Language Processing). AI is moving fast on several fronts here.

Data

Tesla has what seems to be an outrageous valuation. Yet what is being valued is not traditional car production, it is the driving data it harvests and the promise of a world where the very concept of vehicles and transport will be transformed. This will happen in learning. The data we gather will feed into optimising future learning experiences, as processes not events. This is why AI, or rather AI that uses data, will shape the future landscape of learning. Data will lie at the heart of all learning experiences. I explain this in my new book ‘Learning Experience Design’.

AI is the new UI

I’ve written about this in ‘AI for Learning’ and ‘Learning Experience Design’, the reshaping of UX as almost all interfaces are now mediated by AI - all social media, Netflix, Amazon, Google, YouTube - almost everything you do online. This is now happening in learning through LXP systems. In addition, voice interfaces are now in smartphones and on devices in cars and homes. It is getting better, faster and is scalable. AI is changing our whole relationship with technology, making it more human.

AI personalises

We know that personalised learning gives really significant advantages to large numbers of learners. We’d all love to have one-on-one teaching but that was never economically possible. It is now. Adaptive and personalised learning, enabled by AI, is now here at all levels in learning. CogBooks, a company I helped build has just been sold to Cambridge University Online and will power its online learning. LXPs, such as Learning Pool’s Stream, something I’ve been involved in, will deliver personalised learning to employees in the workplace and workflow.

AI teaches

Teaching largely addresses deficits in motivation and effort; learning is largely achieved by oneself. It took me a long time to truly understand this. AI can create sense-making experiences for learners. The problem with traditional online learning is that it was essentially the presentation of content. It never really did what a good teacher does, which is to create the opportunities for learning, then allow and support you to make the effort to learn. AI enables both. We do this in WildFire.

AI learns

We used to have teachers and learners. Now we have teachers, learners and technology that also learns. Tesla learns as it aggregates driving data and uses that data to improve performance. The more we use Google the better it gets. The more we use personalised and adaptive learning the better it becomes for future students. We are no longer stuck on a plateau of human performance but on an upward trajectory of performance, making learning better, faster and cheaper.

Transformers

Transformers, such as GPT-3, are already useful in learning. We’ve been using them in WildFire for summarisation, content creation and question generation. This software is so powerful that just learning how to ask it questions, or to do things for you, needs a new skillset - it is called ‘prompting’. These AI models have been trained on unimaginably large data sets. They have so much data in their training set that they, at times, transcend the ability of humans to create prose. They are now also entering the world of audio, images and video. They will literally be transformative.
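To make ‘prompting’ concrete, here is a minimal sketch of the kind of prompt assembly involved in question generation. The `complete()` function is a hypothetical stand-in for a real model API call, and the prompt wording is an illustrative assumption, not WildFire's actual implementation:

```python
# Minimal sketch of prompting a large language model for question
# generation. Everything here is illustrative: the prompt wording is an
# assumption and complete() is a stub, not a real API.

def build_question_prompt(passage, n=3):
    """Assemble a prompt asking a model to write recall questions."""
    return (
        f"Read the following passage:\n\n{passage}\n\n"
        f"Write {n} short questions that test recall of its key points."
    )

def complete(prompt):
    """Hypothetical stand-in for a real large-language-model call."""
    return "1. ...\n2. ...\n3. ..."

prompt = build_question_prompt("Pressey built the first teaching machine in the 1920s.")
print(complete(prompt))
```

The skill of prompting is largely in the first function: small changes to the instructions can produce very different model output.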

Edge AI

The processing and application of AI on the ‘edge’, on devices, has really arrived. Look at the new Pixel 6 mobile phone to see how AI is being delivered via chips in devices such as phones. It has a Tensor AI chip on-board, so it translates, transcribes and does speech recognition blazingly fast. It can also erase unwanted objects in photos. These are seriously difficult tasks that require localised processing.

Conclusion

We can wallow in existing practices and technology and see modest but not substantial change in the efficacy and cost of learning. Or we can accept that the future is one where data, and what we do with that data, determines upward progress. A future that uses AI and data to create learning experiences as processes not events, improve interfaces, personalise, teach and support learning. All of this is possible wherever, whenever and for whoever. Technology, specifically AI and data, is finally delivering what we used to call Lifelong Learning.


Monday, October 18, 2021

THE GREAT BITCON



I got the Blockchain and Bitcoin ‘bug’ around 2015, gave talks on the subject, even got married (should I say remarried) on Blockchain, paid for in Bitcoin. Then, volte face, I became a sceptic real quick and over the last few years, I’ve seen it get worse - huge projects trying to solve problems that already have adequate solutions or problems that don’t actually exist and now this period of speculative mania.

The Great Bitcon is a financial mirage, a shark that needs dirty energy and constant liquidity to stay afloat. That gold coin image is a complete con. As Nassim Taleb says, its worth is “exactly zero”. There is no common good here; in fact it is a dangerous, damaging piece of energy-hungry speculation… and unsustainable.

As if to confirm the absurdity of Bitcoin as a currency, there’s been an experiment, on a real country - El Salvador! Pick a small, poor, economically stagnant Central American country, with a horrific homicide rate, run by two massive drug gangs (MS13 and Barrio18), a major land-route for drugs into the US - and give it a highly volatile virtual currency. San Miguel is the money laundering capital of Central America - hotels, nightclubs, car dealerships, right down to the hardware stores. What could possibly go wrong?

Former PR man, turned dictator, Bukele, is what has gone wrong. Despite having brought inflation under control by pegging the ‘colon’ (you couldn’t make it up) to the dollar, he decided he wanted to be Mr Cool. It’s a stunt. He has sacked the judiciary, put his henchmen in place, limited the power of the opposition and, in true Latin American dictator style, scrapped the limited term law for Presidents. A few weeks ago he gave everyone $30 in Bitcoin. You don’t solve the problem of poverty by foisting a volatile asset on poor people. Not that poor people get the money anyway.

So, how’s it going? It dropped 20% in the first day and is already trickling upwards into the hands of the rich and the gangs, with whom the President did a deal. Widespread reports of fraud in the wallet system have left people bewildered. Few businesses are accepting the currency and 15,000 took to the streets demanding it be stopped. They even destroyed a Bitcoin ATM. About half of Salvadorans have no internet access; sure, many have phones, but the old and poor often lack the skills to use this stuff. It’s a process of exclusion not inclusion. I repeat, the last thing we want to do for poor people is get them involved in a highly volatile asset, when what they need is stability.

The great con works because there’s something in crypto for everyone, from the idle speculator to every species of ideologue. For Libertarians - no authority in control. For the Left - no corporates and banks in control. For the Right - no state in control. Why? Because no one is in control. That’s the real problem. It is, quite simply, speculation or, to use an old Marxist expression, the purest separation of capital from production the world has ever seen, and it’s seen a few. A pure expression of greed that preys on those who can least afford to lose - the young, women and ethnic minorities. The FCA rightly issued a warning this year: non-trivial downside risks.

I get that people are concerned, especially during Covid, and it is that uncertainty that drives the speculation, while volatility fuels it further. But spare me the duplicity of doing good - it’s just plain bad. You always hear about winnings, never losses. What’s happened is that the whales, VCs, hedge-funds and billionaires have stepped in. It’s the 1%, folks. The people who control it are the people with lots of it.

Jackson Palmer, founder of Dogecoin, says crypto has “evolved to incorporate many of the same institutions tied to the existing centralized financial system they supposedly set out to replace. All the while shoving cash and profits back up the funnel towards the rich, not the bankless & the poor. Crypto avoids audits, regulation, taxation, all the protections that are there to protect citizens”. It sucks capital in like a black hole - indeed it has to, to sustain its existence - but without contributing to real economic productivity.

That’s not to mention several other bad actors; tax evaders, money launderers, sanction busters, ransomware gangs, kidnappers and scammers. Forget your password - hard luck. Get scammed - hard luck.

Then there's the fatal objection. Right in the middle of a serious, global energy crisis, where Lebanon quite simply went dark and where fuel poverty will hit hundreds of millions, cryptocurrencies are doubling their energy requirements, now equivalent to the energy use of Poland. Talk about swimming against the tide. It’s not just the cost of mining, or the cost per transaction, but the waste. The problem is systemic; it is in the model of verification, in the maths. It is quite simply energy intensive. It’s also an emissions disaster, as its primary energy source is fossil fuels. When China banned Bitcoin mining and trading, the miners fled to other countries.

Why Kazakhstan? Low financial regulation, only 6% renewables and tons of dirty coal. Why Mongolia? Low regulation and tons of dirty coal. Why TEXAS? Cheap electricity. Why is it cheap? An independent grid, deregulated, old infrastructure, low investment - so bad that they had severe outages in February of this year; it is estimated that up to 700 people may have died and 4.5m homes and businesses had no power.

Why did China do what it did? Climate change targets. It came on the back of huge internal Chinese energy outages, where factories were shut down. China has a target of being carbon neutral by 2060, and I can see crypto being regulated and banned through climate change blowback alone. Oh, and the chips for mining are almost all made in Taiwan.

No one hears you when you scream with the pain of your hacked losses or lost password in cryptoland. No one comes to your rescue in an unregulated suprastate environment.

My fear is that crypto is ‘doomed to succeed’ to take us to dark, ugly places we can’t get back from. I said at the start I had gone from zealot to sceptic. I will keep an open mind but I am convinced that cryptocurrencies are purely speculative, not a viable set of currencies, destabilising, increasing inequalities, energy intensive and therefore on an unsustainable path, both politically and in terms of climate change.


Since when did Christmas become the celebration of successful supply-chains?

We believe, like children, that there really is a Santa Claus. Santa has become the just-in-time delivery point at the end of a global network. In truth it is just-in-time manufacturing, with no resilience or just-in-case. The Reindeer are shiploads of containers on polluting ships that radiate out from ports to shops and Amazon centres, via low paid lorry and van drivers. Santa is a shitty piece of logistics software.
We have fallen into buying endless amounts of consumerist crap from China and as that economy stagnates, we blame everyone but ourselves... so when Santa goes ‘Ho Ho Ho’ he’s laughing at our downright stupidity. Turns out we’re the turkeys voting for our own Christmas extinction, going out in a blaze of neon lights next to our personal mountain of landfill.
In my lifetime I’ve seen Christmas explode into a heap of glittery crap. Houses lit up like Las Vegas Casinos. Christmas trees, often plastic, laden with more baubles per branch than leaves. Chocolates that come, not in boxes but enormous buckets. People really want all their Christmases to come all at once, in the form of a skipload of junk for every man, woman and child.
When Jesus was born there was no room at the Inn, no doubt because workers in hospitality were in short supply, the Three Kings brought single natural presents, not cartloads of tawdry rubbish, so let’s get back to dreaming, not of a White, but Green Christmas.

Sunday, October 10, 2021

Mosher and Gottfredson - 5 moments of learning need

Bob Mosher is the Chief Learning Evangelist at APPLY Synergies and has been an active and influential leader in the learning and training industry for over 30 years. He is renowned worldwide for his pioneering role in new approaches to learning. Dr. Conrad Gottfredson is a founding partner and the Chief Learning Strategist at APPLY Synergies, a 5 Moments of Need company that specialises in helping learning professionals design, develop, maintain, and measure effective learning and performance support. Gottfredson and Mosher have given learning in the workflow some needed focus and definition.

Performance support

Gottfredson and Mosher build on the fact that most learning takes place on the job and not on training courses. In Innovative Performance Support (2011) they recommended a whole raft of techniques, tools and tech which can be used to implement performance support. Their argument is that it reduces spend on formal training, while at the same time increasing performance and productivity. Time to improved performance is shortened, cognitive load is managed and transfer improved. A positive side effect is that expensive internal support and help-desks can also be reduced. They claim a performance-first approach can reduce time to competence by half. It also makes people feel better in their jobs, and that helps retention.

Workflow learning

A training mindset is about building an ‘instructional’ system, mainly courses; being an instructional ‘order taker’ determines what’s on your menu - time-based courses. The shift they recommend is to move away from this service mentality to being a partner that helps learning and development solve problems.

People don’t need general principles or courses on printing, they need to know ‘how to’ fix the printer problem they have at that moment. So you have to identify the actual workflow, to get to the authentic performance needs, then design for those real processes and support, with an ascending range of options available to the learner.

For Mosher and Gottfredson transfer is also important, not by throwing them over the fence at the end of a course but integrating what you’ve learnt into the knowledge and work you do.

Five Moments of Need

Jumping straight into an analysis that extracts knowledge from SMEs is a mistake, as it leads to over-formal courses that deliver too much, in courses, at the wrong time, in the wrong place. One should design learning around these needs first with some detailed analysis of the critical tasks involved in those needs and focus on what they need to know.

Learning should meet these needs and deliver to the right people, at the right time, in the right context. In organisations this means in the workflow, at the point-of-need. Gottfredson and Mosher's famous five moments of learner need are:

  1. New - learning for first time

  2. More - wanting to learn more

  3. Apply - trying to remember and apply

  4. Solve - something goes wrong

  5. Change - something changes

Most first think of new and more but apply, solve and change tend to be more common. This is where delivery must be orchestrated, as it also needs to be a combination of pull and push. 

Digital coach

Failure really matters in work, for the individual and the organisation. Learning in the flow of work means learning from those hesitancies, failures and mistakes. A Digital Coach, or EPSS (Electronic Performance Support System), allows the learner to take the relevant steps to overcome failure as they do their work. A workflow map unpacks context and the Digital Coach supplies the resources.

As all resources are not created equal, there is a hierarchy of support, from the simple to complex. One must always look towards delivering the minimal amount of support to reach your given goal.

At its simplest there’s the 2-click, 10-second access to support in response to the five moments of need. Then there’s steps support (quick and detailed). This is followed by supporting knowledge: documents, policies, procedures, job aids, FAQs, articles and so on.

Only if these resources have been exhausted do you move to real-time learning such as e-learning. And if all else has been tried - it’s people: email, chats, social networks, communities of practice.
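The escalation logic above can be sketched as a simple ladder that always tries the cheapest, fastest tier first. The tier names follow the text, but the data structures and matching rule are illustrative assumptions, not part of the 5 Moments of Need methodology:

```python
# Illustrative sketch of a Digital Coach escalation ladder: deliver the
# minimal amount of support first, escalate only when a tier fails to
# resolve the need. The ordering follows the text; the logic is invented.

SUPPORT_TIERS = [
    "2-click/10-second answer",
    "step-by-step support",
    "supporting knowledge (docs, FAQs, job aids)",
    "e-learning",
    "people (email, chat, communities of practice)",
]

def resolve(need, resolved_by):
    """Walk the tiers in order; return the first tier that resolves the need.

    resolved_by maps each need to the set of tiers able to resolve it."""
    for tier in SUPPORT_TIERS:
        if tier in resolved_by.get(need, []):
            return tier
    return None

# A printer jam that only a step-by-step guide resolves:
print(resolve("printer jam", {"printer jam": ["step-by-step support"]}))
```

The design point is the ordering itself: people, the most expensive resource, sit at the bottom of the ladder, not the top.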



Influence

Gottfredson and Mosher’s five moments of need have been used to underpin the development of performance support technology, the sort of technology that Gery envisioned. More than that, their precise identification of the needs of real learners in the workflow have been fundamental in helping shape Learning Experience Platforms (LXPs) that push and pull learning in the workflow. Above all they have pushed learning and development into waking up to the challenge that learning is a process, that for most, takes place in the context of work, by doing. This shift in mindset was given some concrete recommendations in terms of implementing solutions to real needs.

Bibliography

Gottfredson, C. and Mosher, B., 2021. The 5 Moments of Need | A Performance-First Approach.  

Gottfredson, C. and Mosher, B., 2011. Innovative performance support: Strategies and practices for learning in the workflow. McGraw Hill Professional.

Monday, October 04, 2021

Pressey (1888-1979) - first teaching machine

Sidney Pressey was the first to design and create a Teaching Machine, nearly 40 years before Skinner, which presented content, took input from learners and provided feedback. He saw technology as offering an ‘industrial revolution’ in learning, allowing some tasks to be automated, reducing the burden on teachers.

Learning theory

He saw himself as an early cognitivist psychologist, decades before it replaced behaviourism as the dominant school in psychology and saw learning not in terms of simple reinforcement but a more complex process involving internal, cognitive features of the brain such as language, thought, reflection and writing. He refused to accept learning theory based on the reductionist behaviourism of animal psychologists such as Pavlov, the behaviourist evangelist Watson or Skinner, who he knew personally, and had little time for learning theory that excluded consciousness, language and mental phenomena. His teaching machines reflected this learning theory.

Teaching machines

Although there were Victorian precedents, the true origin of the teaching machine was the relatively unknown figure of Pressey, who came up with his idea in 1915. He had to shelve the idea, as the First World War intervened, until he finally filed a patent in 1926. This was the first known machine to deliver content, accept input and deliver feedback. He is therefore the true originator of the teaching machine.

His first machine used old typewriter parts to present multiple-choice questions with four options. The learner pressed a key for the right answer and the results were stored on a counter. It had the three necessary conditions for a teaching machine, the presentation of content, input by users and feedback. 

His second machine had the innovation of not moving on until you got the right answer and he continued to innovate with teaching machines into the late 1950s. Pressey understood that such machines could be used for both teaching and testing. You could set the machine, using a simple lever, to only move on if the learner got the right answer or alternatively assess by recording all of their answers, right and wrong. 

Using the second machine was easy: the learner simply pressed one of five keys (1-5). It had a small window that showed the number of questions asked and a window on the side showing the number of questions they got correct. In teaching mode the learner had to continue until the correct answer was chosen and the next question appeared. The question’s number did not change until it was answered correctly, and the window on the side showed the number of tries. He argued that this was quick, gave immediate results so that the learner didn’t have to wait days for results, and saved the teacher from the drudgery of marking, also eliminating marking errors. He also argued that this could free teachers to teach in a more inspirational manner. The learner could also repeat the experience until they achieved full mastery. You could reset for the next student or the next test in seconds, and it could cope with up to 100 questions. These arguments are sound. An interesting attachment to the main machine delivered a candy if you passed a threshold number of correct answers (the threshold could be changed on the machine via a dial). All for under $15. Unfortunately, his timing was bad and the Great Depression put an end to his dream of manufacturing and popularising individualised learning.
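The teaching-mode behaviour described above - no advance until the correct key is pressed, with tries counted - can be sketched as a minimal simulation. The function and data shapes here are illustrative assumptions, not a description of the actual hardware:

```python
# Minimal simulation of Pressey's second machine in 'teaching mode': the
# question does not advance until the correct key (1-5) is pressed, and
# the machine tallies total tries and first-time-correct answers.

def teaching_mode(questions, key_presses):
    """questions: list of correct keys (1-5) in order.
    key_presses: the learner's key presses, in order.
    Returns (total_tries, first_time_correct)."""
    presses = iter(key_presses)
    tries, first_time = 0, 0
    for correct_key in questions:
        attempts = 0
        while True:
            attempts += 1
            tries += 1
            if next(presses) == correct_key:
                break  # advance to the next question only on a correct key
        if attempts == 1:
            first_time += 1
    return tries, first_time

# Two questions: the learner gets Q1 right first time, needs 3 tries on Q2.
print(teaching_mode([2, 4], [2, 1, 3, 4]))  # -> (4, 1)
```

Flipping the machine's lever into testing mode would simply record every press, right or wrong, instead of blocking until the correct one.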

Learning theories

Pressey had very specific views on learning theory, leaning more towards cognitive psychology than pure behaviourism. Errors, or the correction of misconceptions, were for him fundamental to learning, hence his fondness for multiple choice questions, where most of the options are wrong answers. He saw learning as a complex process where relatively stable cognitive structures had to be created. This had to be achieved through the analysis of errors, along with individualisation, diagnosis and feedback. Learning, for Pressey, was not a form of reinforcement, as with animals, but involved uniquely human mediation through language: speaking, listening, reading and writing. It was a deeply cognitive process. He was the antithesis of Skinner, whose teaching machine was designed around positive reinforcement; hence Skinner avoided multiple choice questions, where the wrong answers (negative stimuli) outnumber the right answer and are shown to the student before they have to think. Skinner saw this as weak learning and didn’t buy the idea that the study of wrong answers was anything but a distraction and, more seriously, a seeding of confusion in terms of what was learned.

Blended Learning

He even formulated an early theory of Blended Learning, which he called, rather clumsily, ‘Adjunct Autoinstruction’. This involved the combination of programmed learning through technology and human teaching. He never saw his Teaching Machines as replacing teachers but as merely adjunct ways of extending teaching and testing. Indeed, the whole point was to free teachers from the mundane tasks of basic learning and marking.

Influence

Pressey was convinced that education had to be reformed and called for an ‘industrial revolution’ in learning, based on the use of technology. He suffered a breakdown when his devices failed to sell and felt that the education system was closed to innovation. Skinner went on to build his own versions of mechanical Teaching Machines, but by the 1960s mechanical teaching machines had had their day. As mechanical devices they were clunky and relied on discs, barrels, levers and buttons - all hardware and no software. They had little real effect on learning technology in the long term, other than as objects of obscure interest to commentators.

Bibliography

Pressey S.L. 1933. Psychology and the new education. Harper.

Pressey S.L. & Janney J.E. 1937. Casebook of research in education. Harper.

Pressey S.L; Janney J.E. & Kuhlen R.G. 1939. Life: a psychological survey. Harper.

Pressey S.L; Robinson F.P & Horrocks J.E. 1959. Psychology in education. Harper.

Benjamin, L.T., 1988. A history of teaching machines. American Psychologist, 43(9), pp.703-712.

Mellan, I., 1936. Teaching and educational inventions. The Journal of Experimental Education, 4(3), pp.291-300.

Petrina, S., 2004. Sidney Pressey and the automation of education, 1924-1934. Technology and Culture, 45(2), pp.305-330.

Ferster, B., 2014. Teaching machines: Learning from the intersection of education and technology. JHU Press.