Sunday, January 17, 2016

Deep learning just got deeper – writing on blackboard for some forms of teaching?

'I think there is a world market for maybe five computers', a quote Thomas Watson of IBM NEVER actually said. It's just one of many made-up and misattributed quotes (most are pinned on Einstein) which pepper slides at education and tech conferences. But in a weird sort of way this often-mocked quote (oh how we laugh) is turning out to be true. The only players with the computing power to solve the big problems may just be Google, Microsoft, Facebook, Amazon and IBM. They bring services to the cloud, power on tap, making AI a utility, like electricity. Nicholas Carr wrote about this in The Big Switch, but underestimated the ultimate reach of such cloud services.
Deep Learning
We’re in the Age of Algorithms. They find things for you on Google, stop porn appearing on Twitter, protect your savings and online transactions, filter out spam, and allow you to store and share files. The world of learning is not immune: there are five levels at which AI currently operates. But it is Deep Learning by software that is sprinting ahead at the moment.
Microsoft – image recognition
Visual recognition is an interesting, though only one, case of deep learning. Only last month Microsoft wiped the floor with the competition with their image recognition system. The point, of course, is not to mimic the human eye but to produce perceptual apparatus that is better – higher fidelity, a wider range of the electromagnetic spectrum and so on. It’s really the cognitive recognition of images that matters – that’s the hard bit.
It’s best to see neural networks, not in terms of the meat brain, but in terms of layers of algorithmic maths. As these layers get deeper and more complex they can handle more complex tasks with higher degrees of success. The problem with depth has been a law of diminishing returns: a success in one layer gets diminished as it passes through subsequent layers. The trick is to ‘preserve’ success by carrying it forward on a conditional basis, passing it only to 'relevant' layers. Microsoft has pushed this to over 150 layers.
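A minimal sketch of that 'carry success forward' idea, known in the research literature as a residual (skip) connection, is below. It is my own NumPy illustration, not Microsoft's actual system: each block adds its learned correction back onto its input, so the signal survives a very deep stack of layers.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, W1, W2):
    """One residual block: the block's output is added back to its own input,
    so information is carried forward even if the layer learns little."""
    h = relu(x @ W1)          # transformation within the block
    return relu(x + h @ W2)   # skip connection: the input is preserved and added back

# Toy forward pass through a deep stack of residual blocks
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 64))
for _ in range(150):          # depth comparable to the 150+ layers mentioned above
    W1 = rng.normal(scale=0.01, size=(64, 64))
    W2 = rng.normal(scale=0.01, size=(64, 64))
    x = residual_block(x, W1, W2)
print(x.shape)                # (1, 64): the signal survives 150 blocks
```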
Given the increase in speed and reduction in cost of processing power, deep learning researchers also run many models and allow the software to learn through many iterations. Raw experimentation then produces optimised solutions. The resources needed to do this well are mind-blowing, with all but a few heavyweights excluded. The winners are likely to be those who have the deep pockets and deep commitment to succeed - these are the big tech companies.
AI passes University entrance exams
I first heard about this from Professor Toby Walsh in Berlin, who stated that in November 2015 an AI programme had passed the entrance exam for Tokyo University, which includes maths, physics, English and history. This was the Todai Robot Project. Remarkably, it scored well above the national human average (53.8% against 43.8%), with its highest marks in maths and history. The point, of course, is NOT to get a piece of software or robot into a top university. It is to act as the basis for research into the development of machine intelligence to solve problems.
AI predicts student performance (85%)
Other researchers, such as Chris Piech’s team at Stanford and Google, have developed AI that does detailed analysis of student performance as the student learns and predicts how they will perform on subsequent problems. Their approach used 1.4 million student answers to maths problems posed by the Khan Academy. As the internet and global education projects, such as Khan and MOOCs, throw off huge amounts of data, we are now in a position to exploit AI (a neural network) to make predictions on the basis of an enormous amount of real human data. We can, in a sense, bypass traditional cognitive psychology and use large data sets, in conjunction with smart sets of algorithms, to diagnose what students are likely to get right or wrong. More than this, it can tell what went wrong and why. The accuracy presently stands at around 85%. This has obvious applications in terms of doing what a teacher can do, assessing and predicting performance, only better.
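The Stanford work used a recurrent neural network over sequences of answers ('deep knowledge tracing'). As a much simpler, purely illustrative sketch of the prediction task, the toy model below estimates the probability that a student answers the next item correctly from counts of their previous attempts and successes; the data and the logistic model are my assumptions, far cruder than the published system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative features per attempt: (prior attempts on this skill, prior successes),
# and a label: 1 if the student's next attempt on the skill was correct. Entirely synthetic data.
X = np.array([[0, 0], [1, 1], [2, 1], [3, 2], [4, 4], [5, 3], [6, 6], [7, 5]], float)
y = np.array([0, 1, 0, 1, 1, 1, 1, 1], float)

# Fit a tiny logistic regression by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Predicted chance that a student with 8 attempts and 6 successes gets the next item right
print(round(sigmoid(np.array([8, 6]) @ w + b), 2))
```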
What’s the point

Why are these three successes, and many more, all so interesting? Well, image recognition (and speech and other forms of data) has already revolutionized search and fraud detection, and can be used in online assessment to authenticate students for online exams. Adaptive learning systems present personalized learning to each and every student, according to their measured progress. This gets away from the obvious faults in one-size-fits-all, linear curricula and teaching. It also allows the system to track each and every student to a degree that is impractical for real teachers. This one-to-one diagnosis works in all sorts of other areas of online activity, such as Google, advertising, Amazon, online dating and Netflix. There is every reason to suppose that it will work in optimizing learning journeys. The net results may be faster progression, less dropout and the ability to deliver at scale and volume, therefore lowering the currently skyrocketing costs in education. For me, the ultimate goal is to satisfy growing demand in the developing world, which we will never satisfy using our existing, expensive methods.

The point of projects like Todai is not that a piece of software can pass an exam but that it can do things which graduates think are their sole domain. If a machine can do a graduate-level task in the workplace, as robots can in factories, then those jobs are under threat. The interesting point is the degree to which AI and deep learning will result in the erosion of middle-class professions, including teaching. Augmented intelligence and augmented teaching are already in operation. But the writing is on the blackboard for other forms of learning and teaching.

Wednesday, January 13, 2016

10 things you really need to know about READING ON SCREEN for effective online design

Ever get that feeling that reading on a screen is different, as you tend to scan and browse more than on paper? The research clearly shows that reading habits on screen are different from those on paper. On screen, we skim and dart around more than on a printed page. To deal with this different type of reading you need to be aware of the physical and cognitive ergonomics of both paper and screen. Understand these and you’ll see why simply taking paper text and pasting it onto a screen is often a bad idea. Here’s what you may want to consider.

1. Self-illuminated v reflected
Screens are backlit or the source of their own illumination, while print relies wholly on reflected light. This is an important difference, in that self-lit screens can be read in any light conditions and their brightness adjusted (manually or, more commonly these days, by sensor-based software). Screens, however, seem to allow the brain to spot non-proximate elements as you read, and these can distract. So be careful with extraneous 'noise'.

2. Screens vary in size
With responsive systems, online learning can be delivered on everything from a high definition desktop screen to all sorts of sizes on laptops, tablets and mobiles; whereas print tends to be designed for one format only – a book page, newspaper page or journal page. Be aware that you are publishing on screen for a huge variety of shapes and sizes, as the size and format of the text will change. This is why chunking matters. Chunk text down for screen and use more headings than with print.

3. Screens landscape, paper portrait
As a follow-on point, most books, newspapers and journals are portrait, not landscape, whereas many screens (apart from mobiles) are landscape, with tablets going either way. This means that line length can vary enormously on screens but not on paper, from a few words on a mobile to overlong lines stretched across the whole of a landscape screen. You have to be aware of this elasticity in line length, as it affects readability and pushes you towards more highly edited text. Don't allow full screen-width lines of text, as they reduce readability.

4. Scrolling is a feature of screens not print
When we access, say, Wikipedia, or most web pages for news and other information, we commonly scroll down the page. Much online learning restricts you to a non-scrollable page, but scrolling is increasingly becoming the norm. You need to be aware of whether this functionality is present or not.

5. Navigation is different
Holding a book, newspaper or journal gives you a feel for where you are in terms of pages and the navigation is easy - turn the page, forwards or back. On screen you need to provide some sort of sense of where you are and progress in the text, whether it’s a progress bar or page x/y numbers. This is a design feature that you need to consider. Icons leading you forward and back may also be necessary.

6. Search possible on screen
Search is possible on screen, and widely used, but impossible in print, unless you count the clumsy mechanism of an index. This is a significant advantage, not only in finding text resources but also in finding an item within a text resource.

7. Hyperlinking is possible on screen
The humble hyperlink is something that paper does not have and can be used to good effect, for links out to more detail, glossary definitions or other navigational functions. Wikipedia is a good example of a text resource where the hyperlink is of significant advantage in vectoring through a subject or finding additional resources.

8. Paper usually professionally published
Books, newspapers, articles and academic papers are usually professionally published, with good layouts, well-chosen fonts and therefore good readability. On-screen text is more difficult to lay out and polish, so it often appears in layouts, formats and fonts that make it slightly more difficult to read. That is why you must pay attention to the rules that print publishing follows, but also edit down to keep readability high. Layout, font choice, colour and sentence length may all need attention.

9. Paper has fewer distractions
Printed resources, at least most books, have only text on a page. Screens often have much more going on, as the text is embedded in a browser, word processor or web page with lots of distracting navigational items, and within the content there is more imagery and video. The presence of more distractions means keeping the text clear and simple, with lots of white space to aid isolation and readability.

10. Browse more on screen
We browse more on screens, in the sense that we skim and dart around looking for pertinent cues. It’s almost like non-linear reading. This leads to the conclusion that you must avoid unnecessary distractions in terms of graphic elements, animation or audio when you expect a learner to read. Unfortunately, many designers do the opposite and feel that the more movement and imagery you have the better. Media rich does not mean mind rich. It is also important to realize that sentences should be shorter and cues for important points given more emphasis, such as bold, italics and so on.

Conclusion
One last thing: when it comes to reading fatigue, research shows that it is the same on screen as on paper. Text is a great medium in both print and on screen. Just be aware of the differences. You need to edit down, use more bullet points, highlight key terms and, in general, simplify. The same is true, of course, of graphics, photographs and video.

And there's more...
10 challenging ways to get the best from your SMEs 
10 ways to make badass INTROs in online learning 
10 bloody good reasons for using much-maligned TEXT in online learning 
10 text layout FAILS in online learning
10 essential online learning WRITING TIPS in online learning 
10 stupid mistakes in design of MULTIPLE CHOICE questions
10 essential points on use of (recall not recognition) OPEN RESPONSE questions
10 rules on how to create great GRAPHICS in online learning 
10 sound pieces of advice on use of AUDIO in online learning 
10 ways based on research to use VIDEO in online learning
10 ideas on use of much maligned TALKING HEAD videos in online learning
10 ways to design challenging SCENARIOS

5 level taxonomy of AI in learning (with real examples)

AI fallacy 1: dystopia
Let’s not be misled by dystopian Hollywood visions of AI. Movies like Her, Ex Machina and Chappie are fiction and this is about fact. Robots have certainly had an impact in manufacturing (as did machines in agriculture when labour moved from fields to factories), where their speed, precision and ability to deliver 24/7 have led to massive increases in productivity. The cost has been the elimination of dull, monotonous and repetitive jobs. But AI is a broad and complex area of endeavour, of which robotics is only one part.
AI fallacy 2: mimics the brain
Neither should we see AI as simply analogous with the human brain. This is another AI fallacy. We didn’t succeed in the airline business by aping birds, nor did we make much progress in going faster by copying the legs of a cheetah – we invented the wheel. So it is with AI. It’s about doing things more consistently, faster and more accurately than the human brain. Our brain has several drawbacks when it comes to some real-world tasks. It likes to spend one third of its time asleep and another third at leisure. It is also full of biases, gets tired and inattentive, has emotional swings, even suffers from mental illness.
Similarly, in the world of learning, AI is not about dystopian fantasies or aping teachers. AI is already being used by almost every learner on the planet, through that algorithmic tool Google. It is already being used in predictive analytics and already being used in adaptive learning.
5 Level taxonomy of AI in learning
To untangle some of the complexity I propose a five-level taxonomy for AI in learning. My taxonomy is similar to the five-level taxonomy developed for automated vehicles, which runs from the driver being in complete and sole control, with only some internal algorithmic functions visible on the dashboard, through assistive power steering and predictive satnav tech, through degrees of autonomy, to full self-driving automation. At the top level, vehicles are designed to perform all safety-critical driving functions and can operate safely without any driver intervention.
Level 1  Tech
Level 2  Assistive
Level 3  Analytic
Level 4  Hybrid
Level 5  Autonomous
Level 1  Tech
You’re reading this from a network, using software, on a device, all of which rely fundamentally on algorithms. These include Public Key Cryptography, Error Correcting Codes, Pattern Recognition, Database use and Data Compression – to name but a few. With data compression, when we use files they are compressed for transmission and decompressed for use. Lossless and lossy compression and decompression magically squeeze big files into little files for transfer.
These, and many other algorithms, enable the tech to work and shape the software and online behaviours of people when they are online. These algorithms really are works of art, designed, tweaked and finessed in response to experiments with real hardware and real users. They work because they’ve been proven to work in the real world. Of course, what’s seen as a single algorithm is likely to be multiple algorithms with all sorts of fixes and tricks. These ‘tricks’ of the trade, such as checksum, prepare-then-commit, random surfer, hyperlink, leave it out, nearest neighbour, repetition, shorter symbol, pinpoint, same as earlier and padlock, are what make algorithms really sing. Every time you go online, every file you use, audio you hear, image and video you watch, is only possible because of an array of compression algorithms. These are so deeply embedded in the systems we use that they are all but invisible. The personal computer you use is essentially a personal assistant that helps you on your learning journey. With mobile you now have a PA in your pocket. These are examples of AI and algorithms deeply embedded in the technology and tools.
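As a concrete, if trivial, illustration of the lossless compression and checksum tricks mentioned above, Python's standard zlib module exposes both. This is a hedged example of the general idea, not a claim about any particular system you happen to be using.

```python
import zlib

text = ("the quick brown fox jumps over the lazy dog " * 200).encode("utf-8")

compressed = zlib.compress(text)          # 'shorter symbol' and 'same as earlier' tricks at work
restored = zlib.decompress(compressed)    # lossless: we get back exactly what we put in

print(len(text), "->", len(compressed), "bytes")   # repetitive text shrinks dramatically
assert restored == text

# A checksum lets the receiver verify the data survived transmission intact
print(zlib.crc32(text))
```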
Level 2  Assistive
Google was a massive pedagogic shift, giving instant access to a vast amount of human knowledge, teaching and learning resources. Yet Google is still simply an algorithmic service that finds and sorts data. Every time you enter a letter into that search box it brings huge algorithmic power to bear on finding what you personally are looking for. Search engine indexing is like finding needles in the world’s biggest haystack. Search for something on the web and you’re ‘indexing’ billions of documents and images. Not a trivial task, and it needs smart algorithms to do it at all, let alone in a tiny fraction of a second. Then there’s PageRank, now superseded, the technology that made Google one of the biggest companies in the world. Google has moved on from, or at least greatly refined, the original algorithm(s); nevertheless, the multiple algorithms that rank results when you search are very smart.
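A rough sketch of the 'random surfer' idea behind the original PageRank helps make the point. This is the simplified textbook version in Python, not Google's current ranking machinery.

```python
import numpy as np

# A tiny web of 4 pages; links[i] lists the pages that page i links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
d = 0.85                      # damping: chance the surfer follows a link rather than jumping at random

rank = np.full(n, 1.0 / n)
for _ in range(50):           # power iteration until the ranks settle
    new_rank = np.full(n, (1 - d) / n)
    for page, outlinks in links.items():
        for target in outlinks:
            new_rank[target] += d * rank[page] / len(outlinks)
    rank = new_rank

print(np.round(rank, 3))      # pages with the most incoming 'votes' rank highest
```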
Other forms of assistive, algorithmic power in learning include unique typing and facial recognition, now being used to authenticate learners in online assessment. Pattern recognition is just one species of algorithm used in learning: learning from large data sets in translation, identifying meaning in speech recognition – pattern matching plucks meaning out of data. Mobile devices especially need these algorithms when you type on virtual keyboards or use handwriting software.
A nice example of assistive AI in learning is PhotoMaths, which uses the mobile phone camera to 'read' maths problems and not only provides the answer but breaks down the steps to that answer. Algorithms are therefore increasingly used to directly assist learners in the process of learning.
Level 3  Analytic
Using algorithmic power to analyse student, course, admission or other forms of educational data is now commonplace. Here, an institution can mine its own, and other, data to make decisions about what it should do in the future. This could mean increasing levels of attainment, identifying weaknesses in courses, lowering student dropout and so on.
Beyond the institution, on MOOCs, for example, EdX have identified useful pedagogic techniques, such as keeping video to six minutes or less, based on an analysis of aggregated data across many courses and many thousands of students. Smart algorithmic analysis can also identify weak spots in courses, such as ambiguous or overly difficult questions.
Level 4  Hybrid
This is technology-enhanced teaching, where algorithmic power is applied to the tasks of teaching and learning. Here the AI-powered system works in tandem with the teacher to deliver content, monitor progress and improve outcomes.
A good example is automated essay marking, where the system is trained using a large number of professionally marked essays. These marking behaviours are then used to mark other student essays. For more detail see Automated essay marking - kick-ass assessment.
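A hedged, toy sketch of that training idea is below, using scikit-learn with made-up essays and marks. Real systems use far richer features and vastly larger corpora of professionally marked scripts, so treat this only as an outline of the approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Tiny, made-up training set of essays and the marks human examiners gave them
essays = [
    "The causes of the war were complex and interrelated.",
    "War happened because people were angry.",
    "Economic pressures, alliances and nationalism combined to cause the conflict.",
    "It was bad and then it ended.",
]
marks = [8, 4, 9, 2]

# Turn essays into features, then learn how those features relate to the marks awarded
vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(essays)
model = Ridge().fit(X, marks)

# Score a new, unseen essay
new_essay = ["Alliances and economic pressures were among the causes of the conflict."]
print(model.predict(vectoriser.transform(new_essay)))
```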
Another example would be spaced practice tools, which often use algorithms such as SuperMemo to determine the pattern and frequency of spaced practice events. See an example here as used by real students.
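For the curious, here is a minimal rendering of the SM-2 scheduling rule from SuperMemo (my own simplified sketch; real tools add refinements): the review interval for an item grows with an 'easiness factor' that is nudged up or down by how well it was recalled last time.

```python
def sm2_next(quality, repetitions, interval, easiness):
    """One step of the SM-2 algorithm.
    quality: recall grade 0-5; returns updated (repetitions, interval_days, easiness)."""
    if quality < 3:                        # forgotten: restart the repetition sequence
        return 0, 1, easiness
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * easiness)
    # Nudge the easiness factor according to how hard the recall felt
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, easiness

# A learner who keeps recalling an item well sees the gaps stretch out: 1, 6, 16, 43 days
reps, interval, ef = 0, 0, 2.5
for q in [5, 4, 5, 4]:
    reps, interval, ef = sm2_next(q, reps, interval, ef)
    print(interval, round(ef, 2))
```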
However, the most common use is in adaptive learning systems, where the software uses student and aggregated student data to guide the learner, in a personalised fashion, through a course or learning experience. This is still in the context of a human teacher, who uses the system to deliver learning but also as a tool to identify progress among large numbers of students and take appropriate action. We are educating everyone uniquely but it is still technology enhanced teaching. A good example is CogBooks. This is where we are at the moment with AI in learning. Our evidence from courses at ASU suggests that good teachers plus good adaptive learning produces optimum results.
Level 5  Autonomous
Autonomous tutoring is the application of AI to teaching without the participation, or even intervention, of a teacher, lecturer or trainer. The aim is to provide scalable, personalised solutions to many thousands, if not millions, of learners at very low cost. The software needs to be able to deliver personalised content based on user data and behaviour, as well as assess. In some cases autonomy can rise to a level where the system learns how to deliver better learning experiences on its own, improving itself through machine learning. There are already online systems that attempt to do this, such as Duolingo, used by over one hundred million learners, and other platforms are on their way to performing at this level.
At this level, one could argue that the concept of teaching collapses. There is only learning. In the same way that Google collapsed the idea of a person looking through library shelves or card indexes to search for and find information, autonomous AI will disintermediate teaching.
Other dimensions
This taxonomy looks at AI from the educator's perspective and works back from the learning task. Another perspective is the different types of AI that can be applied in learning. Looking at our taxonomy again, one can identify algorithmic power that delivers technical functionality, speed and accuracy on learning tasks, and speed and efficacy in predictive analytics, as well as algorithms that embody learning theory, natural language processing, genetic algorithms, neural networks, machine learning and many other species of algorithm.
Within this there’s a plethora of different techniques using data mining, cluster theory, semantic analysis, probability theory and decision making that takes things down to the next level of analysis. This, in my view, is too reductive. Yet this is where the real work is being done. This is very much a field where real progress is being made at a blistering pace, fuelled by massive amounts of data from the internet. But this approach to taxonomy is of little use to professional educators who want to understand and apply this technology in real learning contexts.
Conclusion

AI in learning is not without its problems, in terms of privacy, false positives, errors, over-learning and potential unemployment. Some of these problems can be overcome through progress in the maths and design; others lie in the regulatory, cultural and political sphere. But given that productivity has stalled in education and costs are still rising, this seems like a sensible way forward. If we can deliver scalable technology, assistance, analysis, learning and teaching at much lower cost than at present, we will be solving one of the great problems of our age. Machines moved us on through the industrial revolution, where they replaced manual labour. They will now move us on by replacing some forms of mental work. When a famous economist, standing at a huge building site, asked a government official “Why are all these people digging with shovels?”, the official proudly said “It’s our jobs programme”. The economist replied “So then, why not give the workers spoons instead of shovels?” We are still, in education, using spoons to educate.

Tuesday, January 12, 2016

10 reasons why 2015 is the year of the MOOC


2012 was heralded, largely by people who had never taken any, as the year of the MOOC. 2013 was heralded, largely by people who hadn’t taken any, as the year MOOCs died. 2014 saw realism emerge with a swing towards real (vocational) demand, and 2015 finished as what Stuart Sutherland called “the actual Year of the MOOC”. Here are his reasons for this last statement:
1. MOOCs are not massive, demand is massive
Stuart’s first point is spot on. As MOOCs have adjusted to demand, they’ve got smaller, better and adjusted to learner, not teacher, design. With over 400 delivered, 35 million enrolments and 18 million of those in 2015, more than in any previous year, MOOCs are not going away. We’re still taking them and still making them.
2. MOOC learners motivated by desire to learn
I made this point in this report on the first six Edinburgh Coursera MOOCs. The great majority of MOOC takers are not there because they want certification or accreditation; they really do want to learn. That’s why completion is not such a big deal. Sure, large numbers enrol, take a peek, then drop out, but that’s because they are window shopping, again a point I made when this whole thing started: it’s OK to drop out, and what’s astonishing is the number of people who drop in. Stuart recommends a report from Southampton, “Liberating learning: experiences of MOOCs”. It really is worth a read.
3. Large numbers of secondary school students are taking MOOCs
This is heartening. In Stuart’s work on MOOCs in healthcare, the second largest audience, after doctors and nurses, was (surprisingly) school students. He is keen that we tap into this market for students who are looking at different careers and subjects while still at school. This ‘look-see’ role of MOOCs could well positively influence the choices young people make, preventing wrong choices and opening up new ones. Futurelearn and other providers have even provided, in a timely fashion just before university choices are made, courses on College readiness, Try out new subjects/careers, Writing applications for university and Preparing for University. An average of 11k students signed up.
5. Huge number of educators taking MOOCs
This is often taken as a weakness, but Stuart rightly points out that it is a strength. First, MOOCs have been marketed at this audience; many other target audiences simply don’t know they exist, though that’s changing as the big providers market out towards vocational learning. Lots of these educators are the ‘look-see’ people, which is great, as they are the early adopters and influencers who will take MOOCs and other forms of online learning forward. An interesting side effect is MOOCs targeted at educators. A good example is the Blended learning MOOC I defined and put forward in 2013, which has now been delivered via Futurelearn.
6. MOOCs are stepping stones
The straw man that MOOCs will destroy HE has been put to bed. It was always a tiny number of journalists and people new to the game who made this claim, which is why the claim became a bit of a piñata. MOOCs are just one species of online learning. Stuart mentioned Citizens Maths, another project I’m proud to have helped fund, put forward by Seb Schmoller, a really informed practitioner and commentator on MOOCs. MOOCs have swung towards vocational subjects and really are being taken for solid reasons. Coursera’s survey of MOOCers showed two main types: the Career Builder and the Education Seeker. That’s helpful, as it starts to untangle the types of audiences out there. This is NOT about the 18-year-old undergraduate. It’s much more important than that.
7.  Research focusing on learner experience
We’ve had ‘lectures’ for 2500 years, since Plato’s Academy, and it is still the dominant pedagogy in Higher Education. Yet as Sir John Daniel showed in the talk before Stuart’s, there’s “very little evidence to support f2f teaching therefore substitute for cheaper, scalable digital options… research shows that f2f is NOT superior to online teaching, which is also true of synchronous f2f”. Stuart put forward the interesting idea that a MOOC shape is emerging around highly analytical and integrated learning. Interestingly, the ‘social’ side of MOOCs may be overplayed. The evidence suggests that social participation is not as strong as some suggest and that the quality of that social activity is often quite weak. Fascinating.
He pointed us to this report, ‘Engaged learning in MOOCs: a study using the UK Engagement Survey’. An HEA Engagement Survey was used on two Southampton MOOCs. Participants felt engaged in the intellectual process of forming understanding, making connections with previous knowledge and experience, and exploring knowledge actively, creatively and critically. An additional finding was that persistent learners engaged, regardless of prior educational attainment.
8. Jury is out on MOOC learning design
Stuart is right here, but I think this is due to MOOCs being relatively embryonic, the limits of design expertise in HE, and platform restrictions. MOOCs still have to deliver real engagement in terms of learning by doing and actual problem solving. This is often substituted by ‘chat’ and ‘peer assessment’. However, in my experience, in the vocational MOOCs on coding and other similar subjects, this is coming of age. Stuart pointed us towards the paper ‘Instructional quality of Massive Open Online Courses (MOOCs)’ by Anoush Margaryan et al.
9. MOOCs offer value for real needs
MOOCs have certainly provided education beyond boundaries and borders. They have reached out to satisfy a demand for higher education beyond the campus model, which is high cost and still based on scarcity. He quoted the Ebola Virus MOOC from Alison, where participation from many countries gave diversity within MOOC cohorts. This is a big advantage in an educational experience.
10. Dementia MOOC
Stuart gave a reasonable and level-headed analysis of where we had got to on MOOCs by the end of 2015, then ended by mentioning one fascinating MOOC. Let me tell you about it, as it shows how things have progressed. It is the Dementia MOOC by the University of Derby. I like this example, as it’s focused, shows the big boys how it should be done and illustrates the sort of progress that’s being made. First, it is low budget but high on design. As Syed Munib Hadi, Head of the Academic Innovation Hub, said, “it all started with a focus on learning not teaching”. In fact, the teaching, with many frontline people shown on video, is exemplary, as it all seems so real. Syed is right in saying that this is all about the learners, and he has the proof, with a 35.48% completion rate. That’s impressive. As Syed reminded us, “remember that the UK Higher Education system has a 16% drop out rate in the first year”. They kept the course to six weeks as “we don’t take short courses seriously in HE, so MOOCs are filling the gap and they’re getting shorter.” Badges worked well, with a badge for each of the six sections and an overall badge for completion; this rewards those who don’t want to do all the content, as well as those who see completion as a goal.
Conclusion

This was an honest and level-headed presentation by someone who not only studies MOOCs but also designs, develops and delivers them. It was Stuart who helped me define the Blended learning MOOC delivered by Futurelearn. His comments were astute and free from that strand of irrational skepticism one often finds, even in the HE edtech community.

Can of Stella with Sir John Daniel - he's as radical as ever

Got to admire a man who, when I asked if he wanted a coffee from the trolley on the train, said “No, I’ll have a can of Stella”! I had a full day with Sir John Daniel, someone who played a huge role in building an institution which I greatly admire - the Open University. In doing so he cleared a wide and open path through which online learning could progress.  We had a good old chinwag about the OU.
Unsung hero – Walter Perry
He was full of praise for the Scotsman Walter Perry, the first Vice-Chancellor, who had to design the OU from scratch, which he did quickly, with a model that stood the test of time. Perry is rarely mentioned in relation to the OU. It was his son who showed him the advert for the job. His view was that the “standard of teaching in conventional universities was pretty deplorable” and he saw the OU as one way to raise their game. It was Perry who copied the structure of the NHS, with regional offices. Some of these are being dismantled, which Daniel agrees is right; the investment in Futurelearn is more important.
OU and Thatcher
Scotching another myth, he claimed that Thatcher, far from trying to close down the OU, actually saved it. The document to close it down was on another politician’s desk when he collapsed, literally on top of it with a heart attack. Her rationale had nothing to do with education as a social good, merely an attempt to slow down funding for HE in general. She was abrasive but Perry fought back, which she admired. In fact, it was she who insisted that the OU accept 18+ year olds, in an attempt to stem the costs of HE.
Culture of academe a problem
Daniel has taken 10 MOOCs and thinks “they’ve put online learning on the map”. They’re only one strand in the expansion of online learning but they really do matter. The problem, he thinks, lies within academe and the deeply embedded and traditional attitudes towards teaching. He quotes the recent Babson survey, where “70% of HE leaders see online as highly strategic but only 28% of staff – there’s a huge disconnect or tension out there”. He’s writing a piece on this as we speak. I recommended Nozick’s ‘Why do Intellectuals Oppose Capitalism?’ as a possible causal explanation, which he was delighted to read.
Digital by default
But his most important belief is what I’ll call ‘digital by default’, the idea that we must get out of the structural habit of f2f and lecture by default. He quotes Dubin & Taveggia (1968), Bernard (2004) and Means (2013) to claim that there is no significant difference in outcomes between different instructional methods, which leads to what he calls the Law of Substitution (a term coined by Tony Bates). I disagree with this, but we have come to the same conclusion, for different reasons: that the term 'blended learning' has become an excuse for the preservation of f2f and lectures. He bravely recommends digital by default, as there is “very little evidence to support f2f teaching therefore substitute for cheaper, scalable digital options… research shows that f2f is NOT superior to online teaching, which is also true of synchronous f2f”. Radical views.

Thursday, January 07, 2016

10 things BLENDED LEARNING is NOT

What has ‘Blended Learning’ done for the world of learning? It had the promise to shake us out of the ‘classroom/lecture-obsessed’ straitjacket into a fully developed, new paradigm, where online, social, informal and many other forms of learning could be considered and implemented. This needed an analytic approach to developing and designing blended learning solutions. So what happened?
1. Blended bandage
Blended learning was really just the learning world coping with the onslaught of new ways of teaching and learning. It's an adaptive response to what's happening to the learning world as the real world changes around it. By real world I mean changes in attitudes, learner expectations, demographics, politics, but above all massive and rapid change in technology. Blended learning as a concept allowed the system to absorb all of this at a sensible pace, as it was a useful bridge between the new and the old. However, seeing it as some sort of bandage or compromise simply disabled the idea, as it led not to fresh thinking but a defense of old with a few new, adjunct ideas added on.
2. Blended learning became blended TEACHING
Blended Learning books also turned out to be the very opposite of Blended Learning theory, namely Blended TEACHING. Teacher/lecturer/trainer authors simply sliced and diced existing ‘teaching’ practices and added a few online extras. Attempts at defining, describing and prescribing blended learning were crude, involving the usual suspects (classroom plus e-learning). It merely regurgitated existing 'teaching' methods. Blended LEARNING is not Blended TEACHING.
3. Muddled by metaphor
It also got muddled by metaphor. Blended learning started to fail when it got bogged down in banal metaphors. I've heard them all: blended cocktails, meals, even alloys. Within the ‘food metaphor’ we got courses, recipes, buffet learning, tapas learning, fast food versus gourmet. The problem with metaphor-driven blended learning is: who's to say that your metaphor is any better than mine? I’ve even seen the 'fruit blender' metaphor, trying to explain the concept in terms of a fruit smoothie! Let me put forward my own food metaphor. What do you get when you blend things in a metaphoric mixer, without due care and attention to needs, taste and palate? Blended baloney. That is often what we get with models as metaphors - dull, tasteless sausage meat. Blended LEARNING is not a metaphor.
4. Delivery dualisms
Dozens of definitions of blended learning then floated around, most of them muddle-headed as they were simple delivery dualisms:
Blend of classroom and e-learning
Blend of face-to-face and e-learning
This ‘velcro’ approach to blended learning simply fixed the old classroom paradigm and added an online dimension. It was an attempt to simply use the definition to carry on doing what you did before with some extras. The problem with a definition that fixes a delivery mechanism in advance of the blended design e.g. classroom or ‘f2f’ is that you’ve already given up on rational design.
5. Broad dualisms
A slightly better approach was to broadly define the world of learning into two inclusive categories:
Blend of online and offline
Blend of synchronous and asynchronous
Blend of formal and informal
The problem with these definitions is that, while looser, they still fix broad components that may not be needed in an optimal blend. These definitions are simply too general, in that they divide the universe into two sets. However, the real issue with all of these definitions is that they are really definitions of blended INSTRUCTION, not blended learning. We need to look at the concept from a broader learning perspective, with definitions that rise above ‘instruction’ to concepts that encompass context:
6. Flipped classroom
This is just one species of blended learning and a rather simplistic version. Again, however, the focus is on blended ‘teaching’ not ‘learning’. It’s yet another fixed dualistic formula. The concept is primarily about switching the focus of teaching away from exposition towards more Socratic f2f methods. It served a purpose in proposing a radical rethink but still fits the old lecture/classroom/f2f v online dualistic mindset.
7. 70:20:10
This is a more sophisticated version of blended learning in that it emerges from theory and studies that show how people actually learn in practice, as opposed to supply-side models of teaching. Around 70% of learning comes from experience, experiment and reflection, 20% from working with others and 10% from planned learning solutions and reading. It’s common in organizational learning, and it is proposed and explained in superb detail in ‘702010 towards 100% performance’ by Arets, Jennings and Heijnen. Now we’re getting there, but these percentages apply more to workplace learning than to education. It’s a great shift away from traditional, flawed mindsets about how people learn, but needs further work to be useful across the entire learning landscape. Blended learning has certainly taken root, but it has no defined shape, theory, methodology or best practice. You can call anything a blended solution.
8. Sophisticated
All of the above are either metaphors, simplistic dualisms, or subsets of blended learning. Don't mistake the phrase for an analytic theory. It is so often used as a platitude, an old mindset that smothers the idea before it has had the chance to breathe. What happened to analysis? Blended learning abandoned careful thought and analysis, the consideration of the very many methods of learning delivery, sensitivity to context and culture, and matching to resources and budget. It also needs to include scalability, updatability and several other variables. What it led to were primitive, dualistic 'classroom and e-learning' mixes. It never got beyond vague 'velcro' models, where bits and bobs were stuck together (now that's a metaphor). You need to work towards an 'optimal' blend.
9. Analytic
Truly analytic blended learning is not a back of an envelope exercise. It needs a careful analytic process, where the learners, type of learning, organisational culture and available resources need to be matched with the methods of delivery. It has INPUTS, decision making and OUTPUTS. Until we see 'Blended learning' as a sophisticated analytic process for determining optimal blends, we'll be stuck in this vague, qualitative world, where the phrase is just an excuse for old practices.
10. ’Veil of ignorance’
In practice, to do blended learning, one has to apply what is called the ’veil of ignorance’, an idea that goes back to Kant, Locke, Rousseau and, more recently, John Rawls. You have to go through a thought experiment and imagine your course, workshop, whatever, as having NO pre-set components. Then do some detailed analysis of what outcome you want in terms of ‘learning’. Only then, having rid yourself of personal preconceptions and institutional forms of delivery, can you really start to rebuild your course or learning experience. So you start with an analysis of the learning and the learners, then take into consideration your resource envelope, with a full cost analysis. Also include long-term sustainability issues such as updatability and maintenance. To construct a blended learning experience you have to deconstruct your natural bias to do what you or your institution have always done, and reconstruct the learning experience from scratch.

Monday, January 04, 2016

Times tables – the phony, proxy war between traditionalists and progressives

These are tablets from Mesopotamia. They show a multiplication table and a practice tablet by a schoolchild, evidence of a practice that has been going on for millennia.
And sure enough, the ‘times-tables’ wars have erupted again. This time, however, it has become a proxy, even phony, war between traditionalists and progressives, which in turn shows that both sides are often wrong-headed. It is a litmus test for the whole debate.

Traditionalists
The traditionalist position is that there’s nothing wrong with being able to recall things from long-term memory. This type of immediate recall is called ‘automaticity’. This is a good thing, as automatic recall of times tables is both faster and more accurate than other methods (counting on fingers, using tools etc.). Sylvia Steel (2004), using a sample of 241 seven to twelve year olds, showed that three major strategies were employed in this task: retrieval, calculation and apparatus. Retrieval is faster and more accurate, yet only a third were using retrieval for the basic tables. So there is nothing wrong with teaching young people to recall a range of simple operations automatically. It makes other maths and real-world problems simpler and faster to complete. (Interestingly, the traditionalists don't argue for division tables, although the same arguments apply.) There is one other important argument for automaticity. You may imagine that teachers always know what mental arithmetic competences their pupils have at this age – they don’t. That's why the trads want a test.

Progressives
Now here’s the rub. The progressives argue that the amount of time spent drilling this stuff will detract from the real job of teaching maths, which is far more than basic mental arithmetic. They are also right. 

Even mastering the task of counting up to get the answer has its problems. Let’s say I want to add 2 plus 6. Some children start with the 2 and add the 6, using fingers or counting in their head. This is less efficient than counting up from the larger of the two numbers, namely starting at 6 and adding 2. This is called the MIN strategy. You teach this, as it reduces computational effort. In practice, adults think they know their times tables but often use a variety of computational strategies: counting up from one they know, counting down from one they know, estimating and so on. In practice very few adults really know all the times tables (up to 12) with instant recall. In fact, multiplication is far more complex than most of us imagine. Repeated addition is used by many, but it is a long-winded process and prone to error.
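A trivial, purely illustrative sketch of why MIN matters, counting how many 'count up' steps each strategy needs:

```python
def count_up(start, add):
    """Add by counting up one at a time, returning the answer and the number of steps taken."""
    total, steps = start, 0
    for _ in range(add):
        total += 1
        steps += 1
    return total, steps

print(count_up(2, 6))                   # start from the first number: 8, in 6 counting steps
print(count_up(max(2, 6), min(2, 6)))   # MIN strategy, start from the larger number: 8, in 2 steps
```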

Multiplication often involves a knowledge of ‘ratios’, and not simple addition and/or subtraction. I could go on but you get the point. Being able to calculate, in your head, 11x12, the progressives argue, is a more important skill than instant recall of that factoid. The trouble with maths is that it is hard to learn and hard to teach because surface phenomena hide the deeper problems. They have a point.

Also, in this decimal age one wonders why the 12 times table is included at all (although there is an argument that some professions still use imperial measurements). The reason, of course, is just a throwback from older politicians who remember their schooldays. Beyond the 10 times table, using mathematical methods (not rote) seems sensible, as you have to ask at what point rote learning should stop.

On the other hand, the 11 times table is a cinch, so there's no great battle to be fought on that one, as most master it with ease. But why not 13 and upwards? That's when we have to think in terms of learning efficient algorithms to calculate these answers.

One way forward is to ask what children have the most difficulty with. As you can see, it is the 12 times table. This means that a great deal of effort has to be put into this one task. Wouldn't it be far better to focus on the cluster in the middle? The clear evidence is that the 6, 7 and especially the 8 times tables pose most problems. Teachers and learners have limited time and bandwidth, so wouldn't it be better to achieve competence in this cluster than to spend a lot of time on the less-used 12 times table? For a more detailed mathematical analysis of why the 12 times table suffers from the law of diminishing returns, see Wolfram's detailed analysis (using maths!).

In fact we can break this down even further to show what specific calculations cause most problems. As you can see, if we eliminate the 12 times table, we can focus on the group in the middle.

On tests, the trads want a test as they claim, with some justification, that these basics are not taught properly or consistently. Against this is the argument that timed tests at this age simply increase anxiety and negative attitudes about both maths and the child's ability to cope with the subject. This view is held by Jo Boaler, Professor of Maths Education at Stanford.

Conclusion
This is not just a proxy war, with two sides lining up to create a false binary, it is a phony war, as both sides are right. The right balance is to climb out of the traditionalist v progressive trenches and accept that some automaticity is required, as well as some mathematical thinking. They are not mutually exclusive. In fact they are complementary. The right approach would have been to accept that tables up to ten have to be taught to ‘automaticity' but with an acceptable error rate, where the answers can be found using other efficient methods.

Bibliography
Data from the Guardian Datablog, based upon data collected at Caddington Village School (great work), subsequently written up for Mathematics Teaching and freely available here. Thanks.