Sunday, February 28, 2016

Completion: a category mistake in MOOCs



In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This showed that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than that of the vast majority of MOOCs. They bucked the trend.
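As a rough check on those figures, here is a minimal Python sketch; it assumes the ‘around 35%’ applies to each group separately, which is an illustrative reading of the survey rather than a figure from the report:

# Rough arithmetic on the Derby 'Dementia' MOOC survey figures quoted above.
# Assumption (illustrative, not from the report): "around 35%" applies to
# each group separately.
intended_to_complete = 477    # said they expected to fully engage
did_not_intend = 258          # said they did not intend to complete
completion_rate = 0.35        # approximate rate reported for both groups

completers = (round(intended_to_complete * completion_rate)
              + round(did_not_intend * completion_rate))
print(completers)             # roughly 257 of the 735 who answered

Whatever the exact split, the point stands: stated intention at the start was a weak predictor of who actually finished.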

Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions – ‘University dropouts’ – not out of open, free, online experiences. I’m just amazed that many millions have dropped in.

So let’s go back to that ‘Dementia’ MOOC, where 26.29% of those who enrolled never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market: for example, the majority of those who buy language courses never complete more than a small portion of the course. In MOOCs, many simply have a look, often out of curiosity; others want a brief taster, an introduction to the subject or some familiarity with the topic; and further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).

Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. Caplan rightly says that ‘signalling’ that you attended a branded institution explains the difference. In open, free and online courses there are no such commitments, risks and investments. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.

Increasing persistence
Many of the arguments around course completion in MOOCs are, I have argued, category mistakes, based on a false comparison with traditional, semester-long HE courses. We should not, of course, allow these arguments to distract us from making MOOCs better, in the sense of having more sticking power for participants. This is where things get interesting, as some features of recent MOOCs have caught my eye as producing higher levels of persistence among learners. The University of Derby ‘Dementia’ MOOC, full title ‘Bridging the Dementia Divide: Supporting People Living with Dementia’, is a case in point.

1. Audience sensitive
MOOC learners are not undergraduates who expect a diet of lectures delivered synchronously over a semester. They are not at college and do not want to conform to institutional structures and timetables. It is unfortunate that many MOOC designers treat MOOC learners as if they were physically (and psychologically) at a University – they are not. They have jobs, kids, lives, things to do. MOOC designers have to get out of their institutional thinking and realize that their audience often has a different set of intentions and needs. The new MOOCs need to be sensitive to learner needs.

2. Make all material available
To be sensitive to a variety of learners (see why course completion is a wrong-headed measure), the solution is to provide flexible approaches to learning within a MOOC, so that different learners can take different routes and approaches. Some may want to be part of a ‘cohort’ of learners and move through the course with a diet of synchronous events but many MOOC learners are far more likely to be driven by interest than paper qualifications, so make the learning accessible from the start. Having materials available from day one allows learners to start later than others, proceed at their own rate and, importantly, catch up when they choose. This is in line with real learners in the real world and not institutional learning.

3. Modular
The idea of a strictly linear diet of lectures and learning should also be eschewed, as different learners want different portions of the learning at different times. A more modular approach, where modules are self-contained and can be taken in any order, is one tactic. Adaptive MOOCs, using AI software that guides learners through content on the basis of their needs, are another. In the Dementia MOOC, 6.16% of learners didn’t start with Module 1.
This tracked data shows that some completed the whole course in one day, others did a couple of modules on one day, many did the modules in a different order, some went through in a linear and measured fashion. Some even went backwards. The lesson here is that the course needs to be designed to cope with these different approaches to learning, in terms of order and time. This is better represented in this state diagram, showing the different strokes for different folks. 
Each circle in the diagram is a module, labelled with its number of completions. Design for flexibility.
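As an illustration of how such a diagram can be built, here is a minimal Python sketch that derives module-to-module transition counts and per-module completion counts from per-learner tracking data; the data layout and the example paths are hypothetical, not the actual Canvas export.

from collections import Counter

# Hypothetical tracking data: each learner's modules in the order completed.
# (Illustrative records only, not the actual Derby/Canvas data.)
learner_paths = {
    "learner_a": [1, 2, 3, 4, 5, 6],   # linear, start to finish
    "learner_b": [3, 1, 2],            # started mid-course, no finish
    "learner_c": [6, 5, 4],            # went 'backwards'
    "learner_d": [2],                  # single-module taster
}

# Count transitions between consecutive modules (the arrows in a state diagram).
transitions = Counter()
for path in learner_paths.values():
    for current, nxt in zip(path, path[1:]):
        transitions[(current, nxt)] += 1

# Count how many learners completed each module (the number inside each circle).
module_completions = Counter(m for path in learner_paths.values() for m in path)

print(transitions.most_common())
print(module_completions)

Feeding real tracking data through something like this is what reveals the non-linear paths described above.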

4. Shorter
MOOC learners don’t need the 10-week semester structure. Some want much shorter and faster experiences, others medium length and some longer. Higher Education is based on an agricultural calendar, with set semesters that fit harvest and holiday patterns; the rest of the world does not work to this pre-industrial timetable. In the Derby Dementia MOOC, there was considerable variability in when people did their learning. Many took less than the six weeks, but that did not mean they spent less time on the course: many preferred concentrated bouts of longer learning to the regular once-per-week model that many MOOCs recommend or mandate, while others did the week-by-week learning. We have to understand that learning for MOOC audiences is taken erratically and not always in line with the campus model. We need to design for this.

5. Structured and unstructured
I personally find the drip-feed, synchronous approach of moving through the course with a cohort rather annoying and condescending. The evidence in the Dementia MOOC suggests that there was more learner activity in unsupported periods than in supported periods. This shows a considerable thirst for doing things at your own pace and convenience, rather than at a pace mandated by synchronous, supported courses. Nevertheless, this is not an argument for a wholly unstructured strategy. This MOOC attracted a diverse set of learners, and having both structured and unstructured approaches brought the entire range of learners along.
You can see that the learners who followed the structured approach (a live Monday announcement by the lead academic, a Friday wrap-up with a live webinar, a help forum and an email query service) formed a sizeable group in any one week. Yet those who learnt without support were also a substantial group in every week. This dual approach seems ideal, appealing to an entire range of learners with different needs and motivations.

6. Social not necessary
Many have little interest in social chat or being part of a consistent group or cohort. One of the great MOOC myths is that social participation is a necessary condition for learning and/or success. Far too much is made of ‘chat’ in MOOCs, in terms of need and quality. I’m not arguing for no social components in MOOCs, only claiming that the evidence shows they are less important than the ‘social constructivist’ orthodoxy in design would suggest. In essence, I’m saying social is desirable but not essential. To rely on it as the essential pedagogic technique is, in my opinion, a mistake, and is to impose an ideology on learners that they do not want.

7. Adult content
In line with the idea of being sensitive to the needs of learners, I’ve found too many rather earnest talking heads from academics, especially the cosy chats, more suited to the 18-year-old undergraduate than the adult learner. You need to think about voice and tone, and avoid second-rate PhD research and an overly departmental approach to the content. I’m less interested in what your Department is doing and far more interested in the important developments and findings, at an international level, in your field. MOOC learners have not chosen to come to your University; they’ve chosen to study a topic. We have to let up on being too specific in content, tone and approach.

8. Content as a driver
In another interesting study of MOOCs, the researchers found that stickiness was highly correlated with the quality of the content. This contradicts those who believe that the primary driver in MOOCs is social. They found that learners dropped out if they didn’t find the content appropriate or of the right quality; good content turns out to be a primary driver of perseverance and completion, as their stats show.
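For a sense of what that kind of analysis involves, here is a minimal Python sketch that correlates a per-module content-quality rating with the share of learners who persist past that module; the numbers below are invented for illustration and are not the study’s data.

from statistics import correlation  # Pearson's r, available in Python 3.10+

# Invented illustrative figures, NOT the study's data:
# per-module content-quality ratings (1-5) and the share of learners
# who carried on past that module.
quality_rating = [4.6, 3.1, 4.2, 2.8, 4.8, 3.9]
persistence    = [0.71, 0.44, 0.63, 0.38, 0.74, 0.57]

print(f"Pearson r = {correlation(quality_rating, persistence):.2f}")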

9. Badges
The Dementia MOOC had six independent, self-contained sections, each with its own badge for completion and each of which can be taken in any order, with an overall badge for completing the whole course. These partial rewards for partial completion proved valuable, and they move us away from the idea that certificates of completion are the way we should judge MOOC participation. In the Dementia MOOC, 1,201 learners were awarded badges, against 527 completion certificates.
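A minimal Python sketch of the badge structure described above: a badge per completed module, taken in any order, plus an overall badge only when all six are done. The module names here are placeholders, not the actual section titles.

# Sketch of the badge model described above: six self-contained modules,
# a badge for each completed module, plus an overall completion badge.
MODULES = {f"module_{i}" for i in range(1, 7)}  # placeholder names

def badges_awarded(completed_modules: set[str]) -> list[str]:
    badges = [f"{m} badge" for m in sorted(completed_modules & MODULES)]
    if MODULES <= completed_modules:      # all six done, in any order
        badges.append("overall completion badge")
    return badges

print(badges_awarded({"module_3", "module_5"}))   # partial reward
print(badges_awarded(MODULES))                    # full completion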

10. Demand driven
MOOCs are made for all sorts of reasons – marketing, grant applications, even whim – and this is supply-led. Yet the MOOC market has changed dramatically, away from representing the typical course offerings of Universities towards more vocational subjects. This is a good thing, as the providers are quite simply reacting to demand. Before making your MOOC, do some market research, estimate the size of your addressable audience and tweak your offering towards that audience. This is likely to result in a higher number of participants, as well as higher stickiness.

11. Marketing
If there's one thing that will get you more participants and more stickiness, it's good marketing. Yet academic institutions are often short of these skills, or see marketing as 'trade'. This is a big mistake. Marketing matters; it is a skill and it needs a budget.

Conclusion
The researchers at Derby used a very interesting phrase in their conclusion, that “a certain amount of chaos may have to be embraced”. This is right. Too many MOOCs are over-structured, too linear and too like traditional University courses. They need to loosen up and deliver what these newer diverse audiences want. Of course, this also means being careful about what is being achieved here. Quality within these looser structures and in each of these individual modules must be maintained.

Bibliography
Clark, D. (2013). MOOCs: Adoption curve explains a lot. http://donaldclarkplanb.blogspot.co.uk/2013/12/moocs-adoption-curve-explains-lot.html
Hayes, S. (2015). MOOCs and Quality: A review of the recent literature. Retrieved 5 October 2015, from http://www.qaa.ac.uk/en/Publications/Documents/MOOCs-and- Quality-Literature-Review-15.pdf
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J. & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses. Retrieved 22 September 2015, from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Leach, M., Hadi, S. & Bostock, A. (2016). Supporting Diverse Learner Goals through Modular Design and Micro-Learning. Presentation at European MOOCs Stakeholder Summit.
Hadi, S. & Gagen, P. New Model for Measuring MOOCs Completion Rates. Presentation at European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Saturday, February 27, 2016

MOOCs: course completion is the wrong measure

In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This showed that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than that of the vast majority of MOOCs. They bucked the trend.
Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions – ‘University dropouts’ – not out of open, free, online experiences. I’m just amazed that 40 million have dropped in.
So let’s go back to that ‘Dementia’ MOOC, where 26.29% of enrolees never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market: for example, the majority of those who buy language courses never complete more than a small portion of the course. In MOOCs, many simply have a look, often out of curiosity; others want a brief taster, an introduction to the subject or some familiarity with the topic; and further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).
Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. In open, free and online courses there are no such commitments, risks and investments. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.
Audience age
Here are two sets of data from the Derby Dementia MOOC and the six Coursera MOOCs delivered by the University of Edinburgh. It is clear that MOOCs attract a much older audience than the average campus student.



This is important, as older learners are far less likely to want pieces of paper and certification or bother that much about not completing the full diet of content.
Audience mix
We are also seeing a drift away from the initial graduate-only audience. There is still a skew towards graduates, but this is because they are the early adopters and almost the only group who know that MOOCs exist. Only now do we see some serious marketing, targeted at different audiences, and this is starting to have an effect. Indeed, the majority of participants (55%) in the Dementia MOOC are not University graduates.
Audience motivation
Now here’s an interesting thing, a point often forgotten in MOOCs – learner motivation.
This compares well with the Edinburgh data.
The bottom line is that people who do MOOCs really want to learn. They are not largely motivated by pieces of paper or even completion.
Conclusion
As MOOC audiences differ from traditional HE students, and as those audiences change in terms of age, background and motivation, MOOCs will have to respond to these new audiences rather than mimic University semester courses. The team at Derby have already suggested an alternative set of metrics for measuring the success of a MOOC. They’re right. It’s time to move beyond the boring, repetitive questions we hear every time the word MOOC is mentioned – dropout, graduates only…
Bibliography

Hadi, S. & Gagen, P. New Model for Measuring MOOCs Completion Rates. Presentation at European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Friday, February 26, 2016

AI maths app that students love and teachers hate

We’ve all been stuck on a maths problem. Looking it up in a textbook hardly ever helps, as the worked examples are rarely close to what you need and the explanations are clumsy and generic. What you really need is help on THAT specific problem. This is personalised learning, and an app called Photomath does it elegantly using AI. Simply point your mobile camera at the problem – you don’t even have to click. It scans the problem and comes up with the answer and a breakdown of the steps you need to take to get there. It can’t do everything, such as word problems, but it’s OK for school-level maths.
Getting there
The app is quite simple at the moment and only solves basic maths problems. It has been criticised for being basic, but it’s at this level that the vast majority of learners fail. It’s getting there, and I don’t want to get hung up on whether Photomath is as good as it claims, or better than other maths apps. For me, it’s a great start and a hint of great things to come. Wolfram Alpha is, in fact, a lot more sophisticated, but it is the convenience of the mobile camera functionality that makes Photomath special.
The problem that is maths
Maths is a subject full of small pitfalls for learners, many of which switch learners off, inducing a mindset of ‘I’m not good at maths’. In my experience, this can be overcome by good teaching/tutoring and detailed, deliberate feedback, something that is difficult in a class of 30-plus students. This subject, above all others, needs detailed feedback, as little things lead to catastrophic failure. This approach, therefore, where the detail of a maths problem is unpacked, is exactly what maths teaching needs. It is a glimpse of a future where performance support, or teacher-like help, is available on mobile devices. AI will do what good teachers do: walk you through specific problems until you can do it for yourself.
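To make ‘unpacking the detail’ concrete, here is a toy Python sketch, far simpler than anything Photomath does, that solves a linear equation of the form ax + b = c and returns the worked steps a learner would see:

from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c and return the worked steps a learner would see."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    if a == 0:
        raise ValueError("not a linear equation in x")
    return [f"Start with: {a}x + {b} = {c}",
            f"Subtract {b} from both sides: {a}x = {c - b}",
            f"Divide both sides by {a}: x = {(c - b) / a}"]

for step in solve_linear(3, 4, 19):   # 3x + 4 = 19
    print(step)
# Start with: 3x + 4 = 19
# Subtract 4 from both sides: 3x = 15
# Divide both sides by 3: x = 5

The value is not the final answer but the visible intermediate steps, which is where the feedback lives.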
Students love it, teachers hate it
Predictably, students love this app, while teachers hate it. This is a predictable phenomenon and neither side is to blame. It happened with Google, Wikipedia, MOOCs… and it’s the same argument we heard when calculators were invented. The teachers’ point is that kids use it to cheat on homework. That depends on whether you see viewing the right answer and the steps in solving an equation as cheating. In my opinion, it simply exposes bad homework. Setting a series of dry problems, without adequate support, is exactly what makes people hate maths, as help is so hard to find when you’re sitting there, on your own, struggling to solve problems. Setting problems is fine for those who are confident and competent, but it often disheartens those who are not.
Sure, the app will give you the answer, but it also gives you a breakdown of the steps, and that’s exactly where the real learning takes place. What we need is a rethink of what learning, practice and homework mean to the learner in maths. The app is simple, but we now see technology that is, in effect, doing what a good teacher does – illustrating, step by step, how to solve maths problems.
Homework
Homework causes no end of angst for teachers, parents and students. Some teachers, based on cherry-picked evidence or hearsay, don’t provide any homework at all. Many set banal and ill-designed tasks that become no more than a chore to be endured by the student. I personally think the word ‘homework’ is odd: why use the language of the workplace, ‘work’, to describe autonomous learning? In any case, we must move beyond ‘design a poster’ tasks and get-the-right-answer tests, towards encouraging autonomy in the learner. This means providing tasks where adequate support is available to help the learner understand the process or task at hand.
AI in learning
AI is entering the learning arena at five different taxonomic levels: tech, assistive, analytic, hybrid and automatic. This is a glimpse of what the future will bring, as intelligent AI-driven software delivers first assistance to students, then teacher-level functionality and eventually the equivalent of the autonomous, self-driving car. It’s early days, but I’ve been involved in projects that are seeing dramatic improvements in attainment, dropout and motivation using AI technology in learning.
WildFire

I’ve been using AI in a tool called WildFire that uses semantic AI to create online learning content from ANY document, PowerPoint or video. No lead time, sophisticated active learning and a massive reduction in cost. We’re starting to see a new generation of tools that use smart AI techniques to deliver personalised learning. AI is fast becoming the most important development in the advancement of teaching we’ve seen to date.
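WildFire’s internals aren’t public, so purely as a hedged sketch of the general idea of generating active-recall items from any document (not WildFire’s actual method), here is a naive Python version that blanks out one salient-looking term per sentence:

import re

# Illustrative sketch of turning a document into open-input recall items.
# This is NOT WildFire's actual algorithm; term selection here is naive
# (longer words stand in for proper semantic analysis).
def make_cloze_items(text: str, min_length: int = 8) -> list[tuple[str, str]]:
    items = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z][A-Za-z-]+", sentence)
        targets = [w for w in words if len(w) >= min_length]
        for target in targets[:1]:                    # one blank per sentence
            blanked = sentence.replace(target, "_" * len(target), 1)
            items.append((blanked, target))
    return items

sample = "Dementia is a progressive condition. Early diagnosis improves support planning."
for question, answer in make_cloze_items(sample):
    print(question, "->", answer)

A real system would use semantic analysis rather than word length to pick the terms, but the shape of the output (open-input items generated straight from source text) is the idea.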

Saturday, February 20, 2016

World is 3D, learning should be 3D: 3 BIG reasons why VR should be used in learning

I’m on a panel at NAACE on 23 March, where we’ll explore the potential use of VR in education. I have argued that the potential is great for many reasons, but here are my top three:
1. World is 3D, not 2D
This brilliant cartoon from the New Yorker says it all. So much learning is 1D – one-dimensional written text. I don’t mind this for subjects that are solely concerned with texts, but when text is used to deliver teaching and learning that it handles poorly, things start to fall apart, or get very, very long-winded.
Sure, we’ve gone up a gear with a richer mix of media. We have paper, radio, film, TV, the web – all 2D. We’ve had books, blackboards, whiteboards, computer screens, tablets and mobiles – all 2D. But VR is a new medium that allows you to see the world in 3D. All of what we do in life is in 3D. We live, enjoy and work in 3D environments, doing 3D things with 3D people. That’s why 3D learning matters.
2. VR is a medium not a gadget
As Chris Milk said, "In all other media, your mind interprets that medium. In VR your consciousness is the medium." It's that profound a shift. This is the first major new medium to emerge since the web, and it may be the last, the final medium, as it can be anything and everything. It is not a toy or gadget but a way of re-presenting the world of learning that is fundamentally different from paper, audio or screens. It can represent ANY world for learning in full 3D, worlds you can look and move around in. More than this, you will believe you are there. Your mind will scream ‘I’m here’ – and you will have no choice. Any world can be presented in 3D, but think on this – your imagination is the only limit. I can take you into space to learn Newton’s three laws (I have), under the ocean to teach biology (I have), to the molecular level in chemistry, to any habitat for biology, to any scale or lab for physics, to any place for geography, back in time for history, immersion for languages. We can also let you hone your sports skills and all practical, vocational subjects, training of soft skills, design skills, engineering skills and other real-world behaviours. I can even take you to impossible worlds where you can do impossible things.
3. Learning theory
The big basic principles in the psychology of learning are served well in VR, the need for:
Attention
Emotion
Engagement
Doing
Transfer
Recall

These are all provided by VR. I’ve never seen so many people so entranced by a piece of new technology. You will pay full attention in a way that you’ve never experienced before, be fully and emotionally engaged, and do things as if they were real. Because your consciousness is so immersed in the learning task, transfer will be high, along with retention. What’s not to like?