Sunday, February 28, 2016

Completion: a category mistake in MOOCs



In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This shows that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than in the vast majority of MOOCs. They bucked the trend.
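To make the arithmetic concrete, here is a minimal sketch using the figures reported above (the ~35% completion rate is approximate, so the per-group counts are estimates, not survey data):

```python
# Intention survey from the Derby 'Dementia' MOOC (figures as reported above).
surveyed = 775
intended_to_complete = 477
did_not_intend = 258
completion_rate = 0.35  # roughly 35% of BOTH intention groups completed

# Estimated completers in each group, assuming the same ~35% rate applies.
est_completers_intenders = round(intended_to_complete * completion_rate)
est_completers_non_intenders = round(did_not_intend * completion_rate)

print(est_completers_intenders, est_completers_non_intenders)  # → 167 90
```

The striking point the sketch makes explicit: roughly 90 people who declared no intention to complete completed anyway.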

Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions (‘University dropouts’), not out of open, free, online experiences. I’m just amazed that many millions have dropped in.

So let’s go back to that ‘Dementia’ MOOC, where 26.29% of those who enrolled never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market. For example, the majority of those who buy language courses never complete much more than a small portion of the course. In MOOCs, many simply have a look, often just curious; others want a brief taster, an introduction to the subject, or just some familiarity with the topic. Further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).

Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. Caplan rightly says that ‘signalling’ that you attended a branded institution explains the difference. In open, free and online courses there are no such commitments, risks or investments. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.

Increasing persistence
Many of the arguments around course completion in MOOCs are, I have argued, category mistakes, based on a false comparison with traditional, semester-long HE courses. We should not, of course, allow these arguments to distract us from making MOOCs better, in the sense of having more sticking power for participants. This is where things get interesting, as there have been some features of recent MOOCs that have caught my eye as providing higher levels of persistence among learners. The University of Derby ‘Dementia’ MOOC, full title ‘Bridging the Dementia Divide: Supporting People Living with Dementia’, is a case in point.

1. Audience sensitive
MOOC learners are not undergraduates who expect a diet of lectures delivered synchronously over a semester. They are not at college and do not want to conform to institutional structures and timetables. It is unfortunate that many MOOC designers treat MOOC learners as if they were physically (and psychologically) at a University – they are not. They have jobs, kids, lives, things to do. MOOC designers have to get out of their institutional thinking and realize that their audience often has a different set of intentions and needs. The new MOOCs need to be sensitive to learner needs.

2. Make all material available
To be sensitive to a variety of learners (see why course completion is a wrong-headed measure), the solution is to provide flexible approaches to learning within a MOOC, so that different learners can take different routes and approaches. Some may want to be part of a ‘cohort’ of learners and move through the course with a diet of synchronous events but many MOOC learners are far more likely to be driven by interest than paper qualifications, so make the learning accessible from the start. Having materials available from day one allows learners to start later than others, proceed at their own rate and, importantly, catch up when they choose. This is in line with real learners in the real world and not institutional learning.

3. Modular
The idea of a strictly linear diet of lectures and learning should also be eschewed, as different learners want different portions of the learning at different times. A more modular approach, where modules are self-contained and can be taken in any order, is one tactic. Adaptive MOOCs, using AI software that guides learners through content on the basis of their needs, are another. In the Dementia MOOC, 6.16% of learners didn’t start with Module 1.
This tracked data shows that some completed the whole course in one day, others did a couple of modules in one day, many did the modules in a different order, and some went through in a linear and measured fashion. Some even went backwards. The lesson here is that the course needs to be designed to cope with these different approaches to learning, in terms of both order and time. This is better represented in the state diagram, showing different strokes for different folks.
Each circle is a module, labelled with its number of completions. Design for flexibility.
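The kind of analysis behind that tracked data can be sketched simply. Given an event log of module completions in time order (the log below is invented for illustration, not the Derby data), you can reconstruct each learner’s path and count non-standard starts and distinct orderings:

```python
from collections import Counter

# Hypothetical tracked data: (learner_id, module) completion events in time order.
events = [
    ("a", 1), ("a", 2), ("a", 3), ("a", 4), ("a", 5), ("a", 6),  # linear
    ("b", 3), ("b", 1), ("b", 2),                                # different order, partial
    ("c", 1), ("c", 2), ("c", 1),                                # went backwards
]

# Reconstruct each learner's path through the modules.
paths = {}
for learner, module in events:
    paths.setdefault(learner, []).append(module)

# How many learners started somewhere other than Module 1?
non_standard_starts = sum(1 for p in paths.values() if p[0] != 1)

# Tally the distinct orderings actually taken.
orderings = Counter(tuple(p) for p in paths.values())

print(non_standard_starts, len(orderings))  # → 1 3
```

Even this toy log yields three different routes through three learners, which is exactly the ‘different strokes’ pattern the state diagram shows.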

4. Shorter
MOOC learners don’t need the 10-week semester structure. Some want much shorter and faster experiences, others medium-length, and some longer. Higher Education is based on an agricultural calendar, with set semesters that fit harvest and holiday patterns. The rest of the world does not work to this pre-industrial timetable. In the Derby Dementia MOOC, there is considerable variability in when people did their learning. Many took less than the six weeks, but that did not mean they spent less time on the course. Many preferred concentrated bouts of longer learning to the regular once-per-week model that many MOOCs recommend or mandate. Others did the week-by-week learning. We have to understand that learning for MOOC audiences is taken erratically and not always in line with the campus model. We need to design for this.

5. Structured and unstructured
I personally find the drip-feed, synchronous model of moving through the course with a cohort rather annoying and condescending. The evidence in the Dementia MOOC suggests that there was more learner activity in unsupported periods than in supported periods. This shows a considerable thirst for doing things at your own pace and convenience, rather than at a pace mandated by synchronous, supported courses. Nevertheless, this is not an argument for a wholly unstructured strategy. This MOOC attracted a diverse set of learners, and having both structured and unstructured approaches brought the entire range of learners along.
You can see that the learners who took the structured approach (a live Monday announcement by the lead academic, a Friday wrap-up with a live webinar, a help forum and an email query service) formed a sizeable group in any one week. Yet the others, who learnt without support, were also substantial in every week. This dual approach seems ideal, appealing to an entire range of learners with different needs and motivations.

6. Social not necessary
Many have little interest in social chat and being part of a consistent group or cohort. One of the great MOOC myths is that social participation is a necessary condition for learning and/or success. Far too much is made of ‘chat’ in MOOCs, in terms of both need and quality. I’m not arguing for no social components in MOOCs, only claiming that the evidence shows they are less important than the ‘social constructivist’ orthodoxy in design would suggest. In essence, I’m saying social interaction is desirable but not essential. To rely on it as the essential pedagogic technique is, in my opinion, a mistake, and imposes on learners an ideology they do not want.

7. Adult content
In line with the idea of being sensitive to the needs of learners, I’ve found too many rather earnest talking heads from academics, especially the cosy chats, more suitable to the 18-year-old undergraduate than the adult learner. You need to think about voice and tone, and avoid second-rate PhD research and an overly Departmental approach to the content. I’m less interested in what your Department is doing and far more interested in the important developments and findings in your field at an international level. MOOC learners have not chosen to come to your University; they’ve chosen to study a topic. We have to let up on being too parochial in content, tone and approach.

8. Content as a driver
In another interesting study of MOOCs, the researchers found that stickiness was highly correlated with the quality of the content. This contradicts those who believe that the primary driver in MOOCs is social. They found that learners dropped out if they didn’t find the content appropriate or of the right quality. Good content turns out to be a primary driver for perseverance and completion, as their stats show.

9. Badges
The Dementia MOOC had six independent, self-contained sections, each of which can be taken in any order and has its own badge, with an overall badge for completing the whole course. These partial rewards for partial completion proved valuable, and move us away from the idea that certificates of completion are the way we should judge MOOC participation. In the Dementia MOOC, 1,201 learners were rewarded with badges, against 527 completion certificates.

10. Demand driven
MOOCs are made for all sorts of reasons (marketing, grant applications, even whim); this is supply-led. Yet the MOOC market has changed dramatically, away from representing the typical course offerings of Universities towards more vocational subjects. This is a good thing, as the providers are quite simply reacting to demand. Before making your MOOC, do some market research, estimate the size of your addressable audience and tweak your marketing towards that audience. This is likely to result in a higher number of participants, as well as higher stickiness.

11. Marketing
If there's one thing that will get you more participants and more stickiness, it's good marketing. Yet academic institutions are often short of these skills or see marketing as 'trade'. This is a big mistake. Marketing matters; it is a skill and needs a budget.

Conclusion
The researchers at Derby used a very interesting phrase in their conclusion: that “a certain amount of chaos may have to be embraced”. This is right. Too many MOOCs are over-structured, too linear and too like traditional University courses. They need to loosen up and deliver what these newer, diverse audiences want. Of course, this also means being careful about what is being achieved. Quality within these looser structures, and in each of the individual modules, must be maintained.

Bibliography
Clark, D. (2013). MOOCs: Adoption curve explains a lot. http://donaldclarkplanb.blogspot.co.uk/2013/12/moocs-adoption-curve-explains-lot.html
Hayes, S. (2015). MOOCs and Quality: A review of the recent literature. Retrieved 5 October 2015, from http://www.qaa.ac.uk/en/Publications/Documents/MOOCs-and-Quality-Literature-Review-15.pdf
Ho, A. D., Reich, J., Nesterko, S., Seaton, D. T., Mullaney, T., Waldo, J. & Chuang, I. (2014). HarvardX and MITx: The first year of open online courses. Retrieved 22 September 2015, from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2381263
Leach, M., Hadi, S. & Bostock, A. (2016). Supporting Diverse Learner Goals through Modular Design and Micro-Learning. Presentation at European MOOCs Stakeholder Summit.
Hadi, S. & Gagen, P. New model for measuring MOOCs completion rates. Presentation at European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Saturday, February 27, 2016

MOOCs: course completion is wrong measure

In a fascinating survey taken at the start of the University of Derby’s ‘Dementia’ MOOC, run on Canvas, 775 learners were asked whether they expected to fully engage with the course: 477 said yes, but 258 stated that they did NOT INTEND TO COMPLETE. This shows that people come to MOOCs with different intentions. In fact, around 35% of both groups completed, a much higher level of completion than in the vast majority of MOOCs. They bucked the trend.
Now much is made of dropout rates in MOOCs, yet the debate is usually flawed. It is a category mistake to describe people who stop at some point in a MOOC as ‘dropouts’. This is the language of institutions. People drop out of institutions (‘University dropouts’), not out of open, free, online experiences. I’m just amazed that 40 million have dropped in.
So let’s go back to that ‘Dementia’ MOOC, where 26.29% of enrolees never actually did anything in the course. These are the window-shoppers and false starters. False starters are common in the consumer learning market. For example, the majority of those who buy language courses never complete much more than a small portion of the course. In MOOCs, many simply have a look, often just curious; others want a brief taster, an introduction to the subject, or just some familiarity with the topic. Further in, many find the level inappropriate or, because they are NOT 18-year-old undergraduates, find that life (job, kids etc.) makes them too busy to continue. For these reasons, many, myself included, have long argued that course completion is NOT the way to judge a MOOC (Clark, 2013; Ho et al., 2014; Hayes, 2015).
Course completion may make sense when you have paid up front for your University course and made a huge investment in terms of money, effort, moving to a new location and so on. In open, free and online courses there are no such commitments, risks or investments. The team at Derby argue for a different approach to measuring the impact of MOOCs, based not on completion but on meaningful learning. This recognises that a diverse audience wants, and gets, different things from a MOOC. MOOCs are not single long-haul flights; they are more like train journeys, where some people want to get to the end of the line but most get on and off along the way.
Audience age
Here are two sets of data from the Derby Dementia MOOC and the six Coursera MOOCs delivered by the University of Edinburgh. It is clear that MOOCs attract a much older audience than the average campus student.



This is important, as older learners are far less likely to want pieces of paper and certification or bother that much about not completing the full diet of content.
Audience mix
We are also seeing a drift away from the initial graduates-only audience. There is still a skew towards graduates, but this is because they are the early adopters and almost the only group who know that MOOCs exist. Only now do we see some serious marketing, targeted at different audiences, and this is starting to have an effect. Indeed, the majority of participants (55%) in the Dementia MOOC are not University graduates.
Audience motivation
Now here’s an interesting thing, a point often forgotten in MOOCs: learner motivation.
This compares well with the Edinburgh data.
The bottom line is that people who do MOOCs really want to learn. They are not largely motivated by pieces of paper or even completion.
Conclusion
As MOOC audiences differ from traditional HE students, and as those audiences change in terms of age, background and motivation, MOOCs will have to respond to these new audiences and not mimic University semester courses. The team at Derby have already suggested an alternative set of metrics for measuring the success of a MOOC. They’re right. It’s time to move beyond the boring, repetitive questions we hear every time the word MOOC is mentioned: dropout, graduates only…
Bibliography

Hadi, S. & Gagen, P. New model for measuring MOOCs completion rates. Presentation at European MOOCs Stakeholder Summit.
You can enrol for the University of Derby 'Dementia' MOOC here.
And more MOOC stuff here.

Friday, February 26, 2016

AI maths app that students love and teachers hate

We’ve all been stuck on a maths problem. Looking it up in a textbook hardly ever helps, as the worked examples are rarely close to what you need and the explanations are clumsy and generic. What you really need is help on THAT specific problem. This is personalised learning, and an app called Photomath does it elegantly using AI. Simply point your mobile camera at the problem. You don’t even have to click: it simply scans and comes up with the answer and a breakdown of the steps you need to take to get to it. It can’t do everything, such as word problems, but it’s OK for school-level maths.
Getting there
The app is quite simple at the moment and only solves basic maths problems. It has been criticised for being basic, but it’s at this level that the vast majority of learners fail. It’s getting there, and I don’t want to get hung up on whether Photomath is as good as it says it is, or better than other maths apps. For me, it’s a great start and a hint of great things to come. Wolfram Alpha is, in fact, a lot more sophisticated, but it is the convenience of the mobile camera that makes Photomath special.
The problem that is maths
Maths is a subject full of small pitfalls, many of which switch off learners, inducing a mindset of ‘I’m not good at maths’. In my experience, this can be overcome by good teaching/tutoring and detailed, deliberate feedback, something that is difficult in a class of 30-plus students. This subject, above all others, needs detailed feedback, as little things lead to catastrophic failure. This approach, where the detail of a maths problem is unpacked, is therefore exactly what maths teaching needs. It is a glimpse of a future where performance support, or teacher-like help, is available on mobile devices. AI will do what good teachers do: walk you through specific problems until you can do it for yourself.
Students love it, teachers hate it
Predictably, students love this app, while teachers hate it. This is a familiar phenomenon, and neither side is to blame. It happened with Google, Wikipedia, MOOCs… and it’s the same argument we heard when calculators were invented. The teachers’ point is that kids use it to cheat on homework. That depends on whether you see viewing the right answer and the steps in solving an equation as cheating. In my opinion, it simply exposes bad homework. Simply setting a series of dry problems, without adequate support, is exactly what makes people hate maths, as help is so hard to find when you’re sitting there, on your own, struggling. Setting problems is fine for those who are confident and competent, but it often disheartens those who are not.
Sure, the app will give you the answer, but it also gives you a breakdown of the steps. That’s exactly where the real learning takes place. What we need is a rethink of what learning, practice and homework mean to the learner in maths. The app is simple, but we now see technology that is, in effect, doing what a good teacher does: illustrating, step by step, how to solve maths problems.
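The ‘breakdown of steps’ idea is easy to sketch. Here is a minimal worked-steps solver for equations of the form ax + b = c; this is an illustration of the principle, not Photomath’s actual method:

```python
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c, returning the answer plus the worked steps as strings."""
    steps = [f"{a}x + {b} = {c}"]
    # Step 1: subtract b from both sides.
    rhs = Fraction(c) - Fraction(b)
    steps.append(f"{a}x = {rhs}  (subtract {b} from both sides)")
    # Step 2: divide both sides by a.
    x = rhs / Fraction(a)
    steps.append(f"x = {x}  (divide both sides by {a})")
    return x, steps

x, steps = solve_linear(3, 4, 19)
for s in steps:
    print(s)
# x is 5
```

The value to the learner is the `steps` list, not `x`; showing the working is what turns an answer machine into a teaching aid.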
Homework
Homework causes no end of angst for teachers, parents and students. Some teachers, based on cherry-picked evidence or hearsay, don’t provide any homework at all. Many set banal, ill-designed tasks that become no more than a chore to be endured by the student. I personally think the word ‘homework’ is odd. Why use the language of the workplace, ‘work’, to describe autonomous learning? In any case, we must move beyond ‘design a poster’ tasks and get-the-right-answer tests, towards encouraging autonomy in the learner. This means providing tasks where adequate support is available to help the learner understand the process or task at hand.
AI in learning
AI is entering the learning arena at five different taxonomic levels: tech, assistive, analytic, hybrid and automatic. This is a glimpse of what the future will bring, as intelligent AI-driven software delivers, initially, assistance to students, then teacher-level functionality and eventually the equivalent of the autonomous, self-driving car. It’s early days, but I’ve been involved in projects that are seeing dramatic improvements in attainment, dropout and motivation using AI technology in learning.
WildFire

I’ve been using AI in a tool called WildFire that uses semantic AI to create online learning content from ANY document, PowerPoint or video. No lead time, sophisticated active learning and a massive reduction in cost. We’re starting to see a new generation of tools that use smart AI techniques to deliver personalised learning. AI is fast becoming the most important development in the advancement of teaching we’ve seen to date.

Friday, February 19, 2016

10 powerful results from Adaptive (AI) learning trial at ASU

AI in general, and adaptive learning systems in particular, will have an enormous long-term effect on teaching, learner attainment and student dropout. This was confirmed by the results from courses run at Arizona State University in Fall 2015.
One course, Biology 100, delivered as blended learning, was examined in detail. The students did the adaptive work on the CogBooks platform then brought that knowledge to class, where group work and teaching took place – a flipped classroom model. This data was presented at the Educause Learning Initiative in San Antonio in February and is impressive.
Aims
The aim of this technology enhanced teaching system was to:
increase attainment
reduce in dropout rates
maintain student motivation
increase teacher effectiveness
It is not easy to juggle all four at the same time, but ASU want these undergraduate courses to be a success on all four fronts, as they are seen as the foundation for sustainable progress by students as they move through a full degree course.
1. Higher attainment
A dumb rich kid is more likely to graduate from college than a smart poor one. These increases in attainment are therefore hugely significant, especially for students from low-income backgrounds in high-enrolment courses. Many interventions in education show razor-thin improvements. These are significant, not just in overall attainment rates but, just as importantly, in the way this squeezes dropout rates. It’s a double dividend.
2. Lower dropout
A key indicator is the immediate impact on dropout. Dropout can be catastrophic for students and, as funding follows students, for the institution. Between 41% and 45% of those who enrol in US colleges drop out. Given the $1.3 trillion student debt problem, and the fact that these students drop out but still carry the burden of that debt, this is a catastrophic level of failure. In the UK it is 16%. As we can see, increase overall attainment and you squeeze dropout and failure. Too many teachers and institutions are coasting with predictable dropout and failure rates. This can change. The fall in dropout rate for the most experienced instructor was also greater than for other instructors. In fact, the fall was dramatic.
3. Experienced instructor effect
An interesting effect emerged from the data: both attainment gains and dropout reductions were greater with the most experienced instructor. Most instructors take two years until their class grades rise to a stable level. In this trial, the most experienced instructor achieved the greatest attainment rise (13%), as well as the greatest fall in dropout rate (18%).
4. Usability
Adaptive learning systems do not follow the usual linear path. This often makes the adaptive interface look different and navigation difficult. The danger is that students don’t know what to do next, or feel lost. In this case, ASU saw good student acceptance across the board.
5. Creating content
One of the difficulties in adaptive, AI-driven systems is the creation of usable content; by content, I mean content, structures, assessment items and so on. CogBooks has created a suite of tools that allow instructors to create a network of content, working back from objectives. Automatic help with layout and conversion of content is also provided. Once done, this creates a complex network of learning content that students vector through, each student taking a different path depending on their ongoing performance. The system is like a satnav, always trying to get students to their destination, even when they go off course.
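The ‘satnav’ behaviour can be sketched as a traversal of a prerequisite graph: pick the next node the learner is ready for, and re-route back through a prerequisite when performance dips. This is a toy illustration with an invented content network, not CogBooks’ actual algorithm:

```python
# Toy content network: node -> prerequisites (names invented for illustration).
prereqs = {
    "intro": [],
    "cells": ["intro"],
    "genetics": ["cells"],
    "evolution": ["cells", "genetics"],
}

def next_node(scores, threshold=0.7):
    """Pick the next unmastered node whose prerequisites are mastered.

    scores maps node -> latest assessment score (0..1); missing = not attempted.
    Like a satnav, an unmastered prerequisite re-routes the learner back to it.
    """
    for node, reqs in prereqs.items():
        if scores.get(node, 0) >= threshold:
            continue  # already mastered, move on
        weak = [r for r in reqs if scores.get(r, 0) < threshold]
        if weak:
            return weak[0]  # re-route to the weakest prerequisite first
        return node
    return None  # everything mastered: course complete

print(next_node({"intro": 0.9, "cells": 0.5, "genetics": 0.8}))  # → cells
```

Note how two students with different score profiles get different next steps from the same network, which is the essence of each student ‘taking a different path’.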
6. Teacher dashboards
Beyond these results lies something even more promising. The CogBooks system throws off detailed and useful data on every student, as well as analyses of that data. Different dashboards give unprecedented, real-time insight into student performance, allowing the instructor to help those in need. The promise here is of continuous improvement, badly needed in education. We could be looking at an approach that improves not only the performance of teachers but of the system itself, the consequence being ongoing improvement in attainment, dropout and motivation.
7. Automatic course improvement
Adaptive systems, such as Cogbooks, take an AI approach, where the system uses its own data to automatically readjust the course to make it better. Poor content, badly designed questions and so on, are identified by the system itself and automatically adjusted. So, as the courses get better, as they will, the student results are likely to get better.
8. Useful across the curriculum
By way of contrast, ASU is also running a US History course, very different from Biology. Similar results are being reported. The CogBooks platform is content agnostic and has been designed to run any course. Evidence has already emerged that this approach works in both STEM and humanities courses.
9. Personalisation works
Underlying this approach is the idea that all learners are different, and that one-size-fits-all, largely linear courses, delivered largely by lectures, do not meet this need. It is precisely this dimension, the real-time adjustment of the learning to the needs of the individual, that produces the results, along with the increase in the teacher’s ability to know the class and adjust their teaching to individual student needs through real-time data.
10. Students want more
Over 80% of students, on this their first experience of an adaptive course, said they wanted to use this approach in other modules and courses. This is heartening, as without their acceptance it is difficult to see this approach working well.
Conclusion
I have described the use of AI in learning in terms of a 5-Level taxonomy. This Level 4 application (hybrid of teacher plus adaptive system) assists instructors to increase attainment and combat dropout. So far, so good. If we can replicate this overall increase in attainment across all courses and the system as a whole, the gains are enormous. The immediate promise is one of blended learning, using adaptive systems to get immediate results. The future promise is of autonomous systems, even adaptive driven MOOCs, that deliver massive amounts of high quality learning at a minute cost per learner.

Wednesday, February 17, 2016

10 ways AI can go wrong: artificial intelligence v artificial stupidity

For some in the learning game, the mere mention of the word ‘algorithm’ makes them feel as if smart aliens have landed and we must exterminate them before we become enslaved to prescribed brain washing. They’re the work of the devil, introducing autonomous, flawed and biased bots into the world of learning. They want to keep them out at all costs. For others they are the saviour that will accelerate the longed-for personalised learning, eliminate the harmful effects of human bias and make learning much more open, efficient, personal and productive. As usual, the truth is that there’s a bit of both.
Taxonomy of AI in learning
The confusion is caused, I think, by not really understanding the full range, or taxonomy, of algorithms and AI used in learning. To that end, I’ve come up with a 5-level taxonomy of AI in learning (see detail). The important thing is not to confuse or equate AI with autonomy (or the lack of it); it is easy to slip from one to the other. AI has a spectrum of levels, reflected in this taxonomy, and is many things, using many different types of AI and machine learning.
But there are a number of dangers that lurk in the world of AI and algorithms, so it is important that the debate is open and that we don’t close down opportunities by being naive about what AI is, how it works and what it can do for us.
1. Too deterministic
As individuals, living in the Western world at least, we value individuality: the idea that we have liberties, rights and opportunities. One danger in the algorithmic age is that it may start, without us realizing, to funnel us towards groupthink. We get fed the same newsfeeds, sold the same popular stuff, hear the same music, buy the same books. On the other hand, it may expand our horizons, not limiting us to fixed cultural, commercial and consumerist channels but delivering personalised (individualised) alternatives.
This debate is alive in learning, where adaptive, AI driven platforms can deliver personalized learning but, if we are not careful, some may constrict the nature of learning. We need to be careful that the learner retains the curiosity and critical thinking that is necessary to become an autonomous learner. AI ‘could’ deliver narrow, deterministic, prescribed pathways in learning, not allowing the learner to breath and expand their horizons, and apply critical thought.
This is really a design issue: sometimes a deterministic path is desirable (when diagnosing misconceptions and teaching fixed domains, such as maths and science), and sometimes it isn’t, when a more critical approach is needed. There’s a tendency for both sides to snipe at each other, not realising that learning is a broad spectrum of activities, not one thing.
AI doesn’t always make the learning narrower or more deterministic. Many of these systems still leave lots open, indeed suggest open options, to the learner. You can allow the learner as much freedom as you wish by making the course porous. I’ve done this with WildFire, a semantic tool that automatically creates e-learning content. It deliberately suggests lots of links out to content on the web, based on what the student looks at and their performance. This is the sort of thing they would receive from a teacher or lecturer, but with AI it’s personalized. It tracks what ‘you’ do and predicts what it thinks you may need to know. That ‘may’ word is important. Far from closing you down, it opens up the mind to other sources and possibilities. When I search on Google, it gives me lots of options, not one. That is useful; similarly with books on Amazon and movies on Netflix. Learning is not just leaving people to flounder around on their own without support. It’s about giving learners the teaching and tools to support them in their learning.
2. Losing serendipity
Loss of serendipity is a stronger argument than the one above. The joy of a bookstore is sometimes the chance find: the book you would never have come across except by chance, or through a conversation with someone you’ve just met.
What does Amazon know about you? Not that much, actually, but enough to turn you into a sucker. When you’re clicking around on Amazon, it makes its recommendations. This prediction is based on your past purchasing and browsing behaviour, as well as data aggregated from people similar to you. It knows what you and your type tend to like in terms of genre, subject, even author, as it works with ontologies. Beyond this, it also makes intelligent inferences about your gender, age, marital status and so on. Netflix works in a different way for TV programmes and movies.
These systems also give you loads more choice from the long tail, save you time and so on. Far from destroying serendipity, they often deliver the magic dust that is the accidental piece of research, new author, new movie, book or place that you end up loving.
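The ‘people similar to you’ idea is less mysterious than it sounds. A minimal sketch, with entirely made-up toy data and nothing like Amazon’s actual system, is user-based collaborative filtering: find the most similar other customer and suggest what they bought that you haven’t.

```python
# Minimal sketch of "people similar to you" recommendation (user-based
# collaborative filtering). The purchase data is invented for illustration.

purchases = {
    "alice": {"sci-fi", "history", "cookery"},
    "bob":   {"sci-fi", "history", "travel"},
    "carol": {"romance", "cookery"},
}

def jaccard(a, b):
    """Overlap between two purchase sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

def recommend(user):
    """Suggest items bought by the most similar other user."""
    mine = purchases[user]
    best = max(
        (u for u in purchases if u != user),
        key=lambda u: jaccard(mine, purchases[u]),
    )
    return purchases[best] - mine  # items they have that you don't

print(recommend("alice"))  # bob is most similar -> {'travel'}
```

Notice that the output is, by construction, something you have *not* already got: the mechanism is built to push you towards the novel, which is why these systems often create serendipity rather than kill it.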
3. Making it too easy
“You can’t make learners’ lives too easy” is something I’ve heard a lot, especially in Higher Education. They need to struggle, goes the argument. There is a real piece of psychology that backs this up – Bjork’s concept of ‘desirable difficulty’. However, this is no excuse for leaving students to drown and fail through a dry diet of dull lectures and the occasional essay, returned weeks after it was written. Dropout and failure are not desirable. In fact, AI-driven systems can be finely tuned so that the level of difficulty fits the individual student. You have a far greater chance of delivering desirable difficulty through AI than through one-size-fits-all courses. I’ve seen this happen at ASU, with rises in attainment, lowered dropout and good student motivation on adaptive learning courses.
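Tuning difficulty to the individual is, at its simplest, a feedback loop. The sketch below is a deliberately crude staircase rule (the step size, floor and ceiling are assumptions for illustration; real adaptive systems use far richer learner models):

```python
# Hedged sketch: keep difficulty near the learner's edge of competence.
# A real adaptive engine would use a statistical learner model; this is
# just the simplest possible feedback loop, for illustration.

def next_difficulty(level, correct, step=1, floor=1, ceiling=10):
    """Nudge difficulty up after a correct answer, down after an error,
    clamped to the allowed range."""
    level += step if correct else -step
    return max(floor, min(ceiling, level))

level = 5
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # 5 -> 6 -> 7 -> 6 -> 7
```

The point is that the struggle is calibrated per learner: a strong student is pushed upwards quickly, a struggling one is caught before repeated failure, which a one-size-fits-all course cannot do.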
4. To err is algorithmic
When IBM’s Watson beat the champions at Jeopardy, to a question about what grasshoppers eat, it answered, “Kosher”. That seemed stupid, but it’s the sort of thing algorithmic systems throw out, as they often have no integrated sense of real meaning. Watson may have won Jeopardy, but it was the only winner that didn’t know it had won.
Yet humans also make mistakes, some catastrophic, that machines could have caught, and we usually know when we’ve been thrown an algorithmic outlier online. When I receive a text saying “Taking the kids to see Satan”, it’s Christmas, and I know you mean Santa Claus (this actually happened). In any case, search, translation, speech recognition, image and face recognition just get better and better. The promise is that algorithms will not only get better but learn how to get better. In adaptive systems mistakes do happen, but that’s also true in real life. Courses often have unanswerable questions, wrong answers and straight errors. I have a very long list of errors in subject matter expert material that I’ve had to deal with over the years, some of it terrifying. In any case, learners are web savvy; they know that computers throw up some daft things. They’re not stupid – learners, that is!
5. Overpromising
A danger with reliance on AI and machine learning models is that there are lots of them, and people get attached to the one they know best. This is not always best for the actual output. More specifically, ‘overfitting’ may lead to putting too much confidence in models and data sets that don’t actually bring home the predictive bacon. The good news is that these are problems that can and will be solved. AI and machine learning are making lots of little leaps that all add up to fast, very fast progress. As long as we recognise that these systems can overlearn and fail to produce the results we expect, we’ll get somewhere. To imagine that they’ll solve all teaching and learning problems immediately is hopelessly optimistic. To imagine that they have, and will continue to have, a major role in the world of learning is realistic.
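Overfitting, or ‘overlearning’, is easy to demonstrate. In this deterministic toy example (the data is invented; the ‘memoriser’ stands in for any over-complex model), a model that memorises its training data looks perfect on what it has seen, then does worse than a plain linear fit on fresh data:

```python
# Toy, deterministic illustration of overfitting. The "memoriser" is perfect
# on training data but worse than a simple linear fit on fresh data.

xs = list(range(10))
train_noise = [1 if i % 2 == 0 else -1 for i in xs]
test_noise = [-n for n in train_noise]  # fresh noise on the same inputs

# True relationship underneath the noise: y = 2x.
train = [(x, 2 * x + n) for x, n in zip(xs, train_noise)]
test = [(x, 2 * x + n) for x, n in zip(xs, test_noise)]

def mse(model, data):
    """Mean squared error of a model over (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

table = dict(train)                  # overfitted model: memorise everything
memoriser = lambda x: table[x]

a = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
linear = lambda x: a * x             # simple model: best-fit slope only

print(mse(memoriser, train))  # 0.0 - perfect on what it has seen
print(mse(memoriser, test))   # 4.0 - it learned the noise, not the pattern
print(mse(linear, test))      # about 1.0 - the simpler model generalises
```

This is exactly the trap described above: confidence in a model because it fits the data you have, when what matters is how it predicts data you don’t.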
6. Lack moral compass
It’s true that AI is not guided by human nature and has no moral compass. This could be a problem if it is used to evil ends by humans, or if systems end up causing havoc when they become too autonomous. Obvious examples are fraud, deception, malware and hate. Then again, algorithms don’t have many of the dodgy moral traits that we know, with certainty, most humans carry around in their heads – cognitive biases, racial bias, gender bias, class bias and so on. In fact, there are many things algorithms can do that teachers can’t. A teaching algorithm is unlikely to subtly swing girls away from maths, science and computer studies, whereas we know that humans do. Algorithms may lack morals but, on that score, they are also not immoral.
7. Dehumanise
We could be dehumanised by AI, if we regard what it is to be human, human nature, as inviolable and something that is altogether good and desirable - much of it is not. There is a sense in which machines and AI could eat away at the good stuff, our social relations, economic progress, ecological progress, political progress. But let's not pigeon-hole AI into the box that sees it as mimicking the human brain (and nature). We didn't learn to fly by copying the flapping of a bird's wings, we invented far better technology and we didn't move faster by focusing too much on the legs of a cheetah - we invented the wheel. In some ways AI will enlighten us about our brains and our human nature but the more promising lines of inquiry point towards it doing things differently, and better.
8. Black art
There is a danger that AI is a black box of esoteric maths that will forever remain opaque. Few could read the maths that constitutes modern algorithms and machine learning; it’s a variety of complex techniques readable by only a minority. Nevertheless, this is not as opaque as one imagines. A lot of this code is open source, from Google and others, as is the maths itself, in the AI community. Sure, proprietary systems abound, but it is not all locked up in black boxes. One could argue that most of what we call ‘teaching’ lies in the brains and practices of teachers, lecturers and trainers, totally inaccessible and opaque. Many argue that teaching is precisely this – a practice that can’t be evaluated. That, of course, simply begs the obvious question: ‘what practice?’
9. Private eyes
Algorithms are smart. Smart enough to suss out who you are, where you are and, in many cases, what you are likely to do next. This can be used for some pretty awful commercial practices, especially among the large tech companies. Let’s not imagine for one minute that they now have the ‘do no evil’ heart they may once have worn on their sleeves. They are rapacious in avoiding tax, and that, folks, means they avoid all the things that taxes pay for, like health, education and the poor. But they are merely annoying teenagers compared to fiendish adult governments, who already use these techniques to gather data and spy on their own citizens. In most dictatorships, and in the case of the US, on the citizens of other countries, smart software is working away on everything you do online. We must remain wary of tech at that level, and make sure that we have checks and balances to stop the abuse of the power it will most certainly bring.
In education, the problem can be solved by regulation on how data is managed, stored and used. I’m not sure that there’s much to be gained from spying on essay scores. But the regulation is already happening and we must remain vigilant.
Should an institution provide all of a student’s data if they demand it? In the UK, we have the Data Protection Act, which grants a legal right of access, but while this may be right in principle, exemptions are also possible. If students haven’t asked, you don’t have to provide the data. Exemptions include infringements of IP by students, data to do with a crime or investigation, and third-party data – commonly used in AI systems that aggregate data and use it in delivery. You also have to be careful with access, as students are savvy and may look for access in order to change, say, grades.
Predictive models are interesting. Should we provide those? Note that you only have to provide stored data, not data used on the fly, which is common in AI. Strictly speaking, a prediction is not data but a statistical inference or probability. This is where it gets tricky. It is also impractical to explain predictive analytics to everyone – it’s far too complicated. Institutions do not want to use bad data, so everyone’s on the right side here. It is unlikely that you will be sued, as you have to show monetary damage – that’s difficult – and you’d have to show that the organisation acted unlawfully. Do you have to provide all of the transactional data? In fact, with online AI systems, the data is there and visible. It is far more likely that you’d want the useful, summarised, analysed data, not the raw data itself, which may be almost meaningless. In practice, though, students do not appear to be that interested in this issue. Do what you have to do but be pragmatic.
10. Unemployment
We saw how machines led to workers moving from fields to factories, then, with robots in manufacturing, from factories to services, producing long-term, structural, blue-collar unemployment. We may now be seeing the start of white-collar unemployment, as AI becomes capable of doing what even well-educated graduates could do in the past. This is a complex economic issue but there can be no doubt that it will happen to one degree or another, as it has several times in the past. Each technological revolution tends to bring job losses as well as new opportunities. Let’s not imagine for one moment that the learning game is immune from such economic shifts. We’ve already seen their effect in reducing the number of librarians, trainers and other learning professionals. There is, undoubtedly, more to come.
Conclusion

All technology has its downsides. Cars kill, yet we still drive cars. It’s a calculus where we accept technology when the benefits outweigh the costs. With some technology, the dangers may be too great – nuclear and chemical weapons and so on. But most technology is fairly benign, even massively beneficial. However, as technology has become more virtual, it may be harder to spot, combat and regulate. We already find that the technology is ahead of the sociology, and that older regulators find it hard to understand the pace and consequences, often rushing to throw the baby out with the bathwater, and the bath. But these AI opportunities must be taken at the flood. Like most tides in the affairs of men, they may “lead on to fortune… we must take the current when it serves, or lose our ventures” (Shakespeare).