For some in the learning game, the mere mention of the word 'algorithm' makes them feel as if smart aliens have landed and must be exterminated before we become enslaved to prescribed brainwashing. Algorithms are the work of the devil, introducing autonomous, flawed and biased bots into the world of learning, and must be kept out at all costs. For others they are the saviour that will accelerate the longed-for personalised learning, eliminate the harmful effects of human bias and make learning much more open, efficient, personal and productive. As usual, the truth is that there's a bit of both.
Taxonomy of AI in learning
The confusion is caused, I think, by not really understanding the full range, or taxonomy, of algorithms and AI used in learning. To that end, I've come up with a 5-level taxonomy of AI in learning (see detail). The important thing is not to confuse or equate AI with autonomy (or the lack of it); it is easy to slip from one to the other. AI has a spectrum of levels, reflected in this taxonomy, and is many things, using many different types of AI and machine learning.
But there are a number of dangers that do lurk in the world of AI and algorithms, so it is important that the debate is open and that we don't close down opportunities by being naive about what AI is, how it works and what it can do for us.
1. Too deterministic
As individuals, living in the Western world at least, we value individuality, the idea that we have liberties, rights and opportunities. One danger in the algorithmic age is that it may start, without us realising, to funnel us towards groupthink. We get fed the same newsfeeds, sold the same popular stuff, hear the same music, buy the same books. On the other hand, it may expand our horizons, not limiting us to fixed cultural, commercial and consumerist channels, but delivering personalised (individualised) experiences.
This debate is alive in learning, where adaptive, AI-driven platforms can deliver personalised learning but, if we are not careful, may constrict the nature of learning. We need to be careful that the learner retains the curiosity and critical thinking necessary to become an autonomous learner. AI 'could' deliver narrow, deterministic, prescribed pathways in learning, not allowing the learner to breathe, expand their horizons and apply critical thought.
This is really a design issue: sometimes a deterministic approach is desirable (when diagnosing misconceptions and teaching fixed domains, such as maths and science), sometimes it isn't, when a more critical approach is needed. There's a tendency for both sides to snipe at each other, not realising that learning is a broad spectrum of activities, not one thing.
AI doesn't always make learning narrower or more deterministic. Many of these systems still leave a lot open, indeed suggest open options, to the learner. You can allow the learner as much freedom as you wish by making the course porous. I've done this with WildFire, a semantic tool that automatically creates e-learning content. It deliberately suggests lots of links to the student, based on what they look at and their performance, out to content on the web. This is the sort of thing they are likely to receive from a teacher or lecturer, but with AI it's personalised. It tracks what 'you' do and predicts what it thinks you may need to know. That 'may' word is important. Far from closing you down, it opens up the mind to other sources and possibilities. When I search on Google, it gives me lots of options, not one. That is useful, and similarly with books on Amazon and movies on Netflix. Learning is not just leaving people to flounder around on their own without support. It's about giving learners the teaching and tools to support them in their learning.
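As a rough sketch of the general idea (not WildFire's actual implementation), link suggestion of this kind can be as simple as pulling the most salient terms from whatever content the learner is working on and offering them as optional jumping-off points to the open web. The keyword filtering and the Wikipedia search URL below are illustrative assumptions only.

```python
# Not WildFire's actual code - a minimal sketch of the general idea:
# pull key terms from the content a learner is studying and offer them
# as optional links out to the open web, rather than a fixed pathway.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "that", "it", "for", "into"}

def suggest_links(content, max_links=5):
    """Return a handful of suggested search links for the most frequent
    substantive terms in the learner's current content."""
    words = re.findall(r"[a-zA-Z]{4,}", content.lower())
    terms = [w for w in words if w not in STOPWORDS]
    common = [term for term, _ in Counter(terms).most_common(max_links)]
    return [f"https://en.wikipedia.org/wiki/Special:Search?search={t}" for t in common]

# Example: the learner is reading about photosynthesis
print(suggest_links("Photosynthesis converts light energy into chemical "
                    "energy in chloroplasts, producing glucose and oxygen."))
```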
2. Losing serendipity
Loss of serendipity is a stronger argument than the one above. The joy of a bookstore is sometimes the chance find, the book you would never have come across except by chance, or through a conversation with someone you've just met.
What does Amazon know about you? Not that much actually, but enough to turn you into a sucker. When you're clicking around on Amazon, it makes its recommendations. This prediction is based on your past purchasing and browsing behaviour, as well as data aggregated from people similar to you. It knows what you and your type tend to like in terms of genre, subject, even author, as it works with ontologies. Beyond this, it is also making intelligent inferences about your gender, age, marital status and so on. Netflix works in a different way for TV programmes and movies.
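As a toy illustration only (not Amazon's or Netflix's real systems, which are far more sophisticated), the 'people similar to you' idea can be sketched as scoring items by how often they were bought by users whose purchase history overlaps with yours. All users, items and the simple overlap measure below are made up for the example.

```python
# A toy sketch of the 'people similar to you' idea: score items by how often
# they were bought by users whose purchases overlap with yours.
# All names and purchase histories are invented for illustration.
from collections import Counter

purchases = {
    "you":   {"Dune", "Neuromancer", "Foundation"},
    "userA": {"Dune", "Neuromancer", "Snow Crash"},
    "userB": {"Foundation", "Dune", "Hyperion"},
    "userC": {"Pride and Prejudice", "Emma"},
}

def recommend(target, data, top_n=3):
    """Score items bought by similar users, excluding what the target already owns."""
    mine = data[target]
    scores = Counter()
    for user, items in data.items():
        if user == target:
            continue
        overlap = len(mine & items)      # similarity = number of shared purchases
        if overlap == 0:
            continue                     # ignore users with nothing in common
        for item in items - mine:        # only recommend things not already bought
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("you", purchases))       # e.g. ['Snow Crash', 'Hyperion']
```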
These systems also give you loads more choice from the long tail, save you time and so on. Far from destroying serendipity, they often deliver the magic dust that is the accidental piece of research, new author, new movie, book or place that you end up loving.
3. Make it too easy
"You can't make learners' lives too easy" is something I've heard a lot, especially in Higher Education. They need to struggle, goes the argument. There is a real piece of psychology that backs this up – Bjork's concept of 'desirable difficulties'. However, this is no excuse for leaving students to drown and fail through a dry diet of dull lectures and the occasional essay, returned weeks after it was written. Dropout and failure are not desirable. In fact, AI-driven systems can be finely tuned so that the level of difficulty fits the individual student. You have a far greater chance of delivering desirable difficulty through AI than through one-size-fits-all courses. I've seen this happen at ASU, with rises in attainment, lowered dropout and good student motivation, on adaptive learning courses.
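As a minimal sketch of the principle (not ASU's or any vendor's actual adaptive engine), difficulty can be tuned to the individual with something as simple as a staircase rule: nudge difficulty up after a correct answer and down a little further after a mistake, so it hovers near the edge of the learner's ability. The step sizes below are illustrative assumptions.

```python
# Not any vendor's real adaptive engine - a minimal 'staircase' sketch of how
# difficulty can be tuned to the individual: step up after a correct answer,
# step down (a little further) after an incorrect one. Parameters are illustrative.
def next_difficulty(current, correct, step_up=0.05, step_down=0.10):
    """Return the difficulty (0.0 easy .. 1.0 hard) for the next question."""
    new = current + step_up if correct else current - step_down
    return min(1.0, max(0.0, new))        # keep within bounds

# Simulate one learner: struggles early, then finds their feet
difficulty = 0.5
for answer_correct in [False, False, True, True, True, False, True]:
    difficulty = next_difficulty(difficulty, answer_correct)
    print(f"next question difficulty: {difficulty:.2f}")
```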
4. To err is algorithmic
When IBM's Watson beat the champions at Jeopardy!, to the question of what grasshoppers eat, it answered 'Kosher'. That seemed stupid, but it's the sort of thing algorithmic systems throw out, as they often have no integrated sense of real meaning. Watson may have won Jeopardy! but it was the only winner that didn't know it had won.
Yet humans also make mistakes, some catastrophic, that machines could have calculated better, and we often know when we've been thrown an algorithmic outlier online. When I receive a text saying we're 'Taking the kids to see Satan', and it's Christmas, I know the sender means Santa Claus (this actually happened). In any case, search, translation, speech recognition, image and face recognition just get better and better. The promise is that algorithms will not only get better but learn how to get better. In adaptive systems, mistakes do happen, but that's also true in real life. Courses often have unanswerable questions, wrong answers and straight errors. I have a very long list of errors in subject matter expert material that I've had to deal with over the years, some of it terrifying. In any case, learners are web savvy; they know that computers throw up some daft things. They're not stupid – learners, that is!
5. Overpromising
A danger with reliance on AI and machine learning models is that there are lots of them, and people get attached to the one they know best, which is not always best for the actual output. More specifically, 'overfitting' problems may lead to putting too much confidence in models and data sets that don't actually bring home the predictive bacon. The good news is that these are problems that can and will be solved. AI and machine learning are making lots of little leaps that all add up to fast, very fast, progress. As long as we recognise that these systems can overlearn and not produce the results we expect, we'll get somewhere. To imagine that they'll solve all teaching and learning problems immediately is hopelessly optimistic. To imagine that they have, and will continue to have, a major role in the world of learning is realistic.
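To make the 'overfitting' point concrete, here is a minimal, hypothetical sketch: an over-flexible model can look impressive on the data it was trained on while predicting poorly on data it hasn't seen, which is exactly the gap a held-out validation set exposes. The data, noise level and polynomial degrees are invented for illustration.

```python
# A minimal sketch of 'overfitting': a model that looks impressive on the data
# it was trained on, but predicts poorly on data it hasn't seen.
# All numbers here are illustrative; the point is the gap between the two errors.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # noisy 'learner data'

train_x, val_x = x[::2], x[1::2]          # split into training and validation halves
train_y, val_y = y[::2], y[1::2]

for degree in (3, 9):                     # a modest model vs an over-flexible one
    coeffs = np.polyfit(train_x, train_y, degree)
    train_err = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
    val_err = np.mean((np.polyval(coeffs, val_x) - val_y) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, validation error {val_err:.3f}")

# The higher-degree fit drives its training error down but typically does worse
# on the validation half - it has 'overlearned' the training data.
```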
6. Lack moral compass
It's true that AI is not guided by human nature and has no moral compass. This could be a problem if it is used to evil ends by humans, or if it ends up causing havoc by becoming too autonomous. Obvious examples are fraud, deception, malware and hate. Then again, algorithms don't have many of the dodgy moral traits that we know, with certainty, most humans carry around in their heads – cognitive biases, racial bias, gender bias, class bias and so on. In fact, there are many things algorithms can do that teachers can't. A teaching algorithm is unlikely to subtly swing girls away from maths, science and computer studies, whereas we know that humans do. Algorithms may lack morals but, on that score, they are also not immoral.
7. Dehumanise
We could only be dehumanised by AI if we regard what it is to be human, human nature, as inviolable and altogether good and desirable - much of it is not. There is a sense in which machines and AI could eat away at the good stuff: our social relations, economic progress, ecological progress, political progress. But let's not pigeon-hole AI into the box that sees it as mimicking the human brain (and human nature). We didn't learn to fly by copying the flapping of a bird's wings, we invented far better technology, and we didn't move faster by focusing on the legs of a cheetah - we invented the wheel. In some ways AI will enlighten us about our brains and our human nature, but the more promising lines of inquiry point towards it doing things differently, and better.
8. Black art
There is a danger that AI is a black box, esoteric maths that will forever remain opaque. Few could read the maths that constitutes modern algorithms and machine learning; it's a variety of complex techniques readable only by a minority. Nevertheless, this is not as opaque as one imagines. A lot of this software is open source, from Google and others, as is much of the maths itself, published openly in the AI community. Sure, proprietary systems abound, but it is not all locked up in black boxes. One could argue that most of what we call 'teaching' lies in the brains and practices of teachers, lecturers and trainers, totally inaccessible and opaque. Many argue that teaching is precisely this – a practice that can't be evaluated. That, of course, simply begs the obvious question: 'what practice?'
9. Private eyes
Algorithms are smart. Smart enough to suss out who you are, where you are and, in many cases, what you are likely to do next. This can be used for some pretty awful commercial practices, especially among the large tech companies. Let's not imagine for one minute that they still have the 'do no evil' heart they may once have worn on their sleeves. They are rapacious in avoiding tax, and that, folks, means they avoid paying for all the things that taxes pay for, like health, education and the poor. But they are merely annoying teenagers compared to fiendish adult governments, who already use these techniques to gather data and spy on their own citizens. In most dictatorships, and, in the case of the US, for the citizens of other countries, smart software is working away on everything you do online. We must remain wary of tech at that level, and make sure that we have checks and balances to stop the abuse of the power it will most certainly bring.
In
education, the problem can be solved by regulation on how data is managed,
stored and used. I’m not sure that there’s much to be gained from spying on
essay scores. But the regulation is already happening and we must remain
vigilant.
Should an institution promise all data to students if they demand it? In the UK we have the Data Protection Act, which gives them a legal right to it. That may be right in principle, but exemptions are also possible. If they haven't asked, you don't have to provide the data. Exceptions would be infringements of IP by students, data to do with a crime or investigation, and third-party data, commonly used in AI systems that aggregate data and use it in delivery. You also have to be careful with access, as students are savvy and may look for access in order to change, say, grades.
Predictive models are interesting. Should we provide those? Note that you only have to provide stored data, not data used on the fly, which is common in AI. Strictly speaking, a prediction is not data but a statistical inference or probability, and this is where it gets tricky. It is also impractical to explain predictive analytics to everyone - it's far too complicated. Institutions do not want to use bad data, so everyone's on the right side here. It is unlikely that you will be sued, as you have to show monetary damage - that's difficult - and the organisation would have to be shown to have acted unlawfully. Do you have to provide all of the transactional data? In fact, with online AI systems, the data is there and visible. It is far more likely that you'd want the useful, summarised, analysed data, not the raw data itself, which may be almost meaningless. In practice, though, students do not appear to be that interested in this issue. Do what you have to do, but be pragmatic.
10. Unemployment
We saw how machines led to workers moving from fields to factories and then, with robots in manufacturing, from factories to services, producing long-term, structural, blue-collar unemployment. We may now be seeing the start of white-collar unemployment, as AI becomes capable of doing what even well-educated graduates could do in the past. This is a complex economic issue but there can be no doubt that it will happen to one degree or another, as it has several times in the past. Each technological revolution tends to bring job losses as well as new opportunities. Let's not imagine for one moment that the learning game is immune from such economic shifts. We've already seen their effect in reducing the number of librarians, trainers and other learning professionals. There is, undoubtedly, more to come.
Conclusion
All technology has its downsides. Cars kill; we still drive cars. It's a calculus in which we accept technology when the benefits outweigh the costs. With some technology, the dangers may be too great – nuclear and chemical weapons and so on. But most technology is fairly benign, even massively beneficial. However, as technology has become more virtual, it may be harder to spot, combat and regulate. We already find that the technology is ahead of the sociology, and that older regulators find it hard to understand the pace and consequences, often rushing to throw out the baby with the bathwater, and the bath. But these AI opportunities must be taken at the flood. Like most tides in the affairs of men, they may 'lead on to fortune… we must take the current when it serves, or lose our ventures', as Shakespeare put it.