Alvin
Toffler, who died this week, said, “The
illiterate of the 21st century will not be those who cannot read and write, but
those who cannot learn, unlearn, and relearn.” But he had a better quote, “If you don't have a strategy, you're part of
someone else's strategy.” There’s one species of this argument that really
matters in the learning game - if you don’t have an AI strategy, you’ll be part
of someone else’s strategy.
Most
pedagogic change now comes through the use of technology. The internet is a
huge Darwinian machine that selects ‘fit for purpose’ services, which millions,
sometimes billions, want and use. As a result, in the learning game, we have seen
more pedagogic change in the last 20 years than in the previous 2,000. This
process is accelerating.
There are
some new kids on the learning block, like blockchain. But at a deeper level there’s
something far more significant happening: AI is re-shaping the learning landscape. Intriguingly, AI now draws its inspiration from evidence of how we actually learn. This should be a wake-up call for those of us who work in the
field. The AI community now shows us what works, practically, in learning
theory, and what doesn’t. They do this by building ‘learners’, software that
learns. Pedagogic change no longer comes from educational research (not sure
that it ever did); it comes from insights in cognitive science and,
increasingly, through this form of technological innovation. AI is the latest
manifestation of digital pedagogy and the one that is now giving us
confirmation about what works and doesn’t work.
Learning embedded in AI
We’ve
recently seen some super successes in AI: DeepMind's spectacular win against one of the world's greatest Go players, and the Todai project taking on the Tokyo University entrance exam. DeepMind used a layered neural network, with an executive layer, initially trained on 30 million human moves, which then played against itself using a trial-and-error process (reinforcement learning), backed by the huge processing power of the Google Cloud Platform – and it won.
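For those who like to see the mechanics, here is a minimal sketch of the self-play idea, not DeepMind's actual system: a tabular learner improving at the toy game of Nim (21 sticks, take one to three, whoever takes the last stick wins) purely by playing itself and averaging wins and losses. All names and numbers here are illustrative.

import random
from collections import defaultdict

value = defaultdict(float)   # (sticks_left, move) -> estimated chance of winning
visits = defaultdict(int)

def choose(sticks, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)                          # sometimes try something random
    return max(moves, key=lambda m: value[(sticks, m)])      # otherwise play the best known move

def self_play():
    sticks, player, history = 21, 0, {0: [], 1: []}
    while sticks > 0:
        move = choose(sticks)
        history[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            winner = player                                  # took the last stick
        player = 1 - player
    return winner, history

for _ in range(20000):                                       # trial and error, many times over
    winner, history = self_play()
    for p in (0, 1):
        reward = 1.0 if p == winner else 0.0
        for key in history[p]:
            visits[key] += 1
            value[key] += (reward - value[key]) / visits[key]   # running average of results

print(choose(21, explore=0))     # learned opening move; the theoretical best is to take 1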
This is
where it gets fascinating for those of us interested in learning theory. Cognitive
scientists, and cross-discipline minds, like Demis Hassabis, have taken
principles from cognitive learning theory, such as clustering, attention,
learning by doing, reinforcement learning, chunking and practice, embodied
these in AI, and are using them to great effect in getting software to learn
and solve problems. These learning machines are showing us the way. So what can
WE learn from this?
1. Search
Access to
knowledge and skills has long been a problem in education and training. The
traditional model has been slow, scarce and costly: supply through expensive institutions such as universities and libraries, with rising costs and debt. Then along came Google and cut the time and cost of access to almost nothing, a huge pedagogic leap that completely reshaped the learning landscape at all levels. And it is all down to AI: Google is nothing but AI.
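As a toy illustration of algorithmic relevance ranking (nothing like Google's real system, which is vastly more complex), here is a tiny TF-IDF scorer over three made-up documents: the algorithm, not a librarian, decides what surfaces first.

import math
from collections import Counter

docs = {
    "spacing": "spaced practice beats cramming for long term retention",
    "essays":  "good essay writing depends on structure and evidence",
    "golf":    "deliberate practice on putting and driving improves golf",
}

def tfidf_score(query, doc_text):
    words = doc_text.split()
    tf = Counter(words)
    score = 0.0
    for term in query.split():
        df = sum(1 for t in docs.values() if term in t.split())   # how many documents contain the term
        if df:
            idf = math.log(len(docs) / df)                        # rare terms count for more
            score += (tf[term] / len(words)) * idf
    return score

query = "deliberate practice"
for name in sorted(docs, key=lambda d: tfidf_score(query, docs[d]), reverse=True):
    print(name, round(tfidf_score(query, docs[name]), 3))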
2. Feedback
We know that detailed, personal feedback accelerates learning. Yet traditional teaching, especially in classrooms and lectures, makes
this very difficult. Using data to dynamically adapt, in real time, what is taught
next, is a common technique in AI. Much of what you see online is determined by
algorithms that constantly monitor your needs. Software (AI) driven feedback is
the only way to provide such feedback, personally, at scale. AI
thrives on new data to update what it thinks it knows, whether from the
individual learner or aggregated learners. This is how Bayes and dozens of
other species of AI algorithms work. We need to recognise that the brain has
exactly the same needs. Everyone has unique learning needs and everyone needs to be educated uniquely.
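A minimal sketch of that Bayesian idea, with made-up parameters (this is the flavour of Bayesian knowledge tracing, not any particular product): keep a belief about whether a learner has mastered a skill, update it after every answer, and let the belief decide what to serve next.

def update_mastery(p_mastered, correct,
                   p_correct_if_mastered=0.9,   # assumed answer model, illustrative numbers only
                   p_correct_if_not=0.3):
    # Bayes' rule: P(mastered | evidence) is proportional to P(evidence | mastered) * P(mastered)
    if correct:
        like_m, like_n = p_correct_if_mastered, p_correct_if_not
    else:
        like_m, like_n = 1 - p_correct_if_mastered, 1 - p_correct_if_not
    numerator = like_m * p_mastered
    return numerator / (numerator + like_n * (1 - p_mastered))

belief = 0.5                                    # start undecided
for answer_correct in [True, False, True, True]:
    belief = update_mastery(belief, answer_correct)
    print(round(belief, 2), "-> harder item" if belief > 0.8 else "-> more practice")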
3. Less is more
The
principle of clustering, indeed most refinement of algorithms, reduces what has
to be learnt to a minimum, looking for optimal ways forward, while retaining
efficacy. This is an important ‘less is more’ principle that is all too often
ignored in real world teaching and learning. We still teach at too general a
level, for example, teaching how to write essays by repeated essay writing,
rather than the more detailed components of good writing and analysis. For a
complete breakdown of this error see this excellent video by Daisy Christodoulou. AI
has applied and developed a battery of mathematical techniques to optimize learning. These techniques select, reduce, optimize, judge and create recommendations. They have 2,500 years of philosophy, logic, probability theory and mathematics behind them, and now that we have an abundance of data and cheap computing, they are bearing astonishing and exotic fruit.
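To make the clustering point concrete, here is a small, hand-rolled k-means on one dimension, with made-up quiz scores: nine data points collapse to three representative groups, which is exactly the kind of reduction described above.

scores = [12, 15, 14, 48, 52, 50, 90, 88, 93]      # invented data for the example
centres = [10.0, 50.0, 90.0]                        # initial guesses, k = 3

for _ in range(10):                                 # a few refinement passes
    groups = {c: [] for c in centres}
    for s in scores:                                # assign each point to its nearest centre
        nearest = min(centres, key=lambda c: abs(c - s))
        groups[nearest].append(s)
    centres = [sum(g) / len(g) for g in groups.values() if g]   # move centres to group means

print(sorted(round(c, 1) for c in centres))         # three summary points instead of nine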
4. Chunking
We have
known about this for decades: meaningful chunking accelerates learning. AI does this too, through data structures, pre-processing and the choice of efficient algorithms; divide-and-conquer sorting algorithms, such as merge sort, are a good example of chunking a problem for efficiency (see the short sketch below). Skill acquisition is not just about, say, playing golf, but about practising how to putt, drive and so on. No golfer becomes great by simply playing golf – they chunk down and practise. AI has shown that, to learn effectively, you chunk problems down into their constituent parts, practise those, then build up your skills. AI
is lots of little skills that add up to something big – like the self-driving
car. Who saw that coming? Teaching general skills is still all too common,
especially in schools, colleges and universities, but it operates at the wrong level. AI has taught us that we need to focus on the detail.
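The sorting example mentioned above, sketched out: merge sort splits the problem into small chunks, solves each chunk, then reassembles them – the same decompose-practise-rebuild pattern.

def merge_sort(items):
    if len(items) <= 1:                  # a chunk this small is already solved
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # solve each half separately
    right = merge_sort(items[mid:])
    merged = []                          # then recombine the solved chunks in order
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))    # [1, 2, 3, 5, 7, 9]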
5. Reinforcement learning
Another
insight from learning theory, that the AI folk have picked up on, is that
people learn by DOING things. Children don’t learn much by sitting and
listening. They do stuff. This reinforcement learning, the idea that every
state has a value, beyond the simple binary ‘win’ or ‘lose’ positions, is
powerful. Reinforcement learners (software) rehearse learning a huge number of
times, trying and trying again, sometimes the best way, sometimes randomly.
It’s a turbo-charged learner. So having learnt what to do from human data, it
plays itself, an enormous number of times in a short period of time, to get
super-smart. This is how DeepMind beat the Go champion. Reinforcement learning has been hugely successful across all sorts of real-world AI tasks. The learner tries all sorts of moves but selectively keeps those that are successful. It learns how to learn. This effortful, learning-by-doing theory is at the heart of AI.
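Here is a minimal sketch of the idea itself (tabular Q-learning on a toy corridor, nothing like the scale of DeepMind's system): every state-action pair gets a learned value, and repeated trial and error pushes those values towards what works. The corridor, rewards and settings are all invented for the example.

import random

N_STATES, GOAL = 6, 5                     # a corridor of cells 0..5; reward only at the far end
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(300):                # learn by doing, many times over
    state = random.randrange(GOAL)        # exploring starts: begin anywhere short of the goal
    while state != GOAL:
        if random.random() < epsilon:
            action = random.choice((-1, +1))                       # explore
        else:
            action = max((-1, +1), key=lambda a: Q[(state, a)])    # exploit the best known value
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0
        best_next = max(Q[(nxt, -1)], Q[(nxt, +1)])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# every non-terminal state ends up with a value that rises towards the goal,
# not just a binary 'win' or 'lose'
print([round(max(Q[(s, -1)], Q[(s, +1)]), 2) for s in range(GOAL)])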
6. Deliberate practice
AI has
embodied the idea of ‘deliberate practice’, from Ericsson, and built algorithms
and methods of propagation that do exactly that. They practise with intent,
that intent being improved performance. This is what backpropagation in neural
networks and many other machine learning approaches do – they automate and
optimize deliberate practice and improvement. They embody what education and
training have ignored for too long – deliberate practice.
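A stripped-down sketch of that error-driven improvement loop: plain gradient descent on a one-parameter model with made-up data. Real backpropagation does the same thing across millions of weights, but the practise-with-intent-to-reduce-error pattern is identical.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]    # invented (x, y) pairs, roughly y = 2x
w = 0.0                                            # start with a poor guess
lr = 0.02                                          # how big each correction is

for step in range(200):
    # measure how wrong we are, and in which direction (gradient of squared error)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                                 # deliberately correct the error
    if step % 50 == 0:
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        print(f"step {step:3d}  w = {w:.3f}  error = {error:.3f}")

print("final w:", round(w, 3))                     # should settle near 2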
7. Spaced-practice
Spaced-practice
tools are largely driven by algorithms that deliver the pacing, interleaving
and load balance for spaced practice. We have known since Ebbinghaus, back in
1885, that learning suffers from massive forgetting. We also know that the solution
is deliberate, spaced-practice. This can be effected online with smart
algorithms that determine how this should be delivered. They do what no teacher
can do: identify what needs to be reinforced, how it needs to be reinforced
(interleaved etc) and when it needs to be delivered (load balanced etc),
related to that individual’s needs. It is all down to AI.
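A minimal sketch of what such a scheduling algorithm might look like (a Leitner-style scheme with made-up intervals, not any particular product): items you get right are pushed further into the future, items you get wrong come back quickly.

import datetime

INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21}            # box number -> days until the next review (illustrative)

def schedule(item, correct, today=None):
    today = today or datetime.date.today()
    if correct:
        item["box"] = min(item["box"] + 1, 4)    # promote: a longer gap before the next review
    else:
        item["box"] = 1                          # demote: see it again tomorrow
    item["due"] = today + datetime.timedelta(days=INTERVALS[item["box"]])
    return item

card = {"question": "capital of France?", "box": 1, "due": datetime.date.today()}
card = schedule(card, correct=True)
print(card["box"], card["due"])                  # moved to box 2, due again in 3 days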
8. Data driven
We are no longer data poor; we are data rich. It is no accident that slow-burning AI
suddenly had its Cambrian explosion, with thousands of practical examples, from
speech recognition to self-driving cars, breakthroughs that are reshaping global
industries. The ability to manage, read and interpret this data gives you the
radar you need to keep ahead of the game. That's exactly what AI does, applying
logic, probability and computer power to the problem of prediction. It has been
fashionable to ignore ‘knowledge’ as just ‘data’ in education, but AI has shown
that knowledge (data) really does matter. It is an integral part of learning,
not something to be abandoned or tritely classed as ‘rote learning’. It is, in
fact, what enables deep learning.
9. Socratic learning
Socrates' learning theory is often called a ‘theory of ignorance’. It was a process of excising
what you think you know, stripping things back until the real knowledge was
exposed. He taught us humility in learning. This is also true of AI. It knows
what it knows but also knows what the probabilities are in its outputs. In this
sense it is free from the cognitive biases, even gender, race and
socio-economic biases that teachers, as they are human, almost always have. We
have a chance here to both recognize our limitations and embrace technology-enhanced teaching. A little Socratic humility,
through the use of AI, recognizing where we are good, and not so good, in
teaching (and learning), could go a long way.
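A small illustration of that knowing-what-you-don't-know point about probabilities in outputs: instead of always asserting an answer, a model can report a probability and hold back when it is not confident. The scores and threshold below are invented for the example.

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["answer A", "answer B", "answer C"]
confident = softmax([4.0, 0.5, 0.1])             # one option is clearly ahead
uncertain = softmax([1.1, 1.0, 0.9])             # nothing is clearly ahead

for probs in (confident, uncertain):
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] > 0.8:
        print(f"answer: {labels[best]} (p = {probs[best]:.2f})")
    else:
        print(f"not sure enough to answer (best p = {probs[best]:.2f})")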
10. Learning to learn
An
interesting insight comes from what AI folk call ‘learners’. These are machine
learning algorithms that themselves learn. The fundamental point is that the new landscape is not just the old one of ‘human teachers’ and ‘human learners’, but also one of ‘machine teachers’ and, importantly, ‘machine learners’. These new entities cope with complexity: time complexity, space complexity and errors. The machine learners learn how to learn, and are now learning how to teach us to learn. The automatons automate automation. This has profound implications for
productivity, employment and politics. It will, in time, become an existential
issue, first for the professions, then for our species.
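A toy sketch of ‘learning to learn’ in its simplest possible form: an outer loop that tunes how an inner learner learns (here, just its learning rate) and keeps whatever setting helps it learn best. Real meta-learning is far richer than a hyperparameter search like this, but the nesting of learner inside learner is the point. Data and candidate settings are made up.

data = [(1, 2.0), (2, 4.1), (3, 5.9)]            # invented (x, y) pairs, roughly y = 2x

def train(lr, steps=50):
    # the inner learner: gradient descent on a one-parameter model
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in data) / len(data)   # final error after training

# the outer 'meta-learner': learn which learning rate makes the inner learner learn best
candidates = [0.001, 0.01, 0.05, 0.2]
best_lr = min(candidates, key=train)
print("best learning rate found:", best_lr, "error:", round(train(best_lr), 4))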
Conclusion
AI is all
about learning. It is software that learns. It is also software that can create good learning content. We have to pay attention to what is
happening here. AI tells us that learning is a process, one that can be unpacked, copied and reverse engineered. AI researchers are achieving things that were unimaginable just a few years ago, and they are learning about learning by building effective learners. The fact that AI is having so much success by following the lessons that cognitive science has to teach is surely a wake-up call for learning theorists, especially those still stuck in the foggy world of social constructivism.