AI is the new UI
What do the most popular online applications all have in
common? They all use AI-driven interfaces. AI is the new UI. Google, Facebook,
Twitter, Snapchat, email, Amazon, Google Maps, Google Translate, satnav, Alexa,
Siri, Cortana and Netflix all use sophisticated AI to personalise in terms of
filtering, relevance, convenience, and sensitivity to time and place. They work
because they tailor themselves to your needs. Few notice the invisible hand
that makes them work, that makes them more appealing. In fact, they work
because they are invisible. It is not the user interface that matters, it is
the user experience.
Yet, in online learning, AI-driven interfaces are rarely used. That’s a
puzzle, as it is the one area of human endeavour that has the most to gain. As
Black & Wiliam showed, feedback that is relevant, clear and precise goes
a long way in learning. Not so much a silver bullet as a series of well-targeted
rifle shots that keep the learner moving forward. When learning is
sensitive to the learner’s needs in terms of pace, relevance and convenience,
things progress.
Learning demands attention, and because our working memory is
the narrow funnel through which we acquire knowledge and skills, the more
frictionless the interface, the more efficient the speed and efficacy of
learning. Why load the learner with the extra tasks of learning an interface,
navigation and extraneous noise? We’ve seen steady progress beyond the QWERTY
keyboard, designed to slow typing down to avoid mechanical jams, towards mice
and touchscreens. But it is with the leap into AI that interfaces are becoming
truly invisible.
Textless
Voice was the first breakthrough, and voice recognition is
only now reaching the level of reliability that allows it to be used in
consumer computers, smartphones and devices in the home, like Amazon Echo and
Google Home. We don’t have to learn how to speak and listen; those are skills
we picked up effortlessly as young children. In a sense, we didn’t have to
learn how to do these things at all, they came naturally. As bots develop the
ability to engage in dialogue, they will be ever more useful in teaching and
learning.
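To make this concrete, here is a minimal sketch of a voice-driven question-and-answer loop, assuming the third-party SpeechRecognition and pyttsx3 Python packages (plus a working microphone and PyAudio); the tiny FAQ dictionary is a hypothetical stand-in for a real dialogue engine.

```python
# Minimal sketch of a spoken tutoring loop: listen, look up, speak.
# Assumes: pip install SpeechRecognition pyttsx3 pyaudio
import speech_recognition as sr
import pyttsx3

FAQ = {  # hypothetical toy "knowledge base"
    "what is photosynthesis": "Photosynthesis is how plants turn light into chemical energy.",
}

recognizer = sr.Recognizer()
speaker = pyttsx3.init()

with sr.Microphone() as source:
    print("Ask a question...")
    audio = recognizer.listen(source)

try:
    question = recognizer.recognize_google(audio).lower()
    answer = FAQ.get(question, "I don't know that one yet.")
except sr.UnknownValueError:
    answer = "Sorry, I didn't catch that."

speaker.say(answer)   # the reply is spoken, not displayed
speaker.runAndWait()
```

The point of the sketch is that the learner never touches a keyboard or screen: the whole exchange happens through skills we already have.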
AI also provides typing, fingerprint and face recognition.
These can be used for personal identification, even assessment. Face
recognition for ID, as well as thought diagnosis, is also advancing, as is eye
movement and physical gesture recognition. Such techniques are commonly used in
online services such as Google, Facebook, Snapchat and so on. But there are
bigger prizes in the invisible interface game. So let's take a leap of the imagination and see where this may lead over the next few decades.
Frictionless interfaces
Mark Zuckerberg announced this year that he wants to get into
mind interfaces, where you control computers and write straight from thought. This
is an attempt to move beyond smartphones. The advantages are obvious: you
think fast, type slow. There’s
already someone with a pea-sized implant who can type eight words a minute.
Optical imaging (lasers) that read the brain are one possibility. There is an
obvious problem here around privacy, but Facebook claim to be focussing only on
words chosen by the brain for speech, i.e. things you were going to say anyway.
This capability could also be used to control augmented and virtual reality, as
well as communications with the internet of things. Underlying all of this is AI.
In Sex, Lies and Brain Scans, by Sahakian and Gottwald, the
advances in this area sound astonishing. John-Dylan Haynes (Max Planck Institute)
can already predict intentions in the mind with scans, seeing whether the
brain is about to add or subtract two numbers, or press a right or left button.
Words can also be read: Tom Mitchell (Carnegie Mellon) was able to spot, from fMRI scans,
which noun from a list of 60 a subject was thinking of, 7 times out of 10.
His team then trained the model to predict words from a set of 1001
nouns, again 7 times out of 10. Jack Gallant (University of California) reconstructed
watched movies purely from scans. Even emotions
can be read, such as fear, happiness, sadness, lust and pride, by Karim
Kassam (Carnegie Mellon).
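To give a flavour of how such decoding works, here is a minimal sketch of the underlying idea: a classifier trained to map brain-activity patterns to words. The arrays below are synthetic stand-ins, not real fMRI data, and real studies calibrate such models to each individual's scans.

```python
# Minimal sketch of fMRI word decoding: learn a mapping from activity
# patterns (here, random stand-in "voxel" vectors) to the word thought of.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels, n_words = 600, 500, 60   # 60 nouns, as in Mitchell's study

# Synthetic data: each word gets its own noisy activation signature.
labels = rng.integers(0, n_words, n_trials)
signatures = rng.normal(size=(n_words, n_voxels))
X = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", decoder.score(X_test, y_test))
```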
Beyond this, there has been modest success by Tomoyasu Horikawa in identifying
topics in dreams. Sentiment analysis from text and speech is also making
progress, with AI systems providing the analysis.
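For the text side, here is a minimal sketch of AI-driven sentiment analysis, assuming the Hugging Face transformers package and the default pretrained sentiment model it downloads on first use.

```python
# Minimal sketch: score the sentiment of learner comments.
# Assumes: pip install transformers
from transformers import pipeline

analyse = pipeline("sentiment-analysis")
for comment in ["This lesson finally made calculus click for me!",
                "I'm completely lost and ready to give up."]:
    print(comment, "->", analyse(comment))
```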
The good
news is that there seems to be commonality across humans: semantic maps, the
relationships between words and concepts, seem to be consistent across
individuals. Of course, there are problems to be overcome, as the brain tends to
produce a lot of ‘noise’, which rises and falls but doesn’t tell us much else.
The speed of neurotransmission is blindingly fast, making it difficult to
track, and, of course, most of these experiments use huge, immobile and
expensive scanners.
The implications for learning are obvious. When we know what
you think, we know whether you are learning, can optimise that learning, provide
relevant feedback and also assess reliably. To read the mind is to read the
learning process, its misunderstandings and failures, as well as its
understanding and successful acquisition of knowledge and skills. A window into
the mind gives teachers and students unique advantages in learning.
Seamless interfaces
Elon Musk’s Neuralink goes one step further, looking to
extend our already extended mind through neural laces or implants. Although
our brains can cope with sizeable INPUT flows through our senses, we are
severely limited on OUTPUT, with speech or two meat fingers pecking away on
keyboards and touchscreens. The goal is to interface physically with the brain
to explore communications, but also storage and therefore extended memory. Imagine
expanding your memory so that it becomes more reliable – able to know so much
more, acquire higher mathematical skills, speak many languages, master many more
skills.
We already have cochlear implants that bring hearing to the
deaf, and implants that allow those who suffer from paralysis to use their limbs.
We have seen how brain training in VR can rewire the brain and restore
nervous-system function in paraplegics. It should come as no surprise that this will develop
further as AI solves the problem of interfacing, in the sense of both reading
from and writing to the brain.
The potential for learning is literally ‘mind-blowing’.
Massive leaps in efficacy may be possible, as well as in the retention and
retrieval of knowledge and skills. We are augmenting the brain by making it part of a larger
network, seamlessly.
Conclusion
There is a sense in which the middleman is being slowly squeezed
out here, or disintermediated. Will there be a need for classrooms, teaching,
blackboards, whiteboards, lectures or any of the apparatus of teaching when the
brain is an open notebook, ready to interface directly with knowledge and
skills – at first with deviceless natural interfaces using
voice, gesture and looks, then frictionless brain communications,
and finally seamless brain links? Clumsy interfaces inhibit learning; clean,
deviceless, frictionless and seamless interfaces enhance and accelerate
learning. This all plays to overcoming the weaknesses of the evolved biological brain
- its biases, inattentiveness, forgetting, need to sleep, depressive
tendencies, lack of download or networking, slow decline, dementia and death. A
new frontier has opened up and we’re crossing into literally ‘unknown’
territory. We may even find that we will come to know the previously unknowable
and think at levels beyond the current limitations of our flawed brains.
1 comment:
Hi Donald,
Very interesting perspective, although the work done by Jack Gallant and Tomoyasu Horikawa relies on calibrating fMRIs to specific individuals rather than using a universal template. It may also require another form of technology to really measure brain activity, as the current $3M, 15-tonne fMRI blood-flow measurement machines only have a resolution of about 3mm and are proving challenging to miniaturize.
Ian