Thursday, March 28, 2024

Chater - The Mind is Flat

Nick Chater is a Professor of Behavioural Science at Warwick Business School where he leads the Behavioural Science group, one of the largest in Europe. He is also known for his role as a scientist-in-residence on the Radio 4 series ‘The Human Zoo’. 

His work in the field is extensive, including co-authoring books and numerous papers on the human mind and rationality, challenging the whole theoretical basis of psychology and linguistics with ideas around the flat mind and language as a social construct.

The Mind is Flat

The Mind is Flat (2018) starts with a quote from Dennett and presents a thought-provoking theory of human cognition. It challenges the conventional belief in the depth and complexity of human thought, arguing instead that our minds operate at a much more superficial level than traditionally supposed. According to his theory, human thinking relies heavily on immediate perception and the present context, rather than on deep, abstract reasoning about the world. The mind is not a collection of specialised modules or faculties for different cognitive functions. Instead, he argues, the mind is ‘flat’: there are no true cognitive modules, just patterns of activation over a common coding base.

Chater proposes that rather than processing and manipulating detailed internal knowledge, our minds interpret and react to current sensory data. This approach suggests that much of our cognitive processing is less about drawing from a rich, detailed internal database of knowledge and more about improvising responses based on immediate environmental cues.

The Mind is Flat shifts the focus from the idea of the mind as a deep, introspective processor to a more surface-level, reactive entity. This concept has implications for our understanding of memory, reasoning and decision-making, suggesting that these processes are more context-dependent and less introspective than previously believed.

An important point is that the mind does appear to both read and speak one word at a time, and our visual acuity and colour perception are sharply limited outside the centre of gaze. In addition, he makes the even stronger claim that the mind is a hoax we play on ourselves.

Bayesian Brain Hypothesis

As we don't have unlimited cognitive resources, our decision-making exhibits ‘bounded rationality’, using heuristics and rules of thumb rather than optimal rationality. This questions the idea that cognition involves constructing rich mental representations or models of the world. Cognition emerges from simple processes operating over compressed sensory inputs. The brain essentially performs Bayesian inference, or similar probabilistic calculations, to interpret sensory data and make decisions, continually updating beliefs as new evidence arrives. A key idea is that, in the absence of mental representations, human perception and cognition are geared towards efficiently compressing and encoding sensory information into simple codes.
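The kind of calculation the brain is here supposed to approximate can be made concrete with a toy example of Bayesian updating. A minimal sketch in Python, where all the probabilities are invented purely for illustration:

```python
# Toy illustration of Bayesian belief updating (all numbers invented).
# Hypothesis H: "the shape I glimpsed is a cat".
prior = 0.2                   # belief before the new evidence arrives
p_evidence_given_h = 0.9      # P(hear a meow | cat)
p_evidence_given_not_h = 0.1  # P(hear a meow | not cat)

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_evidence = (prior * p_evidence_given_h
              + (1 - prior) * p_evidence_given_not_h)
posterior = prior * p_evidence_given_h / p_evidence

print(round(posterior, 3))
```

The point is not the arithmetic but the shape of the process: a running belief, nudged up or down by each new scrap of sensory evidence, with no rich inner model required.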

In essence, Chater believes cognition arises from Bayesian probability updating over compressed sensory inputs by a flat, general-purpose mind, not from specialised modules or rich mental representations. His perspective challenges many traditional assumptions in cognitive science. For Chater, vision is the most important of the senses, and he sees thinking as largely an extension of perception.

The Language Game

In The Language Game (2022), co-authored with Morten Christiansen, he presents a radical, thought-provoking exploration of how language works and its role in human society. He argues that language originated in communicative charades, which then developed into more complex forms of linguistic communication, and argues against the common belief that language is a complex code passed down through generations. Instead, language is a form of social negotiation that is constantly evolving. On this view, our linguistic abilities develop from social interaction rather than being hard-wired into our brains through deep-seated biological evolution. The book delves into how we use words to navigate our social worlds, construct complex ideas and solve problems together. It challenges traditional linguistic theories and provides insights into the dynamic, ever-changing nature of language as a product of social processes.

Discussion

He sees the rise of Large Language Models as confirmation of his position. Like LLMs, we have no hidden depths; nothing lurks below the surface. In fact, those constructs are misconceived: stories we tell ourselves about cognition. Our imaginations tell stories, and we carry this over into views about cognition, imagining that we work from deep, pre-conceived ideas and reason, when in fact we generate words in the moment and on the fly. Just as there are no hidden depths in LLMs, there are no ghosts in the machine in our minds, no Freudian psychotherapeutic phantoms like the ego, id and superego, no Jungian archetypes. There is nothing of substance in our unconscious, as it does not exist as traditionally framed. Our imaginations picture the unconscious as a delving down to hidden depths, but there are no depths, no creatures of the deep to be found. Like Hume, Parfit and Strawson, he recognises that we are storied selves, but these stories are largely imagined, narrative fictions.

Bibliography

Chater, N. (2018). The Mind is Flat. Penguin.

Christiansen, M. & Chater, N. (2022). The Language Game. Penguin.


Wednesday, March 27, 2024

Will AI make us stupid? A conceit of the old when faced with the new!

A frequent question I now get when speaking is actually a meme. I had it on a panel this week, when people were arguing that AI will make young lawyers stupid because they will lose the ability to do research and drafting. This is largely the conceit of the old when faced with the new.

Surely AI will make us stupid… often followed by the example of Google Maps making us lose our ability to read maps, so that without it we’d stumble about like zombies, bumping into things, unable to find anywhere. I’m sure there were people saying the same, about us losing our wits, when we no longer navigated using the stars.

I went through the whole sequence: use a paper atlas or map, then print off a route from the computer and have it on your lap, through to having a specialist satnav, and on to GPS maps on my smartphone. Every single step was an improvement. And whether it is Apple or Google, I don’t need or want to pore over a difficult-to-unfold map and decide on some sub-optimal route, where if I get lost I genuinely have no idea where I am, how far I have to go, how long it will take, or whether there’s a petrol station or toilet within the next 10 minutes of driving.

A good example is the Great Calculator Panic of the 1980s and 90s. I remember it well. A superb takedown was published this month in Scientific American. A survey at the time showed that 72% of teachers and mathematicians opposed their use in learning. This was the typical reaction of people who think: I had it tough, so you will have it just as tough.

What actually happened is clear. It made us rethink what we teach as mathematics. Learners went up the value scale, as calculators do much of the grunt work, even to the level of graphing.

Do we miss horse-riding skills from before we had cars, the stagecoach before trains, or taking two weeks to get abroad by boat before planes? Making papyrus from reeds was no fun; neither was slaughtering cows and skinning them for vellum for writing.

I am happy I didn’t have to learn Morse code for communications, and that I no longer have to write letters every time I want to communicate with someone far away, as that was how I had to communicate with the only person I knew abroad, my penpal: one letter a month, or using great thick phone directories. Neither do I miss having to go to a payphone to arrange a night out while my mate was in one at the other end at exactly that time.

My grandmother used to wash clothes by hand and used a mangle; we now have washing machines and dryers. I spent untold hours washing and drying dishes in a sink of filthy water – give me a dishwasher any day – it’s not a skill I regret losing. Neither do I regret doing what I had to do for years in the rain and darkness: fetch a bucket of coal for the fire in our house, twice a night, or start a fire every morning like a caveman, from a firelighter, some small sticks and a newspaper.

I for one am happy that we don’t have to read Latin to study and research, use library cards to find books, pore over microfiche on a hand-cranked machine, or walk endlessly up and down stacks of journals to find one research article. Nor buy an entire encyclopedia at huge cost before search. It hasn’t affected my ability to learn or do research one bit – only made things easier, faster and better.

Winding clocks up was a pain, developing my holiday snaps in a darkroom or waiting for weeks to get them developed by post was wearisome. Every holiday and flight booked through a high street travel agent was a pain. I like being able to watch what I want when I like, not being limited to three TV channels, which had to be changed by hand on the TV. I like not having to go to Blockbuster to rent videotapes, then return them.

And no, I don’t miss hand-setting type for printing or threading typewriter ribbons, having to get every letter and word right with no reordering or revisions possible, putting sheets of carbon paper behind paper to make a single copy, even those damn expensive photocopiers, or looking up the meaning of a word in a physical dictionary, or a thesaurus for alternatives. No, I don’t want to do my company accounts in a ledger book, or add by hand very long lists of numbers – I like calculators and spreadsheets. I don’t want to learn binary arithmetic before learning to code, use floppy discs or use punchcards (all of which I did). Dial-up internet was a pain.

Augmentation and automation mean we can progress and do things faster and better. They freed women from the drudgery of domestic chores and freed working-class people from the indignity of servitude. We love invoking the idea that tech will make us dumber. It’s a lazy hit. All tech has its doomsayers, claiming it will make people stupider – writing, printing, radio, film, TV, photocopiers, computers, the internet, search, smartphones… and now a technology that promises to make us massively more productive – AI.

Sunday, March 24, 2024

Sisyphean nature of moral panics against new technology

Moral panics around new technology are as absurd as they are predictable. No sooner do we forget that the last one ever happened than we do it all over again. That’s because we’re hard-wired for confirmation and negativity biases.

“Everything that’s already in the world when you’re born is just normal;

Anything that gets invented between then and before you turn 30 is incredibly exciting and creative and with any luck you can make a career out of it;

Anything that gets invented after you’re 30 is against the natural order of things and the beginning of the end of civilisation as we know it, until it’s been around for about 10 years when it gradually turns out to be alright really…” 
Douglas Adams

Every technology induces a ‘moral panic’ which has roughly similar features. Children and adolescents are targeted, as we see every new generation as degenerate and inferior to our own. They are always being distracted by technologies from writing to radio, film and television. No sooner did these old technologies become the norm, indeed part of our culture, than the attacks began on social media and computer games. 

Social critics, journalists, academics and researchers are curiously immune themselves, of course, to the negative effects of the panic. They are above it all. Yet they feel confident in beating every piece of new technology like a piñata.

Stereotypical critiques become the norm. It makes us stupid, they will claim, citing calculators, Google, Google Maps, Wikipedia and now AI as mind-numbing technologies that turn us all into morons, all the while happily using such technology themselves.

Eventually it all subsides and fades away as the benefits are realised. But the cycle is doomed to repeat itself – what Amy Orben calls the ‘Sisyphean Cycle of Technology Panics’, where she shows that previous systematic reviews consistently overestimated the negative link between digital technology and wellbeing.

This has gone on for as long as we have invented technology:

In my book 'Learning Technologies', I made the point that a fundamental feature of the history of technology is the backlash against the new. We are experiencing this now with AI, on a global scale. The backlash is greater because adoption and use have been quicker and larger than anything we have ever seen before.

A deluge of reports, frameworks, even hastily and badly written laws, such as the EU AI Act, are all part of this backlash, along with an army of people who see this as their realm and opportunity.

But I am not downhearted, as history is on the side of those who see such technology as beneficial.

In December last year I used this whole set of images in a debate against this motion: "This House believes that the widespread implementation of AI in Learning will do more Harm than Good"

We won.

Wednesday, March 20, 2024

What does the learning game have to learn from the beautiful game - football? Data really matters...

Most professional sports employ data to improve performance. Yet football (soccer), in data terms, is not so much the beautiful game as a rather messy and random affair – in statistical terms, stochastic. This refers to the level of unpredictability and the influence of random factors on outcomes. Football is often considered highly stochastic compared to other sports due to its low scoring, continuous flow with few stoppages, and unpredictable events, player performance and other variables.

It's important to note that all sports have some level of stochasticity but comparatively, sports like tennis and basketball have higher scoring and more frequent scoring opportunities, which can reduce the impact of a single random event. Baseball and cricket, with their more structured and turn-based nature, allow for more consistent application of skill and strategy, though they still have elements of unpredictability.
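The effect of scoring frequency on randomness can be illustrated with a toy simulation (all the numbers here are invented): give the stronger side the same relative advantage in both cases, and vary only how often scoring events occur, using Poisson-distributed scores.

```python
import math
import random

def poisson(lam: float) -> int:
    """Sample from a Poisson distribution (Knuth's method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def upset_rate(strong_mean: float, weak_mean: float, trials: int = 20000) -> float:
    """Fraction of simulated games in which the weaker side wins outright."""
    upsets = sum(
        1 for _ in range(trials) if poisson(weak_mean) > poisson(strong_mean)
    )
    return upsets / trials

random.seed(42)
# Football-like scoring: the better side averages 1.8 goals, the weaker 1.0.
f_rate = upset_rate(1.8, 1.0)
# Basketball-like scoring: the same 1.8x skill gap, but scores in the dozens.
b_rate = upset_rate(90.0, 50.0)
print(round(f_rate, 2), round(b_rate, 2))
```

With football-like scoring the weaker side wins outright roughly a fifth of the time; with the same skill gap at basketball-like scoring frequencies, an upset becomes vanishingly rare.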

Unlike many sports, such as basketball, American football and baseball, in soccer the ball changes sides so often that it is difficult to identify patterns in the numbers. That is not to say they don’t exist. As usual, the data, although messy, reveals some surprising facts:

1. Corners don’t matter that much. Mourinho was amazed when English supporters cheered corners, as he knew they rarely led to goals. The stats support this: the correlation between corners and goals is essentially zero.

2. Then there’s the old myth that teams are at their most vulnerable just after scoring. The numbers show the opposite: immediately after scoring a goal is the least likely time for a team to concede.

3. The coin toss is the most significant factor in penalty-shootout success: 60% of all penalty shootouts have been won by the coin-toss winner. Goalkeepers who mess about on the line and hold their hands high to look bigger also have an effect, making a miss more likely. Standing 10 cm to one side also has a significant, almost unconscious effect on the goalscorer, making one side look more tempting.

4. It’s also a game of turnovers. The vast majority of moves never go beyond four passes. This has huge consequences – ‘pressing’ matters, especially in the final third of the field. Avoiding turnovers is perhaps the most important tactic in football.

These were just a few of the secrets revealed by Chris Anderson and David Sally, two academics, from Cornell and Dartmouth, in their book The Numbers Game – Why Everything You Know About Football is Wrong.

Artificial Intelligence

A new tool called TacticAI has caused a bit of a splash. A paper in Nature confirms its use in the taking of corners – although, as I say above, this is an odd focus, as other tactics are more valuable. Google DeepMind has worked with Liverpool FC for four years. Yet it is in other areas that data matters more: scouting and transfers. Brighton (my home team) are lowest in the Premiership on corners won but have one of the best track records in transfers, as they use data more widely. Brighton have sold on what amounts to a decent Premiership team to the rest of the Premiership – Sanchez, Cucurella, White, Bissouma, Caicedo, Mac Allister, Trossard, Burn, Maupay, Knockaert... – for getting on for half a billion. These are key players in other top teams.

Bias

Seasoned managers, coaches, trainers and players often get it wrong because, in football, our cognitive biases exaggerate individual events. We exaggerate the positives and what is obvious and seen, at the expense of the hidden, subtle and negative. A good example is defending. Maldini may have been the greatest defender ever because of what he never did – tackle. We prize tackling, yet it is often a weakness, not a strength. We think that corners matter when they don’t. Similarly in education, we prize the opinions of seasoned practitioners over the data: exams, uniforms, one-hour lectures, one-hour lessons and all sorts of specious things, just because they are part of the traditional game. Yet what good teachers don't do really matters. This is why guided coaching and tons of deliberate and variable practice matter in sports but are rarely taken seriously in education.

Soccer and learning

If a sport like football, which is random and chaotic, can benefit from data and algorithms that guide actions such as buying players, picking players, strategy and tactics, then surely something far more predictable, such as learning, will benefit from the same approach. What we can learn is that data about the ‘players’ is vital: what they do, when they do it and what leads to positive outcomes. It is this focus on the performance of people – a personalised approach to learners – that is so often missing in learning.

Education gathers wrong data

Education has, perhaps, been gathering the wrong data – bums on seats, contact time, course completion, results of summative assessments, even happy sheets. What is missing is the more fine-grained data about what works and what doesn’t: data about the learner’s progress. Here we can leverage data, through algorithms, to improve each student’s performance as they take a learning journey. We need the sort of data a satnav uses: where they start, where they’re going and, when they go off-piste, how to get them back on track. In modern sports, going over videos of a team's performance and those of the opposition has become normal, as has the gathering of stats. What has most often led to the goals you've scored this season? It may not be the quality of the striker but which wing is better, the feeder players from midfield, or the importance of dead-ball opportunities.
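One standard way of turning fine-grained learner data into a satnav-style estimate of progress is Bayesian Knowledge Tracing. A minimal sketch, with parameter values invented purely for illustration rather than taken from any real system:

```python
# Minimal Bayesian Knowledge Tracing sketch (parameters invented).
P_LEARN = 0.15   # chance of mastering the skill during a practice attempt
P_GUESS = 0.20   # chance of answering correctly without mastery
P_SLIP  = 0.10   # chance of answering wrongly despite mastery

def update_mastery(p_mastery: float, correct: bool) -> float:
    """Revise the mastery estimate after one observed answer (Bayes' rule)."""
    if correct:
        num = p_mastery * (1 - P_SLIP)
        den = num + (1 - p_mastery) * P_GUESS
    else:
        num = p_mastery * P_SLIP
        den = num + (1 - p_mastery) * (1 - P_GUESS)
    posterior = num / den
    # Account for learning that may have happened during the attempt itself.
    return posterior + (1 - posterior) * P_LEARN

p = 0.3  # initial estimate of mastery
for answer in [True, True, False, True]:
    p = update_mastery(p, answer)
print(round(p, 2))
```

Each answer nudges the mastery estimate up or down; when it falls below a threshold, the system, like a satnav recalculating a route, can send the learner back for more practice on that skill.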

Just as the ‘nay-sayers’ in football claimed that the numbers would have no role to play in performance, as it was all down to good coaches, trainers and scouts, so education claims that it is all down to good teachers. This is a stupid, silver-bullet response to a complex set of problems. It is partly down to good teachers, but aided by good data, learners have the most to gain from other interventions. Education needs to take a far more critical look at pedagogic change and admit that critical analysis leads to better outcomes. This means using data, especially personal data, in real time to improve learner performance.

Multimodal means many more moments for learning

Sora from OpenAI and VideoPoet from Google show that the LLM revolution will be televised. Multimodal models mean quick and cheap input, amendment and output in any medium. The consequences for teaching and learning are huge.

1. Free from tyranny of text

We can free ourselves from the tyranny of ‘text’. Far too much learning is text-based. Schools and universities are almost obsessed by reading and writing text, thereby excluding many of the most useful skills in life. E-learning, with its blocks of text and graphics, seems like a pale imitation of life. Education, in particular, is fundamentally text-based, limiting teaching and learning opportunities.

2. Optimise media for learning

Each medium, and its combinations, has affordances in learning, in terms of retention and transfer. Each has a set of DOs and DON’Ts. I wrote about this, in detail, in my book Learning Experience Design. These are backed up with research which shows how to use the right tools for the right job. The important point is to use the optimal medium for the learner and the learning task at hand. We now have those options... and they are quick and cheap.

3. Quick resources for teachers

For teachers, the fast production of resources will be a boon to their craft. They are now literally able to create almost anything, in any medium, in seconds: an image from history or geography, a famous person from the past saying something. Teachers can breathe life into almost anything they teach, in any medium. This freedom allows teachers to be both creative and more effective in the classroom and online.

4. Learners can do it for themselves

Perhaps the biggest lesson is the pendulum swing from teaching to learning, which is now made easier. The learner has these tools at their fingertips. Generative learning, explored by Wittrock and others, is one of the most powerful forms of learning, and it can now be exercised with generative tools by the learner. If you want to summarise a book, an image as a mnemonic to remember something, flashcards with images, a self-quiz, a checklist, a job aid… do it yourself.

5. Context and cultural adaptation

If you want resources that are more appropriate for your context and culture, simply ask. This can be as simple as text in the appropriate language, up to translating text into your first language. It can be images that show relevant, even local, elements, or video relevant to your place and culture. All of this makes the learning more relevant to the learner.

6. Accessibility

This has already had profound beneficial effects on accessibility for those with sight impairments, with text, image and video to speech. Similarly for hearing impairments, with speech to text, images and video. Dyslexia, autism, ADHD and other learning difficulties will also be aided by multimodal capabilities in the hands of both teachers and learners. Much more is to be done on this front.

7. Media mix

Blended learning is so often just blended teaching, a bit of offline with online. At last we have the tools to move towards really blending teaching and learning experiences, using the right media mix for the right teaching and learning tasks. But solid, measured pedagogic rules should be applied. These are my top three:

Less is usually more.

Learning is a process not an event.

Doing really does matter.

Get to know

Get to know this list and consider all of these in your future use of media in teaching and learning. It is quite extraordinary how much is now possible through generative AI.

Text generation
Text-to-video
Text-to-image
Text-to-audio
Text editing
Text translation
Image generation
Image editing
Image-to-video
Image-to-text
Image-to-audio
Audio generation
Audio-to-text
Audio-to-image
Audio-to-video
Audio editing
Audio stylisation
Video generation
Video-to-audio
Video frame continuation
Video inpainting
Video outpainting
Video stylisation
Data critique
Data-to-text
Data-to-image
Data-to-audio
Data-to-video

Finally - a warning

One must also be careful not to over-produce. Just because one can do it doesn’t mean one should! Combinations of media must also be considered. Again, the research is clear on issues such as over-writing with text, using the wrong style of language, the use of text in images, the danger of the transience effect in video, and the usefulness of audio-only in podcasts. There are literally hundreds of things one needs to know to make media effective in learning – use them carefully.

Final thought

Above all, this gives opportunities to teachers and learners in places where resources and money are tight. It puts power in the hands of teachers and learners in poorer countries.

Wednesday, March 13, 2024

AI moves from 2D to 3D

A quite remarkable achievement by DeepMind. I wrote about this in my 'Learning in the Metaverse' book and in the 2nd edition of my book on GenAI, coming out on May 4: the idea that AI accelerates the move from 2D to 3D.

This software turns language prompts into actions within 3D worlds. For the first time, the agent actually understands the 3D world in which it operates and can perform tasks just like a human.

How it works

All it needs are images from the screen of the game or environment, and text instructions. It can therefore interact with any virtual environment. Menus, navigation through the environments, actions and interactions with objects are all executed. They partnered with eight games companies to perform different tasks. SIMA is the AI agent that, after training, perceives and understands a variety of environments, so that it can take actions to achieve an instructed goal.

Transfer

Even more remarkable is the fact that agents seem to transfer learning: playing in one environment helps them succeed in others.

Multimodal now also 3D

Far too much of the debate around AI focuses on text-only LLM capabilities and not on their expansion into multimodal capabilities, now including 3D worlds. The goal is to get agents to perform things in virtual and/or real 3D worlds intelligently, like a human.

Applications

Its obvious application is in performing risky tasks in high-risk environments, but also in any 3D world. It can also be used in online 3D worlds to help with training – the early signs of a tutor within these worlds, or a buddy, patient, employee or customer in training.


Full paper

The Strange Case of Altman V Board at OpenAI revealed

The New Yorker article on the drama at OpenAI has uncovered not only the timeline but the dynamics of the drama. It was a plot worthy of an episode of Succession. Kendall Roy is Sam Altman, a charismatic, persuasive and experienced tech entrepreneur. Logan Roy is Microsoft, looking to get some zest into the business, as it has lost its mojo. Then there are the bit players, the winners and losers.

Helen Toner was the 'Effective Altruism' academic, with no real AI or technical experience, who had to apologise to the board for writing opinion pieces criticising the organisation on whose board she sat. She apologised, but Altman clearly had no time for her antics. He tried to get her ousted from the board, playing members off against each other. It happens – I’ve seen it. Some on the board were inexperienced in business and couldn’t cope with the pressure, clearly tangled up in academic debates about AGI. As an insider said, “Every step we get closer to A.G.I., everybody takes on, like, ten insanity points.” The board felt threatened, panicked and sacked Altman. BIG MISTAKE.

Establishing that there was “no malfeasance”, Microsoft went apeshit, Altman took Brockman with him, and the staff revolted in favour of Altman. This was a battle between lightweight academics and experienced AI and business brains. Used to ruling the roost in their own world, and with more than a little of the arrogance that comes with academic status, the board completely misjudged the situation and overplayed their hand. In the end it was a rout. The board “agreed to leave” (cough), Altman was reinstated, and the usual inquiry was ordered (always a sop). As one tech journalist noted, “A clod of a board stays consistent to its cloddery.”

Two other fascinating characters in all this are Kevin Scott, the Microsoft AI guy, and Mira Murati, the ex-Tesla Albanian, tech-savvy and known to be unflappable. Both came from tough, poor backgrounds and hold the belief that this tech really is a leveller – we'll see. They steered all of this to its conclusion.

The board all went, apart from Adam D’Angelo, co-founder and CEO of Quora, a computer scientist and hugely successful entrepreneur.

Larry Summers was brought in. A fascinating choice: ex-academic, President of Harvard but sacked during an early salvo in the culture wars, and now soaked in economics, politics and business. He’s one of the best-connected figures in America.

The board has also been considerably expanded with a range of professional expertise.
Bret Taylor is the Chair, a real heavyweight:
Creator of Google Maps
CTO at Facebook
Chair of Twitter
Co-CEO at Salesforce

He has brought in:
Sue Desmond-Hellmann, former CEO of the Gates Foundation, physician and experienced corporate board member
Nicole Seligman, heavyweight global lawyer
Fidji Simo, another tech entrepreneur
....and, of course, the King is dead. Long live the King!
.....Samuel Altman.

One figure lurks behind all of this: the genius that is Ilya Sutskever. He knows more about AI than anyone there and created the software, yet survives because he IS OpenAI. Like the mad-genius Oracle, he sits quietly in the middle watching all of this, above all the petty squabbles. He is now back to his day job – changing the world.


PS
Thanks to Peter Shea for helping me with this piece.

also original article - well worth a read but paywalled 
https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai?fbclid=IwAR2kmNi0LLc3FaXY6s2C08YCaDS88hD4mBFauylAIJCgzJ4lBnHqyZ-ts6Y

Wednesday, March 06, 2024

Are the LMS and VLE dead? Accenture and Udacity draw a new line in the sand

Dead fish market

I have been saying for some time that the VLE and LMS market is in for a dramatic shift. These are two very different markets with two separate sets of products, both global and lucrative. Both are also crammed with legacy technologies, and both encourage old, 'not fit for purpose' standards, like SCORM (no longer even supported), that cripple their ability to adapt to AI-driven approaches to learning. The sector is a bit of a dead fish.

The LMS and VLE market is set for change as new AI platforms emerge. The investors are ready, the need is there, and we are now moving into the phase when they will be built. It will take time, as incumbents are locked in, often on three-year licence deals, and deeply integrated, but things will change. They always do.

Investor hiatus

Investors have been in a hiatus, waiting to see how things shake out. Guess what – they’re starting to shake out. AI is not just the new kid on the block; it is the only new kid on the block. It is THE technology of the age. The top seven tech stocks, all AI companies, now have a combined market capitalisation of $12.5 trillion, more than the collective gross domestic product of New York, Tokyo, LA, Paris, London, Seoul, Chicago, San Francisco, Osaka, Dallas and Shanghai. This is no fad; neither is it the future – it is now.

The analysts are also all at sea with their grids and lack of foresight. In truth, investors that bought into the LMS market are struggling to realise the revenues and profits. Some very large companies are struggling with their share price and with meeting revenue and profit expectations. Even at the medium and lower levels, there is suspicion that value is falling. The learning content creation companies should be using AI (and are), and so prices will plummet. It is difficult to see why investors would put big valuations on dated content or bespoke production. Would you invest in a video production learning company having seen Sora? A major Hollywood investor has just pulled $850 million from a studio build. Investors in online learning will be thinking along the same lines.

Accenture buys Udacity

That brings us to Accenture buying Udacity (for peanuts) and saying they plan to invest $1 billion (yes, $1 BILLION) in LearnVantage, an AI-first learning platform. Interesting move. They say it will be an AI platform... then make the mistake of saying it will primarily teach AI. That makes no sense. It is the old thinking of 'let's build a pile of courses'. Consultancies don't build good tech (neither did Udacity) and if Accenture lose their objectivity as solutions consultants, they do themselves damage. You can't be a consultant then turn around and say, by the way, the 'optimal' solution is our platform.

However, this doesn’t really matter, as this is just the first line in the sand in a major market shift. If they don’t succeed, someone else will. The huge tech companies could do this and may well enter the market but their eyes are on bigger fish - productivity tools. They are never good in the learning market. They're not looking for gold, as they make a ton from selling the shovels.

The LMS is dead, long live the LMS!

Some love them, some hate them. Some love to hate them.

1. Zombie LMS

Some organisations have a Zombie LMS. At the very mention of its name, managers and learners roll their eyes. Organisations can get locked into LMS contracts that limit their ability and agility to adopt innovations. Many an LMS lies like an old fossil, buried in the enterprise software stack, churning away like an old heating system – slow, inefficient and in constant need of repair. Long term licences, inertia and the cost of change, see the organisation locked into a barely functional world of half-dead software and courses.

2. Functional creep

Our LMS does everything. “Social?” “Yes, that as well”. Once the LMS folk get their hooks into you, they extend their reach into all sorts of areas where they don’t belong. Suddenly they have a ‘chat’ offer, that is truly awful – but part of the ‘complete LMS solution’. For a few extra bucks they solve all of your performance support, corporate comms, HR and talent management problems, locking you bit by bit into the deep dungeon they’ve built for your learners, never to see the light again.

3. Courses, of course

The LMS also encourages an obsession with courses. I'm no fan of Maslow's clichéd pyramid of needs but he did come up with a great line: "If you only have a hammer, you tend to see every problem as a nail." That is precisely the problem with the LMS. Give an organisation an LMS and every problem is solved by a 'course'. This has led to a culture of over-engineered, expensive and long-winded course production that aligns with the use of the LMS and not with organisational or business needs. What we end up with are a ton of crap leadership, DEI and compliance courses.

4. Cripples content

Throw stuff into some VLEs and LMSs and they spit out some really awful-looking stuff. Encouraged to load up half-baked course notes, teachers and trainers knock out stuff that conforms solidly to that great law of content production, GIGO: garbage in, garbage out. Graphic, text, graphic, text, multiple-choice question... repeat. The Disneyfication of learning has happened, with tons of hokey, cartoon and speech-bubble stuff. Out go simulations and anything that doesn't conform to the simple, flat, linear content that your LMS can deliver. Or even worse, gamification: some infantile game that feels as though it is designed for 10-year-olds!

5. One size fits all

With the rise of AI, adaptive and personalised learning, the LMS becomes an irritation. They don’t cope well with systems that deliver smart, personalised learning pathways. The sophisticated higher-level learning experiences are locked out by the limited ability of the LMS to cope with such innovation. The LMS becomes a sort of cardboard SCORM template through which all content must fit. But it’s the ‘learn by doing’, performance support and experiential learning that most LMSs really squeeze out of the mix.

6. Compliance hell

We all know what happened in compliance training. L&D used the fallacious argument that the law and regulators demand oodles of long courses. In fact, no law and very few regulators demand long, bad, largely useless online courses. This doesn’t work. In fact, it is counterproductive, often creating a dismissive reaction among learners. Yet the LMS encourages this glib solutionism.

7. Completion cul-de-sacs

With the LMS, along came SCORM, a ‘standard’ that in one move pushed everyone towards ‘course completion’. Learning via an LMS was no longer a joyous thing. It became an endless chore, slogging through course after course until complete. Gone is the idea that learning journeys can be interesting, personal affairs. SCORM is a completion whip that is used to march learners in lock-step towards completion.

8. Limits data

Given the constraints of most LMSs, there is the illusion that valuable data is being gathered, when in fact, it’s merely who does what course, when, and did they complete. As the world gets more data hungry, the LMS may be the very thing that stops valuable data from being gathered, managed and used.

To be fair...

To be fair, a VLE or LMS was often the prime mover for shifting people away from pure classroom delivery. This is still an issue in many organisations, but at least they effected a move, at the enterprise level, away from often lacklustre and expensive classroom courses. In fact, with blended learning, you can manage your pantheon of delivery channels, including classroom delivery, through your LMS (classroom planning is often included). As enterprise software they also scale, control what can be chaos and duplication, and provide consistency and strategic intent. You do need to identify and manage your people, store stuff, deliver stuff and manage data, and an LMS is simply a single integrated piece of software for doing that. You may want to do without one, but you'll end up integrating the other things you use, and that will be a sort of LMS. There are also security issues, which they handle.

There will always be a need for single solutions. We can see, however, that this has descended into the mess that is the all-embracing death-clutch of 'Talent management'.

Conclusion

Organisations need enterprise software. We’ve been through the course repository model, that got stuck in the rut of rather flat e-learning. The new model is more dialogue than monologue. The incumbent VLE and LMS models need to adapt quickly or be replaced by those who do AI well. The VLE and LMS market looks like something out of the early 2000s, that’s because it is something out of that era. Many of these companies started then and having moved from client-server structure to the cloud, still have legacy code and lack the flexibility to work in this new world. My guess is that some stand a chance, many do not. If all you have done is add some prompted creation tools to your offer – forget it.

We have a chance to break out of this repository-of-courses model, crippled by box-ticking compliance and impoverished on data by SCORM, to create more dynamic platforms that cope with formal and informal learning, as well as performance support, tutorbots and data that informs learning and personal development. AI is the technology that appears to promise some sort of escape velocity from these repositories. You can already feel the blood drain from the old model as the new tools become available and improve so quickly.

Tuesday, March 05, 2024

Is ‘Deepfake’ hysteria mostly fake news?

Deepfakes touch a nerve. They are easy to latch on to as an issue of ethical concern. Yet despite the technology being around for many years, there has been no deepfake apocalypse. The surprising thing about deepfakes is that there are so few of them. That is not to say it cannot happen. But it is an issue that demands some cool-headed thinking.

Deepfakes have been around for a long time. Roman emperors sometimes had their predecessors' portraits altered to resemble themselves, thereby rewriting history to suit their narrative or to claim a lineage. Fakes in print and photography have been around as long as those media have existed.

In my own field, learning, a huge number of people have for decades used one deliberate fake: the 'learning pyramid'. It is entirely made up, based on a fake citation and fake numbers put on a fake pyramid. Yet I have seen a Vice Principal of a university, and no end of keynote speakers at conferences and educationalists, use it in their presentations. I have written about such fakery for years, and a lesson I learnt a long time ago was that we tend to ignore deepfakes when they suit our own agendas. No one complained when naked Trump images flooded the web, but if it's from the Trump camp, people go apeshit. In other words, the debate often tends to be partisan.

When did AI deepfakes start?

Deepfakes, as they're understood today, refer specifically to media that's been altered or created using deep learning, a subset of artificial intelligence (AI) technology.

The more recent worries about AI-created deepfakes date from 2017, when the term 'deepfake' (a portmanteau of 'deep learning' and 'fake') was coined. It was on Reddit that a user called 'Deepfake' started posting videos in 2017 in which celebrities' faces were superimposed on other bodies.

Since then, the technology has advanced rapidly, leading to more realistic deepfakes that are increasingly difficult to detect. This has raised significant ethical, legal, and social concerns regarding privacy, consent, misinformation, and the potential for exploitation. Yet there is little evidence that they are having any effect on either beliefs or elections.

Deliberate deepfakes

The first widely known instance of a political AI deepfake surfaced in April 2018. This was a video of former U.S. President Barack Obama, made by Jordan Peele in collaboration with BuzzFeed and the director's production company, Monkeypaw Productions. In the video, Obama appears to say a series of controversial statements. However, it was actually the voice of Jordan Peele, an impressionist and comedian, with AI technology used to manipulate Obama's lip movements to match his speech. We also readily forget that it was Obama who pioneered the harvesting of social media data to target voters with political messaging.

The Obama video was actually created as a public service announcement to raise awareness about the potential misuse of deepfake technology in spreading misinformation and the importance of media literacy. It wasn't intended to deceive but rather to educate the public about the capabilities and potential dangers of deepfake technology, especially concerning its use in politics and media.

In 2019, artists created deepfake videos of UK politicians, including Boris Johnson and Jeremy Corbyn, in which they appeared to endorse each other for Prime Minister. These videos were made to raise awareness about the threat of deepfakes in elections and politics.

In 2020, a notable deepfake video showed Belgian Prime Minister Sophie Wilmès giving a speech in which she linked COVID-19 to environmental damage and the need to take action on climate change. This video was actually created by an environmental organisation to raise awareness about climate change.

In other words, many of the most notable deepfakes have been for awareness, satire, or educational purposes.

Debunked deepfakes

Most deepfakes are quickly debunked. In 2022, during the Russia-Ukraine conflict, a deepfake video of Ukrainian President Volodymyr Zelensky was circulated. In the video, he appeared to be making a statement asking Ukrainian soldiers to lay down their arms. Deepfakes like this are usually quickly identified and debunked, but it shows how dangerous misinformation can be during sensitive times like a military conflict.

The recent images of Donald Trump were explicitly stated to be deepfakes by their author. They had missing fingers, odd teeth, a long upside-down nail on his hand and weird words on hats and clothes, and so were quickly identified. At the moment they are easy to detect and debunk. That won't always be the case, which brings us to detection.

Deepfake detection

As AI develops, deepfake production becomes easier, but so do advances in AI and digital forensics for detection. You can train models to tell the difference by analysing facial expressions, eye movement, lip sync and overall facial consistency. There are subtleties in facial movements and expressions, blood vessel giveaways, as well as eye blinking, breathing, blood pulses and other movements that are difficult to replicate in deepfakes. Another approach checks for consistency in lighting, reflections, shadows and backgrounds. Frame-by-frame checking can also reveal flickers and other signs of fakery. Then there's audio detection, with a whole rack of its own techniques. On top of all this are forensic checks on the origins, metadata and compression artefacts that can reveal the creation, tampering or unreliable source of a file. Let's also remember that humans can be used to check, as our brains are fine-tuned to find these tell-tale signs, so human moderation still has a role.
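To make the frame-by-frame idea concrete, here is a toy sketch of a consistency check: score how abruptly pixel intensities jump between consecutive frames and flag unusually large jumps as possible tampering. Real detectors use learned models; the frames and the threshold below are invented purely for illustration.

```python
# Toy sketch of frame-by-frame consistency checking.
# Frames are simple grey-scale grids (lists of lists of 0-255
# intensities). A sudden jump between consecutive frames is the
# kind of 'flicker' a forensic check looks for.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def flag_flickers(frames, threshold=30.0):
    """Return indices of frames whose jump from the previous
    frame exceeds the (illustrative) threshold."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Three frames: two near-identical, then one abrupt jump at index 2.
smooth = [[100, 100], [100, 100]]
tampered = [[200, 10], [10, 200]]
frames = [smooth, [[101, 99], [100, 101]], tampered]
print(flag_flickers(frames))  # → [2]
```

A production system would run this kind of signal, alongside many others, through a trained classifier rather than a fixed threshold.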

As deepfake technology becomes more sophisticated, the challenge of detecting it increases, but these techniques are constantly evolving, and companies often use a combination of methods to improve accuracy and reliability. There is also a lot of sharing of knowledge across companies to keep ahead of the game.

So it is easier to detect deepfakes than many think. There are plenty of tell-tale signs that AI can use to detect, police and prevent them from being shown. These techniques have been honed for years, and that is the reason why so few ever actually surface on social media platforms. Facebook, Google, X and others have been working on this for years. That is why they have not been caught flat-footed on the issue.

Deepfakes in learning 

We should also remember that deepfakes can be useful. I have used them to create several avatars of myself, which speak languages I cannot speak. They have been used to recreate historical figures for educational documentaries and interactive learning experiences. You see and hear historical figures 'come to life', making the learning process more engaging. Language courses have used them to create videos and immersive language-learning experiences, as the lip-sync is now superb. Even museums and educational institutions have started using deepfake technology to create more immersive exhibits. On top of this, real training projects in sectors like medicine now use deepfake technology to create realistic training videos or simulations, in which patients and healthcare staff can be represented.

Conclusion

We too readily jump to conclusions when it comes to AI and ethics; there is often a rush to simplistic moralising when the truth is deeper and more complex. Technology almost always has multiple uses, with varying degrees of benefit and harm. We tend to lean towards the negative through confirmation and negativity bias. This needs to be countered by a more detailed discussion of the issues, not by presenting everything in apocalyptic terms.


Monday, March 04, 2024

The Mind is Flat!

Nick Chater’s ‘The Mind is Flat: The Remarkable Shallowness of the Improvising Brain’ is an astonishing work, a book that is truly challenging. He argues against the common belief that our thoughts and behaviours are deeply rooted in our subconscious. Mental depth for him is an illusion. Instead, he suggests that our minds are flat, meaning that they operate on the surface level without deep, hidden motivations or unconscious processes.

Training is post-rationalisation

For me, he explains why most training is post-rationalisation: simplistic stories we tell ourselves about cognition. We latch on to abstract words like creativity, critical thinking and resilience, then wrap them up in PowerPoints to create coherent stories that are quite simply fictions. This is why they are so ineffective in the real world. They make you think there are easy solutions, simple bromides for action, when there are not. He thinks this is all wrong, and I think he is right.

Cognition is improvisational

Chater supports his arguments fully by discussing various psychological studies and experiments. He proposes that our thoughts, feelings, and behaviours are largely improvisational and context-dependent. According to his theory, our responses to situations are not driven by inner beliefs or desires, but are rather ad-hoc constructions created on the spot. This idea challenges the traditional views of psychology and suggests that much of what we believe about our internal thought processes might be an illusion.

Attacks psychoanalytic and psychotherapeutic worlds

It is a direct attack on the whole psychoanalytic and psychotherapeutic world and if true, renders much of what passes for psychology as speculative rot. He challenges the whole notion of a complex, subconscious mind that can be unlocked or understood through psychoanalysis or similar therapeutic methods. Since our thoughts and behaviours are improvised on the surface level and are context-dependent, delving into the supposed depths of the subconscious to find hidden meanings or repressed memories, as is often the goal in psychoanalysis, is likely to be misguided. He suggests that the mind doesn't work in the way psychoanalysis proposes, with its emphasis on uncovering deep-seated, unconscious desires and motivations.

Over-rationalise

We over-rationalise when it comes to ideas about the brain, when it is fantastically complex and opaque. He touches upon Tolstoy, where Anna Karenina commits suicide. But why? The stories we tell ourselves about her motivations are, for Chater, quite wrong, as she herself would be incoherent about such things. Rationalism is the mistake here, the idea that there is a true answer for everything. Dennett has taken a similar position, where conscious rationalisation is always retrospective, delving back into the brain. The brain does huge, complex, parallel computations and has no locus or simplistic causes. The same applies to LLMs: there is no place you can point to for the production of an answer. The brain, like an LLM, is necessarily opaque.

Stories are misleading

We are improvisers, and this is where our 'storied self' is misleading. We simply make most things up and use simple and approximate models to get through our lives. These simulations are often crude. Geoffrey Hinton, back in 1979, talked about the shallowness of human inference, using imagined cubes as an example. Our simulations of the world are momentary and not wholly coherent. We build models of it (the cone of experience), trying to see it as consistent. In fact, we deal with very localised bits of the world, a sliver of reality. We can't model the world in our brains, as the world is much larger than us! It is all a matter of approximation, analogy and past experience.

We latch onto abstract models and essences, but these are far too reductive. Human exceptionalism is a good example of this, with words like 'creativity' and, in general, 21st-century skills. Chater thinks these are misleading terms, as they are too abstract and exclude the complexity of actual cases. He and Wittgenstein are, I think, correct on this. Language is promiscuous and tends to over-produce abstractions which we think are real but turn out to be just that: misleading abstractions.

Our sense of our own psychology is almost completely wrong, as we have incredibly limited perception of the world through our senses, and our minds work very differently from how we think they work. Colour is unlikely to be essential out there in the world, as colours are mental constructions; similarly with temperature, as experienced.

Bayesian brains?

One could argue that there are fundamental models, like pure reason, mathematics and science – axiomisation does happen, often after huge amounts of effort, but very few things are, in practice, axiomised. We may have some of this axiomised knowledge in us but this is unlikely to be foundational in the way psychology or neuroscience thinks it is.

As they say, all models are wrong but some are useful. We can, for example, hypothesise about the brain being a Bayesian organ. This may be true, but the brain is more likely to use things similar to Bayesian approaches to cognition. Tom Griffiths and Josh Tenenbaum follow this line, but Chater thinks this is very localised and not sufficient for most cognition.
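For readers unfamiliar with the Bayesian picture, the core idea is just repeated belief updating: a prior belief is revised by evidence into a posterior. A minimal sketch, with hypotheses and numbers invented purely for illustration:

```python
# Minimal Bayesian belief update: posterior ∝ prior × likelihood.
# The hypotheses and probabilities below are invented for illustration.

def bayes_update(prior, likelihood):
    """Update a prior (dict of hypothesis -> probability) given the
    likelihood of the observed evidence under each hypothesis."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses about an ambiguous sound: 'speech' vs 'noise'.
prior = {"speech": 0.5, "noise": 0.5}
likelihood = {"speech": 0.9, "noise": 0.3}  # evidence fits 'speech' better
posterior = bayes_update(prior, likelihood)
print(round(posterior["speech"], 2))  # → 0.75
```

Chater's point is that while something like this may operate locally in perception, it is far too tidy to serve as a general account of cognition.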

Conclusion

It has been a while since I read anything that so reversed my long-held beliefs. Heavily influenced by my reading and work in AI, I had been coming to a similar, but ill-informed and badly-evidenced, belief that this was indeed the case. It changes your whole perspective on cognition and behaviour. AI is showing us that much of what passes for human behaviour can be reproduced to a degree by LLMs and other forms of AI. This should not be so astonishing if Chater is right that we are quite shallow thinkers.