Thursday, June 27, 2024

Is the Dispatches C4 programme a completely faked experiment?

Dispatches on Channel 4. Twelve households of supposedly undecided voters were put to the test and force-fed AI deepfakes. Split into two groups, they go through a mock election at the end.

The programme lays its stall out from the start – they're out to confirm what they already believe. This is not research, it's TV's odd version of confirmation bias. The three experts are the usual suspects and, guess who, the famously fragile interviewer Cathy Newman. They are actually zealots for deepfake danger – they are willing this to happen. A dead giveaway is the choice of past deepfake examples they pull up – all left-wing figures – Hillary Clinton and Sadiq Khan. One of the three experts is, of course, a Labour Party comms expert!

Here's the rub. Feeding participants a limited diet of information that massively increases the ratio of fake to real news renders the whole experiment useless. The presence of Channel 4, of camera crews and lighting as participants are set up to watch the fakes, adds to the impression that this is authoritative content. They are deliberately set up to receive this 'leaked' news. It completely destroys any notion of ecological validity in the experiment. In fact, you're almost forcing them into believing what you're telling them. The programme becomes a case study in bad research and investigation – it becomes fake news in itself, a ridiculously biased experience masquerading as authoritative journalism.

Actual research

Hugo Mercier's book Not Born Yesterday debunks the foundations of this moral panic about deepfakes, with research showing that their effect is marginal, that they are quickly debunked and that few actually believe them. He argues that humans are not as gullible as often portrayed. Instead, we are selective about the information we believe and share. He explores how social dynamics and evolutionary history have shaped human reasoning and belief systems, making us more resistant to deception than commonly assumed. Most of us weren't born yesterday. Language didn't evolve to be immediately believed as true.

Brendan Nyhan, a world-class researcher who has studied the impact and implications of deepfakes for many years, is clear. His research focuses on the potential threats posed by deepfakes to democratic processes and public trust. Nyhan argues that while deepfakes represent a significant technological advance, their real-world impact on public perception and misinformation may be more limited than often suggested. He emphasises that the most concerning scenarios, in which deepfakes substantially alter public opinion or significantly disrupt political processes, are rare, and that the actual use and effectiveness of deepfakes in shifting mass perceptions have so far been limited.

Deepfakes touch a nerve

They are easy to latch on to as an issue of ethical concern. Yet despite the technology being around for many years, there has been no deepfake apocalypse. The surprising thing about deepfakes is that there are so few of them. That is not to say it cannot happen. But it is an issue that demands some cool thinking.

Deepfakes have been around for a long time. Roman emperors sometimes had their predecessors' portraits altered to resemble themselves, thereby rewriting history to suit their narrative or to claim a lineage. Fakes in print and photography have been around as long as those media have existed.

In my own field, learning, fakes have circulated for decades. Dale's Cone is entirely made up: a fake citation and fake numbers put on a fake pyramid. Yet I have seen a Vice Principal of a university, and no end of keynote speakers and educationalists at conferences, use it in their presentations. I have written about such fakery for years, and a lesson I learnt a long time ago is that we tend to ignore fakes when they suit our own agendas. No one complained when naked Trump images flooded the web, but if a fake comes from the Trump camp, people go apeshit. In other words, the debate often tends to be partisan.

When did recent AI deepfake anxiety start?

Deepfakes, as they're understood today, refer specifically to media that's been altered or created using deep learning, a subset of artificial intelligence (AI) technology.

The more recent worries about AI-created deepfakes date from 2017, when the word 'deepfake' (a portmanteau of 'deep learning' and 'fake') was first used for generated images and videos. It was on Reddit that a user called 'Deepfake' started posting videos of celebrities superimposed on other bodies.

Since then, the technology has advanced rapidly, leading to more realistic deepfakes that are increasingly difficult to detect. This has raised significant ethical, legal, and social concerns regarding privacy, consent, misinformation, and the potential for exploitation. Yet there is little evidence that they are having any effect on either beliefs or elections.

Deliberate deepfakes

The first widely known instance of a political AI deepfake surfaced in April 2018. This was a video of former U.S. President Barack Obama, made by Jordan Peele in collaboration with BuzzFeed and the director's production company, Monkeypaw Productions. In the video, Obama appears to make a series of controversial statements. However, it was actually the voice of Jordan Peele, an impressionist and comedian, with AI used to manipulate Obama's lip movements to match the speech. We also readily forget that it was Obama who pioneered the harvesting of social media data to target voters with political messaging.

The Obama video was actually created as a public service announcement to raise awareness about the potential misuse of deepfake technology in spreading misinformation and the importance of media literacy. It wasn't intended to deceive but rather to educate the public about the capabilities and potential dangers of deepfake technology, especially concerning its use in politics and media.

In 2019, artists created deepfake videos of UK politicians including Boris Johnson and Jeremy Corbyn, in which they appeared to endorse each other for Prime Minister. These videos were made to raise awareness about the threat of deepfakes in elections and politics.

In 2020, the most notable deepfake was a video of Belgian Prime Minister Sophie Wilmès giving a speech in which she linked COVID-19 to environmental damage and the need to act on climate change. The video was actually created by an environmental organisation to raise awareness about climate change.

In other words, many of the most notable deepfakes have been for awareness, satire, or educational purposes.

Debunked deepfakes

Most deepfakes are quickly debunked. In 2022, during the Russia-Ukraine conflict, a deepfake video of Ukrainian President Volodymyr Zelensky was circulated. In the video, he appeared to be making a statement asking Ukrainian soldiers to lay down their arms. Deepfakes like this are usually quickly identified and debunked, but it shows how dangerous misinformation can be at sensitive times such as a military conflict.

More recent images of Donald Trump were explicitly declared to be deepfakes by their author. They had missing fingers, odd teeth, a long upside-down nail on his hand and garbled words on hats and clothing, so they were quickly identified. At the moment they are easy to detect and debunk. That won't always be the case, which brings us to detection.

Deepfake detection

As AI develops, deepfake production becomes easier, but so do advances in AI and digital forensics for detection. You can train models to tell the difference by analysing facial expressions, eye movement, lip sync and overall facial consistency. There are subtleties in facial movements and expressions, blood-vessel giveaways, as well as eye blinking, breathing, blood pulses and other movements that are difficult to replicate in deepfakes. Another approach is checking for consistency in lighting, reflections, shadows and backgrounds. Frame-by-frame checking can also reveal flickers and other signs of fakery. Then there's audio detection, with a whole rack of its own techniques. On top of all this are forensic checks on origins, metadata and compression artefacts that can reveal creation, tampering or an unreliable source. Let's also remember that humans can check too, as our brains are fine-tuned to spot these tell-tale signs, so human moderation still has a role.
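To make one of those forensic checks concrete, here is a toy error-level-analysis sketch for the 'compression artefacts' signal mentioned above. It is illustrative only: the file name is a placeholder and any threshold for suspicion would need calibrating against real data.

```python
# Toy error-level analysis (ELA): re-save the image at a known JPEG quality and
# measure how much it changes. Regions that were generated or pasted in often
# recompress differently from the rest of the image.
from PIL import Image, ImageChops

def error_level_score(path: str, quality: int = 90) -> float:
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    pixels = list(diff.getdata())
    # Mean per-channel difference; unusually high or uneven values are one
    # possible sign of an inconsistent compression history.
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)

if __name__ == "__main__":
    print(f"Mean ELA difference: {error_level_score('suspect.jpg'):.2f}")  # 'suspect.jpg' is a placeholder
```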

As deepfake technology becomes more sophisticated, the challenge of detecting it increases, but these techniques are constantly evolving, and companies often use a combination of methods to improve accuracy and reliability. There is also a lot of knowledge-sharing across companies to keep ahead of the game.

So it is easier to detect deepfakes than many think. There are plenty of tell-tale signs that AI can use to detect, police and prevent them from being shown. These techniques have been honed for years, which is why so few ever actually surface on social media platforms. Facebook, Google, X and others have been working on this for years.

It is also the case, as Yann LeCun keeps telling us, that deepfakes are largely caught, quickly spotted and eliminated. AI does a good job of policing AI deepfakes. That is why the platforms have not been caught flat-footed on the issue.


This blind trial paper raises some serious questions on assessment

In a rather astonishing blind-trial study (the markers were unaware) by Scarfe (2024), the researchers inserted GenAI-written submissions into an existing examination system, covering five undergraduate modules across all years of the BSc Psychology at Reading University.

The results were, to my mind, not surprising but nevertheless quite shocking.

94% of AI submissions went undetected

AI submission grades were, on average, half a grade higher than the students'

What lessons can we learn from this paper?

First, faculty can't distinguish AI from student work in exams (94% undetected). Second, and just as predictable, AI outperformed students by half a grade. This is unsurprising, as a much larger study, Ibrahim (2023), 'Perception, performance, and detectability of conversational artificial intelligence across 32 university courses', showed that ChatGPT's performance was comparable, if not superior, to that of students across 32 university courses. The authors added that AI detectors cannot reliably detect ChatGPT's output, as they too often claim that human work is AI-generated, and the text can be edited to evade detection.

Framing everything as plagiarism?

More importantly, there is an emerging consensus among students to use the tool, while faculty tend to see its use only through the lens of plagiarism.

The paper positions itself as a 'Turing test' case study. In other words, the hypothesis was that GPT-4 exam outputs are largely indistinguishable from human work. In fact, on average the AI submissions scored higher. They saw this as a study about plagiarism, but there are much bigger issues at stake. As long as we frame everything as a plagiarism problem, we will miss the more important, and harder, questions.

This is becoming untenable.

Even primitive prompts suffice?

As Dominik Lukes at the University of Oxford, an expert in AI, noted about this new study: "The shocking thing is not that AI-generated essays got high grades and were not detected. Because of course, that would be the outcome. The (slight) surprise to me is that it took a minimal prompt to do it."

The authors used a standardised prompt to GPT-4 to produce answers for each type of exam. For SAQ exams the prompt was:

Including references to academic literature but not a separate reference section, answer the following question in 160 words: XXX

For essay-based answers the prompt was:

Including references to academic literature but not a separate reference section, write a 2000 word essay answering the following question: XXX

In other words, with absolutely minimal effort, undetectable AI submissions are possible and produce better-than-student results.
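For illustration only, this is roughly what such a minimal-effort pipeline might look like. It is not the study's actual code: it assumes the current OpenAI Python client, an API key in the environment, and a placeholder model name and question.

```python
# Minimal sketch: send the paper's standardised SAQ prompt to a chat model.
# Model name and question are placeholders, not the study's exact set-up.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SAQ_PROMPT = ("Including references to academic literature but not a separate "
              "reference section, answer the following question in 160 words: {question}")

def generate_saq_answer(question: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": SAQ_PROMPT.format(question=question)}],
    )
    return response.choices[0].message.content

print(generate_saq_answer("How does working memory constrain learning?"))
```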

Are the current methods of assessment fit for purpose?

Simply asking for short or extended text answers seems to miss many of the skills that an actual psychologist requires. Text assessment is often the ONLY form of assessment in Higher Education, yet psychology is a subject that deals with a much wider range of skills and modalities.

Can marking be automated?

I also suspect that the marking could be completely automated. The simple fact that a basic prompt scores higher than the average student suggests that the content is easily assessable. Rather than have expensive faculty assess everything, provide machine-assisted marking and draft feedback to faculty, freeing them to give students more useful feedback.
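As a sketch of what machine-assisted marking could look like (not a claim about how the Reading scripts were actually marked), a model can be asked to apply a rubric and return a draft mark plus feedback for a human marker to review. The rubric and model name here are placeholders.

```python
# Draft-marking sketch: the model applies a placeholder rubric and returns JSON
# for a human marker to review and override. Assumes the OpenAI Python client.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

RUBRIC = "Award 0-10 for accuracy, 0-5 for use of literature, 0-5 for structure."

def draft_mark(question: str, answer: str, model: str = "gpt-4o") -> dict:
    prompt = (f"Question: {question}\nStudent answer: {answer}\n"
              f"Apply this rubric: {RUBRIC}\n"
              "Respond as JSON with keys 'scores', 'total' and 'feedback'.")
    response = client.chat.completions.create(
        model=model,
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```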

Will this problem only get bigger?

It is clear that a simple prompt containing the question suffices to exceed student performance. I would assume that with GPT-5 the gap will widen further. This leads into a general discussion about whether white-collar jobs, in psychology or, more commonly, the jobs psychology graduates actually get, require this time and expense (to both state and student). Wouldn't we be better focusing on training for more specific roles in healthcare and other fields, such as HR and L&D, with modules in psychology?

Should so many be doing a Psychology degree?

There were 140,425 enrolments in psychology in 2022/23. The intake seems to conform to the gendered stereotype that females tend to choose subjects with a more human angle, as opposed to other sciences: 77.2% are female, and the numbers have grown massively over the years. Relatively few will get jobs in fields directly related to psychology, and even indirectly, it is too easy to claim that just because you studied psychology you have some actual ability to deal with people better.

The Office for Students (OfS) suggests that a minimum of 60% of graduates enter professional employment or further study. Graduate employability statistics for universities and the government are determined by the Graduate Outcomes survey. The survey measures a graduate's position, whether that's in employment or not, 15 months after finishing university. Yet the proportion of Psychology graduates undertaking further education or finding a professional job is relatively small compared with vocational degrees 15 months after graduation (Woolcock & Ellis, 2021).

Many graduates' career goals are still generic, centring on popular careers such as academia and clinical and forensic psychology, which limits their view of alternative, more easily accessible, but less well-paid jobs (Roscoe & McMahan, 2014). It actually takes years (3-5), even for the very persistent, to get a job as a psychology professional, due to postgraduate training and work-experience requirements (Morrison Coulthard, 2017).

AI and white-collar jobs?

If GPT-4 performs at this level in exams across the undergraduate degree, then it is reasonable, if not likely, to expect the technology to do a passable job now in jobs that require a psychology degree. Do we really need to put hundreds of thousands of young people through a full degree in this subject, at such expense and with subsequent student debt, when many of the tasks are likely to be seriously affected by AI? Many end up in jobs that do not actually require a degree, certainly not a Psychology degree. It would seem that we are putting more and more people through this subject for fewer and fewer available jobs, and even then, their target market is likely to shrink. It seems like a mismatch between supply and demand. This is a complex issue but worthy of reflection.

Conclusion

Rather than framing everything as a plagiarism problem, seeing things as a cat-and-mouse game (where the mice are winning), there needs to be a shift towards better forms of assessment and, more than this, some serious reflection on why we are teaching and assessing, at great expense, subjects that are vocational in nature but have few jobs available.

Wednesday, June 26, 2024

Natural-born cyborgs - learning's new imperative with AI

A much more philosophical and practical book than Anil Seth's Being You is The Experience Machine by Andy Clark. It takes deeper dives into conceptualising the predictive brain as it looks both inwards and outwards. The book explodes into life in Chapter 6, with an absolutely brilliant synthesis of predictive processing towards goals and the idea of the extended mind (he and Chalmers wrote the seminal paper in 1998). He starts with Tabitha Goldstaub, a successful entrepreneur who is dyslexic, who uses SwiftKey and Grammarly and sees speech-to-text as her saviour. This is an example of the extended mind, where we naturally use tools and aids to get things done. She moved beyond the brain because she had to. But this is about all of us.

When I travel, I book online, get my boarding pass for the airport, book my hotel, glide through an electronic gate at customs using face recognition, order an Uber, get a list of things to see on my smartphone, check out restaurants and use Google Maps to get to places – this seamless weave of mind and technology puts paid to the neuro-chauvinism around the mind being unique. As the "weave between brain, body and external resources tightens", our lives become easier, faster and more frictionless, and opportunities to act and learn expand. This looping of the brain through the digital world becomes second nature; biology and technology entwine. The same is true in learning.

Humility

These theorists recommend that we need some humility here. It is not the traditionalists that show humility, as they are addicted to biological-chauvinism. The brain is perhaps less deep than they or we care to admit. What goes on inside our heads is limited, leaky, full of bottlenecks, with a seriously narrow working memory and fallible long-term memory. It stutters along, improvising as it goes.

He makes the excellent point that we ALL have a form of dementia, in the sense of constantly failing and fallible memories. This also means we ALL have learning difficulties. The average human is a poor benchmark compared to the digitally enhanced human with a smartphone, especially one with AI – which now means all of them. We use AI unwittingly when we search, use Google Maps, translate and so on. Increasingly we are using it to enhance performance.

Our brains don't really care whether what we use is inside the skull or on a smartphone, as long as it gets the job done. One consequence is his recommendation that we embrace a more fluid use of tools and supportive environments to get tasks done and improve our own performance. He asks us to imagine we had Alzheimer's and needed labels, pictures, reminders and supportive environments to get through our day. This idea of linking task performance to performance support makes sense. It chains learning to real-world action and actual performance.

We need to move on and see the encouragement and use of AI as a core activity. If we get stuck in the mindset of ranting and railing against the extended mind, even worse seeing it as a threat, we will be bypassed. The extended mind has been made real and relevant by AI. That is our game.

Natural-Born Cyborgs

In Mollick's 2023 paper on productivity with 758 consultants from the Boston Consulting Group across 18 tasks, 12.2% more tasks were completed, with 25.1% faster completion and an astonishing 40% higher quality. The more fascinating finding from the data was the emergence of a group of superusers (cyborgs), who integrated AI fully into their workflow. Low-latency, multimodal support has made this even more potent. Compare these to the centaurs, who benefit, but less, as they saw it as an add-on, adjunct technology, not as the extended mind.

We need to be looking at AI in terms of what Andy Clark calls the 'natural-born cyborg'. We now collaborate with technology as much as people to get things done. It is a rolling, extended process of collaboration, where we increasingly don't see it as separate from thinking. We have to free ourselves from the tyranny of time, location, language limits, and embrace the bigger opportunities that technology now offers through cognitive extension.

The naked brain is no longer enough and our job should be to weave that brain into the web of resources at the right time to get our jobs done, not drag people off into windowless rooms for courses or subject them to over-long, over-linear and impoverished e-learning courses, where they feel as though they have no agency.

Neurodiversity

Another consequence of the predictive, computational model of the brain is its ability to explain autism, PTSD, dreams, mental illnesses such as schizophrenia and hallucinogenic drug experiences. In The Experience Machine (2023) Clark goes into detail, with real case studies, on how predictive processing models suggest that psychosis can arise from disruptions in the brain's ability to accurately predict sensory input, leading to hallucinations or delusions. Schizophrenia is one example. Similarly, the effects of drugs on perception and cognition can be understood as alterations in the brain's predictive models, changing the way it interprets sensory information or the level of confidence it has in its predictions. Autism is framed as predictive activity with less regulation. Dreams can also be seen as a state where the brain generates internal predictions in the absence of external sensory input, which could explain their often bizarre and illogical nature, as the predictive models operate without the grounding of real-world sensory data.

Hacking the Prediction Machine

He finally encourages us to 'hack our predictive minds'. These hacks include meditation and technologies such as VR and mixed reality, along with AI. Therapies also fall into this category, encouraging us to break the negative cycles of predictive behaviour in conditions such as depression and anxiety.

Conclusion

There is much to be gained from theorists who push the boundaries of the mind into its interactions with the world, others and technology. The extended mind is an incredibly useful idea, as it explains both why and how we should be implementing technology for learning. This book gives us a cognitive bedrock with a computational theory of the mind, but is also fruitful in pointing us in the right direction on implementation through task support and performance support.


Monday, June 24, 2024

Being You by Anil Seth - brilliant introduction to contemporary neuroscience

I've seen a lot of 'neuroscience' talks at learning conferences, and am a bit weary of the old-school serotonin-dopamine story, strong conclusions and recommendations based on what often seems to be correlation not causation (beware of slides with scans), and claims about neuroscience that are often cognitive science. I've also found a lack of real knowledge about the explosion in computational, cognitive and contemporary neuroscience in relation to new theorists and theory, the Connectionists, such as Daniel Dennett, Nick Chater, Karl Friston, Josh Tenenbaum, Andy Clark and Anil Seth.

Copernican inversion 

By far the best introductory book on this new movement in neuroscience, what I call the ‘Connectionists’, is Being You by Anil Seth. It is readable, explains some difficult, dense and opaque concepts in plain English, is comprehensive and all about what Seth calls a ‘Copernican inversion’ in neuroscience.

Starting with a stunning reflection on the complete dissolution of consciousness during general anaesthetics, he outlines the philosophical backdrop of idealism, dualism, panpsychism, transcendental realism, physicalism, functionalism and, what I really liked, the more obscure mysterianism (often ignored).

He’s also clear on the fields that prefigure and inform this new movement; NCC (Neural Correlates of Consciousness) and IIT (Integrated Information Theory). After a fascinating discussion of his LSD experiences, along with an explanation for their weirdness, he shows that the brain is a highly integrated entity, embodied and embedded in its environment.

Controlled Hallucination 

His Copernican Revolution in brain theory, that consciousness is 'Controlled Hallucination', builds on Plato, Kant, then Helmholtz's idea of 'perception as inference'. The brain is constantly making predictions, and sensory information provides data that we try to match against our existing models in a continual process of error minimisation. This Copernican Inversion leaves the world as it is but sees the brain as an active, creative inferencing machine, not a passive receiver of sensory data.

There is the usual, but informative, notion that colour is in the hallucination, not the real world, and a series of illusions that demonstrate active, predictive processing and active attention, including the famous invisible gorilla video experiment.

He then covers most of the theories and concepts in this new area of neuroscience informed by the computational theory of the mind: abductive reasoning, generative modelling, Bayesian inference (particularly good), prediction error minimisation and the free energy principle (also brilliantly explained), all under the unifying idea of a controlled hallucination as the explanation for consciousness.
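To make 'perception as inference' concrete, here is a toy, invented-numbers illustration of the precision-weighted Bayesian update Seth describes: an uncertain prior expectation combined with sharper sensory evidence.

```python
# Toy Gaussian Bayesian update: prior expectation vs noisy sensory evidence,
# each weighted by its precision (1/variance). Numbers are invented.
def bayes_update(prior_mean, prior_var, obs, obs_var):
    prior_prec, obs_prec = 1 / prior_var, 1 / obs_var
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / (prior_prec + obs_prec)
    post_var = 1 / (prior_prec + obs_prec)
    return post_mean, post_var

# Expecting an object ~2m away (vague prior); the eyes report 1.2m (sharper evidence).
print(bayes_update(prior_mean=2.0, prior_var=1.0, obs=1.2, obs_var=0.25))  # ~ (1.36, 0.2)
# The posterior sits closer to the more precise sensory evidence; the gap between
# prediction and observation is the 'prediction error' that drives updating.
```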

Asides

There are some really well-written asides in the book. One, on art, expands on Riegl and Gombrich's idea of the 'Beholder's Share', where artists such as the Impressionists and Cubists demand active interpretation by the viewer, confirming the perceptual inference he presents as his theory of perception and consciousness. Art surfaces this phenomenon. Another is a series of fascinating experiments on time, showing that it is tied to perception, not an inner clock.

AI

The section on AI is measured. He separates intelligence from consciousness (rightly), is suspicious of functionalism, the basis for much of this theorising, and is sceptical about runaway conscious AI as an overextension. However, the book was published in 2021 and AI has progressed faster on the intelligence scale than it suggests. At the end of the section he introduces 'cerebral organoids', anticipating Geoffrey Hinton's Mortal Computer.

Conclusion

The only weak part of the book is his treatment of the ‘Self’. It is less substantial, not really dealing with the rich work of those who have looked at personal identity in detail, philosophically and psychologically. I was also surprised that he doesn’t mention Andy Clark, another ex-Sussex University theorist in the field, especially as he is closely associated with David Chalmers, who rightly gets lots of plaudits in the book. 

However, the fact that Anil lives in my home town Brighton is a plus! It covers a lot of the bases in the field and interleaves the hard stuff with more digestible introductions. A really fascinating and brilliant read.

PS

If you are generally interested in the theorists in this new field, John Helmer and I did a podcast on the Connectionists in the Netherlands, in front of a live audience. It was fun and covers many of the ideas presented in this book.


Friday, June 21, 2024

The DATA is in… AI is happening BIG TIME in organisations…

2024 is the year AI is having a massive impact on organisations in terms of productivity and use. Two reports, from Microsoft and Duke, show massive take-up. I showed this data for the first time this week at an event in London, where I also heard about GPT-5 being tested as we speak.

The shift has been rapid, beyond the massive wave of initial adoption where people were largely playing with the technology. During this phase, some were also building product (that takes time). We’ve built several products for organisations, pushing fast from prototype to product, now in the market being used by real users in 2024. That's the shift.

The M&A activity is also at fever pitch. The problem is that most buyers don't fully understand that startups are unlikely to have proven revenue streams in just 12 months. The analysts are miles behind, as they drive with their eyes on the rear-view mirror. Don't look to them for help. Large companies are looking for acquisitions, but the sharper ones are getting on with it.

Microsoft - AI is Here

The Microsoft and LinkedIn report 'AI is Here' surprised even me.

The survey and data, covering 31,000 people in 31 countries, draws on labour and hiring trends, trillions of productivity signals and Fortune 500 customers. The results clearly show that 2024 is the year AI at work gets real and that employees are bringing AI to work: 75% of people are already using AI at work.



Who is using it? Everyone. The data shows everyone from Gen Z to Boomers has jumped on board.


And looking to the future, it is becoming a key skill in recruitment.

We have moved from employees informally bringing AI to work to formal adoption, especially in large organisations. There's a serious interest in getting to know what to do and how to do it at scale. Next year will see the move from specific use cases, such as increasing productivity in processes, to enterprise-wide adoption. Some have already made that move.

Duke

CFOs that reported automating were also asked whether their firms had utilised artificial intelligence (AI) to automate tasks over the last 12 months.


CFOs that plan to automate over the next 12 months were asked about their plans to adopt AI over this period. Fifty-four per cent of all firms, and 76 per cent of large firms, anticipate utilising AI to automate tasks – a clear skew towards larger firms.

Conclusion

Anyone who thinks this is hype or a fad, needs to pay attention to the emerging data.

The problem is that the data has a US skew. We're all doing it, but the US is doing it faster. As they shoot for the stars, we're shooting ourselves in both feet through negativity and bad regulation. The growth upside and savings in education and health are being ignored while we hold conferences on AI and ethics, where few even understand what an 'ethical' analysis means. It's largely moralising, not ethics, with little understanding of the technology or of actual ethics.

 

Thursday, June 20, 2024

British Library. Books look like museum pieces, as that is what they are becoming?

Make it real! Can we actually deliver AI through current networks?


A talk and chat at the Nokia event held in the British Library. A wonderful venue, and I made the point that we first abstracted our ideas onto shells 500,000 years ago, invented writing 5,000 years ago and printing 500 years ago, and here we are discussing a technology that may eclipse them all – AI.

Bo heads up Nokia's Bell Labs, who are working on lots of edge computing and other network research, and we did what we do with ChatGPT – engaged in dialogue. I like this format, as it's closer to a podcast, more informal, and seems more real than a traditional keynote.

It was also great to be among real technology experts discussing the supply problems. There's something about focused practitioner events that makes them more relevant. Microsoft told us about GPT-5 testing, and there were some great case studies showing the massive impact AI is having on productivity.

Quantum computing was shown and discussed, and there was an interesting focus on the backend network and telco problems in delivering AI. We have unprecedented demand for compute and for the delivery of data at lower levels of latency, yet much of the system was never designed for this purpose.

Energy solutions

The race is on to find energy solutions such as:

Fusion is now on the horizon

Battery innovation progresses

AI to optimise power use now common

Low-power quantum computing beginning to be realised

Compute solutions

Models have to be trained, but low-latency dialogue also has to be delivered:

Chip wars with increasing capability at lower costs

Quantum computers with massive compute power

Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs), optimised for AI workloads with lower power consumption

Edge computing moves processing closer to the data source at the edge of the network, reducing the need for centralised compute resources and lowering latency

Federated learning allows multiple decentralised devices to collaboratively train models while keeping the data localised (a toy averaging sketch follows this list)

Neuromorphic computing with chips that mimic neural structures, offering potential efficiency gains for AI workloads
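As flagged in the list above, here is a toy sketch of the federated averaging idea: each device trains locally and only parameters, not data, are combined. The numbers are invented, and real systems add encryption, client sampling and many other safeguards.

```python
# Toy FedAvg: average client model parameters, weighted by local data size.
import numpy as np

def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three devices with locally trained parameter vectors (illustrative values only).
clients = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
sizes = [100, 300, 600]
print(federated_average(clients, sizes))  # the raw data never leaves the devices
```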

Software efficiency

There’s also a ton of stuff on software and algorithmic efficiency, such as:

Model compression through pruning, quantisation and distillation to reduce the size and computational requirements of AI models (a minimal quantisation example follows this list)

More efficient training methods like transfer learning, few-shot learning, and reinforcement learning to reduce the computational cost of building AI models.
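Picking one technique from the list above, here is a minimal sketch of post-training dynamic quantisation in PyTorch on a toy model; real models and real gains will differ, this only shows the mechanism.

```python
# Dynamic quantisation sketch: Linear weights stored as int8, dequantised on the fly.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(model(x).shape, quantised(x).shape)  # same interface, smaller weights
```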

Delivery

Network infrastructure moves towards 5G to provide high-speed, low-latency connectivity, essential for real-time AI applications and global delivery. Content Delivery Networks (CDNs) can cache AI models and results closer to users, reducing latency and bandwidth usage.

Two-horse race

Of course, all of this has to be delivered, and it is now clear that the biggest companies in the world are now AI companies. NVIDIA is now the most valuable company on the planet at $3.34 trillion, delivering the spades to the gold miners; Microsoft is at $3.32 trillion, Apple a touch less at $3.29 trillion, Google at $2.17 trillion and Facebook at $1.27 trillion. In China, Tencent stands at HK$3.65 trillion and Alibaba at HK$1.43 trillion. This is a two-horse race, with the US well ahead and China chasing and copying. Europe is still in the paddock.

Conclusion

Afterwards, I went to the British Library's Treasures of the British Library collection. There lay the first books, written, then printed. A 2,000-year-old homework book, early Korans, the Gutenberg Bible. We made this work by developing paper and printing technologies, block printing, moveable type, book formats, and networks for publishing and distribution. This was undermined by the internet, but something much more profound has just happened.



It struck me that in that same building we had just witnessed a revolution that surpasses both. The sum total of all that written material, globally, is now being used to train a new technology, AI, which allows us to have a dialogue with it and make the next leap in cultural advancement. We have invented a technology (books and printing were also technologies) that transcends even the digital presentation of print, into a world where the limitations of that format are clear. We are almost returning to an oral world, where we talk with our past achievements to move forward into the future.

We are no longer passive consumers of print but in active dialogue with its legacy. These books really did look like museum pieces as that is what print has become.

 

Friday, June 14, 2024

The 'Netflix of AI' that makes you a movie Director

Film and video production is big business. Movies are still going strong, and Netflix, Prime, Disney, Apple and others have created a renaissance in television. Box sets are the new movies. Social media has also embraced video with the meteoric rise of TikTok, Instagram, Facebook shorts and so on. YouTube is now an entertainment channel.

Similarly in learning. Video is everywhere. But it is still relatively time consuming and expensive to produce. Cut to AI…

We are on the cusp of a revolution in video production. As part of a video production company, then using Laserdiscs for interactive video simulations, I used to make corporate videos and interactive video simulations in the 80s and 90s. The camera alone cost £35k, a full crew had to be hired, voiceovers were recorded in a professional studio (we eventually built our own in our basement), and the edit suite was in London. We even made a full feature film, The Killer Tongue (don't ask!).

With glimpses and demos of generated video, we are now seeing it move faster into full production, unsurprisingly from the US, where they have embraced AI and are applying it faster than any other nation.

1. Video animating an Image or prompt

I first started playing around with AI-generated video from stills and it was pretty good. It's now very good. Here are a few examples.

Now just type in a few words and it's done.

Turned this painting of my dog into a real dog...

Made skull turn towards viewer...


Pretty good so far...

2. Video from a Prompt

Then came prompted video, from text only. This got really good, really fast, with Sora and new players such as Luma entering the market.


Great for short video, but with no real long-form capability. In learning, these short one-scene videos could be useful for performance support and single tasks or brief processes, even as trigger videos with patients, customers, employees and so on. This is already happening with avatar production.

3. Netflix of AI

Meet Showrunner, where you can create your own show. Remember the South Park episode created with AI? The same company has launched 10 shows where you can create your own episodes.

Showrunner released two episodes of Exit Valley, a Silicon Valley satire starring iconic figures like Musk, Zuck and Sam Altman. The show is an animated comedy targeting 22 episodes in its first season, some made by their own studio, the rest made by users and selected by a jury of filmmakers and creatives. The other shows, like Ikiru Shinu and Shadows over Shinjuku, are set in distinct anime worlds in Neo-Tokyo and will be available later this year.

They are using LLMs, as well as custom state-of-the-art diffusion models, but what makes this different is the use of multi-agent simulation. Agents (we've been using these in learning projects) can build story progression and behavioural control.

This gives us a glimpse of what will be possible in learning. Tools such as these will be able to create any form of instructional video and drama, as it will be a 'guided' process, with the best writing, direction and editing built into the process. You are driving the creative car, but there will be a ton of AI in the engine and self-driving features that allow the tricky stuff to be done to a high standard behind the scenes. Learners may even be able to create or ask for this content through nothing more than text requests, even spoken ones, as they create their movie.

The AI uses character history, goals and emotions, simulation events and localities to generate scenes and image assets that are coherent and consistent with the existing story world. There is also behavioural control over agents, their actions and intentions, also in interactive conversations. The user's expectations and intentions are formed then funneled into a simple prompt to kick off the generation process.
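As a toy illustration of that multi-agent idea (not Showrunner's actual system; the names and fields are invented), each agent carries a history, goal and mood that get folded into a single scene prompt, and the generated scene would then be appended to each agent's history before the next iteration.

```python
# Invented multi-agent sketch: agent state is folded into one scene prompt.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    goal: str
    mood: str
    history: list = field(default_factory=list)

def build_scene_prompt(agents, simulation_event: str) -> str:
    bios = "\n".join(
        f"- {a.name}: goal={a.goal}; mood={a.mood}; recently={'; '.join(a.history) or 'nothing yet'}"
        for a in agents
    )
    return (f"Write the next scene of an animated satire.\n"
            f"Event in the simulation: {simulation_event}\n"
            f"Characters:\n{bios}\n"
            f"Keep each character consistent with their goal and mood.")

agents = [Agent("Musk-alike", "ship a rocket-shaped chatbot", "manic"),
          Agent("Zuck-alike", "declare the metaverse finished", "defensive")]
print(build_scene_prompt(agents, "a product launch goes wrong"))
# In a real pipeline this prompt would go to an LLM, and the returned scene would
# update each agent's history, supporting the story progression described above.
```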

You may think this is easy, but the 'slot-machine effect', where things become too disjointed and random to be seen as a story, is a really difficult problem. So long-term goals and arcs are used to guide the process. Behind the scenes there is also a hidden 'trial and error' process, so that you do not see the misfires, wrong edits and so on. The researchers likened this to Kahneman's System 1 v System 2 thinking. Most LLM and diffusion models play to fast, quick, System 1 responses to prompts. For long-form media, you need System 2 thinking, so that more complex intentions, goals, coherence and consistency are given precedence.

Interestingly, hallucinations can introduce creative uncertainty, a positive thing, as happy accidents seem to be part of the creative process, as long as they do not lead to implausible outcomes. This is the interesting problem – how to create non-deterministic works that are predictable enough to hold together yet exciting and novel.


This is what I meant by a POSTCREATION world, where creativity is not a simple sampling or remixing but a process of re-creation.

4. Live action videos

The next step, and we are surely on that Yellow Brick Road, is to create your own live-action movies from text and image prompts. Just prompt it with 10 to 15 words and you can generate scenes and episodes of 2 to 16 minutes. This includes AI dialogue, voice, editing, different shot types, consistent characters and story development. You can take it to another level by editing the episodes' scripts, shots and voices, and remaking episodes. We can all be live-action movie directors.

Conclusion

With LLMs, in the beginning was the 'word', then image generation, audio generation, then short-form video, and now full-form creative storytelling. Using the strengths of the simulation, the AI model and co-creation with the user, rich, interactive and engaging storytelling experiences are possible.

This is a good example of how AI has opened up a broad front attracting investment, innovation and entrepreneurship. At its heart are generative techniques, but there are also lots of other approaches that form an orchestrated ensemble to solve problems.

You have probably already asked the question. Does it actually need us? Will wonderful, novel, creative movies emerge without any human intervention? Some would say 'I fear so'. I say 'bring it on'.


Wednesday, June 12, 2024

Apple solves privacy and security concerns around AI?


Apple Intelligence launched as a set of AI features with OpenAI's GPT-4 at their heart. It was a typical Apple move – a focus on personalisation, integration and user performance.

The one thing that stood out for me was the announcement on privacy and 'edge' computing. Their solution is clever and may give them real advantages in the market. AI smartphones will be huge. Google led the way with the Pixel – I have one – it is excellent and cheap. But the real battle is between Apple and Samsung. The Galaxy is packed with AI features, as is the iPhone, but whoever wins the AI device battle (currently 170 million units in 2024 and about to soar) will inherit users and revenue.

Privacy and security are now a big deal in AI. Whenever you shoot off to use a cloud service there is always the possibility of cybersecurity risks, losing data, even having your personal data looted.

Apple sell devices, so their device solution makes sense. It gives them ‘edge’ through ‘Edge Computing’.  A massive investment in their M3 chip and other hardware may give them further edge in the market.

In order to deliver real value to users, the device needs to know what software and services you use across your devices: your emails, texts, messages, documents, photos, audio files, videos, images, contacts, calendars, search history and AI chatbot use. Context really matters: if you are my 'personal' assistant, you need to know who I am, my friends and family, what I am doing and my present needs.

So what is Apple's solution? They want to keep things private both on the device and when the cloud is accessed. Let's be clear: Google, Microsoft, Meta, OpenAI and others will also solve this problem, but it is Apple who have been first above the parapet. This is because, unlike some of the others, they don't sell ads and don't sell your data. It pitches Apple against Microsoft, but they are in different markets – one consumer, the other corporate.

'Private Cloud Compute' promises to use your data without storing it or allowing anyone access to it, even Apple itself. Apple have promised to be transparent and have invited cybersecurity experts to scrutinise their solution. Note that they are not launching Apple Intelligence until the fall, and even then only in the US. This makes sense, as it needs some serious scrutiny and testing.

Devices matter. Edge compute matters. As the new currency of 'trust' becomes a factor in sales, privacy and security matter. As always, technology finds a way to solve these problems, which is why I generally ignore superficial talk about ethics in AI, especially from the doomsters. At almost every conference I attend I hear misconceptions around data and privacy. Hope this small note helps.


Tuesday, June 11, 2024

Ethan Mollick’s 'CO-INTELLIGENCE' - a review

Just finished Ethan Mollick's CO-INTELLIGENCE book. I like Ethan, as he shares stuff. His X feed is excellent, so I was eager to give this a go.

It wasn't what I expected, but that's fine, because it's pretty good. Ethan's a Wharton academic, so I thought it would be a research-rich book with lots of examples, but it is actually aimed at the basic, general reader who knows little or nothing about AI: big font, big line spacing and no index, but it does have some good, useful research.

It opens with his Three Nights Without Sleep revelation, that this shit is amazing! Why? Because it is a ‘General Purpose Technology’ pregnant with possibilities. I liked this. He writes well and is enthusiastic about its potential.

PART I 

That sense of wonder continues over PART I, with his musings on the scary? smart? scary-smart? nature of GenAI, seeing it as a sort of alien mind. Alignment, he thinks, is necessary, but he is not a doomster and avoids the sort of speculative sci-fi stuff that often appears whenever AI and ethics are mentioned. He ends this section with his Four Rules for Co-Intelligence:

Always invite AI to the table – like this

Be the human in the loop – OK but…

Treat AI like a person (but tell it what kind of person it is) – like this

Assume this is the worst AI you will ever use – yip!

PART II

This is the bulk of the book, with five chapters, where he sees AI as a:

Person

Creative

Coworker

Tutor

Coach

I have lots of quibbles but that’s fine. These are good, short readable discussions that open doors on its applications and potential. Each was well worth the read. I won’t go into detail, as I’d be in danger of providing one of those summaries that stops people buying the book!

It rounds off with a chapter on AI as our future, with four scenarios: As Good As It Gets, Slow Growth, Exponential Growth and The Machine God. Then a short epilogue, completed using ChatGPT – AI is us.

My own view is that the premise of 'CO-INTELLIGENCE' is too simplistic and that AI will do lots of things that surprise us, beyond the idea of mere augmentation – a tool to enhance human creativity, decision-making and productivity.

The problem with any book on AI is that it is out of date before it is even printed. There were many points when I was thinking Yes… but… This is normal. The AI mindset demands fluidity and a recognition of the point Ethan makes in PART I – assume this is the worst AI you will ever use.

Good introductory text – well worth a buy – but not for those who are looking for detail and depth of expertise.

 

Monday, June 10, 2024

Sam has ditched Satya for Tim – he’s so louche that lad! Apple Intelligence is here! New Siri and more...


The Apple event featured REALLY annoying presenters, but they have finally joined the GenAI club with Apple Intelligence. After showing the now-compulsory 'help me with my maths' example, they cut to the quick… it's ChatGPT-4, folks!

Personal Intelligence

Their core idea is 'personal intelligence', as it understands your personal context. The iPhone prioritises notifications and offers new writing tools (review, write, proofread and so on) across all apps, even third-party ones. The email improvements – summaries of emails and so on – are super-cool. It will also intelligently prioritise your emails. Great for something I've been banging on about in learning – performance support. Apple are basically providing powerful, personal support across your entire online experience.

Images and video

On images, it allows you to create pictures, including images of yourself, your friends and relatives, with Sketch, Illustration and Animation styles built into apps across the system. Genmoji is a personal emoji creator – even an emoji that looks like your mates... that will be super-annoying. Image Playground generation gives you styles, themes and costumes. Photo editing is super smart, making things disappear. Image Wand allows you to circle, suggest and manipulate stuff.

There's also audio-to-text transcription – great for taking notes as a student or at work.

Search within video – clever. Great for performance support. Stories can be built around a person and theme, then strung together with music. Oh, and there's an API.

You can ask for personal stuff – remember that email I sent to... that picture I took of X last week... – as it personalises tools using your personal data, the 'who, what and when' of your life.

Data privacy

Personal data is processed on-device, so it stays local. Apple Intelligence can therefore use your personal data but with super-privacy features. An on-device semantic index helps keep it all local. Private Cloud Compute uses only the data necessary for the task, reaching out while still keeping your data private.
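As a rough illustration of what an on-device semantic index might look like (this is not Apple's implementation; TF-IDF stands in for real embeddings and the documents are invented), everything is indexed and queried locally, with nothing sent off the device.

```python
# Toy on-device index: documents embedded and searched locally.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Email to Sam about the hotel booking in Lisbon",
    "Photo caption: Anna's birthday dinner last Friday",
    "Note: flight BA432 lands at 18:05 on Tuesday",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(documents)  # built and stored on the device

def local_search(query: str, top_k: int = 1):
    scores = cosine_similarity(vectorizer.transform([query]), index)[0]
    return sorted(zip(scores, documents), reverse=True)[:top_k]

print(local_search("when does my flight arrive?"))
```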

Siri

She’s gone from stupid to smart. Basically she’s now a chatbot that knows what you mean when you use ‘this’ and ‘that’ in sentences. It also has on-screen awareness and memory of what you have done. Siri knows your personal context – hotel bookings, photos you’ve taken, emails you’ve sent… porn you’ve accessed… no not that! 

Agentic

What's interesting is its agentic capabilities. It goes off and finds stuff relevant to your request – flight info, external websites, things you've done locally. This has legs.

This is Apple, so it is integrated, user-friendly and personal.

Conclusion

One thing they have done well is the M3 chip, giving on-device AI functionality – that lies behind much of what was delivered here and may be critical for the practical and secure delivery of AI. It literally gives them 'edge' in the market. They're really a consumer company, unlike Microsoft (apart from games), which makes edge computing and iPhone delivery more important. Lots of the features were consumer-oriented.

This is AI for the rest of us – not just work but performance support for life. Well done. Every generation needs a revolution and through this revolution we become more of ourselves.

Saturday, June 08, 2024

7 success factors in real 'AI in learning' projects

With AI we are in the most interesting decade in the history of our species. I can think of no better field in which to think, write and work.

Ideas are easy, implementation hard

My first AI-like project was in the early 90s, when I designed an intelligent tutoring system to teach interviewing skills. It had sentence construction as input and adaptivity in the sense of harvesting data as the learner used the system. Written in Pascal, it was clever but not yet smart, as the limitations of the hardware, in terms of processing power and memory, were extreme by today's standards. Much of the effort went into making things work within these brutal constraints. Even then, we had controlled access to video clips (36 mins), thousands of stills and two audio tracks (112 mins) on Laserdiscs, which we used to good effect, simulating full interviews. You could feel the power of potential intelligence in software.

Jump to 2014 and those hardware limitations had gone. You could build an adaptive, personalised system, which we did at CogBooks. I invested personally in this system (twice) and brought investment in. We did oodles of research at Arizona State University and it was sold to the University of Cambridge in 2021. It worked. For many years we had also been playing with AI within Learning Pool, having bought an AI company. But my real project journey with modern AI started in 2014, when we built Wildfire, using 'entity analysis', open input and the semantic interpretation of open-text answers. The whole thing was starting to take shape.
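Not Wildfire's actual pipeline, but as a flavour of what 'entity analysis' means in practice, here is a minimal sketch using spaCy to pull named entities out of source text – the kind of concepts a learner might later be asked to recall. The sentence is just an example and the small model must be downloaded separately.

```python
# Minimal entity-analysis sketch with spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Daniel Kahneman published Thinking, Fast and Slow in 2011.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. 'Daniel Kahneman' PERSON, '2011' DATE
```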

Jump to November 2022 and things went a little crazy. I have barely been off the road speaking about GenAI in learning, written books on the subject, blogged like crazy, and recorded dozens of podcasts. Far more important have been the real projects and products we have built for a number of clients and companies. This is the hard part, the really hard part. Ideas are easy, implementation is hard.

Optimal AI project

What makes a successful AI project? What are the factors that make them a success? The good news is that we have just completed a fascinating project in healthcare that had all the hallmarks of the optimal project. This was our experience.

1. Top-down support

The project started with top-down support, a goal and a budget. Without top-down support, projects are always at risk of running out of support. That's why I'm suspicious of Higher Education projects, grant-aided projects, hackathon stuff and so on. I prefer CEO-, senior management- or entrepreneur-driven initiatives, with real budgets. They tend to have push behind them, clear goals and, above all, they tend to be STRATEGIC. Far too many projects are mosquito projects that fail because they end when the budget runs out, with no real impact or compelling use. Choose your use case(s) carefully and strategically. We have been through this with large global companies – a rational approach to use cases and their prioritisation. Interestingly, AI can help.

2. Bottom-up understanding

This project also had a great client, grounded in a real workplace (a large teaching hospital), a clear budget and a solid team. We made sure that everyone was on the same page, understanding what this technology was and could do. The two non-technical team members knew their process inside out, but here's where they really scored – they made the effort to understand the technology and did their homework. This meant we could get on with the work and not get bogged down in explaining basic concepts such as how an LLM works, context windows, and the need for clean data and data management.

Many AI projects flounder when the team has non-technical members that don’t know the technology, namely AI. It is not that they need competence in building AI, just that they need to understand what it is, the fact that it evolves quickly and that its capabilities grow rapidly.

3. Optimal team

The team also had a top-notch AI developer who has been through years of learning projects. This combination was useful: he had already built products in the learning field and understood the language of learning and its goals. The team was just three people. This really matters. Use Occam's razor to determine team size – the minimum number of team members needed to reach your stated goal. Too many AI projects include people with little or no knowledge of the basic technology. They often come with misconceptions about what they think it is and does, along with several myths.

4. Mindset matters

More important than knowledge is mindset. What cripples projects are people within the organisation who act as bottlenecks – sceptics, legal departments who do not understand data issues, old-school learning people who actually don't like tech, and anyone who is straight-up sceptical of the power of AI to increase efficacy. Believe me, there are plenty of those folk around.

The mindset that leads to success is one that accepts and understands that the technology is probabilistic, data-driven, that functionality will increase during the project and things change very fast. I’d sum this up by saying you need team members who are both willing to learn fast and keep their minds open to rapid change. It also means accepting that most processes are too manual, that bottlenecks are hard to identify and that processes CAN be automated. 

5. Agency shift

You also have to let go and see that this technology has ‘agency’ and that you will have to hand agency over to AI. The technology itself will reveal the bottlenecks and insights. Don’t assume you know at the start of the project, they will be revealed if you use the technology well. This is no time for an obsession with fixed Gantt charts and designs that are fossilised then simply executed. It is like ‘agile on steroids’.

6. Manage expectations

AI is a strange, mercurial and fast moving technology. You have to dispel lots of myths about how it works, the data issues and its capabilities. You also have to communicate this to the people that matter.
You need to understand that what looks hard is sometimes easy and what looks easy is sometimes hard. The fact that things change quickly – costs, for example – is another issue. This happens to be a good problem, as people often don't understand that token costs for fixed output are very low, and even token costs for a live service have plummeted. Expectations need to be managed by being clearly communicated.
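A back-of-envelope calculation helps when communicating cost expectations; the per-token prices below are placeholders, not any provider's actual rates, so substitute current pricing.

```python
# Illustrative token-cost arithmetic (placeholder prices, not real rates).
PRICE_IN_PER_M = 3.00    # $ per million input tokens (placeholder)
PRICE_OUT_PER_M = 12.00  # $ per million output tokens (placeholder)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * PRICE_IN_PER_M + \
           (output_tokens / 1_000_000) * PRICE_OUT_PER_M

# e.g. a 2,000-token source document producing 500 tokens of support content
print(f"${request_cost(2_000, 500):.4f} per generation")
```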

7. Push beyond prototype to product

I can't go into a huge amount of detail about the client, but the topic was surgical support – a life-and-death topic with little room for error. It involved taking training and source material and turning it into usable support (not a course) for hospital staff. Processes were automated, SME (subject matter expert) time was reduced, and delivery time to launch was massively reduced, so the team had more time to focus on quality rather than just process. The result is also easier to maintain, as simply updating the documents keeps the system current. The savings were enormous and the increases in quality clear.

This success meant we could call upon the top-down support to push the project beyond prototypes into product with a broader set of goals and more focus on data management. It has given the organisation, management and team the confidence to forge ahead. With massive amounts of time saved and increased efficacy, we saw that success begets success.

Conclusion


If you don't have both TOP-DOWN and BOTTOM-UP support along with a tight team with the right mindset, you will struggle, even fail. This is a radically new and different species of technology, with immense power. It needs careful handling. The small team remained fixed on the strategic goal but was flexible enough to choose the optimal technology. Without all of the above the project would have floundered in no-man's land, with scope creep, longer timescales and the usual drop in momentum, even disappointment. 

The project exceeded expectations. How often can you say that about a learning technology project? This was a young team, astounded at what they had done, and this week, when they presented it at a learning conference, their authentic joy in expressing how it went was truly heartening. "It was crazy!" said one, describing the first results, then further inroads into automating, in minutes, jobs that had traditionally taken them days, weeks, even months. Everyone in the room felt the thrill of having achieved something. In 2024, AI has suddenly got very real.

PS

So many commentators and speakers on AI have never actually delivered a project or product. We need far more focus on practitioners who share what they think works and does not work.


Saturday, June 01, 2024

Postcreation: a new world. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.


Postproduction

There is an interesting idea from the French writer Bourriaud that we've entered a new era, where art and cultural activity now interprets, reproduces, re-exhibits or utilises works made by others or from already available cultural products. He calls it 'Postproduction'. I thank Rod J. Naquin for introducing me to this thinker and idea.

Postproduction. Culture as Screenplay: How Art Reprograms the World (2002) is Bourriaud's essay examining the trend, emerging since the early 1990s, of a growing number of artists creating art based on pre-existing works. He suggests that this "art of postproduction" is a response to the overwhelming abundance of cultural material in the global information age.

The proliferation of artworks and the art world's inclusion of previously ignored or disdained forms characterise this chaotic cultural landscape. Through postproduction, artists navigate and make sense of this cultural excess by reworking existing materials into new creations.

Postcreation

I’d like to universalise this idea of Postproduction to all forms of human endeavour that can now draw upon a vast common pool of culture; all text, images, audio and video, all of human knowledge and achievements – basically the fruits of all past human production to produce, in a way that can be described as ‘Postcreation’.

This is inspired by the arrival of multimodal LLMs, where vast pools of media representing the sum total of all history, all cultural output from our species, has been captured and used to train huge multimodal models that allow our species to create a new future. With new forms of AI, we are borrowing to create the new. It is a new beginning, a fresh start using technology that we have never seen before in the history of our species, something that seems strange but oddly familiar, thrilling but terrifying – AI.

Palimpsests

AI, along with us, does not simply copy, sample or parrot things from the past – together we create new outputs. Neither do they remix, reassemble or reappropriate the past – together we recreate the future. This moves us beyond simple curation, collages and mashups into genuinely new forms of production and expression. We should also avoid seeing it as the reproduction of hybrids, reinterpretations or simple syntheses.

Like a ‘palimpsest’, a page from a scroll or book that has been scraped clean for reuse, we can recover the original text if we scan it carefully enough, but it is the ground for a genuinely new work. It should not be too readily reduced to one word, rather pre-fixed with ‘re-’; to reimagine, reenvision, reconceptualise, recontextualise, revise, rework, revamp, reinterpret, reframe, remodel, redefine and reinvent new cultural capital. We should not pin it down like a broken butterfly with a simple pin, one word, but let the idea flutter and fly free from the prison of language.

Dialogue

We have also moved beyond seeing prompt engineering as some sort of way of translating what we humans do into AI-speak. It is now, quite simply, about explaining. We really do engage and speak to and with these systems. The move towards multimodality, with generated and semantically understood audio, is a huge leap forward, especially in learning. That's how we humans interact.

Romantic illusion

We have been doing this on a small scale for a long time under the illusion, reinforced by late 18th and 19th century Romanticism, that creation is a uniquely human endeavour, when all along it has been a drawing upon the past, therefore deeply rooted in what the brain has experienced and takes from its memories to create anything new. We are now, together, taking things from the entire memory of our cultural past to create the new in acts of Postcreation.

Communal future

This new world, or new dawn, is more communal, drawing from the well of a vast, shared, public collective. We can have a common purpose of mutual effort that leads to a more co-operative, collaborative and unified endeavour. There were some historical dawns that hinted at this future – the Library of Alexandria, open to all and containing the known world's knowledge; Wikipedia, a huge, free communal knowledge base – but this is something much more profoundly communal.

The many peoples, cultures and languages of the world can be in this communal effort, not to fix some utopian idea of a common set of values or cultural output but creation beyond what just one group sees as good and evil. This was Nietzsche’s re-evaluative vision. Utopias are always fixed and narrow dystopias. This could be a more innovative and transformative era, a future of openness, a genuine recognition that the future is created by us, not determined wholly by the past. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.