Monday, June 24, 2024

Being You by Anil Seth - brilliant introduction to contemporary neuroscience

I’ve seen a lot of ‘Neuroscience’ talks at learning conferences and am a bit weary of the old-school serotonin-dopamine story, strong conclusions and recommendations based on what often seems to be correlation, not causation (beware of slides with scans), and claims about neuroscience that are really cognitive science. I’ve also found a lack of real knowledge about the explosion in computational, cognitive and contemporary neuroscience in relation to new theorists and theory, the Connectionists, such as Daniel Dennett, Nick Chater, Karl Friston, Josh Tenenbaum, Andy Clark and Anil Seth.

Copernican inversion 

By far the best introductory book on this new movement in neuroscience, what I call the ‘Connectionists’, is Being You by Anil Seth. It is readable, explains some difficult, dense and opaque concepts in plain English, is comprehensive and all about what Seth calls a ‘Copernican inversion’ in neuroscience.

Starting with a stunning reflection on the complete dissolution of consciousness under general anaesthesia, he outlines the philosophical backdrop of idealism, dualism, panpsychism, transcendental realism, physicalism, functionalism and, what I really liked, the more obscure and often ignored mysterianism.

He’s also clear on the fields that prefigure and inform this new movement; NCC (Neural Correlates of Consciousness) and IIT (Integrated Information Theory). After a fascinating discussion of his LSD experiences, along with an explanation for their weirdness, he shows that the brain is a highly integrated entity, embodied and embedded in its environment.

Controlled Hallucination 

His Copernican Revolution in brain theory, that consciousness is ‘Controlled Hallucination’, builds on Plato, Kant and then Helmholtz’s idea of ‘perception as inference’. The brain is constantly making predictions, and sensory information provides data that we try to match against our existing models in a continual process of error minimisation. This Copernican inversion leaves the world as it is but sees the brain as an active, creative inferencing machine, not a passive receiver of sensory data.

There is the usual, but informative, point that colour is in the hallucination, not the real world, along with a series of illusions that demonstrate active, predictive processing and active attention, including the famous invisible gorilla experiment.

He then covers most of the theories and concepts in this new area of neuroscience informed by the computational theory of mind: abductive reasoning, generative modelling, Bayesian inference (particularly good), prediction error minimisation and the free energy principle (also brilliantly explained), all under the unifying idea of controlled hallucination as the explanation for consciousness.
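To make the core idea a little more concrete, here is a minimal sketch of prediction error minimisation. It is my own toy illustration, not Seth’s or Friston’s actual model: a ‘brain’ holds a belief about a hidden cause, receives noisy sensory samples, and nudges its belief in proportion to the prediction error.

```python
import random

# Toy sketch of prediction error minimisation (illustrative only).
# A hidden cause in the world generates noisy sensory signals; the 'brain'
# holds a prior belief and updates it using the prediction error.

true_cause = 4.0        # the actual state of the world (hidden from the brain)
belief = 0.0            # the brain's current best guess (prior)
sensory_noise = 1.0     # how noisy the senses are
learning_rate = 0.1     # how strongly prediction errors update the belief

for step in range(50):
    sensation = true_cause + random.gauss(0, sensory_noise)  # noisy input
    prediction_error = sensation - belief                    # the 'surprise'
    belief += learning_rate * prediction_error               # minimise error

print(f"Final belief about the hidden cause: {belief:.2f} (true value {true_cause})")
```

Run a few times and the belief settles close to the hidden cause: perception as inference, not passive reception.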

Asides

There are some really well written asides in the book. One, on art, expands on Riegl and Gombrich’s idea of the ‘Beholder’s Share’, where artists, such as the Impressionists and Cubists, demand active interpretation by the viewer, confirming the perceptual inference at the heart of his theory of perception and consciousness: art surfaces this phenomenon. Another is a series of fascinating experiments on time, showing that it is tied to perception and not an inner clock.

AI

The section on AI is measured. He separates intelligence from consciousness (rightly), as he is suspicious of functionalism, the basis for much of this theorising, and is sceptical about runaway conscious AI as an overextension. However, the book was published in 2021 and AI has progressed faster on the intelligence scale than it suggests. At the end of the section he introduces 'cerebral organoids', anticipating Geoffrey Hinton's idea of the Mortal Computer.

Conclusion

The only weak part of the book is his treatment of the ‘Self’. It is less substantial, not really dealing with the rich work of those who have looked at personal identity in detail, philosophically and psychologically. I was also surprised that he doesn’t mention Andy Clark, another ex-Sussex University theorist in the field, especially as he is closely associated with David Chalmers, who rightly gets lots of plaudits in the book. 

And the fact that Anil lives in my home town, Brighton, is a plus! The book covers a lot of the bases in the field and interleaves the hard stuff with more digestible introductions. A really fascinating and brilliant read.

PS

If you are generally interested in the theorists in this new field, John Helmer and I did a podcast on the Connectionists in the Netherlands, in front of a live audience. It was fun and covers many of the ideas presented in this book.


Friday, June 21, 2024

The DATA is in… AI is happening BIG TIME in organisations…

2024 is the year AI is having a massive impact on organisations in terms of productivity and use. Two reports, from Microsoft and Duke, show massive take-up. I showed this data for the first time this week at an event in London, where I also heard about GPT-5 being tested as we speak.

The shift has been rapid, beyond the massive wave of initial adoption where people were largely playing with the technology. During this phase, some were also building product (that takes time). We’ve built several products for organisations, pushing fast from prototype to product, now in the market being used by real users in 2024. That's the shift.

The M&A activity is also at fever pitch. The problem is that most buyers don’t fully understand that startups are unlikely to have proven revenue streams in just 12 months. The analysts are miles behind, as they drive with their eyes on the rear-view mirror. Don’t look to them for help. Large companies are looking for acquisitions, but the sharper ones are getting on with it.

Microsoft - AI is Here

The Microsoft and LinkedIn report ‘AI is Here’ surprised even me.

The survey of 31,000 people across 31 countries covers labour and hiring trends, trillions of productivity signals and Fortune 500 customers. The results clearly show that 2024 is the year AI at work gets real and that employees are bringing AI to work: 75% of people are already using AI at work.



Who is using it? Everyone: the data shows everyone from Gen Z to Boomers has jumped on board.


And looking to the future, it is becoming a key skill in recruitment.

We have moved from employees informally bringing AI to work to formal adoption, especially in large organisations. There is serious interest in knowing what to do and how to do it at scale. Next year will see the move from specific use cases, such as increasing productivity in processes, to enterprise-wide adoption. Some have already made that move.

Duke

CFOs who reported automating were also asked whether their firms had utilised artificial intelligence (AI) to automate tasks over the last 12 months.


CFOs who plan to automate over the next 12 months were asked about their plans to adopt AI over this period. Fifty-four percent of all firms, and 76 percent of large firms, anticipate utilising AI to automate tasks, a clear skew towards larger firms.

Conclusion

Anyone who thinks this is hype or a fad, needs to pay attention to the emerging data.

The problem is that it has a US skew. We’re all doing it but the US is doing it faster. As they shoot for the stars, we’re shooting ourselves in both feet through negativity and bad regulation. The growth upside and savings in education and health are being ignored while we hold conferences on AI and Ethics, where few even understand what an ‘ethical’ analysis means. It’s largely moralising, with little understanding of the technology or of actual ethics.

 

Thursday, June 20, 2024

British Library. Books look like museum pieces, as that is what they are becoming.

Make it real! Can we actually deliver AI through current networks?


A talk and chat at the Nokia event held in the British Library. Wonderful venue, and I made the point that we first abstracted our ideas onto shells 500,000 years ago, invented writing 5,000 years ago and printing 500 years ago, and here we are discussing a technology that may eclipse them all – AI.

Bo heads up Nokia’s Bell Labs, which is working on edge computing and other network research, and we did what we do with ChatGPT – engaged in dialogue. I like this format, as it’s closer to a podcast, more informal, and seems more real than a traditional keynote.

It was also great to be among real technology experts discussing the supply problems. There's something about focused practitioner events that makes them more relevant. Microsoft told us about GPT-5 testing, and there were some great case studies showing the massive impact AI is having on productivity.

Quantum computing was shown and discussed, and there was an interesting focus on the back-end network and telco problems in delivering AI. We have unprecedented demand for compute and for the delivery of data at ever lower levels of latency, yet much of the system was never designed for this purpose.

Energy solutions

The race is on to find energy solutions such as:

Fusion is now on the horizon

Battery innovation progresses

AI to optimise power use now common

Low-power quantum computing beginning to be realised

Compute solutions

Models have to be trained, but low-latency dialogue also has to be delivered: 

Chip wars with increasing capability at lower costs

Quantum computers with massive compute power

Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs), optimised for AI workloads with lower power consumption

Edge computing moves processing closer to the data source at the edge of the network, reducing the need for centralised compute resources and lowering latency

Federated learning allows multiple decentralised devices to collaboratively train models while keeping the data localised (see the sketch after this list)

Neuromorphic computing with chips that mimic neural structures, offering potential efficiency gains for AI workloads
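To illustrate the federated learning item above, here is a minimal, simplified sketch, my own toy example rather than any vendor’s implementation: each device fits a tiny model on its own local data, and only the model parameters, never the raw data, are averaged by a central server.

```python
# Minimal federated averaging sketch (illustrative, single round, toy model).
# Each device 'trains' a one-parameter model (a mean) on its own private data,
# then the server averages the parameters. The raw data never leaves a device.

local_datasets = [
    [2.0, 2.5, 3.0],      # device 1's private data
    [4.0, 3.5, 4.5],      # device 2's private data
    [1.0, 1.5, 2.0],      # device 3's private data
]

def train_locally(data):
    """'Train' a toy model: here, just the local mean."""
    return sum(data) / len(data)

local_params = [train_locally(data) for data in local_datasets]

# Server step: average the parameters, weighted by local dataset size.
total = sum(len(d) for d in local_datasets)
global_param = sum(p * len(d) for p, d in zip(local_params, local_datasets)) / total

print(f"Locally trained parameters: {local_params}")
print(f"Federated (global) parameter: {global_param:.2f}")
```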

Software efficiency

There’s also a ton of stuff on software and algorithmic efficiency, such as:

Model compression through pruning, quantisation and distillation to reduce the size and computational requirements of AI models (see the quantisation sketch after this list)

More efficient training methods like transfer learning, few-shot learning, and reinforcement learning to reduce the computational cost of building AI models.
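To give a feel for one of these techniques, here is a minimal sketch of post-training quantisation, a toy example rather than a production recipe: 32-bit float weights are mapped to 8-bit integers with a single scale factor, shrinking memory roughly four-fold at the cost of a small rounding error.

```python
# Minimal post-training quantisation sketch (illustrative only).
# Map float weights to int8 with a single scale factor, then reconstruct
# them to see the rounding error the compression introduces.

weights = [0.81, -0.34, 0.02, -1.27, 0.55]    # toy float weights

scale = max(abs(w) for w in weights) / 127     # one scale for the whole tensor
quantised = [round(w / scale) for w in weights]    # int8 values in -127..127
dequantised = [q * scale for q in quantised]       # what the model will see

for w, q, dq in zip(weights, quantised, dequantised):
    print(f"{w:+.4f} -> int8 {q:+4d} -> {dq:+.4f} (error {abs(w - dq):.4f})")
```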

Delivery

Network infrastructure moves towards 5G to provide high-speed, low-latency connectivity, essential for real-time AI applications and global delivery. Content Delivery Networks (CDNs) can cache AI models and results closer to users, reducing latency and bandwidth usage.

Two-horse race

Of course, all of this has to be delivered, and it is now clear that the biggest companies in the world are now AI companies. NVIDIA is now the most valuable company on the planet at $3.34 trillion, delivering the spades to the gold miners, with Microsoft at $3.32 trillion, Apple a touch less at $3.29 trillion, Google at $2.17 trillion and Facebook at $1.27 trillion. In China, Tencent and Alibaba lead, though some way behind. This is a two-horse race, with the US well ahead and China chasing and copying. Europe is still in the paddock.

Conclusion

Afterwards, I went to the British Library’s Treasures collection. There lay the first books, written, then printed: a 2,000-year-old homework book, early Korans, the Gutenberg Bible. We made this work by developing paper and printing technologies, block printing, moveable type, book formats, and networks for publishing and distribution. This was undermined by the internet, but something much more profound has just happened.



It struck me that in that same building we had just witnessed a revolution that surpasses both. The sum total of all that written material, globally, is now being used to train a new technology, AI, that allows us to have a dialogue with it to make the next leap in cultural advancement. We have invented a technology (books and printing were also technologies) that transcends even the digital presentation of print, into a world where the limitations of that format are clear. We are almost returning to an oral world, where we talk with our past achievements to move forward into the future.

We are no longer passive consumers of print but in active dialogue with its legacy. These books really did look like museum pieces as that is what print has become.

 

Friday, June 14, 2024

The 'Netflix of AI' that makes you a movie Director

Film and video production is big business. Movies are still going strong, and Netflix, Prime, Disney, Apple and others have created a renaissance in television. Box sets are the new movies. Social media has also embraced video, with the meteoric rise of TikTok, Instagram, Facebook shorts and so on. YouTube is now an entertainment channel.

Similarly in learning. Video is everywhere. But it is still relatively time consuming and expensive to produce. Cut to AI…

We are on the cusp of a revolution in video production. In the 80s and 90s, as part of a video production company using Laserdiscs for interactive video simulations, I used to make corporate videos and interactive video simulations. The camera alone cost £35k, a full crew had to be hired, voiceovers were recorded in a professional studio (eventually we built our own in our basement), and editing was done in a London edit suite. We even made a full feature film, The Killer Tongue (don’t ask!).

After glimpses and demos of generated video, we are now seeing it move fast into full production, unsurprisingly in the US, where they have embraced AI and are applying it faster than any other nation.

1. Video animating an Image or prompt

I first started playing around with AI-generated video from stills and it was pretty good. It’s now very good. Here are a few examples. 

Now just type in a few words and it's done.

Turned this painting of my dog into a real dog...

Made skull turn towards viewer...


Pretty good so far...

2. Video from a Prompt

Then came prompted video, from text only. This got really good, really fast, with Sora and new players entering the market such as Luma.


Great for short video but with no real long-form capability. In learning, these short one-scene videos could be useful for performance support and single tasks or brief processes, even trigger videos as patients, customers, employees and so on. This is already happening with avatar production.

3. Netflix of AI

Meet Showrunner, where you can create your own show. Remember the South Park episode created with AI? The same company has launched 10 shows where you can create your own episodes.

Showrunner released two episodes of Exit Valley, a Silicon Valley satire starring iconic figures like Musk, Zuck and Sam Altman. The show is an animated comedy targeting 22 episodes in its first season, some made by their own studio, the rest made by users and selected by a jury of filmmakers and creatives. The other shows, like Ikiru Shinu and Shadows over Shinjuku, are set in distinct anime worlds in Neo-Tokyo and will be available later this year.

They are using LLMs, as well as custom state-of-the-art diffusion models, but what makes this different is the use of multi-agent simulation. Agents (we’ve been using these in learning projects) can drive story progression and behavioural control.

This gives us a glimpse of what will be possible in learning. Tools such as these will be able to create any form of instructional video and drama, as it will be a ‘guided’ process, with the best writing, direction and editing built into the process. You are driving the creative car, but there will be a ton of AI in the engine and self-driving features that allow the tricky stuff to be done to a high standard behind the scenes. Learners may even be able to create or ask for this through nothing more than text requests, even spoken ones, as they create their movie.

The AI uses character history, goals and emotions, simulation events and localities to generate scenes and image assets that are coherent and consistent with the existing story world. There is also behavioural control over agents, their actions and intentions, also in interactive conversations. The user's expectations and intentions are formed then funneled into a simple prompt to kick off the generation process.

You may think this is easy, but the ‘slot-machine effect’, where things become too disjointed and random to be seen as a story, is a really difficult problem. So long-term goals and arcs are used to guide the process. Behind the scenes there is also a hidden ‘trial and error’ process, so that you do not see the misfires, wrong edits and so on. The researchers likened this to Kahneman’s System 1 versus System 2 thinking. Most LLM and diffusion models play to fast, quick, System 1 responses to prompts. For long-form media, you need System 2 thinking, so that more complex intentions, goals, coherence and consistency are given precedence.
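Here is a very rough sketch of the general pattern, my own illustration rather than Showrunner’s actual code: agents carry goals, moods and history; the simulation proposes scenes; and a hidden trial-and-error loop discards drafts that break long-term consistency before the viewer ever sees them. The agent names and checks are invented for illustration.

```python
import random

# Toy sketch of agent-driven story generation with a hidden retry loop.
# Not Showrunner's code: just the general pattern of agents with goals and
# history, plus a consistency check that filters out incoherent drafts.

agents = {
    "Ava": {"goal": "expose the cover-up", "mood": "determined", "history": []},
    "Max": {"goal": "protect the company", "mood": "anxious", "history": []},
    "Kai": {"goal": "get the story out", "mood": "restless", "history": []},
}

def propose_scene(agents):
    """Draft a scene from two agents' current goals and moods."""
    a, b = random.sample(list(agents), 2)
    return f"{a} ({agents[a]['mood']}) confronts {b} about '{agents[a]['goal']}'."

def consistent(scene, agents):
    """Crude long-term coherence check: never repeat an earlier scene."""
    past = [s for agent in agents.values() for s in agent["history"]]
    return scene not in past

# Hidden 'System 2' loop: retry until a draft passes the coherence check.
for beat in range(3):
    for _ in range(10):                      # unseen trial-and-error
        scene = propose_scene(agents)
        if consistent(scene, agents):
            break
    for agent in agents.values():
        agent["history"].append(scene)       # shared memory keeps the arc coherent
    print(f"Beat {beat + 1}: {scene}")
```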

Interestingly, hallucinations can introduce creative uncertainty, a positive thing, as happy accidents seem to be part of the creative process, as long as they do not lead to implausible outcomes. This is the interesting challenge: how to create non-deterministic creative works that are predictable yet exciting and novel.


This is what I meant by a POSTCREATION world, where creativity is not a simple sampling or remixing but a process of re-creation.

4. Live action videos

The next step, and we are surely on that Yellow Brick Road, is to create your own live-action movies from text and image prompts. Just prompt it with 10 to 15 words and you can generate scenes and episodes from 2 to 16 minutes. This includes AI dialogue, voice, editing, different shot types, consistent characters and story development. You can take it to another level by editing the episodes’ scripts, shots and voices, and remaking episodes. We can all be live-action movie directors.

Conclusion

With LLMs, in the beginning was the ‘word’, then came image generation, audio generation, then short-form video, and now full-form creative storytelling. Using the strengths of the simulation, co-creation with the user, and the AI model, rich, interactive and engaging storytelling experiences are possible.

This is a good example of how AI has opened up a broad front, attracting investment, innovation and entrepreneurship. At its heart are generative techniques, but there are also lots of other approaches that form an ensemble of orchestrated methods to solve problems.

You have probably already asked the question: does it actually need us? Will wonderful, novel, creative movies emerge without any human intervention? Some would say ‘I fear so’. I say ‘bring it on’. 


Wednesday, June 12, 2024

Apple solves privacy and security concerns around AI?


Apple Intelligence launched a set of AI features with OpenAI’s GPT-4 at their heart. It was a typical Apple move – focus on personalisation, integration and user performance.

The one thing that stood out for me was the announcement on privacy and ‘edge’ computing. Their solution is clever and may give them real advantages in the market. AI smartphones will be huge. Google led the way with the Pixel – I have one – it is excellent and cheap. But the real battle is between Apple and Samsung. The Galaxy is packed with AI features, as is the iPhone, but whoever wins the AI device battle (currently 170 million units in 2024 and about to soar) will inherit users and revenue.

Privacy and security are now a big deal in AI. Whenever you shoot off to use a cloud service there is always the possibility of cybersecurity risks, losing data, even having your personal data looted.

Apple sell devices, so their device solution makes sense. It gives them ‘edge’ through edge computing. A massive investment in their M3 chip and other hardware may give them a further edge in the market.

In order to deliver real value to users, the device needs to know what software and services you use across your devices: your emails, texts, messages, documents, photos, audio files, videos, images, contacts, calendars, search history and AI chatbot use. Context really matters: if you are my ‘personal’ assistant, you need to know who I am, my friends and family, what I am doing and my present needs.

So what is Apple’s solution? They want to keep privacy both on the device and when the cloud is accessed. Let’s be clear, Google, Microsoft, Meta, OpenAI and others will also solve this problem, but it is Apple who have been first above the parapet. This is because, unlike some of the others, they don’t sell ads and don’t sell your data. It pitches Apple against Microsoft, but they are in different markets - one consumer, the other corporate.

‘Private Cloud Compute’ promises to use your data but not to store it or allow anyone access to it, not even Apple itself. Apple have promised to be transparent and have invited cybersecurity experts to scrutinise their solution. Note that they are not launching Apple Intelligence until the fall, and even then only in the US. This makes sense, as it needs some serious scrutiny and testing.
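The general pattern, as I understand it, looks something like the sketch below. This is a hypothetical illustration, not Apple’s actual code or API: small requests are handled by an on-device model, and only the minimum context needed for a larger request is sent, statelessly, to the private cloud and discarded after the response. All names and limits here are invented for illustration.

```python
# Hypothetical sketch of an edge-first privacy pattern (not Apple's actual API).
# Small tasks run on an on-device model; larger ones send only the minimum
# necessary context to a stateless cloud endpoint, which retains nothing.

ON_DEVICE_WORD_LIMIT = 200   # illustrative size budget for the local model

def run_on_device(prompt: str) -> str:
    return f"[on-device model] answered: {prompt[:40]}..."

def run_in_private_cloud(minimal_context: str) -> str:
    # Stateless by design: the request is processed and nothing is stored.
    return f"[private cloud] answered: {minimal_context[:40]}..."

def answer(prompt: str, personal_context: dict) -> str:
    if len(prompt.split()) < ON_DEVICE_WORD_LIMIT:
        return run_on_device(prompt)
    # Send only the fields needed for this task, never the whole profile.
    minimal_context = f"{prompt} | calendar: {personal_context.get('calendar', '')}"
    return run_in_private_cloud(minimal_context)

print(answer("Summarise my meetings for tomorrow", {"calendar": "3 meetings"}))
```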

Devices matter. Edge compute matters. As the new currency of ‘trust’ becomes a factor in sales, privacy and security matter. As always, technology finds a way to solve these problems, which is why I generally ignore superficial talk about ethics in AI, especially from the doomsters. At almost every conference I attend I hear misconceptions around data and privacy. I hope this small note helps.


Tuesday, June 11, 2024

Ethan Mollick’s 'CO-INTELLIGENCE' - a review

Just finished Ethan Mollick’s CO-INTELLIGENCE. I like Ethan, as he shares stuff. His X feed is excellent, so I was eager to give this a go.

It wasn’t what I expected, but that’s fine, because it’s pretty good. Ethan’s a Wharton academic, so I thought it would be a research-rich book with lots of examples, but it is actually aimed at the general reader who knows little or nothing about AI: big font, big line spacing and no index, though it does draw on some good, useful research.

It opens with his Three Nights Without Sleep revelation, that this shit is amazing! Why? Because it is a ‘General Purpose Technology’ pregnant with possibilities. I liked this. He writes well and is enthusiastic about its potential.

PART I 

That sense of wonder continues over PART I, with his musings on the Scary? Smart? Scary-smart? nature of GenAI, seeing it as a sort of alien mind. He thinks alignment is necessary, but he is not a doomster and avoids the sort of speculative sci-fi stuff that often appears whenever AI and ethics are mentioned. He ends this section with his Four Rules for Co-Intelligence:

Always invite AI to the table – like this

Be the human in the loop – OK but…

Treat AI like a person (but tell it what kind of person it is) – like this

Assume this is the worst AI you will ever use – yip!

PART II

This is the bulk of the book, with five chapters, where he sees AI as a:

Person

Creative

Coworker

Tutor

Coach

I have lots of quibbles but that’s fine. These are good, short readable discussions that open doors on its applications and potential. Each was well worth the read. I won’t go into detail, as I’d be in danger of providing one of those summaries that stops people buying the book!

It rounds off with a chapter on AI as our future, with four scenarios: As Good As It Gets, Slow Growth, Exponential Growth or The Machine God. Then a short epilogue, completed using ChatGPT – AI is US.

My own view is that the premise of ‘CO-INTELLIGENCE’ is too simplistic and that AI will do lots of things that will surprise us, beyond the idea of mere augmentation, a tool to enhance human creativity, decision-making and productivity.

The problem with any book on AI is that it is out of date before it is even printed. There were many points when I was thinking Yes… but… This is normal. The AI mindset demands fluidity and a recognition of the point Ethan makes in PART I: assume this is the worst AI you will ever use.

Good introductory text – well worth a buy – but not for those who are looking for detail and depth of expertise.

 

Monday, June 10, 2024

Sam has ditched Satya for Tim – he’s so louche that lad! Apple Intelligence is here! New Siri and more...


The Apple event features REALLY annoying presenters, but they finally join the GenAI club with Apple Intelligence. After showing the now compulsory ‘help me with my maths‘ example, they cut to the quick… it’s ChatGPT-4, folks! 

Personal Intelligence

Their core idea is 'personal intelligence', as it understands your personal context. The iPhone prioritises notifications and offers new writing tools (review, write, proofread and so on) across all apps, even third-party ones. Its email improvements, summaries of emails and so on, are super-cool. It will also intelligently prioritise your emails. Great for something I’ve been banging on about in learning – performance support. Apple are basically providing powerful, personal support across your entire online experience.

Images and video

On images, it allows you to create images, including images of yourself, your friends and relatives, with Sketch, Illustration and Animation styles built into apps across the system. Genmoji is a personal emoji creator, even an emoji that looks like your mates... that will be super-annoying. Image Playground generation gives you styles, themes and costumes. Photo editing is super smart, making things disappear. Image Wand allows you to circle, suggest and manipulate stuff.

There's also audio-to-text transcription – great for student note-taking and taking notes at work.

Search in video – clever. Great for performance support. Stories can be built around a person and theme, then strung together with music. Oh, and there’s an API.

You can ask for personal stuff - remember that email I sent to... that picture I took of X last week... - as it personalises tools using personal data, the ‘who, what and whens’ of your life. 

Data privacy

This personal data gets on-device processing, so it stays local. The system can therefore use your personal data but with super-privacy features. An on-device semantic index helps keep it all local. Private Cloud Compute uses only the data necessary for the task. It reaches out while still keeping your data private.

Siri

She’s gone from stupid to smart. Basically she’s now a chatbot that knows what you mean when you use ‘this’ and ‘that’ in sentences. It also has on-screen awareness and memory of what you have done. Siri knows your personal context – hotel bookings, photos you’ve taken, emails you’ve sent… porn you’ve accessed… no not that! 

Agentic

What’s interesting is its agentic capabilities. It goes off and finds stuff relevant to your request: flight info, external websites, things you’ve done locally. This has legs.

This is Apple, so it is integrated, user-friendly and personal.

Conclusion

One thing they have done well is the M3 chip, giving on-device AI functionality - that lay behind much of what was delivered here and may be critical to the practical and secure delivery of AI. It literally gives them 'edge' in the market. They're really a consumer company, unlike Microsoft (apart from games), which makes edge computing and iPhone delivery more important. Lots of the features were consumer oriented.

This is AI for the rest of us – not just work but performance support for life. Well done. Every generation needs a revolution and through this revolution we become more of ourselves.

Saturday, June 08, 2024

7 success factors in real 'AI in learning' projects

With AI we are in the most interesting decade in the history of our species. I can think of no better field in which to think, write and work.

Ideas are easy, implementation hard

My first AI-like project was in the early 90s, when I designed an intelligent tutoring system to teach interviewing skills. It had sentence construction as input and adaptivity in the sense of harvesting data as the learner used the system. Written in Pascal, it was clever but not yet smart, as the limitations of the hardware, in terms of processing power and memory, were extreme by today’s standards. Much of the effort went into making things work within these brutal constraints. Even then, we had controlled access to video clips (36 mins), thousands of stills and two audio tracks (112 mins) on Laserdiscs, which we used to good effect, simulating full interviews. You could feel the power of potential intelligence in software.

Jump to 2014 and those hardware limitations had gone. You could build an adaptive, personalised system, which we did at CogBooks. I invested personally in this system (twice) and brought investment in. We did oodles of research at Arizona State University and it was sold to the University of Cambridge in 2021. It worked. For many years we had also been playing with AI within Learning Pool, having bought an AI company. But my real project journey with modern AI started in 2014, when we built Wildfire, using 'entity analysis', open input and the semantic interpretation of open text answers. The whole thing was starting to take shape.

Jump to November 2022 and things went a little crazy. I have barely been off the road speaking about GenAI in learning, have written books on the subject, blogged like crazy and recorded dozens of podcasts. Far more important have been the real projects and products we have built for a number of clients and companies. This is the hard part, the really hard part. Ideas are easy, implementation is hard.

Optimal AI project

What makes an AI project successful? The good news is that we have just completed a fascinating project in healthcare that had all the hallmarks of the optimal project. This was our experience.

1. Top-down support

The project started with top-down support, a goal and a budget. Without top-down support, projects are always at risk of running out of backing. That’s why I’m suspicious of Higher Education projects, grant-aided projects, hackathon stuff and so on. I prefer CEO, senior management or entrepreneur-driven initiatives, with real budgets. They tend to have push behind them, clear goals and, above all, they tend to be STRATEGIC. Far too many projects are mosquito projects that fail because they end when the budget runs out and have no real impact or compelling use. Choose your use case(s) carefully and strategically. We have been through this with large global companies - a rational approach to use cases and their prioritisation. Interestingly, AI can help.

2. Bottom-up understanding

This project also had a great client, grounded in a real workplace (a large teaching hospital), a clear budget and a solid team. We made sure that everyone was on the same page, understanding what this technology was and could do. The two non-technical team members knew their process inside out, but here’s where they really scored – they made the effort to understand the technology and did their homework. This meant we could get on with the work and not get bogged down in explaining basic concepts such as how an LLM works, context windows and the need for clean data and data management.

Many AI projects flounder when the team has non-technical members who don’t know the technology, namely AI. It is not that they need competence in building AI, just that they need to understand what it is, the fact that it evolves quickly and that its capabilities grow rapidly.

3. Optimal team

The team also had a top-notch AI developer who has been through years of learning projects. This combination was useful: he had already built products in the learning field and understood the language of learning and its goals. The team was just three people. This really matters. Use Occam’s Razor to determine team size – the minimum number of team members needed to reach your stated goal. Too many AI projects include people with little or no knowledge of the basic technology. They often come with misconceptions about what they think it is and does, along with several myths.

4. Mindset matters

More important than knowledge is mindset. What cripples projects are people within the organisation who act as bottlenecks – sceptics, legal departments who do not understand data issues, old-school learning people who actually don’t like tech and anyone who is simply sceptical of the power of AI to increase efficacy. Believe me, there are plenty of those folk around. 

The mindset that leads to success is one that accepts and understands that the technology is probabilistic, data-driven, that functionality will increase during the project and things change very fast. I’d sum this up by saying you need team members who are both willing to learn fast and keep their minds open to rapid change. It also means accepting that most processes are too manual, that bottlenecks are hard to identify and that processes CAN be automated. 

5. Agency shift

You also have to let go and see that this technology has ‘agency’, and that you will have to hand agency over to the AI. The technology itself will reveal the bottlenecks and insights. Don’t assume you know them at the start of the project; they will be revealed if you use the technology well. This is no time for an obsession with fixed Gantt charts and designs that are fossilised and then simply executed. It is like ‘agile on steroids’.

6. Manage expectations

AI is a strange, mercurial and fast moving technology. You have to dispel lots of myths about how it works, the data issues and its capabilities. You also have to communicate this to the people that matter.
You need to understand that what is hard is sometimes easy and what is easy is sometimes hard. The fact that things change quickly, for example costs, is another problem. This happens to be a good problem, as people often don't understand that token costs for fixed output are very low, and even token costs for a service have plummeted in price. Expectations need to be managed by being clearly communicated.
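As a rough worked example of why token costs rarely dominate (the price and volumes below are assumptions for illustration, not quoted rates, so check your provider's current pricing):

```python
# Back-of-envelope token cost sketch. The price below is an assumption for
# illustration only; substitute your provider's actual rate.

price_per_million_tokens = 1.00     # assumed $ per 1M output tokens
tokens_per_summary = 800            # e.g. a one-page generated summary
summaries_per_month = 5_000

cost_per_summary = tokens_per_summary / 1_000_000 * price_per_million_tokens
monthly_cost = cost_per_summary * summaries_per_month

print(f"Cost per summary: ${cost_per_summary:.4f}")                              # $0.0008
print(f"Monthly cost for {summaries_per_month} summaries: ${monthly_cost:.2f}")  # $4.00
```

At these assumed rates, even thousands of generated outputs a month cost a few dollars; the real costs sit in people, data preparation and integration.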

7. Push beyond prototype to product

I can’t go into a huge amount of detail about the client, but the topic was surgical support – a life-and-death topic with little room for error. It involved taking training and source material and turning it into usable support (not a course) for hospital staff. Processes were automated, SME (Subject Matter Expert) time was reduced, and delivery time to launch was massively reduced, so the team had more time to focus on quality rather than just process. The result is also easier to maintain, as simply updating the documents keeps the system current. The savings were enormous and the increases in quality clear.

This success meant we could call upon the top-down support to push the project beyond prototypes into product with a broader set of goals and more focus on data management. It has given the organisation, management and team the confidence to forge ahead. With massive amounts of time saved and increased efficacy, we saw that success begets success.

Conclusion


If you don't have both TOP-DOWN and BOTTOM-UP support, along with a tight team with the right mindset, you will struggle, even fail. This is a radically new and different species of technology, with immense power. It needs careful handling. The small team remained fixed on the strategic goal but was flexible enough to choose the optimal technology. Without all of the above, the project would have floundered in no-man's land, with scope creep, longer timescales and the usual drop in momentum, even disappointment. 

The project exceeded expectations. How often can you say that about a learning technology project? This was a young team, astounded at what they had done, and this week, when they presented it at a learning conference, their authentic joy when describing how it went was truly heartening. “It was crazy!” said one as she described the first results, then further inroads into automating, in minutes, jobs that had traditionally taken them days, weeks, even months. Everyone in the room felt the thrill of having achieved something. In 2024, AI has suddenly got very real.

PS

So many commentators and speakers on AI have never actually delivered a project or product. We need far more focus on practitioners who share what they think works and does not work.


Saturday, June 01, 2024

Postcreation: a new world. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.


Postproduction

There is an interesting idea from the French writer Bourriaud that we’ve entered a new era, where art and cultural activity interprets, reproduces, re-exhibits or utilises works made by others or from already available cultural products. He calls it ‘Postproduction’. I thank Rod J. Naquin for introducing me to this thinker and idea. 

Postproduction: Culture as Screenplay: How Art Reprograms the World (2002) is Bourriaud’s essay examining the trend, emerging since the early 1990s, of a growing number of artists creating art based on pre-existing works. He suggests that this "art of postproduction" is a response to the overwhelming abundance of cultural material in the global information age.

The proliferation of artworks and the art world's inclusion of previously ignored or disdained forms characterise this chaotic cultural landscape. Through postproduction, artists navigate and make sense of this cultural excess by reworking existing materials into new creations.

Postcreation

I’d like to universalise this idea of Postproduction to all forms of human endeavour that can now draw upon a vast common pool of culture: all text, images, audio and video, all of human knowledge and achievement, basically the fruits of all past human production, to produce in a way that can be described as ‘Postcreation’.

This is inspired by the arrival of multimodal LLMs, where vast pools of media, representing the sum total of all history, all cultural output from our species, have been captured and used to train huge multimodal models that allow our species to create a new future. With new forms of AI, we are borrowing from the past to create the new. It is a new beginning, a fresh start using technology we have never seen before in the history of our species, something that seems strange but oddly familiar, thrilling but terrifying – AI.

Palimpsests

AI, along with us, does not simply copy, sample or parrot things from the past – together we create new outputs. Neither do we merely remix, reassemble or reappropriate the past – together we recreate the future. This moves us beyond simple curation, collages and mashups into genuinely new forms of production and expression. We should also avoid seeing it as the reproduction of hybrids, reinterpretations or simple syntheses.

Like a ‘palimpsest’, a page from a scroll or book that has been scraped clean for reuse, where we can recover the original text if we scan it carefully enough, it is the ground for a genuinely new work. It should not be too readily reduced to one word, but rather prefixed with ‘re-’: to reimagine, reenvision, reconceptualise, recontextualise, revise, rework, revamp, reinterpret, reframe, remodel, redefine and reinvent new cultural capital. We should not pin it down like a broken butterfly with a simple pin, one word, but let the idea flutter and fly free from the prison of language.

Dialogue

We have also moved beyond seeing prompt engineering as some sort of way of translating what we humans do into AI-speak. It is now, quite simply, about explaining. We really do engage and speak to and with these systems. The move towards multimodality, with generated and semantically understood audio, is a huge leap forward, especially in learning. That’s how we humans interact.

Romantic illusion

We have been doing this on a small scale for a long time under the illusion, reinforced by late 18th and 19th century Romanticism, that creation is a uniquely human endeavour, when all along it has been a drawing upon the past, therefore deeply rooted in what the brain has experienced and takes from its memories to create anything new. We are now, together, taking things from the entire memory of our cultural past to create the new in acts of Postcreation.

Communal future

This new world, or new dawn, is more communal, drawing from the well of a vast, shared, public collective. We can have a common purpose of mutual effort that leads to a more co-operative, collaborative and unified endeavour. There were some historical dawns that hinted at this future: the Library of Alexandria, open to all and containing the known world's knowledge; Wikipedia, a huge, free, communal knowledge base. But this is something much more profoundly communal.

The many peoples, cultures and languages of the world can be in this communal effort, not to fix some utopian idea of a common set of values or cultural output but creation beyond what just one group sees as good and evil. This was Nietzsche’s re-evaluative vision. Utopias are always fixed and narrow dystopias. This could be a more innovative and transformative era, a future of openness, a genuine recognition that the future is created by us, not determined wholly by the past. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.