Thursday, August 08, 2024

Strawberry Fields forever? Is GPT-5 really going to change the world?

Let me take you down
'Cause I'm going to strawberry fields
Nothing is real
And nothing to get hung about
Strawberry fields forever...

The Beatles had it sussed! Replace Strawberry Fields with AI: once the virtual intelligence is real, the distinction between what mind and machine can do dissolves. Embrace the leap, don’t get hung up on cynicism. This will be forever, a species-changing event.

I’m taking a punt, but we keep an eye on model leaks and something’s brewing. ‘Strawberry’ is the name of OpenAI’s reasoning project, and reasoning in some leaked models is getting very good. Playing around in Chatbot Arena, there are signs that models like ‘sus-column-r’ are reasoning quite well. You never really know, but even Sam Altman has been leaking strawberry symbols. This is a simple example, but you get the point.

With agents and reasoning it’s not just a game changer, it’s a new game. It moves these tools into the territory of real human analysis and decision making, in ways we haven’t seen so far.

Her

The strawberry reference is actually from the movie ‘Her’, the best movie ever made on AI. Directed by Spike Jonze, it features strawberries in a scene where Theodore is talking to Samantha (Her). Their conversation is about a book Theodore is writing, in which a character remembers "perfectly ripe strawberries" from his childhood. Strawberries serve as a symbol of nostalgia and the longing for a simple, perfect moment from the past. They highlight the theme of human experiences and memories, which are central to the film's exploration of relationships and the nature of human connection, even with artificial intelligence. The strawberries symbolise the poignant, sensory details that make memories so vivid and meaningful.

It's the movie equivalent of the famous moment in Marcel Proust's Remembrance of Things Past, where a madeleine cake, dipped in tea, triggers a flood of memories for the narrator.

"And at once the vicissitudes of life had become indifferent to me, its disasters innocuous, its brevity illusory—this new sensation having had on me the effect which love has of filling me with a precious essence; or rather this essence was not in me it was myself. I had ceased now to feel mediocre, accidental, mortal. Whence could it have come to me, this all-powerful joy? I sensed that it was connected with the taste of tea and cake, but that it infinitely transcended those savours, could not, indeed, be of the same nature. Whence did it come? What did it signify? How could I seize and apprehend it?"


How can Large Language Models (LLMs) like GPT-4 be improved at inference and reasoning?

You can have different and more focused data. Scale may help here, as larger datasets augmented with targeted synthetic data, drawn from specific domains and contexts, may well produce more nuanced and reasoned output. If that data is selected or annotated for specific reasoning tasks, such as logical reasoning, mathematical problem-solving or common-sense reasoning, it should help the model learn those reasoning skills. But that is not enough. Reasoning needs memory over longer contexts, so the model can use relevant information from other parts of a conversation or text. Attention also needs to stay focused on the input and on identified intentions and goals.
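
To make that concrete, here is a minimal sketch of what targeted reasoning data might look like. The JSONL format and field names ("prompt", "reasoning", "answer") are illustrative assumptions, not any vendor's actual fine-tuning schema; the point is simply that each example carries explicit reasoning steps, not just an answer.

```python
import json

# Illustrative reasoning examples: each pairs a problem with explicit
# intermediate steps, not just a final answer. The schema is hypothetical.
examples = [
    {
        "prompt": "A train leaves at 09:15 and arrives at 11:05. How long is the journey?",
        "reasoning": [
            "From 09:15 to 11:15 is 2 hours.",
            "11:05 is 10 minutes earlier than 11:15.",
            "So the journey is 2 hours minus 10 minutes = 1 hour 50 minutes.",
        ],
        "answer": "1 hour 50 minutes",
    },
    {
        "prompt": "If all bloops are razzies and no razzies are lazzies, can a bloop be a lazzy?",
        "reasoning": [
            "Every bloop is a razzy.",
            "No razzy is a lazzy.",
            "Therefore no bloop can be a lazzy.",
        ],
        "answer": "No",
    },
]

# Write the examples out as JSONL, a common format for fine-tuning datasets.
with open("reasoning_examples.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```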

But the big need is fine-tuning the model on tasks that specifically require inference and reasoning. This is complex, moving beyond simple questions and answers into problem-solving and decision-making. It also seems likely that actual databases of agreed knowledge would be useful. There’s also the gnarly problem of integrating different modalities.
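
As a hedged sketch of how a database of agreed knowledge might be used, the toy code below retrieves relevant facts first and composes them into the prompt the model sees. The knowledge base, the keyword retrieval and build_prompt are all illustrative stand-ins of my own, not a real retrieval API.

```python
# Toy "retrieve then reason" pattern: look up agreed facts, then hand them
# to the model alongside the question so its reasoning is grounded.

KNOWLEDGE_BASE = {
    "boiling point of water": "Water boils at 100 °C at standard atmospheric pressure.",
    "speed of light": "Light travels at roughly 299,792 km per second in a vacuum.",
}

def retrieve_facts(question: str) -> list[str]:
    """Naive keyword retrieval over the knowledge base (illustrative only)."""
    q = question.lower()
    return [fact for key, fact in KNOWLEDGE_BASE.items()
            if any(word in q for word in key.split())]

def build_prompt(question: str) -> str:
    """Compose the retrieved facts and the question into a single prompt."""
    facts = retrieve_facts(question)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no facts found)"
    return (f"Use only these agreed facts:\n{context}\n\n"
            f"Question: {question}\nAnswer step by step.")

print(build_prompt("At what temperature does water boil on a mountain?"))
```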

In other words, there’s an ensemble of techniques that need to be orchestrated to do what we do, only better. Humans are actually quite poor at reasoning: we have inbuilt biases, limited short-term memories and fallible long-term memories. We don’t even have a simple calculator model in our heads. The brain can barely cope with times-tables, and that takes years of training.
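
As a toy illustration of that orchestration, the sketch below routes anything that looks like arithmetic to an exact calculator tool and everything else to a stand-in language model. The routing heuristic, fake_llm and the calculator are hypothetical examples of mine, not any production agent framework.

```python
import ast
import operator

# Orchestration sketch: delegate arithmetic (which both brains and LLMs are
# weak at) to an exact tool, and leave open-ended queries to the model.

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a simple arithmetic expression using the AST."""
    def _eval(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval").body)

def fake_llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return f"[model response to: {prompt!r}]"

def answer(query: str) -> str:
    """Route calculations to the calculator tool; everything else to the model."""
    if any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/"):
        return str(calculator(query))
    return fake_llm(query)

print(answer("17 * 23"))            # exact arithmetic via the tool: 391
print(answer("Why do we forget?"))  # delegated to the (stand-in) model
```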

Conclusion

If this happens, and I think it is only a matter of time, we will have moved into another era of AI. Current LLMs are like us, too like us. The next generation of AI will be better than us. This has huge implications for productivity, employment and the future trajectory of our species.

