Whenever I hear someone say GenAI is a 'stochastic parrot', my BS meter goes off. They are usually parroting it as a meme picked up on social media, with no reference to the 2021 Bender et al. paper, 'On the Dangers of Stochastic Parrots', it came from. I often ask if they've read the paper - not one person in education has known of it.
It appears that GPT-4 is NOT a 'stochastic parrot'. I argued this at BETT two days ago, based on its ability to produce top-down, Wittgensteinian language games. The fact that it can play language games, and play new ones well, is a sign that it can generalise to produce new skills. This team, I think, have proven it mathematically, bottom-up.
The model can generate text that it couldn’t possibly have seen in the training data, displaying skills that add up to what some would argue is understanding.
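To see why, here is a rough back-of-envelope sketch of the combinatorial argument (the numbers are my own illustrative assumptions, not figures from the research): if a model commands some large set of individual skills, the number of distinct ways to combine even a handful of them quickly outstrips the size of any training corpus, so competence at randomly chosen combinations cannot be memorisation.

```python
import math

# Back-of-envelope illustration of the combinatorial argument.
# All numbers here are illustrative assumptions, not figures
# from the paper.

n_skills = 1000          # assumed number of distinct skills a model commands
corpus_pieces = 10**12   # assumed number of text pieces in the training corpus

for k in range(1, 6):
    combos = math.comb(n_skills, k)   # distinct k-skill combinations
    print(f"k={k}: {combos:,} combinations "
          f"({combos / corpus_pieces:.2e} x corpus size)")
```

With these assumed numbers, combinations of five skills already outnumber the pieces of text in the corpus, so most such combinations can never have appeared in training.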
What is really interesting is the other conclusion one can draw from this research.
The authors add that the work says nothing about the accuracy of what LLMs write; by implication, they are 'creative'.
“In fact, it’s arguing for originality,” one of them said. “These things have never existed in the world’s training corpus. Nobody has ever written this. It has to hallucinate.”
Another implication is that models with larger numbers of parameters will support more, and more complex, combinations of skills. I am sure we'll see this in 2024.