Thursday, October 17, 2024

Who put the Silicon in Silicon Valley? The place where rocks were made to think...

My hobby, since I was a boy, has been geology, with a house full of rocks and fossils, a huge geological map of the UK on my kitchen wall and a geological hammer in my car. My other great provocation is AI, a long-standing interest since University.

So I’m glad I have lived long enough to see rocks that think. OK, before you attack the verb: they don’t think in human terms, but it is clear they can outdo us on many tasks which we ‘think’ are ‘thinking’. If, like me, you believe in the computational theory of the mind, the hardware and software think.

Silicon, the rock that thinks, is the second most abundant element in the Earth’s crust, making up 27.7% of it, and is commonly found in sand, quartz and various silicate minerals.

In the mid-1950s, among the orchards of Mountain View, California, a revolution took root. Shockley Semiconductor Laboratory was founded in 1956 by physicist William Shockley, co-inventor of the transistor and Nobel Prize winner. His was the first semiconductor company in what would later be known as Silicon Valley, attracting some of the brightest minds in the new field of electronics.

However, discontent was brewing. Shockley may have been a genius, but he was autocratic and abrasive; he stifled creativity and bred frustration. By 1957, tensions had reached breaking point and eight of Shockley’s top engineers and scientists left. The group approached Sherman Fairchild, who saw their potential and agreed to back them. Fairchild Semiconductor was born in Palo Alto – a pivotal moment in technological history.

Shockley, feeling betrayed, dubbed them the ‘Traitorous Eight’. 

Their success attracted a wave of talent and investment to the region, setting off a chain reaction of entrepreneurship. The ‘Fairchildren’ founded numerous other companies that fuelled Silicon Valley's expansion.

Robert Noyce co-invented the integrated circuit. Jean Hoerni developed the planar process, a manufacturing technique that allowed for the mass production of reliable silicon transistors and integrated circuits. These ignited explosive growth in Silicon Valley.

Intel was founded in 1968 by Gordon Moore and Robert Noyce after they left Fairchild. Eugene Kleiner, another of the eight, co-founded Kleiner Perkins, a venture capital firm that became instrumental in funding and nurturing countless tech startups, including giants like Google and Amazon.

The Traitorous Eight championed an innovation culture that valued flat structures, minimising bureaucracy to allow creativity and engineering prowess to flourish. The emphasis was on risk-taking and entrepreneurship, where bold ideas and collaborative effort could lead to monumental success.

Silicon Valley remains the powerhouse of tech. Almost all of the innovation in AI came, and still comes, from that one place. We had the recent example of a bunch of smart people at OpenAI eventually breaking out to form new enterprises. This is what made the Valley great and why it remains the powerhouse of global tech.

Screentime - another in the Sisyphean cycle of technology panics?

Josh MacAllister is a new Labour MP. As with many newbies, he’s keen to make his mark with legislation and has proposed a Bill that would:

• Raise the minimum age of "internet adulthood" (to create social media profiles, email accounts, etc) from 13 to 16

• Legally ban smartphones from classrooms

• Strengthen Ofcom's powers to protect children from apps designed to be ‘addictive’

• Commit government to reviewing, if needed, further regulation of the design, supply, marketing and use of mobile phones by children under 16

We have a problem

I have been following the screentime debate since 2009, when I read Susan Greenfield’s scaremongering book, in which she claimed screentime was making us cognitively stupid. I blogged about it then and the debate has only got worse. Year after year potboiler self-help books appear demanding we digitally detox, limit screen time and ban screens in schools. How we deal with technology, especially around children, is an important issue, but it is so often reduced to self-help platitudes.

I thought then, and think now, that the idea that smartphones damage our cognitive systems implies that the evolved mind is so delicate it could be damaged by a switch in modality. We don’t say this about books or the cinema. The evolution of our minds has taken millions of years of selection; if it were that easy to make us stupid we’d never have got here.

I’ve seen plenty of people make money by writing about the dangers of ‘screentime’. Whether it’s smartphones, video games or social media, there’s always some moraliser who wants to tell us to digitally detox (it doesn’t work) and what to do with our time. Susan Greenfield was one, Jean Twenge another – there’s a long list. You can’t help but feel they start with an almost religious zeal and end up preaching.

The story they tell themselves is ‘screentime – bad, f2f – good’. Yet there’s rarely any real definition of what is meant by ‘screentime’. It is a complex issue. Neither is there much breadth to the research they quote – often the same cherrypicked pieces, mainly surveys that show correlation and weak effects, sometimes neutral, even positive! Turns out the evidence that screentime is harmful is as thin as gruel.

Unlocked

So I found myself racing through the book ‘Unlocked’ by Pete Etchells, a psychologist who is an expert in the field.

He claims there is almost no evidence to say that screens are bad for us. On the contrary, up to a certain limit, the use of social media correlates with wellbeing, and that some is better for us than none. And where there are negative correlations, such as that between social media and depression, or the amount of time we sit at a computer each day and our sense of our overall wellbeing, they are almost vanishingly weak. 

Our children already inhabit a landscape that is unrecognisable in the context of an earlier version of childhood. But this isn’t something to be afraid of - and isn’t something we should feel guilty about. Screens are ubiquitous and here to stay.

There is a problem with ‘screentime’, as there are lots of different types, with different uses, in different contexts. Etchells thinks we have nothing to fear, and a great deal to gain, by establishing a positive relationship with our screens (and our children’s screens) and thinking about screen time sensibly and critically. Screentime is NOT the key driver behind apparent declines in mental health and wellbeing. People tend to bring their own biases to the ‘screentime’ debate, so we rush to conclusions and point the finger at the nearest candidate. Indeed, the Royal College of Paediatrics and Child Health came to the same conclusion, finding no clear evidence for a toxic effect of screentime.

Distraction, attention & sleep

Turns out the research on attention and distractibility, Parry and Roux (2021), is incredibly weak. South Korea tried banning the internet between midnight and 6am – it actually increased the amount of time young people spent online during the day! And don’t fall for the blue light arguments – they are not true. One study from Montana University (2022) showed blue-blocking glasses reducing the amount of sleep, another showed no differences in subjective sleepiness the morning after. Other research showed small effects. The research is not worth losing sleep over.

Digital detox

Clearly a concept derived from the dieting industry. Shaw (2020) looked at studies in this area and found that few collected real data from smartphones and devices – they are almost all questionnaires. Her clever experiments showed that people tended to ‘report’ mental health issues when asked; results tripled and quadrupled when surveys were used, as opposed to data collection. Thomee (2018) showed that 70% of studies on screen time relied on questionnaires and not real data from devices. There is a puritanical strain in all of this – wanting to control others.

Addiction

The debate is not helped by calling it an ‘addiction’. Etchells explains why this is medically wrong. Equating smartphone use to heroin is not helpful. Let’s make this clear – you are not ‘addicted’ to your smartphone. Technology is not a pharmaceutical, and when we get the reductive talk about dopamine, one should really despair; it is far more complex than the shallow PPTs at learning conferences make out. Talk of addiction is overused and implies a lack of agency, as if it were a purely biological phenomenon.

The 2017 article by Jean Twenge (picked up by Haidt) was the catalyst for the panic. It was based on her work with datasets that did indeed show correlations, but the results were weak (on a scale of -1 to 1 for correlation, 0.01 to 0.06). The problem with correlations, like ice cream sales and crime (both go up in summer), is that they don’t tell us much about causality. Odgers & Jensen (2019) showed mixed results in the studies looking at the connection between mental health and screentime – some positive, some negative, some neutral. Even in the positive studies the results were weak, and they note the difficulty of establishing a link on such a multivariate topic. A further meta-study by Ferguson (2021) concluded there was no established proof of a link between smartphones, social media and mental health. They also noted an absence of rigour in the studies.
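To get a feel for how weak these correlations are, here is a minimal, hypothetical Python sketch (the numbers are invented purely for illustration, not drawn from any of the studies above). Squaring a correlation gives the share of variance ‘explained’, which at around r = 0.05 is a fraction of one percent.

```python
# Illustrative only: made-up 'screentime' and 'wellbeing' data with a tiny effect,
# roughly matching the size of correlation reported in the survey literature.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                    # survey-sized sample
screentime = rng.normal(3, 1.5, n)            # hypothetical hours per day
# wellbeing dominated by everything else, with a tiny screentime component
wellbeing = 0.035 * screentime + rng.normal(0, 1, n)

r = np.corrcoef(screentime, wellbeing)[0, 1]
print(f"correlation r = {r:.3f}")              # around 0.05
print(f"variance explained r^2 = {r**2:.4f}")  # well under 1%
```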

Conclusion

Orben called it the ‘Sisyphean cycle of technology panics’. I’ve been through many. We need to look at the evidence and stay calm. Screentime is an unhelpful concept, and Etchells recommends looking at screen ‘habits’ instead. There are problems with misuse, and certainly harm to children through inappropriate use and content, but it is important to be precise and not indulge in angry, knee-jerk reactions. Don’t buy into these narratives assuming they’re all true. We have a long history of politicians making such claims, from Foulkes in 1981, whose bill was called ‘Control of Space Invaders’. It was narrowly defeated. MacAllister is the new Foulkes.

 

Tuesday, October 15, 2024

GOOGLE GO NUCLEAR!

Google made the headlines today, signing a groundbreaking deal to power its data centres with six or seven mini-nuclear reactors, known as Small Modular Reactors (SMRs). To meet the electricity demands driven by the rise of artificial intelligence and cloud computing, they have ordered SMRs from California-based Kairos Power. This is the first commission of a new type of reactor for half a century; the first reactor is expected to be operational by 2030, with the rest coming online by 2035.

Pretty ambitious move, as the company sees nuclear power as a "clean, round-the-clock power source" that can reliably meet its growing energy needs. Michael Terrell, Google's senior director for energy and climate, emphasised that new electricity sources are essential to support AI technologies fueling scientific advances and economic growth.

Nuclear power provides a consistent and reliable source of carbon-free electricity 24/7. Unlike solar and wind, which are variable and depend on weather conditions, nuclear energy can meet continuous electricity demands, crucial for powering data centres and AI tech that needs uninterrupted energy supply. This allows for more predictable project delivery and deployment in a wider range of locations. Also, the smaller size and modular design of SMRs shorten construction timelines and lower costs. This all makes nuclear energy more accessible and economically viable. The deal was signed when Kairos met their necessary milestones. Google already use a ton of solar/wind/geothermal - it ain't enough.

Google isn't alone in turning to nuclear options. Microsoft recently struck a deal to source energy from Pennsylvania's Three Mile Island, reactivating the plant after a five-year hiatus. Amazon also acquired a nuclear-powered data center earlier this year, signalling a broader industry shift toward embracing nuclear energy.

The UK is also witnessing a competitive push among companies to develop SMR technologies as the government seeks to rejuvenate its nuclear industry. Rolls-Royce SMR recently gained momentum by being selected by the Czech government to build a fleet of reactors. One wonders where the Labour Gov are on this - strangely silent?

This could be the start of something quite big, as it taps into the innovation, risk taking and problem solving that Governments seem to have lost on energy.

Energy and crypto

Another area we should look at is the waste in Crypto. I am no pure techno-optimist and have argued against Cryptocurrencies for years. It serves no useful purpose and is the purest form of speculation, driven by greed, often fuelled by fraud and crime. It is of no benefit to our species, a plague on our financial system and should be banned. But do we have an EU Crypto Act? China did it, the West did not. Do we have an army of Crypto safety people writing papers and attacking it day and night – no.

Yet its energy consumption far outweighs that of AI. Even back in 2022 it consumed as much energy as a sizeable country like the Netherlands, and it has grown massively since. Its energy consumption is way beyond that of AI, even with projections to 2026.

Odd that we don't see well-funded anti-crypto institutions, safety summits, hard-core legislation (except China), hundreds of papers and thousands of 'Responsible' anti-Crypto 'Safety' bods?

Conclusion

AI is here to stay. It does have energy needs, but these are dwarfed by other wasteful activities, such as crypto. We are also seeing AI help solve the very problem it has created. That’s why technology matters. We can stare into the abyss of climate change or get on and do something about it.

Friday, October 11, 2024

Robot teaching assistants – I’ve changed my mind…


This is wild. AI is much more than just text generation. It has revived robotics with real dialogue. If this hits the market at sub-car costs, it’s a winner.

Domestic goods changed the world forever, making washing and drying clothes and dishes much easier, as did vacuum cleaners and central heating, but a gap remained between static tasks and mobile tasks. We still have to put out the rubbish, get things into these machines, dust, clean and do all that other domestic crap.

I’ve been a sceptic of robot teachers for some time but am starting to change my mind. Why couldn’t a domestic robot play a role in child rearing: talk to children, encourage their curiosity, extend their vocabulary, even teach them to be polite, say thanks and generally be nice to others?

A robot that encouraged a student to do the assignments or homework could work. Then there’s helping to learn in or outside of school. A robot that is endlessly patient could at least perform the function of a teaching assistant, ready to help with specific tasks.

I can’t help but conclude that at some point, probably sooner than we think, these robots will be commonplace. As teaching and learning are still largely ‘one to many’, why shouldn’t they be introduced to pay more attention to individual needs? They could be aware of learning difficulties, such as dyslexia, that an individual learner may have, be sensitive to their personality, and know where they are in terms of competence in different subjects.

Then there’s the ability to teach physical tasks. We have stripped this out of the curriculum but learning to do things would be very cool, whether it is playing a musical instrument, cooking or magic! Sports coaches?


Imagine a robot exam invigilator that could block all mobile signals, have a full view of all candidates, hand out and collect papers, check from a database of cheat devices and methods? Just a thought!

These robots have degrees in every subject, so asking them for help is not a problem. They can read your handwriting, hear what you say, speak in any language, at any level, 24/7. It’s OK not to want this sort of future but it’s a choice. This future has something to offer for all ages. I'm in.

Thursday, October 10, 2024

Learning theorist gets Nobel Prize….

When people use the word ‘AI’ these days they rarely understand the breadth of the field. One horse may have won the Derby by a golden mile recently, GenAI, but there’s a ton of other stuff in the race.

In the pre-GenAI days, way back in 2014-2021, I used to regularly talk about AlphaFold as an astonishing, measurable breakthrough for our species. This one tool alone remains a monumental achievement, and by far the most important person in the tripartite award is Demis Hassabis.

AlphaFold, developed by DeepMind in 2020, predicts protein structures. It both accelerates and opens up a vast array of research opportunities. DeepMind thrashed the competition at CASP14, outperforming around 100 other teams with a gargantuan leap in the field. It shocked everyone.

DeepMind has released a database containing over 200 million predicted protein structures, covering nearly all catalogued proteins known to science. The database is FREE to the global scientific community, democratising access to high-quality protein structures.

The productivity gain is mindblowing. Traditional methods, using incredibly expensive equipment and expertise, took years for just one protein. AlphaFold does it in hours. This allows researchers to focus on further experimentation, not groundwork. It has saved centuries of research.

For example, during the COVID pandemic, AlphaFold predicted structures of proteins related to the SARS-CoV-2 virus, helping the rapid development of treatments and vaccines. The same is true more generally across this important and, some feel, neglected field.

Back to Demis Hassabis, the British entrepreneur, neuroscientist and artificial intelligence researcher. A chess prodigy and games designer, he was the lead programmer and co-designer of Theme Park, well known in the games world. After a spell as an academic publishing a series of papers, he started an AI company based on his understanding of how the brain and memory work. That company, DeepMind, was sold to Google in 2014 for $628 million.

Learning (memory) theory

Hassabis focused on the hippocampus, as that is where episodic memory is consolidated. He found, through a study of five brain-damaged patients, that memory loss caused by damage to the hippocampus was accompanied by a loss of imagination (the ability to plan and think into the future). This was a fascinating insight, as he then realised that the process of reinforcement was the real force in learning: practice makes perfect. The link between episodic memory and imagination was backed up by other studies in brain scanning and experiments with rats. He proposed a ‘scene construction’ model for recalling memories, which at scale sees the mind as a simulation engine. This focus on the reinforcement and consolidation of learnt practice (deliberate practice, as it is known), when captured and executed algorithmically, generates expertise. It led him to set up a machine learning AI company in 2010 – DeepMind.

Deep Learning algorithms become experts

DeepMind focused on deep learning algorithms that could take on complex tasks and, here’s the rub, without prior knowledge or training on those tasks. This is the key point – AI that can ‘learn’ to do anything. They stunned the AI community when their system played a number of computer games and became an expert gamer. In Breakout their system not only got as good as any human, it devised a technique of breaking through at the edge and attacking from above that humans had not encountered. The achievement was astonishing, as the software knew nothing about these games when it started. It looked at the display, saw how the scoring worked and simply learnt from trial and error. The approach takes some aspects of human learning and combines deep learning with reinforcement learning, known as deep reinforcement learning, to solve problems.
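For those curious about the mechanics, here is a minimal, illustrative Python sketch of that trial-and-error loop. The environment and numbers are entirely made up, and it uses a simple lookup table rather than a network; DeepMind’s DQN replaces the table with a deep neural network reading raw screen pixels, but the underlying update is the same idea.

```python
# A toy Q-learning loop: act, observe a score, nudge value estimates, repeat.
import random
from collections import defaultdict

ACTIONS = ["left", "stay", "right"]
q_table = defaultdict(float)             # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: returns (next_state, reward), like a game score."""
    next_state = (state + {"left": -1, "stay": 0, "right": 1}[action]) % 10
    reward = 1.0 if next_state == 5 else 0.0   # points for reaching a target cell
    return next_state, reward

state = 0
for _ in range(10_000):
    # epsilon-greedy: mostly exploit what has worked, occasionally explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_table[(state, a)])
    next_state, reward = step(state, action)
    # nudge the estimate towards reward plus discounted future value
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
    state = next_state
```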

AlphaGo beat the Go world champion Lee Sedol in Seoul 4-1, in the game that is the Holy Grail of AI, reckoned to be the most complicated game we play, the pinnacle of games. Lee Sedol was playing for humanity. The number of possible board positions is greater than the number of atoms in the universe. AlphaGo was initially trained on many games played by good amateurs. Deep neural networks that mimic the brain, with enormous computing power, trained to perform a task, can go beyond human capabilities. In game two it made a move that no human would have made and became creative. It learns and goes on learning. Far from seeing this as a defeat, Lee Sedol saw it as a wonderful experience, and Go has never been so popular.

Conclusion

One of the leading companies in the world, where humans have created some of the smartest software ever built, built that success on the back of learning theory, going back to Hebb and his successors. This should matter to learning professionals, as AI now plays a significant role in learning. Software ‘learns’, or can be ‘trained’, using data. In addition to human teachers and learners, we now have software teachers and software that learns. The point is not that a machine can beat a human but that it can learn to do even better. It is a sign of as yet unknown but astounding things to come in learning. The cutting edge of AI is the cutting edge of learning. His Nobel Prize is well deserved, as his work is of such great benefit to the future of our species.


Wednesday, October 09, 2024

Academia sneering at Hinton's Nobel Prize for Physics shows a level of distasteful jealousy.... he's a genius

Certain parts of academia really hate AI. It's a provocation they can't handle, undermining a sometimes (not always) lazy attitude towards teaching and assessment. AI is an injection of subversion that is badly needed in education, as it throws light on so many poor practices.

Geoffrey Hinton (1947- ) is most noted for his work on artificial neural networks. He applied to Cambridge, was accepted, tried a few subjects and eventually focused on Experimental Psychology. On graduating he became a carpenter for six years but, inspired by Hebb, he formed his ideas in Islington Library and applied to Edinburgh to do a PhD in AI at a time when it was unfashionable.

He then spent time teaching and researching at various institutions, including the University of Sussex and Carnegie Mellon University but it was at the University of Toronto that Hinton contributed significantly to the field of neural networks and deep learning. Hinton's contributions to AI have earned him numerous accolades, including the Turing Award in 2018, which he shared with Yann LeCun and Yoshua Bengio for their work on deep learning.

In 2013, Hinton was hired by Google to work at Google Brain, their deep learning research team. He took part-time status at the University of Toronto to accept this position and is now the chief scientific advisor at the Vector Institute in Toronto, which specialises in research on artificial intelligence and deep learning.

Connections

Geoffrey Hinton claims his interest in the brain arose when he was on a bus going to school, sitting on a sloping furry seat where a penny actually moved uphill! This puzzled him, and Hinton is a man who likes puzzles, especially around how the brain works. What drove him was the simple fact that the brain was, to a large degree, a ‘black box’.

In California he worked with connectionists to build networks of artificial neurons. The brain has a layered structure, and such layers began to be constructed in software. NETtalk was an early text-to-speech neural network; its layered networks improved and progress was steady, but more computing power and training data were needed for substantial advances.

Hinton's research has been pivotal in the development of neural networks and machine learning. His work in the 1980s and 1990s on backpropagation, a method for training artificial neural networks, was groundbreaking. Alongside colleagues Yann LeCun and Yoshua Bengio, Hinton is credited with the development of deep learning techniques that have led to significant advances in technology, particularly in fields such as computer vision and speech recognition.

Backpropagation

The key paper is Learning representations by back-propagating errors (1986) by Rumelhart, Hinton and Williams. You can climb a hill by feeling around with your foot, finding the steepest direction and stepping that way until you reach the top. Similarly, on the descent, you feel around for the steepest step down and off you go. Gradient descent in a network of perceptrons tweaks the weights to lower the error rate, and you do this layer by layer. But suppose you are on a mountain with lots of little peaks; the task is then more complex. Even so, it can be used for sophisticated machine learning. The method, the backward propagation of errors, allows neural networks to be trained relatively quickly and easily, so that deep neural networks do well in noisy areas like speech or image recognition.
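To make the idea concrete, here is a minimal numpy sketch of gradient descent with backpropagation on a tiny two-layer network. It is a toy example on made-up data, not the 1986 formulation itself, and real systems use automatic differentiation libraries rather than hand-written gradients.

```python
# Train a tiny two-layer network by backpropagating errors and stepping downhill.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                        # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # toy XOR-like target

W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)  # hidden layer
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)  # output layer
lr = 0.5                                             # step size downhill

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

    # backward pass: propagate the error back, layer by layer
    dlogits = (p - y) / len(X)          # gradient at the output
    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T * (1 - h**2)    # through the tanh layer
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # gradient descent: tweak every weight to lower the error
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"final training loss: {loss:.3f}")
```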

Deep learning

Neural networks and backpropagation have had innumerable successes. NETtalk started by babbling, then progressed to almost human-like speech. Stock market prediction was another application, and self-driving cars benefited in the famous DARPA Challenges of 2004 and 2005. This work has been essential for the progress of deep learning.

With the internet, compute and data became plentiful, and in 2012 the ImageNet competition, which put convolutional neural nets to the test, was easily won by Hinton, Ilya Sutskever and Alex Krizhevsky. Their paper, ImageNet classification with deep convolutional neural networks (2012), changed AI forever.
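For readers unfamiliar with the term, here is a minimal Python sketch of the convolution operation that gives these networks their name: a small filter slides across an image to produce a feature map. The image and filter below are made up; a network like the ImageNet winner stacks many such layers and learns the filters from data.

```python
# A single 'valid' 2D convolution (strictly, cross-correlation, as in most CNN libraries).
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # multiply the filter against one patch of the image and sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(0).random((8, 8))      # toy 8x8 'image'
edge_filter = np.array([[1.0, -1.0], [1.0, -1.0]])   # crude vertical-edge detector
print(conv2d(image, edge_filter).shape)              # -> (7, 7) feature map
```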

Baidu, Google, DeepMind and Microsoft approached the group, so Hinton set up an auction in a casino at Lake Tahoe. Bids came in over several days and, at $44 million, Hinton chose Google. In retrospect, it was a snip. Other companies then began to build teams, and the networks and datasets got bigger. DeepMind bet $1 million that their system could beat a named master at Go: AlphaGo was trained on human matches, then played itself millions of times in a process of self-supervised reinforcement learning. It got good, very good.

Brains

Hinton, as a psychologist, has remained interested in the inner workings and capabilities of the black box. After quitting his job at Google in 2023 he has become fascinated again with real brains. Our view of the brain as an inner theatre is, he thinks, wrong.

He denies the existence of qualia, the subjective, individual experiences of sensations and perceptions. Qualia refer to the inner, private experiences felt by a person when they encounter sensory stimuli, like the redness of a rose, the taste of honey or the pain of a headache. They are often used in discussions within philosophy of mind to explore the nature of consciousness and the mind-body problem, posing questions about how and why certain physical processes in the brain give rise to subjective experiences that are felt in a particular way. For instance, why does the wavelength of light perceived as red feel the way it does? Qualia are inherently private and subjective, making them difficult to fully describe or measure, so they are often cited in arguments against purely physical explanations of consciousness.

Thomas Nagel, for example, in his seminal paper What is it Like to be a Bat? (1974), argued that there is something that it is like to experience being a bat, which is inaccessible to humans; these experiences are ‘qualia’. He emphasises that an organism has a point of view and that the subjective character of experience is a key aspect of mind. David Chalmers is a more contemporary philosopher of mind, well known for discussing the ‘hard problem’ of consciousness, which directly relates to qualia. He argues that physical explanations of brain processes do not fully account for how subjective experiences occur, indicating the mysterious nature of qualia. Daniel Dennett, although a critic of the traditional concept of qualia, is also pivotal to the debate, as he argues against the notion of qualia as ineffable, intrinsic, private and directly apprehensible properties of experience, challenging their philosophical utility and very existence.

Hinton also has interesting views on AI and creativity. Move 37 was ‘intuitive’ for AlphaGo – it was creative. LLMs, he argues, really do know things. We have around 100 trillion synapses; an LLM has far fewer connections, at around 1 trillion, but LLMs are good at seeing similarities, even analogies, across more than any one person knows, and that is creativity.

Hinton has a computational model of the brain, seeing it as driven by internal models that are inaccessible to us but predictive and Bayesian in nature. This has led him to speculate on the possibility of a ‘mortal computer’, combining brain neurons with technology.

Critique

Hinton's approach, particularly with the development of backpropagation and deep learning, has often been critiqued for lacking biological plausibility. Critics argue that the brain does not seem to learn in the same way that backpropagation algorithms do. For example, the human brain appears to employ local learning rules rather than the global error minimization processes used in backpropagation. Despite these criticisms, Hinton and his colleagues have made efforts to draw more connections between biological processes and artificial neural networks. Concepts such as capsules and attention mechanisms are steps towards more biologically plausible models. Furthermore, the success of deep learning in practical applications suggests that while the methods may not be biologically identical, they capture some essential aspects of intelligent processing.

Influence

Geoffrey Hinton’s views on the brain, as reflected in his work on neural networks and AI, have been both groundbreaking and controversial. While there are valid critiques regarding biological plausibility, computational efficiency, interpretability and societal implications, Hinton’s contributions have undeniably advanced the field. His work continues to inspire and challenge researchers to develop more sophisticated, efficient and ethical AI systems, and has helped propel neural networks to the forefront of AI technology, leading to practical applications used by millions of people daily.

SEE ALSO PODCAST ON CONNECTIONISTS
https://greatmindsonlearning.libsyn.com/gmols6e34-connectionists-with-donald-clark-0

Tuesday, October 08, 2024

An AI provocation! How biased are WE on AI? Fascinating paper…

I work exclusively in this area but, as soon as I mention my work, the mere mention of the two letters ‘AI’ results in an emotional reaction, often expressed as “but surely it’s all biased” or “we’ll lose the ability to think”… alarmist opinions thrown about with little or no evidence or analysis. I wrote about our human biases when first encountering AI in my book 'Artificial Intelligence in Learning', as I’d experienced this so often.

STUDY

So it was interesting to come across this strange but fascinating paper that investigated how bias affects the perception of AI-generated versus human-generated content. (Thanks Rod @rodjnaquin)

They conducted three experiments:

  1. Participants evaluated reworded passages
  2. Summaries of news articles were assessed
  3. Evaluations of persuasive essays were gathered.

Some texts were labeled as either ‘AI Generated’ or ‘Human Generated’; other texts were presented without any labels.

RESULTS

First, in blind tests (unlabeled content), raters could not reliably differentiate between AI and human-generated texts.

With labeled content, things got far more interesting. Participants showed a strong preference for content simply labeled as ‘Human Generated’ over ‘AI Generated’. This preference was over 30% higher for texts labeled as human-created. The same bias persisted even when the labels were intentionally swapped, indicating a preconceived bias rather than an assessment based on content quality.

Oddly, for those who bang on about bias in AI, the study reveals a significant human bias against AI-generated content, not based on content quality but on the label assigned.

CONSEQUENCES

I believe that much of the debate around ethics and AI follows this pattern. As soon as people hear those two letters, their own bias kicks in. People come with confirmation bias around human exceptionalism, the belief that AI can’t match up to human writing skills. This research uncovers those biases and dives into whether people’s preconceptions are messing with their judgements in the realm of writing.

Human biases affect perceptions of AI-generated text, leading people to assume that humans outperform AI in creative writing. The blind tests, with deliberately swapped labels, assessed the depth of that bias.

This really matters, and it is an area worthy of more research, rather than the outpouring of alarmist rhetoric. By shedding light on these biases, we can pave the way for better collaboration between humans and AI, especially in creative fields.

Paper: https://arxiv.org/pdf/2410.03723

 

Saturday, October 05, 2024

AI will not take your job but someone using AI will – it may well replace Doctors?

This paper (Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study by Goh et al.) on ‘diagnostic reasoning’ hasn’t had enough attention. The authors fully expected Doctors plus GenAI to win. But GPT-4 on its own beat the Doctors hands down.

One of the authors made the point that the surprise was that the results broke that oft quoted trope that “AI will not take your job but someone using AI will”.

GenAI, for some time, has been beating medical students hands down on clinical exams. But can it outperform real Doctors?

They used a randomised design, with 50 physicians from various medical institutions. Three approaches were compared. The Doctors were randomised into two groups, then compared to GPT-4 used on its own:

1. Docs + conventional resources
2. Docs + GPT-4 & conventional resources
3. GPT-4 alone

Each had 60 mins to complete up to 6 clinical problems. Their diagnostic reasoning was measured on differential diagnosis accuracy, supporting/opposing factors and their next diagnostic steps.

SHOCK RESULTS

When used WITHOUT human input, GPT-4 scored 15.5 percentage points higher than the conventional resources group, outperforming both the physicians and the hybrid approach.



1. Docs + conventional resources only (73.7%)
2. Docs + GPT-4 & conventional resources (76.3%)
3. GPT-4 alone (89.2%)

Doctors using GPT-4 alongside conventional resources showed only a marginal improvement in diagnostic accuracy over the conventional resources group. The GPT-4 group also took less time per case.

The study not only showed that GPT-4 excels at real-world diagnostic reasoning, it also measured that reasoning through structured reflection, giving richer insights than simple accuracy scores. Remember those complaints about transparency and AI? Well, here we have it.

Sure, it is a limited sample at 50, and it is also puzzling that GPT-4 is better on its own than when used as an aid by the Doctors. Could the Doctors be the confounding factor here? It turns out that what they were often doing was using GPT-4 as a search engine.

GenAI is here to stay in medicine and its diagnostic ability may surpass that of trained Doctors.

CONCLUSION

Misdiagnosis rates among General Physicians stand at around 5%, which sounds worse when you say 1 in 20. Let’s suppose AI on its own, a UNIVERSAL DOCTOR, has a misdiagnosis rate of less than 1%. I’m sure this will happen, now that reasoning has arrived. At that point you’d be a damn fool to go to your Doctor.

Friday, October 04, 2024

Straight from the imagination to the screen.... Meta's video release

This is the promise of AI. And we’re getting there faster than anyone could have imagined. You become a video director, without the eye-watering production costs. GenAI has moved way beyond text into the image, video and audio space.

Meta’s new ‘cast’ of models allow you to create but also edit and personalise images, videos and audio. Personalised means based on your or anyone’s face.


This is another astounding milestone... models that generate:

·     1080p HD

·     Up to 16 secs (16FPS)

·     Different aspect ratios

·     Synchronized audio 48 kHz

·     Instruction-based video editing

·     Personalised videos based on a user’s image


Note that 16 seconds doesn’t sound long, but think of them as takes and they’re actually quite long; TikTok built a global business on videos with a 15-second limit (now longer).


All from text prompts.


https://ai.meta.com/research/movie-gen/


LEARNING

Any teacher can create videos of themselves showing students how to do anything. That’s surely useful, especially in vocational learning but also in science, art, music and many other subjects. Call this personalised teaching.

 

It gives anyone the ability to create short videos (here’s 10 for starters):

Instructional step-by-step procedures

Explainer videos

Trigger videos

Branched scenario-based

Simulation videos

Animations

Video flashcards

Scenes too risky to film

Scenes impossible to film

Short messages from CEO etc

Ads for new initiatives


Long-form video suffers from the transience effect, but there are ways to make it more relevant and effective in learning, beyond the lecture and talking head.


I’ve been involved in many scenario-based video sims and this has just made them attainable on a low budget. Your video production costs on management training sims, such as interviewing, having difficult conversations and so on, have just plummeted.


In performance support, huge numbers of short videos can be created ready for delivery at any moment of need in the workflow.


CONCLUSION

I can see this moving towards longer takes, even drama. To be honest, that may already be here. You’ll see a lot of creative fun stuff, like personalised birthday messages. The personalisation is interesting. Thought also has to go into impersonation and the avoidance of explicit material.