Tuesday, October 15, 2024

GOOGLE GOES NUCLEAR!

Google made the headlines today, signing a groundbreaking deal to power its data centres with six or seven mini nuclear reactors, known as Small Modular Reactors (SMRs). To meet the electricity demands driven by the rise of artificial intelligence and cloud computing, it has ordered SMRs from California-based Kairos Power. This is the first commission of a new type of reactor in half a century; the first is expected to be operational by 2030, with the rest coming online by 2035.

It's a pretty ambitious move, as the company sees nuclear power as a "clean, round-the-clock power source" that can reliably meet its growing energy needs. Michael Terrell, Google's senior director for energy and climate, emphasised that new electricity sources are essential to support the AI technologies fuelling scientific advances and economic growth.

Google isn't alone in turning to nuclear options. Microsoft recently struck a deal to source energy from Pennsylvania's Three Mile Island, reactivating the plant after a five-year hiatus. Amazon also acquired a nuclear-powered data centre earlier this year, signalling a broader industry shift toward embracing nuclear energy.

The UK is also witnessing a competitive push among companies to develop SMR technologies as the government seeks to rejuvenate its nuclear industry. Rolls-Royce SMR recently gained momentum by being selected by the Czech government to build a fleet of reactors. One wonders where the Labour Government is on this: strangely silent?

This could be the start of something quite big, as it taps into the innovation, risk-taking and problem-solving that governments seem to have lost on energy.

Energy and crypto

Another area we should look at is the waste in crypto. I am no pure techno-optimist and have argued against cryptocurrencies for years. They serve no useful purpose and are the purest form of speculation, driven by greed and often fuelled by fraud and crime. Crypto is of no benefit to our species, a plague on our financial system, and should be banned. But do we have an EU Crypto Act? China acted; the West did not. Do we have an army of crypto safety people writing papers and attacking it day and night? No.

Yet its energy consumption far outweighs that of AI. Even back in 2022, crypto consumed as much electricity as a large country like the Netherlands, and it has grown massively since. Even with projections out to 2026, AI's consumption remains well below crypto's.

Odd that we don't see well-funded anti-crypto institutions, safety summits, hard-core legislation (except in China), hundreds of papers and thousands of 'Responsible' anti-crypto 'Safety' bods.

Conclusion

AI is here to stay. It does have energy needs, but these are dwarfed by other wasteful activities, such as crypto. And we are seeing AI help solve the very energy problem it has created. That's why technology matters. We can stare into the abyss of climate change or get on and do something about it.

Friday, October 11, 2024

Robot teaching assistants – I’ve changed my mind…


This is wild. AI is much more than just text generation. It has revived robotics with real dialogue. If this hits the market at sub-car costs, it’s a winner.

Domestic appliances changed the world forever, making washing and drying clothes and dishes much easier, as did vacuum cleaners and central heating, but a gap remained between static tasks and mobile tasks. We still have to put out the rubbish, get things into these machines, dust, clean and all that other domestic crap.

I’ve been a sceptic of robot teachers for some time but am starting to change my mind. Why couldn’t a domestic robot play a role in child rearing: talk to children, encourage their curiosity, extend their vocabulary, even teach them to be polite, say thanks and generally be nice to others?

A robot that encouraged a student to do assignments or homework could work. Then there’s help with learning in or outside of school. A robot that is endlessly patient could at least perform the function of a teaching assistant, ready to help with specific tasks.

I can’t help but conclude that at some point, probably sooner than we think, these robots will be commonplace. As teaching and learning are still largely 'one to many', why shouldn't robots be introduced to pay more attention to individual needs? They could be aware of learning difficulties, such as dyslexia, that an individual learner may have, be sensitive to their personality and know where they are in terms of competence in different subjects.

Then there’s the ability to teach physical tasks. We have stripped this out of the curriculum, but learning to do things would be very cool, whether it is playing a musical instrument, cooking or magic! Sports coaches, too?


Imagine a robot exam invigilator that could block all mobile signals, have a full view of all candidates, hand out and collect papers, and check against a database of cheat devices and methods. Just a thought!

These robots have degrees in every subject, so asking them for help is not a problem. They can read your handwriting, hear what you say, speak in any language, at any level, 24/7. It’s OK not to want this sort of future but it’s a choice. This future has something to offer for all ages. I'm in.

Thursday, October 10, 2024

Learning theorist gets Nobel Prize….

When people use the word ‘AI’ these days they rarely understand the breadth of the field. One horse, GenAI, may have won the Derby by a golden mile recently, but there’s a ton of other stuff in the race.

In the pre-GenAI days, way back in 2014-2021, I used to talk regularly about AlphaFold as an astonishing, measurable breakthrough for our species. This one tool alone remains a monumental achievement, and by far the most important person in the tripartite award is Demis Hassabis.

AlphaFold, developed by DeepMind in 2020, predicts protein structures. It both accelerates and opens up a vast array of research opportunities. DeepMind thrashed the field in the CASP14 competition, outperforming around 100 other teams with a gargantuan leap that shocked everyone.

DeepMind has released a database containing over 200 million protein structures, including structures for nearly all catalogued proteins known to science. The database is FREE to the global scientific community, democratising access to high-quality protein structures.

The productivity gain is mind-blowing. Traditional methods, using incredibly expensive equipment and expertise, took years for just one protein; AlphaFold does it in hours. This allows researchers to focus on further experimentation, not groundwork. It has saved centuries of research.

For example, during the COVID pandemic, AlphaFold predicted structures of proteins related to the SARS-CoV-2 virus, helping accelerate the development of treatments and vaccines. This is generally true across this important and, some feel, neglected field.

Back to Demis Hassabis, the British entrepreneur, neuroscientist and artificial intelligence researcher. A chess prodigy and games designer, he was the lead programmer and co-designer of Theme Park, well known in the games world. After a spell as an academic publishing a series of papers, he started an AI company based on his understanding of how the brain and memory work. That company, DeepMind, was sold to Google in 2014 for $628 million.

Learning (memory) theory

Hassabis focused on the hippocampus, as that is where episodic memory is consolidated. He found, through a study of five brain-damaged patients, that memory loss caused by damage to the hippocampus was accompanied by loss of imagination (the ability to plan and think into the future). This was a fascinating insight, and the link between episodic memory and imagination was backed up by other studies in brain scanning and experiments with rats. He proposed a ‘scene construction’ model for recalling memories, which at scale sees the mind as a simulation engine. He also realised that reinforcement was the real force in learning: practice makes perfect. This focus on the reinforcement and consolidation of learnt practice (deliberate practice, as it is known), when captured and executed algorithmically, generates expertise. It led to him setting up a machine learning AI company in 2010: DeepMind.

Deep Learning algorithms become experts

DeepMind focused on deep learning algorithms that could take on complex tasks, and here’s the rub: without prior knowledge of the task. This is the key point, AI that can ‘learn’ to do almost anything. They stunned the AI community when their system played a number of computer games and became an expert gamer. In Breakout it not only got as good as any human, it devised a technique of breaking round the edge and attacking from above that humans had not encountered. The achievement was astonishing, as the software knew nothing about these games when it started: it looked at the display, saw how the scoring worked and learned from trial and error. The approach takes some aspects of human learning but combines deep learning with reinforcement learning, so-called deep reinforcement learning, to solve problems.
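To make that concrete, here is a minimal sketch in Python (my illustration, not DeepMind's code) of tabular Q-learning, the trial-and-error core that deep reinforcement learning scales up by replacing the lookup table with a deep neural network reading raw pixels. The states, actions and parameters are invented for illustration.

```python
import random
from collections import defaultdict

# Tabular Q-learning: the trial-and-error core that systems like DQN
# scale up by swapping this lookup table for a deep neural network.
Q = defaultdict(float)                  # Q[(state, action)] -> expected future score
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate
ACTIONS = ["left", "right", "fire"]     # illustrative game actions

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Nudge the estimate towards reward + discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Nothing here knows the rules of the game; the score alone, fed in as the reward, shapes behaviour over many episodes, which is exactly the trial-and-error quality described above.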

AlphaGo went on to beat the Go champion Lee Sedol in Seoul 4-1, in the game reckoned to be the Holy Grail of AI, the most complicated game we play. Lee Sedol was playing for humanity. The number of possible board positions is greater than the number of atoms in the universe. AlphaGo was trained on many games played by good amateurs, then improved through self-play. Deep neural networks that mimic the brain, with enormous computing power, trained to perform a task, can go beyond human capabilities. In game two it made a move that no human would have made; it became creative. It learns and goes on learning. Far from seeing this as a defeat, Lee Sedol saw it as a wonderful experience, and Go has never been so popular.

Conclusion

One of the leading companies in the world, where humans have created some of the smartest software in the world, built that success on the back of learning theory, going back to Hebb and his successors. This should matter to learning professionals, as AI now plays a significant role in learning. Software ‘learns’, or can be ‘trained’, using data. In addition to human teachers and learners, we now have software teachers and software that learns. It is not just that a machine can beat a human, but that it can learn to do even better. It is a sign of as yet unknown but astounding things to come in learning. The cutting edge of AI is the cutting edge of learning. His Nobel Prize is well deserved, as his work is of such great benefit to the future of our species.


Wednesday, October 09, 2024

Academia sneering at Hinton's Nobel Prize for Physics shows a level of distasteful jealousy.... he's a genius

Certain parts of academia really hate AI. It's a provocation they can't handle, undermining a sometimes (not always) lazy attitude towards teaching and assessment. AI is an injection of subversion that is badly needed in education, as it throws light on so many poor practices.

Geoffrey Hinton (1948- ) is most noted for his work on artificial neural networks. He applied to Cambridge, was accepted, tried a few subjects and eventually focused on Experimental Psychology. On graduating he became a carpenter for six years but, inspired by Hebb, he formed his ideas in Islington Library and applied to Edinburgh to do a PhD in AI at a time when it was unfashionable.

He then spent time teaching and researching at various institutions, including the University of Sussex and Carnegie Mellon University but it was at the University of Toronto that Hinton contributed significantly to the field of neural networks and deep learning. Hinton's contributions to AI have earned him numerous accolades, including the Turing Award in 2018, which he shared with Yann LeCun and Yoshua Bengio for their work on deep learning.

In 2013, Hinton was hired by Google to work at Google Brain, their deep learning research team. He took part-time status at the University of Toronto to accept this position and is now the chief scientific advisor at the Vector Institute in Toronto, which specialises in research on artificial intelligence and deep learning.

Connections

Geoffrey Hinton claims his interest in the brain arose when he was on a bus going to school, sitting on a sloping furry seat, where a penny actually moved uphill! This puzzled him, and Hinton is a man who likes puzzles, especially around how the brain works. What drove him was the simple fact that the brain was, to a large degree, a ‘black box’.

In California he worked with connectionists to build networks of artificial neurons. The brain has a layered structure, and layered artificial networks began to be constructed. NETtalk was an early text-to-speech neural network; such layered networks improved and progress was steady, but computing power and training data were needed for more substantial advances.

Hinton's research has been pivotal in the development of neural networks and machine learning. His work in the 1980s and 1990s on backpropagation, a method for training artificial neural networks, was groundbreaking. Alongside colleagues Yann LeCun and Yoshua Bengio, Hinton is credited with the development of deep learning techniques that have led to significant advances in technology, particularly in fields such as computer vision and speech recognition.

Backpropagation

The method was set out in the paper by Rumelhart, Hinton and Williams, Learning representations by back-propagating errors (1986). You can climb a hill by feeling around with your foot for the steepest direction, and on you go to the top. Similarly on the descent: you feel around for the steepest step down, and on you go. Gradient descent in neural networks tweaks the weights to lower the error rate, and you do this layer by layer. (If you’re climbing a mountain range with many little peaks, the task is more complex.) This method, the backward propagation of errors, allows neural networks to be trained relatively quickly and easily, so that deep neural networks do well in noisy areas like speech or image recognition.
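As an illustration, here is a minimal backpropagation sketch in Python/NumPy (a toy example, not the 1986 paper's code): a tiny two-layer network learning XOR, with the error signal propagated backwards layer by layer and each weight taking a gradient-descent step downhill. Network size, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

# A minimal backpropagation sketch: a tiny two-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights and biases
lr = 0.5                                       # learning rate (step size downhill)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error signal back, layer by layer.
    err_out = (out - y) * out * (1 - out)   # output-layer error
    err_h = (err_out @ W2.T) * h * (1 - h)  # hidden-layer error

    # Gradient descent: step downhill on every weight and bias.
    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_h
    b1 -= lr * err_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The backward pass is the whole trick: the output error is pushed back through the weights to tell each hidden unit its share of the blame, which single-layer perceptrons could never do.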

Deep learning

Neural networks and backpropagation have had innumerable successes. NETtalk started by babbling, then progressed to almost human-like speech. Stock market prediction was another; self-driving cars benefited too, as in the famous DARPA Challenges of 2004 and 2005. This work has been essential to the progress of deep learning.

With the internet, compute and data became plentiful, and in 2012 the ImageNet competition, which put convolutional neural nets to the test, was easily won by Hinton, Ilya Sutskever and Alex Krizhevsky. Their paper, ImageNet classification with deep convolutional neural networks (2012), changed AI forever.

Baidu, Google, DeepMind and Microsoft approached the group, so Hinton set up an auction in a casino at Lake Tahoe. Bids came in over several days and, at $44 million, Hinton chose Google. In retrospect, it was a snip. Other companies then began to build teams, and the networks and data sets got bigger. DeepMind bet $1 million that their system could beat a named master at Go. AlphaGo learned from human matches, then played itself millions of times in a process of self-play reinforcement learning. It got good, very good.

Brains

Hinton, as a psychologist, has remained interested in the inner workings and capabilities of the black box. Since quitting his job at Google in 2023 he has become fascinated again by real brains. Our view of the brain as an inner theatre is, he thinks, wrong.

He denies the existence of qualia, the subjective, individual experiences of sensations and perceptions: the inner, private experiences felt when a person encounters sensory stimuli, like the redness of a rose, the taste of honey or the pain of a headache. Qualia are often used in philosophy of mind to explore the nature of consciousness and the mind-body problem, posing questions about how and why certain physical processes in the brain give rise to subjective experiences that feel a particular way. For instance, why does the wavelength of light perceived as red feel the way it does? Qualia are inherently private and subjective, making them difficult to fully describe or measure, so they are often cited in arguments against purely physical explanations of consciousness.

Thomas Nagel, for example, in his seminal paper What is it Like to be a Bat? (1974), argued that there is something that it is like to experience being a bat, which is inaccessible to humans; these experiences are ‘qualia’. He emphasises that an organism has a point of view and that the subjective character of experience is a key aspect of mind. David Chalmers, a more contemporary philosopher of mind, is well known for discussing the "hard problem" of consciousness, which directly relates to qualia. He argues that physical explanations of brain processes do not fully account for how subjective experiences occur, indicating the mysterious nature of qualia. Daniel Dennett, although a critic of the traditional concept of qualia, is also pivotal in these discussions, as he argues against the notion of qualia as ineffable, intrinsic, private and directly apprehensible properties of experience. His perspective matters in the debate over qualia because he challenges their philosophical utility and very existence.

Hinton also has interesting views on AI and creativity. Move 37 was ‘intuitive’ for AlphaGo: it was creative. LLMs, he argues, really do know things. We have around 100 trillion synapses; an LLM has far fewer, at around 1 trillion connections, but LLMs are good at seeing similarities, even analogies, across more knowledge than any one person has, and that is creativity.

Hinton has a computational model of the brain, seeing it as driven by internal models that are inaccessible to us but predictive and Bayesian in nature. This has led him to speculate on the possibility of a ‘mortal’ computer, combining brain neurons with technology.

Critique

Hinton's approach, particularly with the development of backpropagation and deep learning, has often been critiqued for lacking biological plausibility. Critics argue that the brain does not seem to learn in the same way that backpropagation algorithms do. For example, the human brain appears to employ local learning rules rather than the global error minimization processes used in backpropagation. Despite these criticisms, Hinton and his colleagues have made efforts to draw more connections between biological processes and artificial neural networks. Concepts such as capsules and attention mechanisms are steps towards more biologically plausible models. Furthermore, the success of deep learning in practical applications suggests that while the methods may not be biologically identical, they capture some essential aspects of intelligent processing.

Influence

Geoffrey Hinton's views on the brain, as reflected in his work on neural networks and AI, have been both groundbreaking and controversial. While there are valid critiques regarding biological plausibility, computational efficiency, interpretability and societal implications, Hinton's contributions have undeniably advanced the field. His work continues to inspire and challenge researchers to develop more sophisticated, efficient and ethical AI systems, and his research has helped propel neural networks to the forefront of AI technology, leading to practical applications used by millions of people daily.

SEE ALSO PODCAST ON CONNECTIONISTS
https://greatmindsonlearning.libsyn.com/gmols6e34-connectionists-with-donald-clark-0

Tuesday, October 08, 2024

An AI provocation! How biased are WE on AI? Fascinating paper…

I work exclusively in this area, but as soon as I mention my work, the mere mention of the two letters ‘AI’ results in an emotional reaction, often expressed as “but surely it’s all biased", "we’ll lose the ability to think", whatever. Alarmist opinions are thrown about with little or no evidence or analysis. I wrote about our human biases when first encountering AI in my book 'Artificial Intelligence in Learning', as I'd experienced it so often.

STUDY

So it was interesting to come across this strange but fascinating paper that investigated how bias affects the perception of AI-generated versus human-generated content. (Thanks Rod @rodjnaquin)

They conducted three experiments:

  1. Participants evaluated reworded passages
  2. Summaries of news articles were assessed
  3. Evaluations of persuasive essays were gathered.

Some texts were simply labeled as either ‘AI Generated’ or ‘Human Generated’; other texts were presented without any labels.

RESULTS

First, in blind tests (unlabeled content), raters could not reliably differentiate between AI and human-generated texts.

With labeled content, things got far more interesting. Participants showed a strong preference for content simply labeled as ‘Human Generated’ over ‘AI Generated’, rating texts labeled as human-created over 30% higher. The same bias persisted even when the labels were intentionally swapped, indicating a preconceived bias rather than an assessment based on content quality.

Oddly, for those who bang on about bias in AI, the study reveals a significant human bias against AI-generated content, not based on content quality but on the label assigned.

CONSEQUENCES

I believe that much of the debate around ethics and AI follows this pattern. As soon as people hear those two letters, their own bias kicks in. People come with confirmation bias around human exceptionalism, the belief that AI can't match human writing skills. This research uncovers these biases and dives into whether people's biases are messing with their judgments in the realm of writing.

Human biases affect perceptions of AI-generated text, leading people to assume that humans outperform AI in creative writing. The blind tests, with deliberately swapped labels, assessed the depth of that bias.

This really matters, and it is an area worthy of more research, rather than the outpouring of alarmist rhetoric. By shedding light on these biases, we can pave the way for better collaboration between humans and AI, especially in creative fields.

Paper: https://arxiv.org/pdf/2410.03723


Saturday, October 05, 2024

AI will not take your job but someone using AI will – it may well replace Doctors?

This paper (Influence of a Large Language Model on Diagnostic Reasoning: A Randomized Clinical Vignette Study by Goh et al.) on ‘diagnostic reasoning’ hasn’t had enough attention. The authors fully expected Doctors plus GenAI to win. But GPT-4 on its own beat the Doctors hands down.

One of the authors made the point that the surprise was that the results broke that oft-quoted trope that “AI will not take your job but someone using AI will”.

GenAI, for some time, has been beating medical students hands down on clinical exams. But can it outperform real Doctors?

They used a randomised design with 50 physicians from various medical institutions. The Doctors were randomised into two groups, then compared to GPT-4 used on its own. Three approaches were compared:

1. Docs + conventional resources
2. Docs + GPT-4 & conventional resources
3. GPT-4 alone

Each had 60 mins to complete up to 6 clinical problems. Their diagnostic reasoning was measured on differential diagnosis accuracy, supporting/opposing factors and their next diagnostic steps.

SHOCK RESULTS

When used WITHOUT human input, GPT-4 scored 15.5 percentage points higher than the conventional resources group, outperforming both the physicians and the hybrid approach.



1. Docs + conventional resources only (73.7%)
2. Docs + GPT-4 & conventional resources (76.3%)
3. GPT-4 alone (89.2%)

Doctors using GPT-4 alongside conventional resources showed only a marginal improvement in diagnostic accuracy over the conventional resources group. The GPT-4 group also took less time per case.

The study showed that GPT-4 not only excels at real-world diagnostic reasoning; it also measured that reasoning through structured reflection, giving richer insights than simple accuracy scores. Remember those complaints about transparency and AI? Well, here we have it.

Sure, it's a limited sample at 50, and it is puzzling that GPT-4 is better on its own than when used as an aid by the Doctors. Could the Doctors be the confounding factor here? It turns out that what they were often doing was using GPT-4 as a search engine.

GenAI is here to stay in medicine, and its diagnostic skill may surpass that of trained Doctors.

CONCLUSION

Misdiagnosis rates among general physicians stand at around 5%, which sounds worse when you say 1 in 20. Let’s suppose AI on its own, a UNIVERSAL DOCTOR, has a misdiagnosis rate of less than 1%. I'm sure this will happen, now that reasoning has arrived. At that point you’d be a damn fool to go to your Doctor.

Friday, October 04, 2024

Straight from the imagination to the screen.... Meta's video release

This is the promise of AI. And we’re getting there faster than anyone could have imagined. You become a video director, without the eye-watering production costs. GenAI has moved way beyond text into the image, video and audio space.

Meta’s new ‘cast’ of models allows you to create, but also edit and personalise, images, videos and audio. ‘Personalised’ means based on your face, or anyone’s.


This is another astounding milestone... models that generate:

· 1080p HD video
· Up to 16 seconds (at 16 fps)
· Different aspect ratios
· Synchronised audio at 48 kHz
· Instruction-based video editing
· Personalised videos based on a user’s image


Note that 16 seconds doesn’t sound long, but call them ‘takes’ and they’re very long; TikTok built a global business on videos with a 15-second limit (now longer).


All from text prompts.


https://ai.meta.com/research/movie-gen/


LEARNING

Any teacher can create videos of themselves showing students how to do anything. That’s surely useful, especially in vocational learning, but also in science, art, music and many other subjects. Call this personalised teaching.


It gives anyone the ability to create short videos. Here are some for starters:

Instructional step-by-step procedures
Explainer videos
Trigger videos
Branched scenario-based videos
Simulation videos
Animations
Video flashcards
Scenes too risky to film
Scenes impossible to film
Short messages from the CEO etc.
Ads for new initiatives


Long-form video suffers from the transience effect, but there are ways to make it more relevant and effective in learning, beyond the lecture and talking head.


I’ve been involved in many scenario-based video sims, and this has just made them attainable on a low budget. Your video production costs on management training sims, such as interviewing or having difficult conversations, have just plummeted.


In performance support, huge numbers of short videos can be created ready for delivery at any moment of need in the workflow.


CONCLUSION

I can see this moving towards longer takes, even drama; to be honest, that may already be here. You’ll see a lot of creative fun stuff, like personalised birthday messages. The personalisation is interesting. Thought also has to go into impersonation and the avoidance of explicit material.