Monday, October 28, 2024

Holy Trinity of AI, Fusion and Quantum computing could create new future

In combination, the individual paradigms of AI, fusion and quantum could change the trajectory of history. The timescales are becoming more certain as milestones are met, releasing more capital for research, investment and actual builds. It now seems possible that in the 2030s and 2040s this holy trinity could save us, as well as give unimaginable leaps in productivity.

Multipliers

There are several multipliers with technology that are often missed when we see technology as the linear progress of singular technologies: writing, printing, steam, oil, electricity, radio, TV, the personal computer, the internet, the smartphone and now AI. They are often presented as a series on a graph. This linear thinking misses combinatorial technology.

Technology begets technology

The iPhone was truly combinatorial, as was the printing press. Gutenberg did not ‘invent’ printing. He brought together several different technologies that had crossed usable thresholds. Movable type, which came from China and Korea, relied on metal alloy development using lead, tin and antimony, along with new casting techniques and precision moulds. The press was adapted from wine and olive presses, with a screw mechanism for applying even pressure. Then there was paper technology, brought from China, using European paper mills that had developed manufacturing processes for linen-based paper production, far more economical than parchment. And don’t forget the new oil-based ink, far more durable than water-based inks, with better adhesion to metal type, that lasted longer on paper. Finally, there was the need for socially agreed standardisation of type: letters, punctuation, capitalisation, type sizes and consistent letter heights. It was the combination of all of these elements, some taking centuries to develop, others quick spurts, that came together in a systematic way to allow mass production with efficient workflows and created printing in the 15th and 16th centuries. This combinatory technology had a profound effect on our culture, politics and the trajectory of history, as well as unleashing our ability to create new technologies.

Will AI hit walls of energy & compute?

Technologies can also combine with other gargantuan leaps to make exponential progress – with much bigger and faster leaps. Some technologies, like microprocessors, have intrinsic exponential progress (Moore’s Law), getting better and faster. There are all sorts of laws that define exponential growth across networks. But one thing that halts growth is the sudden collapse of a technology as it hits a major constraint.

With AI continuing its exponential ascent, many see it hitting the brick walls of energy and/or compute. Yet a push on one technology often results in the pull of another. We see Google, Amazon and others investing in fusion. Google and others have also invested heavily in quantum computing. This can be seen as the Holy Trinity of contemporary tech – AI, Fusion and Quantum.

Fusion

Emission-free energy is a constraint on AI. A stepping stone to fusion, practically if not technologically, is the SMR nuclear reactor. That solves the emissions problem. It does not necessarily solve the energy demand problem.

Fusion comes not from splitting the atom but from fusing atoms. Push hydrogen atoms together to form helium and huge amounts of energy are released. Target gain was achieved recently (more fusion energy out than the energy delivered to the target) and the field now has the brains and money to make fusion happen.

Major tech companies have invested in fusion, which promises unlimited energy at lower cost. Microsoft have a major investment in Commonwealth Fusion Systems and have partnered with ITER for computing support; they are keen on providing compute for start-ups and have several hundred million invested. Google invested in TAE Technologies, who use AI for plasma control research and computational support for fusion modelling; they also have a partnership with the Princeton Plasma Physics Laboratory. Bezos Expeditions invested in General Fusion, AWS is providing computing resources to fusion projects, and Amazon itself has made multiple rounds of funding into Commonwealth Fusion. Eric Lander, who led the Human Genome Project, holds a MacArthur ‘genius’ grant and headed the US Office of Science and Technology Policy under Biden. Following recent breakthroughs, he has become CEO of Pacific Fusion.

Quantum leap

Conventional computing is a constraint on AI. Quantum computing promises to solve in minutes problems that would take conventional machines thousands of years. It changes the whole idea of what is computable. Once again, like DeepMind’s AlphaFold saving centuries of research with one piece of AI software, we have a technology (quantum computing) that could accelerate another technology’s (AI) capabilities. This promises enormous productivity gains in research and problem solving.

AI could be accelerated with quantum neural networks, better pattern recognition and much faster solving of optimisation problems. Climate could be modelled with better weather prediction, climate change simulations, atmospheric modelling and ocean current analysis. Our energy production and grid management could also be far better handled with power flow optimisation, load balancing, grid stability analysis and better renewable energy integration. This reduces the load needed for global use of AI.

Leaps in productivity include cryptography and security, keeping personal and other data safe through quantum key distribution; drug discovery and materials science, through simulating molecular interactions and designing new materials with specific properties, for example in battery development; and supply chain and logistics, with route optimisation across delivery networks, warehouse optimisation and fleet management.

It can also help solve several critical challenges in both fusion and nuclear reactor design: in fusion, plasma physics simulation, materials science and reactor modelling; Small Modular Reactors (SMRs) may also benefit from modelling and materials science. AI solves problems, fusion fuels AI. It is a virtuous circle.

Quantum computers still have limited qubits and high error rates that require error correction, but they exist, and AI companies like Google have significant investments in the field. Incredibly difficult to build, they require super-cooling to temperatures close to absolute zero and are sensitive to environmental interference. This raises the problem of cost and access to quantum hardware. But progress has been made.




Conclusion

In combination, the individual paradigms of AI, fusion and quantum will change the trajectory of history. The timescales are becoming more certain as milestones are met, releasing more capital for research and investment. It now seems possible that in the 2030s and 2040s this holy trinity could save us, as well as give unimaginable leaps in productivity.

Note that AI is here. That's a given. Fusion is a known process (the Sun is one giant fusion reactor, burning 4.5 million tons of mass to energy every second) and quantum computing (known physics) has been built. This is not blue-sky research on theory but a set of design and engineering problems. Those can be solved. Time will tell.



Sunday, October 27, 2024

Solid paper on personalised learning

A solid paper on the advantages of self-paced learning. The promise of AI is to deliver personalised learning, sensitive to the learner’s AR (acquisition rate).

The acquisition rate in learning refers to the speed or efficiency with which a learner acquires new knowledge, skills, or behaviors. It is a measure of how quickly a person can grasp new information or master new concepts. This rate can vary significantly depending on factors such as the complexity of the material, the learner’s prior knowledge, motivation, cognitive abilities, and the methods of instruction being used.

STUDY

Tested different ways of teaching multiplication tables (6s, 7s, and 8s) to 3rd & 4th graders.

Split into three groups:

Group 1: learned 2 facts at a time

Group 2: learned 8 facts at a time

Group 3: learned facts based on their personal "acquisition rate" (how much they could handle)

RESULTS

Key Findings (from the table and text):

Learning time varied a lot: 

2 facts group: 3 minutes

Personal pace group: 7 minutes

8 facts group: 14 minutes


What worked best:

Personal pace method (acquisition rate): 76% of facts remembered

Teaching 2 facts at a time: middle ground (47% remembered)

Teaching 8 facts at once: least effective (only 39% remembered)

One size does not fit all when it comes to learning multiplication facts. The average student could handle learning about 4 new facts at a time. Tailoring the number of facts to each student's learning capacity was most effective. Personalised, self-paced learning works.
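To make the trade-off concrete, here is the same data laid out as a quick back-of-envelope comparison in Python (figures as reported in the study summary above):

```python
# Back-of-envelope comparison using the figures reported above.
conditions = {
    "2 facts at a time":  {"minutes": 3,  "retention": 0.47},
    "personal pace (AR)": {"minutes": 7,  "retention": 0.76},
    "8 facts at a time":  {"minutes": 14, "retention": 0.39},
}

# Personal pace costs more time than the 2-fact drill but retains far more;
# the 8-fact condition costs the most time and retains the least.
for name, d in conditions.items():
    print(f"{name}: {d['minutes']} min, {d['retention']:.0%} retained")
```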

Bloom's lesser known research

Bloom is known for his pyramid (he never drew one and it is hopelessly simplistic) but he researched learning in far more detail. His other research, rarely cited, led him to believe, in Human Characteristics and School Learning (1976), that learners could master knowledge and skills given enough time. It is not that learners are good or bad but fast and slow. The artificial constraint of time, in timed periods of learning, timetables and fixed-point final exams, is a destructive filter on many. The solution is to loosen up on time to democratise learning, to suit the many not the few. Learning is a process, not a timed event. Learning, freed from the tyranny of time, allows learners to proceed at their own pace.

Bloom proposed three things could make mastery learning fulfil its potential:

1. Entry diagnosis and adaption (50%) - diagnose, predict and recommend. David Ausubel made the point that “The most important single factor influencing learning is what the learner already knows”, yet pre-testing is a relatively rare form of assessment.

2. Address maturation (25%) - personalise and adapt. AI can deliver personalised learning based on continuous assessment and adaption.

3. Instructional methods (25%) - match to different types of learning experiences and time. Sensitivity to the type of learning is a far more complex issue than Bloom thought, with his falsely progressive pyramid and tripartite distinction of the Cognitive (knowledge-based), Psychomotor (action-based) and Affective (emotion-based) domains.

https://www.researchgate.net/figure/Retention-and-Efficiency-Data-by-Condition_tbl1_281801350


Saturday, October 26, 2024

Ai-Da: Challenging creativity's boundaries in a post-humanist, post-creation era?

Ai-Da is the world’s first robot artist. She creates art through a combination of algorithms, cameras and robotic movements. Created in 2019, she has become famous (or infamous), exhibiting at the Design Museum and the Venice Biennale. Ai-Da’s art challenges traditional ideas about creativity and the role of the artist, especially as we grapple with the integration of AI into creative fields. This is the sort of subversion I love in art but which the reactionary art world loathes.

Ai-Da raises crucial questions about the nature and boundaries of art. She creates, draws and paints through the use of AI, cameras and robotic precision. Is creativity a uniquely human trait? Is this intersection of art and technology evolving art? Should we be judging the art or the artist?

My own view is that we have become trapped in the late 18th-century Romantic view of authorship: the unique, divinely-inspired, creative spark of the individual. Traditional art since then has valued this mysterious homunculus of creativity. Does Ai-Da’s work instead represent a collaboration between humans and artificial intelligence, a reflection of our current technological era rather than a purely autonomous creation? A stronger proposition is that a machine can be credited as the creative force, even when it lacks personal experiences or emotions to draw from.

Ai-Da is a subversive challenge to the Romantic, human-centric view of creativity as a uniquely human trait. But as I’ve claimed, Ai-Da’s very existence suggests we might be entering a post-humanist era, where machines become active participants in cultural production. I have challenged the Romantic view of aesthetics here.

Postproduction

There is an interesting idea from the French writer Bourriaud that we’ve entered a new era, where art and cultural activity now interprets, reproduces, re-exhibits or utilises works made by others or from already available cultural products. He calls it ‘Postproduction’. I thank Rod J. Naquin for introducing me to this thinker and idea.

Postproduction. Culture as Screenplay: How Art Reprograms the World (2002) was Bourriaud’s essay which examines the trend, emerging since the early 1990s, where a growing number of artists create art based on pre-existing works. He suggests that this "art of postproduction" is a response to the overwhelming abundance of cultural material in the global information age.

The proliferation of artworks and the art world's inclusion of previously ignored or disdained forms characterise this chaotic cultural landscape. Through postproduction, artists navigate and make sense of this cultural excess by reworking existing materials into new creations.

Postcreation

I’d like to universalise this idea of Postproduction to all forms of human endeavour that can now draw upon a vast common pool of culture; all text, images, audio and video, all of human knowledge and achievements – basically the fruits of all past human production to produce, in a way that can be described as ‘Postcreation’.

This is inspired by the arrival of multimodal LLMs, where vast pools of media representing the sum total of all history, all cultural output from our species, have been captured and used to train huge multimodal models that allow our species to create a new future. With new forms of AI, we are borrowing to create the new. It is a new beginning, a fresh start using technology that we have never seen before in the history of our species, something that seems strange but oddly familiar, thrilling but terrifying – AI.

AI, along with us, does not simply copy, sample or parrot things from the past – together we create new outputs. Neither do we merely remix, reassemble or reappropriate the past – together we recreate the future. This moves us beyond simple curation, collages and mashups into genuinely new forms of production and expression. We should also avoid seeing it as the reproduction of hybrids, reinterpretations or simple syntheses.

It should not be too readily reduced to one word, rather prefixed with ‘re-’: to reimagine, reenvision, reconceptualise, recontextualise, revise, rework, revamp, reinterpret, reframe, remodel, redefine and reinvent new cultural capital. We should not pin it down like a broken butterfly with a simple pin, one word, but let the idea flutter and fly free from the prison of language.

We have been doing this on a small scale for a long time under the illusion, reinforced by late 18th and 19th century Romanticism, that creation is a uniquely human endeavour, when all along it has been a drawing upon the past, therefore deeply rooted in what the brain has experienced and takes from its memories to create anything new. We are now, together, taking things from the entire memory of our cultural past to create the new in acts of Postcreation.

Communal future

This new world or new dawn is more communal, drawing from the well of a vast, shared, public collective. We can have a common purpose of mutual effort that leads to a more co-operative, collaborative and unified endeavour. There were historical dawns that hinted at this future: the Library of Alexandria, open to all and containing the known world's knowledge; Wikipedia, a huge, free communal knowledge base. But this is something much more profoundly communal.

The many peoples, cultures and languages of the world can join in this communal effort, not to fix some utopian idea of a common set of values or cultural output, but to create beyond what just one group sees as good and evil. This was Nietzsche’s re-evaluative vision. Utopias are always fixed and narrow dystopias. This could be a more innovative and transformative era, a future of openness, a genuine recognition that the future is created by us, not determined wholly by the past. AI is not the machine; it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.

Some questions

This is not a stated position, merely the start of a complex debate. So here are a few questions.


What defines creativity – can it exist without human emotion or intention? I think it can, especially if it draws on previous experience in language or images, as humans do.

Does the value of art diminish if it is created by an AI rather than a human? No. Ai-Da's paintings are already selling, and slated to sell, for six-figure sums. The art establishment have made their judgement.

Is Ai-Da truly an autonomous creator, or merely an advanced tool shaped by its programming? It is both. It has been shaped by humans but is autonomous to a degree.

How does Ai-Da’s art alter our perception of the boundaries between human and machine creativity? Art moves on and this moves art on. Art has no boundaries. That is the whole point of art, to push boundaries.

Can we still view art as a human-centric endeavour, or is there a shift towards accepting non-human contributors as legitimate artists? This is a shift, albeit an early example of one.

How does Ai-Da’s art reflect broader societal trends towards technological integration and reliance on algorithms in decision-making? It moves us on. We have something concrete to discuss. That’s good.


Thursday, October 24, 2024

Taming the AI & Assessment Conflict: 20 Practical Resolutions

There is a battle underway, with assessors in education and institutions on one side and Generative AI on the other. It often descends into a cat-and-mouse game, but there are too many mice, and the mice are winning. Worse still is when it descends into a tug-of-war, tech showdown or brutal legal collision.


We clearly need to harmonise assessment with our evolving technology and navigate our way through AI and assessment to avoid friction, with some practical solutions to minimise disruption. Assessment is at a crossroads.

Legal confrontations between assessors and the assessed will do great damage to institutions. When Sandra Borch, Norway’s Minister of Research and Higher Education, cracked down on plagiarism, taking a student to the Supreme Court, another student uncovered serious plagiarism in her own Master’s thesis – she had to resign. A more distasteful case was Claudine Gay, President of Harvard, who had to resign after an attack by the right-wing activist investor Bill Ackman. The whole thing turned toxic as people then uncovered plagiarism in the work of his academic wife. As many US kids have lawyers for parents, a slew of cases is hitting the courts, putting the reputations of schools and colleges at risk. This is not the way forward.

Problem

Assessment stands right in the middle of the earthquake zone as the two tectonic plates of AI and traditional education collide and rub up against each other. This always happens with new technology; printing, photocopying, calculators, internet, smartphones… now AI.



We are currently in the position where mass use of AI is common, because it is a fabulous tool, but it is being used on the sly. There is widespread use of unsanctioned AI tools to save time. Learners and employees are using it in their hundreds of millions, yet educational institutions and organisations are holding out, ignoring the issue or doing little more than issuing a policy document. AI is therefore seeping into organisations like water, a rising tide that never ebbs.

The problem has just got far more challenging, as AI agents are here (automation). The Claude Sonnet 3.5 upgrade has just gone 'agentic', in a very practical way. It can use computers the way people do. It can look at your screen, go find stuff, analyse stuff, complete a chain of tasks, move your cursor, click on buttons and type text. To be clear, it understands and interacts with your computer just as you are doing now. This shift in ability means it can do open-ended tasks: sit tests, do assignments, even open research.

In truth…

We need to be more honest about these problems and stop shouting down from moral high horses, such as ‘academic integrity’. Human nature dictates that if people find they are being played, or have a different imperative, they will take paths of least resistance. E.O. Wilson’s observation that we have “Palaeolithic emotions, Medieval institutions and Godlike technology” explains why people take shortcuts, through fear of failure, financial consequences, even panic. It is pointless to continually say 'They shall not pass' if the system has below-par teaching and poorly constructed assessment. We cannot simply blame students for the systemic failures of a system. Education must surely be a domain where teachers and learners are in some sort of harmony.

It doesn't help that we delude ourselves about the past. Cheating has been and still is common. We’re kidding ourselves if we think parents don’t do stuff for their kids at school and in universities. Essay mills and individuals writing assessments, even Master’s theses, exist in their tens of thousands in Nairobi and other places, feeding European and US middle-class kids with essays. Silk cloths with full essays go back to Confucian times. Cheating technology can be bought on the internet, with button cameras and full comms into invisible earpieces. Cheating is everywhere. That's not to condone it, just to recognise that it is ALWAYS a problem, not just an AI problem.

In truth, we also have to be honest and accept that assessment is far too ‘text’ based. Much of it does not assess real skills or performance – even critical thinking. Writing an essay in an exam does not test critical thinking. No one writes critically starting top left, finishing bottom right – that’s why students memorise essays and regurgitate them in exams. Essay setting is easy, actual assessment is hard. We also have to be honest and accept that most educators designing and delivering assessment know little about it.

In the workplace, few take assessment seriously. At best it is multiple choice, or e-learning thinly peppered with MCQs. L&D doesn’t take assessment seriously because it is not driven by credentials, nor does it make much effort to evaluate training. With MCQs you can guess (1 in 4); distractors are often poor or simply distract; they are difficult to write, easy to design badly, often too factual or unreal; they require little cognitive effort and can be gamed (longest answer, opposites etc.). An additional problem is that online authoring tools lock us into MCQs.

Assessments are odd. People settle for 70-80% (often an arbitrary threshold) as tests are seen as an end-point. They should have pedagogic import and give learners momentum, yet there is nothing meaningful on improvement in most assessment and marking. Even with high scorers, full competence is rarely the aim, as a high mark is seen as enough. The aim is to pass the test, not master the subject.

Plagiarism checkers do not work, so DO NOT use detectors. Neither should you depend on your gut – that is just as bad, if not worse. There are too many false positives, and detectors consistently accuse non-native speakers of cheating. Students KNOW this tech better than you. They will always be one step ahead, and even if they are not, there will be places and tools they can use to get round you.

Neither does setting traps for the mice work, like "Include in your work citations from <fictional name>": once the trap is revealed, learners can use the tech to spot the fictional source.

In a study by Scarfe et al., GenAI submissions were seeded into the exam system for five undergraduate modules across all years of BSc Psychology at a UK university. 94% of the AI submissions went undetected, with the AI submissions getting grades half a grade boundary higher than real students.

In a recent survey of Harvard students, some had made different course choices because of AI; others reported a sense of purposelessness in their education. They were struck by the fact that they were often learning what will be done differently in a world of AI. It is not just assessment that needs to change but also what is taught to be assessed. Assessment is a means to an end, assessing what is known, learnt or taught. If what we need to know, learn or teach changes, so should assessment.

Solutions

Calculators generate numbers, GenAI generates text. We now live in a post-generative AI world where this is the norm. Most writing is also now digital, so why are so many exams still handwritten?

Most writing in the workplace is not a postmodern critique of Macbeth but fairly brief, bureaucratic and banal: getting things done by email, comms, plans, docs and reports. Management is currently laden with admin, and GenAI promises to free us from admin to focus on what matters – the goal. It is here to stay because there is a massive need in the real world to raise productivity, on speed and quality, and not get bogged down in redrafting or pretending you are Proust. Why expect everyone to be writers of brilliant prose, when the goal is simply to get things done?

1. We have to move beyond text-only assessment into more performance-based assessments. Kids go to school at 5 and come out up to 20 years later having done little else other than read, write and comment on text. There is this illusion that one can assess skills through text – which is plainly ridiculous. Accept that people use GenAI to improve their writing beyond their own threshold. Encourage them to use AI to make their writing more concise through summarisation. Allow them to critique their own work through AI.

2. Build 'pedagogy' into creating assessments with AI. We have done this by taking the research, on say transfer and action, then building that into the AI assessment creation process. You get better, more relevant assessment items, along with relevant rubrics.

3. Also build good assessment design practices into creating assessments with AI. There are clear DOs and DON’Ts in the design of assessment items. Build these into the AI creation process. Go further and match assessments to quality assessment standards. Believe me, this can be done with AI.

4. Match assessments more closely to what was actually taught. This alignment can be done using AI, identifying gaps, checking representative coverage and flagging weaknesses in emphasis. The documents and transcripts used in teaching, and/or the curriculum, can be used by AI to create better quality assessments.

5. Do more pre-assessments. David Ausubel said “The most important single factor influencing learning is what the learner already knows.” I totally agree, yet it is rarely done. This gives assessment real pedagogic import – it propels or feeds forward into the learning process and helps teachers. These can be created quickly by AI.

6. Let's have more retrieval practice. This can be created quickly by teachers, even by learners themselves. We know that this works better than underlining and highlighting notes. Making the effort to recall ideas and solutions in your own mind helps get stuff into long-term memory.

7. Move away from MCQs towards short open text, assessed by AI. Open text is intrinsically superior as it demands recall from memory rather than identification and discrimination (the correct answer in an MCQ is there on the paper or screen). Open response more accurately reflects actual knowledge and skills.
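As a minimal sketch of what AI-marked open text could look like, here is one possible approach using the OpenAI Python SDK. The model name, rubric and marking prompt are illustrative assumptions, not a recommendation:

```python
# Minimal sketch: marking a short open-text answer against a rubric with an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
# The model name, rubric and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def mark_short_answer(question: str, rubric: str, answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "You are a strict but fair examiner. Mark the answer "
                        "against the rubric. Reply with a score out of 5 and "
                        "one sentence of feedback for improvement."},
            {"role": "user",
             "content": f"Question: {question}\nRubric: {rubric}\nAnswer: {answer}"},
        ],
    )
    return response.choices[0].message.content

print(mark_short_answer(
    "Why does retrieval practice beat re-reading?",
    "Award marks for mentioning effortful recall and long-term memory.",
    "Actively recalling something strengthens the memory trace more than rereading.",
))
```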

8. Move away from authoring tools that lock you into fixed, templated MCQ assessment items. They also template things into rather annoying cartoon content with speech bubbles etc. Here's 25 ways I think bad design makes content and assessments suck.

9. Use more scenario and simulation assessment. These match real-world skills, have more relevance and allow more sophisticated assessment of decision making and other skills. AI can create such scenario and sim assessments, and the content you need to populate the scenarios.

10. On formative assessment, 'test out' more. Testing out means allowing people to progress if they pass the test. They may be able to skip modules, even the entire course. This should be the norm in compliance training, or training full stop, where people get the same courses year after year.

11. Get AI to create mnemonics and question-based flashcards for learners to self-assess, practice and revise. 

12. Get AI to create personalised spaced-practice assessment, so learners can get learning embedded.

13. On formative assessment, use more AI-designed and delivered adaptive and personalised learning. Adaptive learning can be 1) PRE-COURSE: student data or pre-tests define pathways (never use learning styles or preferences). 2) IN-COURSE: continuous adaption, which needs specialised AI software (this is difficult but systems can do it). 3) POST-COURSE: with shared data across courses/programmes, also adaptive assessment and adaptive retention, such as personalised spaced practice and performance support.
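As a toy sketch of the PRE-COURSE case, a pathway can be assigned from a pre-test score. The thresholds and module names below are invented purely for illustration:

```python
# Toy sketch of PRE-COURSE adaptation: route learners from a pre-test score.
# Thresholds and module names are invented for illustration only.
def assign_pathway(pretest_score: float) -> list[str]:
    """Return an ordered list of modules based on what the learner already knows."""
    if pretest_score >= 0.8:
        return ["advanced_topics", "final_assessment"]  # test out of the basics
    if pretest_score >= 0.5:
        return ["refresher", "core_module", "final_assessment"]
    return ["foundations", "core_module", "practice", "final_assessment"]

print(assign_pathway(0.85))  # ['advanced_topics', 'final_assessment']
```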

14. AI-created avatars can be built into assessments. These can be customers in sales or customer care training; employees in leadership, management and specific skills such as recruitment interviewing; or patients in medical education, where you can interact with people in assessments to provide realism.

15. Automate marking. Most lecturers, teachers and trainers have heavy workloads, and many rightly complain about this, so let them focus on teaching, not marking. Automated marking will also give you insights into individual performance and gaps. The Henkel study (2024), in a series of experiments in different domains at Grade levels 5-16 (Key Stage 2/3/4), showed that AI was as good at marking as humans.

16. Make assessments more accessible in language and content. Far too many assessments have overly academic language or assessment items that are too abstract; these can be turned into better expressed and more relevant prose and problems. Use AI to translate overly academic language into readable, vocational prose.

17. AI has revolutionised accessibility through text-to-speech and speech-to-text. It has now provided text and speech chatbots and automatically created podcasts (free in NotebookLM). AI has also given us live captioning and real-time transcription. For dyslexics (5-15% of the population), T2S & S2T, spell checks, grammar assistants, predictive text and voice dictation have been incredibly useful in reducing fear and embarrassment. AI can do wonders in making assessment more accessible.

18. Use AI for data analysis on your assessment data. It is as simple as loading up a spreadsheet and asking questions you want answered.
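For example, a minimal sketch with pandas; the file name and column names ('student', 'item', 'score') are assumptions about what your spreadsheet might contain:

```python
# Minimal sketch: basic analysis of an assessment-results spreadsheet.
# The file and column names ('student', 'item', 'score') are assumptions.
import pandas as pd

df = pd.read_csv("assessment_results.csv")

print(df.groupby("item")["score"].mean().sort_values())   # hardest items first
print(df.groupby("student")["score"].mean().describe())   # spread of student averages
```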

19. Stop being so utopian. Most people at school and University will not become researchers and academics. Don’t assess them as if that is ‘their’ goal.

20. What are the skills left over for education to focus on after you get GenAI into common use in education, the workplace and life? The pat answer is too often soft skills. I disagree. Move deeper into hard expertise and skills, with a broad perspective, that can be enhanced, magnified, even executed and automated by ChatGPT.

Conclusion

AI is moving steadily away from prompting to automation. This is happening in writing, coding, spreadsheet analysis, image creation, even moving image creation. It is happening with avatars and in real-time advanced dialogue as speech.

Just as calculators generate numbers, GenAI generates text. We need to recognise this and change what we expect of learners. Turning it into a cat-and-mouse game will not work; there are too many mice and they're winning.

PS
This is a sort of summary of my talk in Berlin at the ATP European Assessment Conference.

 

Wednesday, October 23, 2024

Agentic AI can now use your computer


About to give a keynote in Berlin about 'AI and Assessment', and as often happens these days, I have to revamp my presentation, as while we slept, another AI release hit the streets. This time from Anthropic.

AI agents are here (automation)
Things have just got more challenging for assessment but, at the same time, exciting for AI. The Claude Sonnet 3.5 upgrade has just gone agentic, in a very practical way.


It can use computers the way people do. It can look at your screen, go find stuff, analyse stuff, complete a chain of tasks, move your cursor, click on buttons, type text, fill in forms. To be clear, it understands and interacts with your computer just as you are doing now. This is practical agency.


It's a bit frightening, as we're starting to see glimpses of automation that are much more powerful than we thought possible. There are all sorts of startups that may get flooded out by this move, as it does what they're trying to do. But it is white-collar work that this automation challenges the most.

Assessment
Anyway, back to assessment. This shift in AI's ability allows it to do open-ended tasks like:
•      Sit tests
•      Do assignments
•      Do open research

Provocations
My challenge to the audience includes:

Assessment is far too ‘text’ based
Cheating has been and still is common
L&D don’t take assessment seriously
Essay setting is easy, assessment is hard
Calculators generate numbers, GenAI generates text

The idea that AI has peaked is nonsense. It is like a rising tide, drowning what were human tasks by automating processes. This is getting more profound and consequential.

Thursday, October 17, 2024

Who put the Silicon in Silicon Valley? The place where rocks were made to think...

My hobby since I was a boy is geology, with a house full of rocks and fossils, a huge geological map of the UK on my kitchen wall and a geological hammer in my car. My other great interest is AI, a long-standing one, since university.

So I’m glad I have lived long enough to see rocks that think. OK, before you attack the verb: they think not in terms of human thinking, but it is clear they can outdo us on many tasks which we ‘think’ of as ‘thinking’. If, like me, you believe in the computational theory of mind, the hardware and software thinks.

Silicon, the rock that thinks, is the second most abundant element on Earth, making up 27.7% of the Earth's crust, commonly found in sand, quartz and various silicate minerals.

In the mid-1950s, among the orchards of Mountain View, California, a revolution took root. Shockley Semiconductor Laboratory was founded in 1956 by physicist William Shockley, co-inventor of the transistor and Nobel Prize winner. His was the first semiconductor company in what would later be known as Silicon Valley, attracting some of the brightest minds in the new field of electronics.

However, discontent was brewing. Shockley may have been a genius, but he was autocratic and abrasive. He stifled creativity and bred frustration. By 1957, tensions had reached breaking point. Eight of Shockley's top engineers and scientists left. The group approached Sherman Fairchild, who saw their potential and agreed to back them, and Fairchild Semiconductor was born in Palo Alto, a pivotal moment in technological history.

Shockley, feeling betrayed, dubbed them the ‘Traitorous Eight’. 

Their success attracted a wave of talent and investment to the region, setting off a chain reaction of entrepreneurship. The ‘Fairchildren’ founded numerous other companies that fuelled Silicon Valley's expansion.

Robert Noyce co-invented the integrated circuit. Jean Hoerni developed the planar process, a manufacturing technique that allowed for the mass production of reliable silicon transistors and integrated circuits. These ignited explosive growth in Silicon Valley.

Intel was founded in 1968 by Gordon Moore and Robert Noyce after they left Fairchild. Eugene Kleiner, another of the eight, co-founded Kleiner Perkins, a venture capital firm that became instrumental in funding and nurturing countless tech startups, including giants like Google and Amazon.

The Traitorous Eight championed an innovation culture that valued flat structures, minimising bureaucracy to allow creativity and engineering prowess to flourish. The emphasis was on risk-taking and entrepreneurship, where bold ideas and collaborative effort could lead to monumental success.

Silicon Valley remains the powerhouse of tech. Almost all of the innovation in AI came, and still comes, from that one place. We had the recent example of a bunch of smart people at OpenAI eventually breaking out to form new enterprises. This is what made the Valley great. It is why it remains the powerhouse of global tech.

Screentime - another in the Sisyphean cycle of technology panics?

Josh MacAlister is a new Labour MP. As with many newbies, he’s keen to make his mark with legislation and has proposed a Bill that would:

• Raise the minimum age of "internet adulthood" (to create social media profiles, email accounts, etc) from 13 to 16

• Legally ban smartphones from classrooms

• Strengthen Ofcom's powers to protect children from apps designed to be ‘addictive’

• Commit the government to reviewing, if needed, further regulation of the design, supply, marketing and use of mobile phones by children under 16

We have a problem

I have been following the screentime debate since 2009, when I read Susan Greenfield’s scaremongering book, in which she claimed screentime was making us cognitively stupid. I blogged about it then and the debate has only got worse. Year after year, potboiler self-help books appear demanding we digitally detox, limit screen time and ban screens in schools. How we deal with technology, especially around children, is an important issue, but it is so often reduced to self-help platitudes.

I thought then, and think now, that the idea that smartphones damage our cognitive systems suggests that the evolved mind is so delicate that it could be damaged by a switch in modality. We don’t say this about books or the cinema. The evolution of our minds has taken millions of years of selection; if it were that easy to make us stupid, we’d never have got here.

I’ve seen plenty of people make money by writing about the dangers of ‘screentime’. Whether it’s smartphones, video games or social media, there’s always some moraliser who wants to tell us to digitally detox (it doesn’t work) and what to do with our time. Susan Greenfield was one, Jean Twenge another – there’s a long list. You can’t help but feel they start with an almost religious zeal and end up preaching.

The story they tell themselves is ‘screentime bad, f2f good’. Yet there’s rarely any real definition of what is meant by ‘screentime’. It is a complex issue. Neither is there much breadth to the research they quote – often the same cherrypicked pieces, mainly surveys that show correlation and weak effects, sometimes neutral, even positive! Turns out the evidence that screentime is harmful is as thin as gruel.

Unlocked

So I found myself racing through the book ‘Unlocked’ by Pete Etchells, a psychologist who is an expert in the field.

He claims there is almost no evidence to say that screens are bad for us. On the contrary, up to a certain limit, the use of social media correlates with wellbeing, and that some is better for us than none. And where there are negative correlations, such as that between social media and depression, or the amount of time we sit at a computer each day and our sense of our overall wellbeing, they are almost vanishingly weak. 

Our children already inhabit a landscape that is unrecognisable in the context of an earlier version of childhood. But this isn’t something to be afraid of - and isn’t something we should feel guilty about. Screens are ubiquitous and here to stay.

There is a problem with ‘screentime’, as there are lots of different types, with different uses, in different contexts. Etchells thinks we have nothing to fear, and a great deal to gain, by establishing a positive relationship with our screens (and our children’s screens) and thinking about screen time sensibly and critically. Screentime is NOT the key driver behind apparent declines in mental health and wellbeing. People tend to bring their own biases to the ‘screentime’ debate, so we rush to conclusions and point the finger at the nearest candidate. Indeed, the Royal College of Paediatrics and Child Health came to the same conclusion: there was no clear evidence for a toxic effect of screentime.

Distraction, attention & sleep

It turns out the research on attention and distractibility, Parry and Roux (2021), is incredibly weak. South Korea tried banning online gaming for young people between midnight and 6am – it actually increased the amount of time they spent on the internet during the day! And don't fall for the blue light arguments – they are not true. One study from Montana University (2022) showed blue-blocking glasses reducing the amount of sleep; another showed no differences in subjective sleepiness the morning after. Other research showed small effects. The research is not worth losing sleep over.

Digital detox

Clearly derived from the dieting industry. Shaw (2020) looked at studies in this area and found that few collected real data from smartphones and devices – they are almost all questionnaires. Her clever experiments showed that people tended to ‘report’ mental health issues when asked: the results tripled and quadrupled when surveys were used, as opposed to data collection. Thomee (2018) showed that 70% of studies on screen time relied on questionnaires and not real data from devices. There is a puritanical strain in all of this – wanting to control others.

Addiction

The debate is not helped by calling it an ‘addiction’. Etchells explains why this is medically wrong. Equating smartphone use to heroin is not helpful. Let’s make this clear – you are not ‘addicted’ to your smartphone. Technology is not a pharmaceutical, and when we get the reductive talk about dopamine, one should really despair. That is far more complex than shallow PPTs at learning conferences make out. Talk of addiction is overused and implies a lack of agency, as if it were a biological phenomenon.

The 2017 article by Jean Twenge (picked up by Haidt) was the catalyst for the panic. It was based on her work with data sets, which did indeed show correlations, but the results were weak (on the -1 to 1 correlation scale, 0.01 to 0.06). The problem with correlations, like ice cream sales and crime (both go up in summer), is they don’t tell us much about causality. Odgers & Jensen (2019) showed mixed results in the studies looking at the connection between mental health and screentime – some positive, some negative, some neutral. Even in the positive studies the results were weak. They note the difficulty of establishing a link on such a multivariant topic. A further meta-study by Ferguson (2021) concluded there was no established proof of a link between smartphones, social media and mental health. They also noted an absence of rigour in the studies.

Conclusion

Orben called it the ‘Sisyphean cycle of technology panics’. I’ve been through many. We need to look at the evidence and stay calm. Screentime is an unhelpful concept, and Etchells recommends looking at screen ‘habits’ instead. There are problems with misuse, and certainly harm to children through inappropriate use and content. It is important to be precise and not give in to angry, knee-jerk reactions. Don't buy into these narratives assuming they’re all true. We have a long history of politicians trying to make this claim, from Foulkes in 1981, who had a bill called ‘Control of Space Invaders’. It was narrowly defeated. MacAlister is the new Foulkes.

 

Tuesday, October 15, 2024

GOOGLE and AMAZON GO NUCLEAR!

Google made the headlines today, signing a groundbreaking deal to power its data centres with six or seven mini nuclear reactors, known as Small Modular Reactors (SMRs). To meet the electricity demands driven by the rise of artificial intelligence and cloud computing, it has ordered SMRs from California-based Kairos Power. This is the first commission of a new type of reactor for half a century; the first reactor is expected to be operational by 2030, the rest coming online by 2035.

Pretty ambitious move, as the company sees nuclear power as a "clean, round-the-clock power source" that can reliably meet its growing energy needs. Michael Terrell, Google's senior director for energy and climate, emphasised that new electricity sources are essential to support AI technologies fueling scientific advances and economic growth.

Nuclear power provides a consistent and reliable source of carbon-free electricity 24/7. Unlike solar and wind, which are variable and depend on weather conditions, nuclear energy can meet continuous electricity demands, crucial for powering data centres and AI tech that needs uninterrupted energy supply. This allows for more predictable project delivery and deployment in a wider range of locations. Also, the smaller size and modular design of SMRs shorten construction timelines and lower costs. This all makes nuclear energy more accessible and economically viable. Google already uses a ton of solar, wind and geothermal – it ain't enough. So they've given Kairos a commitment to buy if they meet their milestones.

Amazon is paying actual money through its partnership with, and investment in, X-energy. It has two other nuclear deals, with Energy Northwest and Dominion Energy.

Google and Amazon are not alone in turning to nuclear options. Microsoft recently struck a deal to source energy from Pennsylvania's Three Mile Island, reactivating the plant after a five-year hiatus. Amazon also acquired a nuclear-powered data centre earlier this year, signalling a broader industry shift toward embracing nuclear energy.

The UK is also witnessing a competitive push among companies to develop SMR technologies as the government seeks to rejuvenate its nuclear industry. Rolls-Royce SMR recently gained momentum by being selected by the Czech government to build a fleet of reactors. One wonders where the Labour Government is on this – strangely silent.

This could be the start of something quite big, as it taps into the innovation, risk taking and problem solving that Governments seem to have lost on energy.

Energy and crypto

Another area we should look at is the waste in Crypto. I am no pure techno-optimist and have argued against Cryptocurrencies for years. It serves no useful purpose and is the purest form of speculation, driven by greed, often fuelled by fraud and crime. It is of no benefit to our species, a plague on our financial system and should be banned. But do we have an EU Crypto Act? China did it, the West did not. Do we have an army of Crypto safety people writing papers and attacking it day and night – no.

Yet its energy consumption far outweighs that of AI, even with projections to 2026. Even back in 2022, crypto consumed as much energy as a whole country, the Netherlands, and it has grown massively since.

Odd that we don't see well-funded anti-crypto institutions, safety summits, hard-core legislation (except China), hundreds of papers and thousands of 'Responsible' anti-Crypto 'Safety' bods?

Calculations for conference goers

If you're attending a conference by car, you're emitting roughly 400 grams of CO₂ per mile. A transatlantic flight from New York to London emits about 1 to 1.5 metric tons of CO₂ per passenger, depending on the aircraft, occupancy and efficiency (business and first-class passengers may have a higher CO₂ footprint due to more space and lower passenger density). That is for a single one-way flight; a return flight would be around 2,200,000 grams. A European flight is about 140,000 grams. A cup of coffee is 170 to 550 grams. A GenAI request is 0.1 to 1 gram.
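Putting those figures side by side (a rough sketch using the approximate numbers quoted above; since a GenAI request is about a gram at the upper end, the gram counts double as request counts):

```python
# Rough comparison using the approximate figures quoted above (grams of CO2).
emissions_g = {
    "one GenAI request": 1,                   # upper end of the 0.1-1 g range
    "cup of coffee": 550,                     # upper end of 170-550 g
    "100 miles by car": 40_000,               # at roughly 400 g per mile
    "European flight": 140_000,
    "transatlantic return flight": 2_200_000,
}

for activity, grams in emissions_g.items():
    print(f"{activity}: ~{grams:,} g CO2 (~{grams:,} GenAI requests)")
```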

Conclusion

AI is here to stay. It does have energy needs, but these are dwarfed by other wasteful activities, such as crypto. We are seeing AI help solve the very problem it has created. That's why technology matters. We can stare into the abyss of climate change or get on and do something about it.