Saturday, December 16, 2023

Babbage - genius but of little causal significance in the history of computers

Charles Babbage (1791 - 1871) was a colourful British mathematician, inventor, and mechanical engineer. He made significant contributions to the field of computing through his pioneering work on the design of mechanical computers. 

A mathematician of great stature, he held the Lucasian Chair at Cambridge once held by Newton and received substantial Government funds to build a calculating Difference Engine, funds he used to go further and develop an Analytical Engine, more a computer than a calculator.

 

Machine to mind

For the first time we see, albeit in a still-mechanical product of the Industrial Revolution, the move from machine to mind. Babbage saw that human calculations were often full of errors and speculated whether steam could be used to do such calculations. This led to his lifetime focus on building such a machine.

Babbage had been given government money to conceive and develop a mechanical computing device, the Difference Engine. He designed it in the early 1820s and it was meant to automate the calculation of mathematical tables, basically a sophisticated calculator that used repeated addition. He did, in fact, go on to design a superior Analytical Engine, a far more complex machine that shifted its functionality from calculation to computation. Conceived by him in 1834, it was the first programmable, general-purpose computational engine and embodied almost all the logical features of a modern computer. Although a mechanical computer, it featured an arithmetic logic unit, control flow with conditional branching, and memory, which he called the 'store'. We should note that it was decimal rather than binary, but it could automatically execute computations.
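To make 'repeated addition' concrete, here is a minimal Python sketch (my own illustration, not Babbage's notation) of the method of finite differences that the Difference Engine mechanised: once the initial differences of a polynomial are set up, every further table value needs only additions.

def difference_table(f, start, degree):
    """Initial column of finite differences of f at x = start."""
    values = [f(start + i) for i in range(degree + 1)]
    diffs = [values[0]]
    while len(values) > 1:
        values = [b - a for a, b in zip(values, values[1:])]
        diffs.append(values[0])
    return diffs  # [f(start), first difference, second difference, ...]

def tabulate(f, start, degree, count):
    """Generate successive values of f using repeated addition only."""
    d = difference_table(f, start, degree)
    out = []
    for _ in range(count):
        out.append(d[0])
        for i in range(degree):      # add each higher difference into the one below it
            d[i] += d[i + 1]
    return out

f = lambda x: x * x + x + 41         # hypothetical example polynomial
print(tabulate(f, 0, 2, 6))          # [41, 43, 47, 53, 61, 71]
print([f(x) for x in range(6)])      # the same values, computed directly

The tabulated values match direct evaluation of the polynomial, with no multiplication inside the loop, which is exactly why the approach suited a machine built from adding mechanisms.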

Although this was an astonishing achievement, neither engine was fully built during his lifetime; nevertheless, Babbage's designs laid the groundwork for future developments in computing. His son Henry Babbage did build a part of the Difference Engine as a trial piece, completed before Babbage's death in 1871. A full Difference Engine was built in 1991 from materials that were available in Babbage's day and to tolerances achievable at the time. It weighs in at 5 tons, with 8,000 moving parts. Both can be seen in the Science Museum in London.

Critique

Babbage was a prickly character who alienated many, especially in the Government that generously funded his work. In a tale that has become common in UK computing, theory was never turned into practice; those developments eventually came from the US. In the end he failed, but from the drawings alone Lady Byron called it a 'thinking machine', and her daughter Ada Lovelace asked for the 'blueprints' and became fascinated by the design and its potential.

Influence

Although described as the 'father of computers', there is no direct, causal influence between Babbage and the development of the modern computer. Babbage's designs were not seriously studied until the 1970s, so the modern computer cannot be a direct descendant of those designs. It was the pioneers of electronic computers in the 1940s who were the true progenitors of modern computers. There is a much stronger case for the idea that it was Hollerith and his census machines that had the real causal effect.

 

Bibliography
Swade, D., 2001. The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Viking Penguin.

Hyman, A., 1985. Charles Babbage: Pioneer of the computer. Princeton University Press.

In Our Time, Ada Lovelace, featuring Patricia Fara, Senior Tutor at Clare College, Cambridge; Doron Swade, Visiting Professor in the History of Computing at Portsmouth University; John Fuegi, Visiting Professor in Biography at Kingston University.

 https://www.bbc.co.uk/sounds/play/b0092j0x

Ada Lovelace - insightful but full of surprises...

Ada Lovelace (1815 - 1852) died at the age of 36 but had significant insights into computer science. She was the daughter of the poet Lord Byron and Lady Byron, but her parents parted only weeks after her birth. Her mother was interested in mathematics and in social movements; she established a series of schools and helped establish the University of London. It was she who ensured that Ada got a good, disciplined education in both science and mathematics.

After attending a Babbage soiree in London, where her mother described Babbage's engine as a 'thinking machine', mother and daughter went on a tour round the Midlands, where they saw the Jacquard loom. This was to inspire a series of thoughts, in the form of notes from Ada, on the potential of the Analytical Engine that Babbage had invented.

The mathematician Hannah Fry describes Ada as intelligent but also “manipulative and aggressive, a drug addict, a gambler and an adulteress!”

Analytical engine

Ada then collaborated closely with the mathematician and inventor Charles Babbage, who invented what some regard as the first modern computer - his Analytical Engine. This resulted in her translation, from French to English, of an article about the engine written by the Italian mathematician Luigi Federico Menabrea (a future Prime Minister of Italy), to which she added extensive notes and annotations. These notes were three times as long as the original essay, were published in Scientific Memoirs Selected from the Transactions of Foreign Academies of Science and Learned Societies in 1843, and contained some seminal ideas on computing.

Programming

In these notes she described the potential for machines to perform operations beyond simple arithmetic calculations. In one note she set out an algorithm for calculating Bernoulli numbers, which some consider to be the world's first computer program, although recent scholarship has cast doubt on this. It is a detailed, tabulated set of sequential instructions that could be input into the Analytical Engine, and it demonstrated her understanding of how machines could be programmed to perform various tasks, a fundamental concept in computer science and AI.

Insightful though her notes were, she was not a top-flight mathematician, and the supposed computer programme was really a sort of pseudocode with mathematical expressions. Some regard the claim that she wrote the first computer programme as exaggerated, and it was never executed on any machine as an actual programme. As the Babbage scholar Doron Swade, who led the building of Babbage's Difference Engine No. 2, argues, the concept and principle of a computer programme for this machine was actually Babbage's idea, as his notes of 1836/37 predate those of Lovelace, although her insights on computation beyond mathematics were absolutely original.
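For illustration only, and as a modern paraphrase rather than a reproduction of the Note G table, here is the kind of step-by-step Bernoulli calculation the engine was intended to run, sketched in Python using the standard recurrence (the sign and numbering conventions here are modern, not Lovelace's own).

from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n from the standard recurrence
    B_m = -1/(m+1) * sum over j < m of C(m+1, j) * B_j.
    A modern illustration, not Lovelace's Note G table."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))      # exact rational arithmetic throughout
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']

The point of the sketch is simply that the whole calculation reduces to a fixed, repeatable sequence of arithmetic steps, which is what the Analytical Engine was designed to carry out.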

From calculation to computation

The notes contained the idea that instructions (programs) could be given to these machines to perform a wide range of tasks, making her a pioneer in the 'concept' of computer programming. Accomplished in embroidery, she described the possibility of input through punched cards, similar to the method used on the Jacquard loom. This loom was invented by Joseph-Marie Jacquard in the early 19th century and revolutionised the textile industry by allowing the automated production of intricate patterns in fabrics. Punched cards were used for patterns, each hole being an on/off switch, one card per row in a column of sequenced cards, a technique later used on mainframe computers in the 20th century.
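A toy illustration of that on/off idea (a made-up pattern, not an actual Jacquard card set): each card is one row of holes, and the sequence of cards drives the pattern row by row.

# Each string is one punched card: '1' = hole (hook lifted), '0' = no hole.
cards = [
    "10011001",
    "01100110",
    "10011001",
    "01100110",
]

for card in cards:      # one card per row of the weave, read in sequence
    print("".join("#" if hole == "1" else "." for hole in card))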

Babbage saw his machines as dealing with numbers only. Lovelace saw that we could see such machines as doing not just calculation but computation. Numbers can represent other things, representations of the world, and she speculated that computers could be used to create outputs other than mathematics, such as language, music and design. She understood that machines could be programmed to generate creative works. This anticipation of the creative potential of machines aligns with the field of generative AI, which focuses on developing algorithms that can produce creative content such as music, art and literature. This was her main insight, although there is no direct causal influence between her work and these developments.

Education and learning

Her views on education aligned with her own experiences and those of her mother, Lady Byron. She received an extensive education in mathematics and science, which enabled her to mix with other intellectuals and practitioners and to contribute ground-breaking insights to the field of computing. She was an advocate for the intellectual and educational development of women and believed in providing women with opportunities for education in mathematics and the sciences, which was uncommon at that time. Lovelace's passion for learning and her advocacy for education for all, regardless of gender, continue to inspire educators and learners today.

 

Critique

Her role in inventing either the idea of computer programming or the first computer programme seems to have been quashed. The said programme was, of course, never used in the Analytical Engine, as it was never turned into actual code and the Analytical Engine was never built.

There was no real causal influence here on modern computing, no real continuity between Lovelace and modern computing. This is a post hoc legend rather than a matter of history. Turing read her notes and admired her insights, and although one can argue that her influence came through Turing, who was at Bletchley Park, and that she influenced the Colossus machine, which decoded German ciphers, the causal link is tenuous and unproven. There is no direct, causal trail to modern computers, even through Babbage, as Babbage's designs were not seriously studied until the 1970s, so the modern computer is not actually a direct descendant of Babbage's ideas. It was the pioneers of electronic computers in the 1940s who were the true progenitors of modern computers.

Influence

Ultimately, says Hannah Fry, her contribution was in seeing that computation was more than calculation, yet “Her work… had no tangible impact on the world whatsoever.” Nevertheless, Lovelace's passion for learning and her advocacy for education for all, regardless of gender, continue to inspire educators and learners today. The Ada Lovelace Institute in the UK is a good example of this legacy and a dozen biographies were published on the 200th anniversary of her birth in 2015.

Bibliography

Notes https://maa.org/press/periodicals/convergence/mathematical-treasure-ada-lovelaces-notes-on-the-analytic-engine

Hollings, C., Martin, U. and Rice, A., 2018. Ada Lovelace: The Making of a Computer Scientist. Oxford: Bodleian Library.

In Our Time, Ada Lovelace, featuring Patricia Fara, Senior Tutor at Clare College, Cambridge; Doron Swade, Visiting Professor in the History of Computing at Portsmouth University; John Fuegi, Visiting Professor in Biography at Kingston University.

 https://www.bbc.co.uk/sounds/play/b0092j0x

Hannah Fry https://www.bbc.co.uk/programmes/articles/3jNQLTMrPlYGTBn0WV6M2MS/not-your-typical-role-model-ada-lovelace-the-19th-century-programmer?ns_mchannel=social&ns_campaign=bbc_radio_4&ns_source=facebook&ns_linkname=radio_and_music

Friday, December 15, 2023

Google have several cards up their sleeve...

What can we expect from Google’s Gemini?

Google have several cards up their sleeve on Generative AI:

  1. They invented it!
  2. Awesome AI resources
  3. Google Search
  4. Google Scholar
  5. Google Books
  6. YouTube
  7. Google Translate
  8. Google Maps
  9. Global reach, data centres and delivery
  10. Above all they have DeepMind

 

Gemini is their big response to OpenAI and comes as a family of foundation models, at three sizes:

 

Ultra – professional version of Bard

Pro – available through Bard and Google products

Nano – on-device, mobile version

 

An important word here is ‘multimodal’, as their model has been trained on and is capable of processing most media types, including text, images, video, audio and code. So far Generative AI has largely been a 'calculator for words'... but this lifts it into another realm. It is multimodal from the ground up (input not necessarily output). 
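As a rough sketch of what 'multimodal in, text out' looks like in practice, here is a call to the google-generativeai Python client much as documented at launch; the model name, the file used and the exact method details are assumptions and may have changed since.

# Rough sketch: image + text in, text out.
# Based on the google-generativeai client as documented at launch;
# model name and method details are assumptions that may have changed.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")               # placeholder key
model = genai.GenerativeModel("gemini-pro-vision")

image = PIL.Image.open("sales_chart.png")             # hypothetical local image
response = model.generate_content(
    ["Describe the main trend in this chart in two sentences.", image]
)
print(response.text)                                  # the output is still text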


However, one must be careful in making the assumption that this moves it more towards how humans think. Our multimodality is quite different, especially the ways we deal with sight and sound. There is little crossover.


They make claims about 'sophisticated reasoning' and its ability to process complex information in different formats, as well as the analysis and extraction of data from research articles, distinguishing the relevant from the irrelevant. On benchmarking, it has already outperformed human experts on MMLU (massive multitask language understanding, covering topics like maths, history, law and ethics), at 90.0% against GPT-4's 86.4%.

 

This is all very exciting, as it shows how competition is accelerating progress. Google has taken a cautious approach by rolling out these products across next year. Despite their ability to read multimedia, the Gemini models will not, at least initially, produce images or videos.


Unlike OpenAI, they have an advertising business to protect and need to be careful. They know that the writing is on the generative wall for search. As usual, they are going for an integration-into-products approach, a rising tide rather than a single flood. That seems wise. After a tumultuous year, we are entering a more stable product-development phase. Google will not see any Board bun-fights, although there will be some surprises in store for sure.


Google have a universal, global mission: to harness our cultural information and make it available to all. Year one was revolution; year two will be evolution.


Watch video here.


Wednesday, December 13, 2023

Chatty robots are here...

A number of domestic and workplace robot companies have been funded; some have already failed, but two stand out as being well funded because they have shown remarkable progress. If any are successful, and success means different things in different markets, there will be huge demand. Domestic robots are one thing, but functional working robots are quite another. Increases in productivity, along with zero salary costs, would create a radical shift in real-world, vocational jobs.

There have been ridiculous pretenders like Sophia, basically chatbots in cheap bits of plastic, but two stand out: Figure and Tesla's Optimus.

Figure

Figure have been first to bring a general-purpose robot to life, a superb robot integrated with OpenAI's ChatGPT. The dialogue is fascinating, with visual recognition of people and objects, interpretation of incoming speech, completion of tasks and what appear to be explanations of its own behaviour. There is a little latency, suggesting this could be a problem, but that doesn't matter much for now. One can imagine open-source, local models running inside the robot (models are not that big), with developing functionality around dialogue.




It is dialogic in the full sense of the word, using dialogue in language as well as a dialogue of recognition and action with the real world. Note that there is a limited set of real-world actions, limited by what the robot can and should do. Problems are broken down and turned into specific plans within certain constraints.

The demo is at normal speed and uses AI, namely an end-to-end neural network. We now see the integration of image recognition (seeing) and sound recognition through microphones with speech, then the interpretation, planning and action. It uses a pre-trained multimodal model (we don't know which - probably better than GPT-4 - one specific to robotics) with language recognition and generation, along with actions. These robots are likely to use specific multimodal models.

Image recognition here is both recognition and reasoning. It also sounds like a human, as the speech-generation software handles this. It then moves and performs actions: it keeps its balance, makes smooth movements, and grasps, moves and places objects as a stand-alone robot.

Very impressive.
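The pipeline described above can be caricatured as a perceive-plan-act loop. Here is a hedged Python sketch of the control flow only: every function and skill name is a made-up stub, not Figure's or OpenAI's actual stack, but it shows how a multimodal model's plan can be constrained to a closed set of safe skills before anything is executed.

# Hedged sketch of a perceive -> plan -> act loop.
# Every function here is a hypothetical stub, not a real robotics API.
from dataclasses import dataclass

SKILLS = {"pick_up", "place", "hand_over", "describe_scene"}   # closed, safe action set

@dataclass
class Step:
    skill: str
    target: str

def transcribe_audio() -> str:                       # hearing: speech -> text (stub)
    return "Can you give me something to eat?"

def describe_scene() -> str:                         # seeing: image recognition + reasoning (stub)
    return "An apple and a cup are on the table."

def ask_multimodal_model(transcript: str, scene: str) -> list[Step]:
    # A real system would prompt a pre-trained multimodal model here and parse its reply.
    return [Step("pick_up", "apple"), Step("hand_over", "apple")]

def execute(step: Step) -> None:
    print(f"executing: {step.skill}({step.target})")  # real robot: balance, grasp, move, place

def control_loop() -> None:
    transcript = transcribe_audio()
    scene = describe_scene()
    planned = ask_multimodal_model(transcript, scene)
    for step in planned:
        if step.skill in SKILLS:                      # constrain to what it can and should do
            execute(step)
    print("I picked up the apple and handed it to you.")  # spoken explanation of behaviour

control_loop()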

Optimus

Musk has stated that his robot business could outperform his car business. Potentially, he is right. Robots change the nature not just of driving but of work itself - initially unsafe or mundane tasks. This would be an enormous shock to the economy, as 24-hour labour becomes cheap, plentiful and scalable.

He has told employees that Tesla could become a $10 trillion company on the back of this. The robot is the same size as a human, so it fits into existing work contexts as we do.

It is still 2023 and progress by Tesla is phenomenal. Neck movements really bring it to life, and tactile sensing on the fingers can deal with an egg. The hands are where the action is; walking and running are a waste of time if you can't do things. Five fingers, all the same length, make manufacturing and maintenance easier. The dexterity is amazing. Within a year, Musk has tweeted, it will thread a needle.

There is nothing in the head, as it can all be in the body. The head is just a camera and possibly a display screen. The point is not to wholly mimic a human. Airplanes do not wholly mimic birds; the design is driven by needs.

The fact that it can walk shows they're after any domain where humans work and operate. We'll see fixed robots (factories), wheeled robots (limited areas) and legged robots, I'm sure. Walking now has toe articulation, a real advance. It looks slow, but speed may not be the key issue; most people are fairly stationary most of the time. It can squat and dance. Injection-moulded body parts reduce weight and costs.

It's all in the actuators. An actuator is a device that converts energy (typically electrical, hydraulic or pneumatic) into mechanical motion. This is the trick: to make these of high enough quality and easy to manufacture.

It will, of course, talk and listen. You will be able to tell it what to do, and it will have chatbot functionality.

I can see these being your butler, something you can chat to, a companion. But there are thousands of potential real-world applications in the workplace. Think care homes, hospitals, retail.

Go further and think Teachers and Doctors?

Release date 3-5 years?




‘Engines of Engagement’ is a curious (authors’ description) book about Generative AI.


Julian Stodd is the progenitor of this project. If you know Julian, as I do, you'll know this will be an interesting read. Along with his co-authors Sae Schatz and Geoff Stead, he has come up with something wonderful.


The book is fluid, a bit like a large language model. It has avoided the fixity of most books on the subject and is honest about the ambiguity of Generative AI. People find it difficult to think probabilistically, yet that is what one has to do with Generative AI. We demand certainty, right answers and truth-machines. Julian and his co-authors have admirably avoided this trap. Our brains are intrinsically probabilistic, as are these Generative AI tools, so the subject demands a different mindset, one that is more open and doesn't fall into the trap of using pejorative language such as 'hallucinations'. It has this great line, “AI isn't perfect – because neither are we”, has more questions than answers, and is playful rather than dogmatic.


They have also avoided the endless moralising we hear from academics and quango folk with a lot of time on their hands, accompanied by little knowledge of AI or ethics, riding into the discussion on their moral high horses. They pose questions and recognise the complexity of the situation. It’s also relatively short, a blessing in this age of verbosity. So I’ll end my review here.


Lastly, a confession. I have a small piece in the book on ‘Human Exceptionalism’. Julian was open enough, as always, to take a risk on a piece that tries to demolish the idea that we are special and exceptional. Copernicus and Darwin put that to rest, yet we still hang on to our 21st century skills, whatever…. 

Well done to these three and See Saw Publishing. 

EU legislation is a mess... even Macron thinks it has gone way too far

The EU have stated their intention to implement some pretty stringent laws in their AI Act. Even Macron has warned that EU AI legislation will hamper European tech companies compared to rivals in the US, UK and China. Mistral is a very successful French company which may be hit by the AI Act's attack on foundation models.


It will, of course, take forever; there is a two-year process, with three different drafts from three different institutions – Parliament, Commission and Council. It is all a bit of a technocratic mess, with lots of potential bun-fights.

Here are the problems:


A ban on emotion recognition may well hamper useful work in autism and mental health. In truth there is wiggle room here, as it is absent from two of the drafts.



A ban on facial recognition will face fierce resistance from police and intelligence agencies, in cases as varied as deepfake recognition, age recognition in child protection, accessibility features, safety applications, terrorism, trafficking and child porn. Expect a lot of back-tracking and exceptions here.

The ban on social scoring sounds good, but you are being assessed all the time for credit, loans, insurance and mortgages. In truth, Chinese-style social scoring does not exist, so this is an unnecessary law tackling a non-existent problem.


The big one is the move on foundation models and their training data. This will be fiercely fought, hence Macron's early move. Attacking foundation models is like attacking the wind rather than the boat and its destination. Others, such as the US and UK, have urged a more application-driven, sector-driven approach that focuses on uses, not the underlying models. France, Germany and Italy have fought this, so I'd expect backtracking.


The Act has several other shortcomings in terms of definition and implementation. It sees AI as a fixed entity, but AI is intrinsically dynamic: it learns and morphs as it operates. This makes it a moving target as an object of legislation. On top of this, AI is an integrated set of services, networks and organisations with various data sources. If I am a small AI company, I am likely to be using various foundation models and delivery services from OpenAI, Microsoft, Facebook and Amazon, all as cloud services. One general-purpose system may be used for several separate services. It is difficult to see how responsibility can be assigned.


While the rest of the world gets on and uses this technology for astounding benefits in learning and healthcare, the EU seems determined to become a mere regulator. Not just a regulator, but one that loves complex, red-tape-driven solutions. Billions, if not trillions, in terms of productivity growth and innovation may be at stake here.


My guess is that the Act will face challenges in reaching consensus among member states, given the diverse interests and viewpoints on AI regulation. There will also be difficulties in implementing the guidelines and regulations proposed in the AI Act, due to technological complexities or resistance from AI stakeholders. The Act will therefore need revisions or updates to address emerging AI technologies and scenarios that were not initially considered.

 

The final problem with EU law is its fixity. Once mixed and set, it is like concrete – hard and unchangeable. Unlike common law, such as exists in England, the US, Canada and Australia, where things are less codified and easier to adapt to new circumstances, EU Roman law is top-down and requires changes to be made through new legislative acts. If mistakes are made, they are hard to fix.

 

With only 5.8% of the world's population, the EU is under the illusion that it speaks for the whole world. It does not.


PS

When the Pope called for a global treaty on AI, we surely reached Peak BS on AI and ethics. The Catholic Church put Galileo under house arrest for the rest of his life and burnt Bruno at the stake for scientific heresy. Their banned-books list included Galileo, Descartes, Voltaire, Sartre and Kant, and was only abolished in 1966!