Thursday, July 11, 2024

Good discussion paper on the Role and Expertise of AI Ethicists: Bottom line – it’s a mess!


Who is an AI Ethicist? An Empirical Study of Expertise, Skills, and Profiles to Build a Competency Framework, by Mariangela Zoe Cocchiaro et al.

Bottom line – it’s a mess! 

In less than two years, AI Ethicists have become common. You see the title in social media profiles and speaker bios, especially in academia. Where did they all come from, what is their role and what is their actual expertise?

Few studies have looked at what skills and knowledge these professionals need. This article aims to fill that gap by discussing the specific moral expertise of AI Ethicists, comparing them to Health Care Ethics Consultants (HC Ethicists) in clinical settings. As the paper shows, this isn’t very clear, leading to vastly different interpretations of the role.

It’s a mess! A ton of varied positions with no consensus on professional identity and roles, a lack of expertise in the relevant areas, especially technical ones, a lack of experience in real-world applications and projects, and a lack of established practical norms, standards and best practices.

As people whose primary role is bridging the gap between ethical frameworks and real-world AI applications, they need relevant expertise, experience, skills and objectivity. The danger is that they remain too theoretical and become bottlenecks if they do not have the background to deliver objective and practical advice. There is a real problem of shallow or missing expertise, along with a lack of ability to deliver practical outcomes, and of credibility.

Problem with the paper

The paper focuses on job roles as advertised, but misses the mass of people who are self-proclaimed, internally appointed or simply identified as doing the role without much in the way of competence-based selection. Another feature of the debate is the common appearance of ‘activists’ within the field, with very strong political views. They are often expressing their own political beefs, as opposed to paying attention to the law and reasonable stances on ethics – I call this moralising, not ethics.

However, it’s a start. To understand what AI Ethicists do, the authors looked at LinkedIn profiles to see how many people in Europe identify as AI Ethicists. They also reviewed job postings to figure out the main responsibilities and skills needed, using the expertise of HC Ethicists as a reference to propose a framework for AI Ethicists. Core tasks for AI Ethicists were also identified.

Ten key knowledge areas

Ten key knowledge areas were outlined, such as moral reasoning, understanding AI systems, knowing legal regulations, and teaching.

K-1 Moral reasoning and ethical theory  

● Consequentialist and non-consequentialist approaches (e.g., utilitarian, deontological approaches, natural law, communitarian, and rights theories). 

● Virtue and feminist approaches. 

● Principle-based reasoning and case-based approaches. 

● Related theories of justice. 

● Non-Western theories (Ubuntu, Buddhism, etc.). 

K-2 Common issues and concepts from AI Ethics 

● Familiarity with applied ethics (such as business ethics, ecology, medical ethics and so on).

● Familiarity with ethical frameworks, guidelines, and principles in AI, such as beneficence, non-maleficence, autonomy, justice and explicability (Floridi & Cowls, 2019). 

K-3 Companies and business’s structure and organisation 

● Wide understanding of the internal structure, processes, systems, and dynamics of companies and businesses operating in the private and public sectors. 

K-4 Local organisation (the one advised by the AI Ethicist) 

● Terms of reference. 

● Structure, including departmental, organisational, governance and committee structure.  

● Decision-making processes or framework. 

● Range of services.  

● AI Ethics resources, including how the AI Ethics work is financed and the working relationship between the AI Ethics service and other departments, particularly legal counsel, risk management, and development.

● Knowledge of how to locate specific types of information. 

K-5 AI Systems  

● Wide understanding of AI+ML technology’s current state and future directions: theory of ML (such as causality and ethical algorithms) or the mathematics of social dynamics, behavioural economics, and game theory.

● Good understanding of other advanced digital technologies such as IoT, DLT, and Immersive.  

● Understanding of Language Models – e.g., LLMs – and multi-modal models. 

● Understanding of global markets and the impact of AI worldwide.

● Technical awareness of AI/ML technologies (such as the ability to read code rather than write it). 

● Familiarity with statistical measures of fairness and their relationship with sociotechnical concerns.  
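
To make the last two bullets concrete, here is a minimal, hypothetical Python sketch – not from the paper – of two common statistical measures of fairness that an AI Ethicist should at least be able to read: demographic parity and equal opportunity. The helper names and the data are invented purely for illustration.

```python
# Minimal, illustrative sketch: two common statistical fairness measures.

def positive_rate(preds):
    """Fraction of positive predictions in a group."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of actual positives the model correctly flagged."""
    flagged = [p for p, y in zip(preds, labels) if y == 1]
    return sum(flagged) / len(flagged)

# Hypothetical model outputs and ground truth for two demographic groups.
group_a_preds,  group_b_preds  = [1, 1, 0, 1, 0], [1, 0, 0, 0, 0]
group_a_labels, group_b_labels = [1, 1, 0, 1, 1], [1, 1, 0, 1, 0]

# Demographic parity: positive prediction rates should be similar across groups.
dp_gap = abs(positive_rate(group_a_preds) - positive_rate(group_b_preds))

# Equal opportunity: true positive rates should be similar across groups.
eo_gap = abs(true_positive_rate(group_a_preds, group_a_labels)
             - true_positive_rate(group_b_preds, group_b_labels))

print(f"Demographic parity gap: {dp_gap:.2f}")  # 0.40 in this toy example
print(f"Equal opportunity gap:  {eo_gap:.2f}")  # 0.42 in this toy example
```

Even a toy example like this shows the sociotechnical point: different measures can disagree, and deciding which gap matters, and how large a gap is acceptable, is an ethical judgement, not a purely statistical one.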

K-6 Employer’s policies 

● Informed consent. 

K-7 Beliefs and perspectives of the stakeholders 

● Understanding of societal and cultural contexts and values.  

● Familiarity with stakeholders’ needs, values, and priorities.  

● Familiarity with stakeholders’ important beliefs and perspectives.  

● Resource persons for understanding and interpreting cultural communities.

K-8 Relevant codes of ethics, professional conduct, and best practices  

● Existing codes of ethics and policies from relevant professional organisations (e.g. game developers, software developers, and so on), if any.

● Employer’s code of professional conduct (if available).

● Industry best practices in data management, privacy, and security. 

K-9 Relevant AI and Data Laws 

● Data protection laws such as GDPR, The Data Protection Act and so on. 

● Privacy standards.  

● Relevant domestic and global regulation and policy developments such as ISO 31000 on risk.  

● AI standards, regulations, and guidelines from all over the world.  

● Policy-making processes (e.g., EU law governance and enforcement).

K-10 Pedagogy  

● Familiarity with learning theories.  

● Familiarity with various teaching methods. 

Five major problems

They rightly argue that AI Ethicists should be recognised as experts who can bridge ethical principles with practical applications in AI development and deployment. Unfortunately, such people are thin on the ground. It is a confusing field with a lot of thinly qualified, low-level commentators appointing themselves as ethicists.

  1. Few, in my experience, have any deep understanding of moral reasoning, ethical theories or applied ethics. 
  2. As for business or organisational experience, few seem to have held any real positions relevant to this role within working structures. 
  3. Another, often catastrophic, failing is the frequent lack of awareness of what AI/ML technology actually is, along with the technical and statistical aspects of fairness and bias.
  4. Knowledge even of GDPR is often limited, as is awareness of the various international dimensions of law and regulation.
  5. As for pedagogy and teaching – mmmm.

Conclusion

To be fair, much of this is new, but as the paper rightly says, we need to stop people simply stating they are ethicists without the necessary qualifications, expertise and experience of the practical side of the role. AI Ethicists are crucial for ensuring the ethical development and use of AI technologies. They need a mix of practical moral expertise, real competence in the technology, a deep knowledge of the laws and regulations, and the ability to educate others to navigate the complex ethical issues in AI. At the moment the cacophony of moralising activists needs to give way and let the professionals take the roles. Establishing clear competencies and professional support structures is essential for the growth and recognition of this new profession.


My favourite AI quote....

This is my favourite AI quote, by E.O. Wilson:

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

There’s a lot to unpick in this pithy and brilliant set of related statements.

By framing the problem in terms of our evolutionary, Paleolithic legacy, as evolved, emotional beings, he recognises that we are limited in our capabilities, cognitively capped. Far from being exceptional, almost all that we do is being done, or will be done, by technology. This we refuse to accept, even after Copernicus and Darwin, as we are attached to emotional thought, beliefs rather than knowledge, emotional concepts such as the soul, Romanticism around creativity and so on. We seem incapable of looking beyond our vanity, of having some humility and getting over ourselves.

To moderate our individualism, we created Medieval institutions to dampen our human folly. It was thought that the wisdom, not of the crowd, but of middle managers and technocrats, would temper our forays into emotional extremes. Yet these institutions have become fossilised, full of bottlenecks and groupthink that are often centuries old, incapable of navigating us forward into the future. Often they are more about protecting those within the institutions themselves than serving their members, citizens, students or customers. We lack trust in global institutions, political institutions, educational institutions, businesses and so on, as we see how self-serving they often become.

When Godlike (great word) technology comes along, and threatens either ourselves or our institutions, we often react with a defensive, siege mentality. Generative AI in Higher Education is seen as an assault on academic integrity, the use of generative tools as an attack on writing, getting help as something that makes us more stupid. All sense of proportion is lost through exaggeration and one-sided moralising. No high horse is too high for saddling up and riding into the discussion.

Wilson’s final point is that this produces an overall crisis. With Copernicus it led to printing, the Reformation, Enlightenment and Scientific Revolution. With Darwin, Church authority evaporated. With the Godlike technology of AI, we have created our own small Gods. Having created these small Gods, we find that they help us see what and who we are. It is seen as an existential crisis by some, a crisis of meaning by others. At the economic level, it is a crisis of tech greed, unemployment and inequality.

But it’s at the personal level where the Paleolithic emotions are more active. As Hume rightly saw, our moral judgements are primarily emotional. That is why many, especially those who work in institutions, express their discontent so loudly. Technology has already immiserated blue-collar workers, with the help of white-collar institutions such as business schools. It is now feeling their collar. AI is coming for the white-collar folks who work in these institutions. It is actually ‘collar blind’ but will hit them the hardest.


Wednesday, July 10, 2024

Lascaux: archaeology of the mind - a LIM (Large Image Model) and a place of teaching and learning


Having written about this in several books, it was thrilling to finally get to Lascaux and experience the beauty of this spectacular early form of expression by our species. The Neanderthals had gone and this was an early flowering of Homo sapiens.

This is archaeology of the mind, as these images unlock our cognitive development to show that over tens of thousands of years we represented our world, not through writing but visual imagery, with startlingly accurate and relevant images of the world we lived in – in this case of large animals – both predator and prey – of life and death.

As hunters and gatherers we had no settled life. We had to survive in a cold Ice Age climate when there were no farms, crops and storage, only places small groups would return to after long nomadic journeys on the hunt. It was here they would converge in larger groups to affirm their humanity and, above all, share, teach and learn.

Cave as curriculum

After a period of disbelief, when it was thought such images were fraudulent and could never have been created by hunter-gatherers tens of thousands of years ago, we had lots of perspectives: from Victorian romanticism of cave 'art' through to shamanic, drug-induced experiences, finally moving towards more practical, didactic interpretations.

The didactic explanations seem right to me and Lascaux is the perfect example. It was much used, purposeful and structured. A large antechamber and learning journeys down small branched passageways show narrative progression. Like early churches, it is packed with imagery. Movement is often suggested in the position of the legs and heads, perspective is sometimes astounding and the 3D rock surface is used to create a sculptural effect. You feel a sense of awe, amazement and sheer admiration.

Narratives are everywhere in these exquisite paintings. Working from memory, they created flawless paintings of animals standing, running, butting and behaving as they did in the wild. They dance across the white calcite surface but one thing above all astounded me – they made no mistakes. They used reindeer fat and juniper, which does not produce sooty smoke, to light the cave, along with scaffolding and a palette of black (manganese) and a range of ochres from yellow to orange and red. Flints scored the shapes; fingers, palms, ochre pencils, straws, spitting techniques and stencils were used to shape, outline and give life to these magnificent beasts.

Learning journey

Entering the large rotunda, with its swirl of huge bulls, horses and stags, you get a sense of the intensity of the experience, the proximity to these animals, their size and movement. But you are also drawn to the scary dark holes of two other exits.

The first has a foreboding warning – the image of a bear with its claws visible just next to the entrance. One can imagine the warning given by the teacher. Then into the hole of darkness of what is now called the Sistine Chapel of cave painting, a more constricted passage, just enough in places for one person to pass, with images much closer to you. At the end, a masterful falling horse on a pillar of rock, which you have to squeeze around, then into an even more constricted long passage with predatory lions. The narrative moves from observing animals in the wild to their death and finally to the possibility of your death from predators.

Choose the other side passage and you get a low, crouching passage; at one point there is a round room, full of images, and at the back, after a climb, a steep drop into a hidden space where a dead man (the only human figure in the entire cave) lies prone, the charging bison’s head low and angry, its intestines hanging out. Beside the man lies a spear thrower, and the spear is shown across the bison’s body. Beside him, a curious bird on a stick.

What is curious are the dozens of intentional signs, clearly meaningful, often interpreted as the seasons, numbers of animals in a group and so on. It is proto-writing, and these signs have a teaching and learning purpose.

The cave is a structured curriculum, an ordered series of events to be experienced, gradually revealed and explained to the observer in a dark, flickering, dangerous world.

Setting the scene

Let's go back to the cave opening again. As you enter there is a strange, hybrid creature that seems to ask: what is this? What animal could it be? The point may have been that we see animals as first glimpses, often at a distance, and must learn to identify what they are – predator or prey? It has circular markings on its body, long straight horns and what looks like a pregnant belly. This seems like the spot where an experienced hunter would explain that variability in markings, horns and body shape, and knowledge of colour and breeding seasons, matter to the hunter.

Expertise was rare, as people died young. The known had to be passed down the generations, not just by speech and action but permanently, as images that told stories. This was a way of preserving that rare commodity – cultural capital.

Basic skills

As you enter the huge ante-chamber, which could have held the entire hunter and gatherer group, you literally walk in and find yourself beneath a huge herd of animals. It would have been a surprise, not possible in the real world, a simulation. This is an introduction to many different species.

It has been carefully composed. You are in a representation of the real world, a simulation that focuses on what matters, what you must avoid as predators, and kill as prey. It needed a huge communal effort, as scaffolding had to be manufactured and built, materials gathered and skilled artists themselves trained and selected. This is an organised group, creating an organised venue for organised learning.

The effect of large animals coming at you out of the dark, within the confines of a cold cave, would have been terrifying, like being in a horror movie, the flickering lamps revealing a horned head here, a tail there. It is as if they understood the idea of attention, simplicity of image and their impact on learning and memory.

Hunting

As hunters, late Palaeolithic people tended to hunt a specific species at any one time of the year. This matches the imagery, where one can stop at an image of one species (they had to enter difficult passages with small lamps) and move from one species to another sequentially. There are narrative structures within the images: breeding pairs, animals in motion, different seasonal coats. At the end you encounter a masterpiece – the falling horse, with a bloated stomach, dead.

Break-outs

In another long side cave, like a long break-out room, the images are entirely different, a bewildering set of outlines and scores that suggest a more direct telling of what it is to hunt. Like a huge blackboard, they have been drawn and overdrawn by what seems like more improvisational hands. Here, I think, they were explaining the details of hunting. This was the chalkboard lecture hall. It is low and requires one to crouch, more likely to sit and be taught. 

New teachers clearly overwrote the work of those who came before, as there were no board cleaners! There is a huge range of animals in these drawings – horses, bison, aurochs (bulls), ibexes, deer, a wolf and a lion. They are often partial images, especially heads, which suggests some specific points were being made about what you need to look for as a hunter. It is a series of drawings over-writing earlier work, made over a long period by different people.

In this area there is a shaft; climb down and there is a black scene of a human figure lying prone beneath a wounded bison, its intestines hanging out, its head low as it charges. This is the only image of a person in the whole cave. Flint-knapping debris and ochre-covered flints were found here, indicating the teaching of tools for butchering. One can imagine this being a specific, final lesson – kill or be killed.

Sound and speech

What is missing are the sounds of the teachers and learners. But even here we have some clues. One image, called the Roaring Stag, is prominent. I have heard this in the Highlands of Scotland while camping in winter. The noise is incredible, like wolves all around you. It is likely that these sounds would have been simulated in the cave, an intense and frightening amplifier. You can imagine people in the dark suddenly frightened by the sound of rutting stags.

Communal knowledge

I wrote about this in my books on AI and 3D mixed reality, as these caves tell us something quite profound: that learning, for millions of years, was visual. We were shown things. This is our primary sense and, as learning was about the world we knew and the skills we had to master, images were our content. But we also have meaningful symbolic communications – not yet writing as we know it, but an account of sorts and a sense of number.

Additionally, this was the first example of a communally shared learning experience. What we learnt and knew was not owned by others. It was a shared dataset, brought together by the whole group, to be shared for mutual benefit. It took a huge communal effort to create the first LIM (Large Image Model). There were no arguments about who drew or owned what, no ethical concerns about the dangers of sharing our data, just the need to share to survive and thrive.

Conclusion

Altamira was my first illustrated cave, many years ago. I can still remember the shock of that experience, a visceral surprise. Lascaux is even more wondrous. These places reveal the awakening of our species – Homo sapiens, the ‘knowing man’ – when we began to teach and learn, preserving and passing on our cultural knowledge. We became smarter and more cunning. The Neanderthals, who dabbled in such cave representations, were already long gone. We had separated the known from the knower, so that it could be passed on to many others. We were on the path to externalising ideas, refining them, reflecting upon them and using this knowledge to create new things, moving from tools to technologies: farming, writing, printing, the internet and AI. We became Homo technus.


Sunday, July 07, 2024

Labour, growth, productivity and AI

The Labour Party manifesto was led by a single goal – GROWTH.

Problem – easy to promise, hard to keep.

Yet there are two letters that got zero mention by any party in this election – AI. The evidence is clear, that significant productivity gains can be achieved using this tech. You don’t need AGI to reap the benefits, just focus on straight utility, making things faster, better and cheaper.

Despite the global shift to a digitised economy, productivity growth has been declining since the 2000s. Artificial Intelligence will drive a productivity surge that supports long-term economic growth. Excitement is growing over "generative" Artificial Intelligence, which utilises advanced computer models to create high-quality text, images, and other content based on their training data. Many are speculating whether this technology could revolutionise labour productivity, similar to the impact of the steam engine, electricity, and the personal computer. One is Tony Blair, who has said this should be the first thing a Labour government looks at.

The positive effects on productivity have always lagged behind the invention of new technologies. However, AI seems to have had immediate effects due to its rapid adoption, which is down to its:

Freemium model

Ease of use

Cross sector application

Clear efficacy

2024 is the year of broad adoption, with real productivity gains in organisation-specific tasks. Whatever the process, AI seems to significantly shorten it and also increase the quality of output.

Initial studies show significant productivity gains, as well as increases in quality, even in creative tasks. In fact, it is the sheer variety of tasks that matters.

A further advantage is as a solution to recruitment needs. By increasing productivity, fewer staff are needed for the same amount of work, hence growth.

We are not in the EU and are therefore not subject to the EU AI Act. Being more aligned with the US and having English as our language, we have an advantage within Europe.

Policies

In schools we need to encourage the use of this technology to increase:

1. Teacher productivity through automated production, teaching & marking

2. Learner productivity through the adoption of these tools in the learning process

3. Admin productivity

4. AI in all teacher training

5. Reduction in curriculum content

In tertiary education, where we have Jacqui Smith as the skills, further and higher education minister, we need:

More short transition courses

Lecturer productivity

Learner productivity

Focussed, robust AI research

In business we need to incentivise and support the creation and growth of startups. But for startups to truly thrive, several conditions must be met:

Efficient and high-speed dissemination of technical information, including:

Open-access technical publication

Open-source software

Data available for research

Smooth movement of personnel between companies, industry and academia

Smaller equity slices by Universities

Easier access to venture capital

Application of AI in SMEs

In large organisations we need:

Good measurement and studies in productivity

Case studies of successful implementations

Implementation across the public sector

Tax breaks for AI in companies exporting

Implementation

Above all we need to stop the doom-mongering. It is not that AI practitioners are exaggerating its capabilities; that mostly comes from straw-manning by the media, a few well-known doomsters and speculative ethicists. These people will deny most ordinary people growth and prosperity while they sit on good institutional salaries and pensions.

We have the additional problem of politicians not being tech savvy, few having technical or scientific degrees, and many grazing on media reports and old-school advisors of the same ilk.

What we need from National Government are policies that accelerate this competitive advantage. NOT more institutes. Enough already with the quangos like Digital Catapult. They do not help. Ignore the trade quangos who produce anodyne reports written by low-level experts. Avoid the consultancy trough of frameworks, reports and documents.

Conclusion

There is one man who could make a big difference here – Sir Patrick Vallance. He has a strong research background and understands the economic impact AI will have. We have an opportunity here not to get entangled with the negativity of the EU and to forge ahead with an economic growth model based on increased productivity and new jobs. The other Ministers have shown little in the way of innovation in policy. He has the chops to get at least some of this done.

 

Tuesday, July 02, 2024

Mary Meeker's tech report 'AI & Universities' is right but delusional

Mary Meeker’s tech reports were always received with a degree of reverence. Her technology PPTs were data-rich, sharp, clear and were seen as a springboard for action. This latest report is the opposite: data-rich but delusional.

Curiously, her data is right but her focus on Higher Education as the driver of growth and social change is ridiculously utopian. The institutional nature of Higher Education is so deeply rooted in the campus model, high costs and an almost instinctive hatred of technology that it cannot deliver what she advises. She is therefore both right and wrong: right in her recommendations, wrong in her prediction. Higher Education’s reaction to the AI storm was to focus largely on plagiarism, with a pile of negativity thinly veiled as ‘ethics’.

The research base has shifted out of Universities into companies that have market capitalisations in their trillions, huge cash reserves, oodles of talent and the ability to deliver. Higher education at the elite end has huge endowments but remains inefficient, expensive, has a crisis of relevance and is wedded to a model that cannot deliver on her recommendations.

Where she is right is in pointing towards the US advantage, set by Vannevar Bush in the 1940s, where industry, Higher Education and Government were seen as a combined engine for growth and change.

Vannevar Bush's Vision

Vannevar Bush (1890–1974) was the Dean of the School of Engineering at MIT, a founder of Raytheon and the top administrator of US research during World War II. He widened research to include partnerships between government, the private sector and universities, a model that survives to this day in the US. He claimed that his leadership qualities came from his family of sea captains and whalers. He was also a practical man, with dozens of inventions and patents to his name. In addition to his Differential Analyzer, he was an administrator and visionary who not only created the environment for much of US technological development during and after World War II, leading to the internet, but also gave us a powerful and influential vision for what became the World Wide Web.

When World War II came along, he headed up Roosevelt’s National Defense Research Committee and oversaw the Manhattan Project, among many others. Basic science, especially physics, he saw as the bedrock of innovation. It was technological innovation, he thought, that led to better work conditions and more “study, for learning how to live without the deadening drudgery which has been the burden for the common man for past ages”. His post-war report led to the founding of the National Science Foundation, and Bush’s triad model of government, private sector and universities became the powerhouse for America’s post-war technological success. Research centres such as Bell Labs, the RAND Corporation, SRI and Xerox PARC were bountiful in their innovation, and all contributed to that one huge invention - the internet.

Bush was fascinated with the concept of augmented memory and in his wonderful 1945 article As We May Think, described the idea of a ‘Memex’. It was a vision he came back to time and time again; the storage of books, records and communications, an immense augmentation of human memory that could be accessed quickly and flexibly - basically the internet and world wide web.

Fundamental to his vision was the associative trail: creating new trails of content by linking items together in chained sequences of events, with personal contributions as side trails. Here we have the concept of hyperlinking and personal communications. This he saw as mimicking the associative nature of the human brain. He saw users calling up this indexed motherlode of augmenting knowledge with just a few keystrokes, a process that would accelerate progress in research and science.

More than this, he realised that users would be able to personally create and add knowledge and resources to the system, such as text, comments and photos, linked to main trails or placed in personal side trails - thus predicting concepts such as social media. He was quite precise about creating, say, a personal article, sharing it and linking it to other articles, anticipating blogging. The idea of creating, connecting, annotating and sharing knowledge on an encyclopedic scale anticipated Wikipedia and other knowledge bases. Lawyers, doctors, historians and other professionals would have access to the knowledge they needed to do their jobs more effectively.

In a book published 22 years later, Science Is Not Enough (1967), he relished the idea that recent technological advances in electronics, such as photocells, transistors, magnetic tape, solid-state circuits and cathode ray tubes, had brought his vision closer to reality. He saw in erasable magnetic tape the possibility of erasure and correction, namely editing, as an important feature of his system of augmentation. Even more remarkable were his prophetic ideas around voice control and user-generated content, anticipating the personal assistants so familiar to us today. He even foresaw the automatic creation of trails, anticipating that AI and machine learning might also play a part in our interaction with such knowledge bases.

What is astonishing is the range and depth of his vision, coupled with a realistic view of how technology could be combined with knowledge to accelerate progress, all in the service of the creative brain. It was an astounding thought experiment.

AI and growth

AI is now fulfilling Bush’s vision in moving our relationship with knowledge beyond search into dialogue, multimodal capabilities and accelerated teaching and learning, along with the very real implementation of the extended mind.

But the power has shifted out of the University system into commerce. Higher Education has retreated from that role, and research entities such as Bell Labs and Xerox PARC are no longer relevant. She is right in seeing the US power ahead of everyone on AI and productivity. The danger is that this produces an even lazier Higher Education sector that doesn’t adapt but becomes even more of an expensive rite of passage for the rich.