Friday, July 26, 2024

Choosing a GenAI model? Tricky... quick guide...

Choosing a GenAI model has become more difficult at the professional, even personal, level. This is to be expected in this explosive market but at least the frontrunners are clear. Note that the investment and entrepreneurial spirit to do this are not trivial, so it is unlikely that these names will change much in the near future. I suspect we'll have a few big vendors, some open source, with many falling by the wayside - that's the way markets work in tech.

BLOOM was a salutary lesson here. It was a large language model (LLM) created by over 1000 researchers from 60+ countries and 250+ institutions, released to help advance research work on LLMs. Its focus on being widely multilingual seemed like a strength but turned out to give little advantage. The lack of a chatbot, no RLHF and being outrun by Llama and Mistral didn’t help.

But it is not easy to keep up to date, stay familiar with the options and be able to discriminate between them to choose the model that works best for your needs. Here’s the good news – you can swap them out, but be careful, as they differ in many ways.

Different models

Now that we have a range of models available, and they are not all the same, AI has become a dynamic field that benefits the consumer, with new models being regularly released. And it is not just LLMs that are being released. There are the top-end SOTA models: GPT-4o, Claude 3.5 Sonnet, Gemini 1.5 Pro. Then there are open-source options such as Llama 3.1 and Mistral Large 2. There is also a range of smaller models. Here's a quick crib sheet...

They all come with a range of additional functionality: integration within large IT environments, tools, mobile apps, image generation (not all), different context windows, web search (not all), validation with sources, different uploading and editing capabilities, foreign language capabilities, along with specialist services such as GPTs or Artifacts. It is complex when you get into the detail.

Choosing a model

Choosing a model depends on breadth of functionality, speed, cost, quality, access to external tools and availability. This is often poorly understood.

We have found that, depending on what you are delivering at the front-end, it is often not difficult to swap out models at the back-end, getting reductions in price, better functionality and speed – new releases often give you choices on size, price and functionality. A good technology partner will keep a close eye on all of this and make the right choices for you.
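To make this concrete, here is a minimal sketch of what swappable back-ends can look like in code. The vendor names, adapter classes and the summarise helper are hypothetical placeholders, not any particular SDK; the point is that the front-end only ever talks to a small interface, so a price or speed review can change the model behind it without touching the application.

```python
from typing import Protocol

class LLMBackend(Protocol):
    """The only interface the front-end depends on; vendor specifics stay behind it."""
    def complete(self, prompt: str) -> str: ...

class VendorABackend:
    # Hypothetical adapter: in a real project this would wrap vendor A's SDK call.
    def complete(self, prompt: str) -> str:
        return f"[vendor A response to: {prompt}]"

class VendorBBackend:
    # Hypothetical adapter for a second (or self-hosted open-source) model.
    def complete(self, prompt: str) -> str:
        return f"[vendor B response to: {prompt}]"

def summarise(text: str, backend: LLMBackend) -> str:
    # Front-end logic never names a vendor, so swapping is a one-line config change.
    return backend.complete(f"Summarise in three bullet points:\n{text}")

if __name__ == "__main__":
    backend = VendorABackend()  # swap to VendorBBackend() after a price/speed review
    print(summarise("Quarterly sales rose 4% on the back of...", backend))
```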

It is only when you work with all of the major models, in real projects with real clients, that you understand the true strengths and weaknesses of each model, especially when you hit what appear to be arbitrary limits or unreliability in access and speed.

It is easy, for example, to assume that all are multimodal, all available in Europe, all available as mobile apps on both Android and iOS, all have massive context windows and code interpreters. Their editing and integration capabilities also vary. Some have very specific functionality that others do not have, like GPTs and Artifacts.

Open source

Don’t assume that because a model is open source, like Llama and Mistral, it is free and allows you to dispense with the paid services. They come with licences governing their use and are not easy to work with. Open-source products, and we’ve had experience of this in other domains such as VLEs and LMSs, are not easy to use on their own and, especially in AI, need considerable in-house expertise.

Nevertheless, they open up a new territory for development and need to be considered. Meta believes this is a move towards the integration of open-source models, just as Linux has become an industry standard. Integrity and transparency are also increased, as you can test these models yourself.

Conclusion 

Knowing and having a real ‘working’ knowledge of these models is essential for any real implementation and especially development work. Before working with a consultant or vendor, ask them a few questions on the differences between models. Avoid vendors who just use one model, as that often shows a prejudice towards ‘their’ model. It really does separate the competent from the bandwagon jumpers.


Thursday, July 25, 2024

Resilience training - where it came from and why it went so badly wrong


My bullshit word for the last couple of years has been 'Resilience'. It is what David Graeber, in his brilliant book, Bullshit Jobs, called making shit up to make money from assumed misery. If you have to attend a hokey conference or conference talk on 'Resilience' you don't and will never have it... to be fair, if you make it through a Resilience training course that should suffice!

Workforce learning professionals are in a state of perpetual angst. They feel they are not listened to and don’t have a voice at the top table. This is true: HR and L&D have never had any sustainable influence at board level. Hardly surprising when we deliver courses on things neither the business nor its employees ever asked for. I have never, ever heard any normal person say what they need is a ‘course’ on ‘resilience’. It is something supplied by L&D, not demanded by organisations, a chimera to make us look caring and important. This has been a worrying trend in workplace learning: the delivery of courses based on abstract nouns that no one ever asked for. We supply things we think are relevant, rather than look at what the business demands in terms of goals.

We have beaten into people the idea that they have all sorts of mental deficits, through billions spent on DEI, ESG and wellbeing training courses and initiatives, thrashing employees like piñatas, telling them they are weak and have deficits that need to be cured by courses. To remedy this, apart from the endless groundhog debates on the future of L&D at conferences, we come up with abstract concepts around which conference sessions and courses have to be built. The current obsession is with ‘Resilience’. These are too often bouts of over-earnest classroom courses or weird e-learning. Despite the overwhelming evidence, over many years, that this does not work, it continues. A ‘roll of the eyes’ is the most common reaction when you ask people what they think of all this.

So, as HR has turned into defending the organisation against itself, they then had the temerity to demand that we all need to man and woman up – we need more resilience. It’s like slapping people repeatedly on the face, then telling them to ‘pull themselves together’ before carrying on with the slapping. If your organisation is so dysfunctional that you need to train people to deal with that dysfunction, that speaks volumes about your organisation. Training people to deal with dysfunction is not going to fix it.

Curious history of ‘Resilience’ training

Resilience has deep roots in psychiatry, especially in Freud and his daughter Anna Freud, on how individuals cope with trauma and adversity – I discussed them in detail in a recent podcast, and their theories are literally flights of fancy. Bowlby and Erikson (wrong on most counts) pushed this forward in the 50s and 60s. But it was in the 70s and 80s that Emmy Werner and Ruth Smith did longitudinal studies on children in adverse conditions. Their work, especially the Kauai Longitudinal Study, highlighted the factors that contributed to resilience in at-risk and sick children.

In training, resilience emerged from the positive psychology movement in the late 1990s when Martin Seligman emphasised the importance of building strengths and well-being, rather than just treating mental illness. Also discussed in detail in this podcast. He backtracked somewhat and more recent evidence shows that the training and wellbeing programmes are not effective at all.

Antifragility 

Modern Resilience training is a mishmash of all of this but there is one book that people thought promoted resilience but was in fact an attack on resilience and resilience training. That book was Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb. 

Taleb was a derivatives trader then hedge-fund manager, and anyone who has actually read the book will know that he hates resilience training. “The fragile want tranquility, the antifragile grows from disorder, and the robust doesn’t care too much.” This is a man who is fiercely critical of certain elites and experts, particularly those he believes are detached from real-world consequences, and his views have resonated with populist sentiments.

He defines resilience as a mistake in that it promotes the ability to resist shocks but stay the same. A resilient system can withstand stress without significant damage but does not necessarily improve from the experience. By contrast, antifragile systems thrive and grow stronger in the face of stress and adversity. Taleb advocates for antifragility over mere resilience because antifragile systems benefit from disorder and challenges.

Taleb argues that resilience training focuses on helping individuals or systems return to their baseline state after a disruption, an approach that misses the opportunity to leverage stressors for growth and improvement. It creates a false sense of security, encouraging individuals and organizations to believe they are adequately prepared for challenges when, in fact, they are only prepared to endure them, not to improve from them.

We need to deliberately expose people to manageable levels of stress and variability to build stronger, more adaptable capabilities. People need to continually seek out challenges that push their boundaries and enhance their capabilities, so they not only survive disruptions but actively use them as catalysts for innovation, embracing and leveraging stressors and challenges to achieve growth and improvement. In practice he doesn’t like HR and L&D, as they are part of the bureaucracy of institutions, promoting rules and rigidity, fixed outlooks and fixed career paths. Individuals should rather seek out challenges, embrace uncertainty and new experiences that push their boundaries and expand their capabilities.

Conclusion

And so we end up with a hotchpotch of stuff wrapped up into a disjointed PowerPoint and call it Resilience training. We need to stop building empires around ‘big words’ and get back to training competences to solve the skills shortages that all employers report.


Worrying data on how students & institutions are dealing with AI

Just published, this report from Cengage shows that students are stuck in institutions that can’t make their minds up about AI. As we’ve also seen workplace users are often in a similar position. This report should be seen in conjunction with the paper just published by the University of Chicago on the use of AI in the workplace showing it’s everywhere, the main resistance being institutional barriers. Back to the Cengage report and some data.

As 52% of students don’t end up in work even remotely related to their degree subject and career advice is woeful, they know, more than the institutions they’re educated in, how the world is changing; they understand that AI is important and are doing something about it.

They really are using this technology in anger to learn:

55% of grads said their educational program did not prepare them to use GenAI tools.

The reasons for this are clear. Education is highly institutionalised in terms of structures, timetables, funding, quality control and embedded practices. It is these deeply embedded practices, strict hierarchies, bottlenecks and bureaucracy that make it difficult to adapt to new realities. One such reality is AI.

Even worse, their reaction is often to simply reject it out of hand, categorise it as cheating or get obsessed about fictional ethical issues. After 40 years in the learning technology game I’ve seen it many times before, with email, photocopiers, calculators, computers, the internet, word processing, social media, smartphones, recording lectures…

Yet what this data shows is that these students will emerge into a world where the data already shows massive use of AI. It is fast becoming an essential skill, yet little attempt is made to recognise this or do anything about it. Even worse, it is often demonised to the point of being restricted.

59% of employers say AI has caused them to prioritise different skills when hiring.

Even when we see this dramatic swing in the workplace, we still don’t react. Huge numbers of people are using GenAI when applying for jobs or skilling themselves in using the tools. They know it gives them an edge. Why not listen to what is happening – read the research, look at the data.

48% of students say courses were offered to help them learn how to use GenAI

To be fair we’re in a twilight zone, or what Dan Fitzpatrick calls a ‘liminal zone’, stuck between the old and new. The first reaction in institutions is usually to deliver a ‘course’. That’s fair but not the real solution.

Yet many are making the effort to embed AI in the educational system. This is mostly in the US but also some notable examples in other countries. There’s a slew of Universities in the US that have heartily embraced the technology and prepare their students for an obvious dimension of the future.

55% of colleges/learning programs discouraged GenAI use

Sadly, there are the laggards, still stuck in caricaturing AI as some sort of cheat code in their game, for that is what much of this has become, a cat and mouse game on assessment. I understand that it takes time to get to grips with new stuff, but the revulsion and constant moralising is damaging the future prospects of your learners. I expected fear and loathing in Las Vegas, not our Universities.

51% students say pace of technology making them second-guess their career

This is interesting and heartening. Young people recognise that the system is not helping them and are doing it for themselves, rethinking what they need and want to do. Sure they’re worried about their future, but they understand that they have to be part of the real world, not just the world of lectures and essays.

Conclusion

It is important to track the data on use and attitudes as this allows us to readjust for the future. It is perhaps utopian or at least impractical, to expect institutions to change fast but the good news is that tech-savvy young people are doing it anyway.

 

Wednesday, July 24, 2024

GPT5 and synthetic data baby! It’s the future I tell you… but what is it?




Rumour has it that OpenAI is using over 50 trillion(!) tokens of ‘synthetic’ data to pre-train GPT5. It may or may not be true, but it is worth exploring why this is a fascinating development: it frees scaling from the bottleneck of existing web-based data and has many benefits in terms of cost, scaling, privacy and efficacy.

Synthetic data is artificially generated data that simulates real-world data. Rather than scraping and buying data that is getting scarce and expensive, you create it on computers. It can be used to augment or even replace real datasets. Put simply: computers, not people, create the data.

Advantages of synthetic data?

The cost efficiencies are obvious, as you can generate huge amounts quickly, at very low cost, avoiding the need to buy ever-diminishing real datasets. Privacy problems then disappear, as synthetic data is not derived from actual users, making it particularly useful in domains such as healthcare and finance. Using synthetic data also eliminates the risk of accidentally exposing information, addressing ethical and legal concerns associated with real data, especially around the use of sensitive or copyrighted material.

It can also be targeted to specific needs,  cases and conditions that may not be well-represented in real data. For example, it allows researchers and developers to simulate various scenarios and analyse potential outcomes in fields like autonomous driving, robotics, healthcare and financial modelling. It can also extend the reach of cases beyond the common cases in existing real-world data, out towards unusual or edge cases. This matters when models start to reason and create new solutions.

But its main advantage is that it can generate gargantuan quantities, making it easier to create huge datasets for training and scaling deep learning models. One also has complete control over the characteristics and distribution of the data, which allows the creation of balanced datasets, the elimination of bias or introduction of specific needs, practical and ethical.
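As an illustration of that control, here is a minimal sketch using scikit-learn’s make_classification to generate a synthetic tabular dataset with an explicitly chosen class balance; the parameter values are arbitrary examples, not a recommendation.

```python
from collections import Counter
from sklearn.datasets import make_classification

# Generate a synthetic tabular dataset with a class balance we choose,
# e.g. forcing 50/50 where the real-world data might be 95/5.
X, y = make_classification(
    n_samples=10_000,      # as large as we like, at near-zero cost
    n_features=20,
    n_informative=8,
    weights=[0.5, 0.5],    # explicit control over the class distribution
    random_state=42,
)
print(Counter(y))          # roughly {0: 5000, 1: 5000}
```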

Synthetic data may not capture all the complexities and nuances of real-world data, potentially leading to models that do not generalise well to actual scenarios. This is because it is tricky to validate that it accurately represents the target problem. If the generation process is flawed or biased, the synthetic data can also inherit these issues, leading to skewed model performance. However, at least one can test and amend to solve these problems, as you have complete control over the production process and what data to use.

Note that this is different from recursively generated data, where models are trained on data scraped from a web that now includes model-generated content. The recent paper in Nature showing model collapse was taken by some to be a critique of synthetic data. It is not; it was about indiscriminate data, not carefully generated and selected synthetic data.

How is it generated?

As always, synthetic data has a long history. It goes back to early 20th century statistical simulations, where researchers created artificial datasets to test and validate statistical methods. In the 1940s, Monte Carlo methods were used in physics and mathematics to simulate complex systems and processes.

More recently, in the 1990s, synthetic data started to be used to preserve privacy. The statistician Donald Rubin came up with the idea of generating multiple synthetic datasets to handle missing values.

One breakthrough was the development of Generative Adversarial Networks (GANs) by Ian Goodfellow and others in 2014. GANs can be used to create highly realistic synthetic data by training two neural networks in opposition to each other. Another group of generative models are autoencoders, which have been widely used to create synthetic data, especially for image and text data.
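For the curious, here is a toy sketch of the GAN idea in PyTorch: a generator learns to produce samples resembling a simple 1-D Gaussian while a discriminator tries to tell real from generated. It is purely illustrative of the adversarial training loop, not how production synthetic data pipelines are built.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(2.0, 0.5);
# the discriminator learns to tell real samples from generated ones.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data we want to imitate
    fake = generator(torch.randn(64, 8))       # generated (synthetic) samples

    # Train the discriminator: push real towards 1, fake towards 0
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator: try to make the discriminator output 1 for fakes
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generator(torch.randn(n, 8)) yields synthetic 1-D samples.
print(generator(torch.randn(5, 8)).detach().squeeze())
```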

Physics-based or agent-based simulations can also generate data that mimics complex real-world processes. Then there are techniques like image rotation, flipping and scaling that produce variations of real data, effectively creating synthetic examples. When you expose the model to various transformations of the same image, data augmentation helps the model generalise to new, unseen data.

Augmentation techniques like rotation, flipping and scaling help the model learn to recognise objects and patterns despite these variations. You can rotate the image by a certain angle (e.g. 90, 180 or 270 degrees) to help the model recognise objects regardless of their orientation. Similarly with horizontal and vertical flipping, translation, cropping, varying brightness and contrast, and scaling of images to simulate viewing the object from different distances and perspectives. You can even add random noise to the image, making the model more robust to imperfections in the data. A minimal augmentation pipeline is sketched below.
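A sketch of such a pipeline, using torchvision transforms; the specific angles, crop size and noise level are arbitrary illustrations rather than recommended settings.

```python
import torch
from PIL import Image
from torchvision import transforms

# Each pass through this pipeline yields a new synthetic variant of the same image.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=90),                  # rotate within ±90°
    transforms.RandomHorizontalFlip(p=0.5),                 # mirror left-right
    transforms.RandomVerticalFlip(p=0.5),                   # mirror top-bottom
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),    # crop and rescale (distance/perspective)
    transforms.ColorJitter(brightness=0.3, contrast=0.3),   # vary lighting
    transforms.ToTensor(),
    transforms.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0, 1)),  # add a little noise
])

img = Image.new("RGB", (256, 256), color="grey")  # stand-in for a real photo
variants = [augment(img) for _ in range(10)]      # ten synthetic training examples from one image
print(variants[0].shape)                          # torch.Size([3, 224, 224])
```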

The advantages are obvious in medical imaging, where obtaining a large labelled dataset is difficult. This type of data was used, for example, in the development of autonomous vehicles. Companies like Waymo and Tesla generate huge amounts of synthetic data to train and test their self-driving algorithms, simulating diverse driving scenarios and conditions. In healthcare, synthetic data has been used to create realistic medical records for research and training purposes, while preserving patient privacy, and for tasks such as tumour detection. You can also generate synthetic health records for population health research.

Another use is to train conversational models, providing more examples for the model to learn from, especially in cases where real conversational data is limited or biased. By generating synthetic conversations, developers can introduce a wide range of scenarios, dialects and linguistic nuances that the model may not encounter in real data, to improve its robustness and adaptability. This may include conversations involving rare or unusual situations, so that the model can handle a broader spectrum of queries and responses. Also, if certain types of conversations are underrepresented in real data, synthetic data can be generated to balance the dataset, reducing bias and improving model performance across different types of interactions. Again, using synthetic data helps mitigate privacy concerns, as the data does not contain any real user information, say Facebook or X posts. You can see how this also helps in customer support chatbots, which can be trained with synthetic conversations tailored to the specific products and services of a company.

Open-source models, like Llama, are powerful tools for synthetic data generation. Being open source, they allow customers to create high-quality task- and domain-specific synthetic data for training other language models. You can generate question-and-answer pairs for datasets, then use them to fine-tune smaller models. This has already been done in a number of different domains, along the lines sketched below.
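A rough sketch of that workflow using the Hugging Face transformers pipeline: prompt an instruction-tuned open model for synthetic Q&A pairs and write them out as JSONL for later fine-tuning. The model checkpoint name, topics and prompt wording are assumptions for illustration (the checkpoint may require access approval); a real pipeline would add filtering, deduplication and quality checks.

```python
import json
from transformers import pipeline

# Assumed checkpoint for illustration only; any instruction-tuned open model could stand in here.
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

topics = ["resetting a router", "tracking a late delivery", "cancelling a subscription"]
records = []
for topic in topics:
    prompt = (f"Write one realistic customer-support question about {topic} "
              f"and a helpful answer, labelled Q: and A:.")
    text = generator(prompt, max_new_tokens=200)[0]["generated_text"]
    records.append({"topic": topic, "synthetic_dialogue": text})

# Save as JSONL, a common format for later fine-tuning of a smaller model.
with open("synthetic_support_data.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```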

Conclusion

Synthetic data is a powerful tool for training and validating machine learning models, with massive benefits in terms of scalability, control and privacy. It is not without problems, but these are known and solvable. Ongoing advancements in data generation techniques continue to improve the realism and utility of synthetic data, making it an increasingly valuable resource in AI and machine learning. GPT5 – bring it on!

 

Monday, July 22, 2024

Future of AI is here but not evenly distributed - some real SURPRISES on use, gender, age, expectations & ethics

A few surprises in this excellent paper, ‘The Adoption of ChatGPT’ by Anders Humlum from the University of Chicago (July 9, 2024).

It was a large-scale survey (Nov 2023 - Jan 2024) of 100,000 workers from 11 exposed occupations, linked to labour market histories, earnings, wealth, education and demographics to characterise the nature of adoption of ChatGPT. Note that this was before the release of certain improved AI models such as GPT-4o and Claude 3.

The 11 occupations, a good selection, were:

HR professionals, Teachers, Office clerks, IT support, Software developers, Journalists, Legal professionals, Marketing professionals, Accountants, Customer Service and Financial advisors.

Use


almost everyone is aware of it 

50% had used the technology

adoption rates from 79% for software developers to 34% for financial advisors

differed in their intensity of use


As expected, the authors confirm that the “widespread adoption of ChatGPT, only a year after its first launch, solidifies it as a landmark event in technology history.”

Age a factor

younger & less experienced workers more likely to use ChatGPT

every year of age has 1.0 percentage point lower likelihood of using ChatGPT 

similarly with every year of experience at 0.7 lower

Gender gap

Women less likely to use tool:

20 percentage points less likely to have used the tool

pervasive in all occupations

in various adoption measures

persists when comparing within same workplace

also controlling for workers’ task mixes

Expectations

Employees perceive:

substantial productivity potential in ChatGPT

average estimates that ChatGPT can halve working times in a third of job tasks

smaller rather than larger time savings for workers with greater expertise

38% say they will not perform more of the tasks ChatGPT saves time completing

Interesting experiment

In a clever experiment, giving workers exposure to expert assessments of the time savings from ChatGPT in their job tasks, they found that, “despite understanding the potential time savings from ChatGPT, treated workers are not more likely to use ChatGPT in the following two weeks”. This is rather worrying for those who see training, frameworks and documents as the solution to increasing use.

Finally, and this really matters, they investigated what prevents workers from transferring the potential productivity gains from ChatGPT into actual adoption. 

Workers reported barriers:

restrictions on use

needing training (these were the primary barriers to adoption)

need for firm policies (guidelines for use or facilitating employee training) 

few reported existential fears, becoming dependent on technology or redundant in their jobs, as reasons for not using ChatGPT

This last point is important. Employees are not too concerned with ethical issues and do not seem to fear dependency or redundancy.

Conclusion

There's a ton of other useful stuff and detail in this paper, but the surprises that stood out for me were the lower use by women, strong barriers to adoption by employers despite workers’ agreement that it will increase productivity, and indifference to ethical concerns.

Bibliography

Humlum, A. and Vestergaard, E., 2024. The Adoption of ChatGPT. University of Chicago, Becker Friedman Institute for Economics Working Paper, (2024-50).


AI projects: What to do? How to do it? Let me introduce you to Nickle LaMoreaux, the CHRO (Chief Human Resources Officer) at IBM

For all the ‘big’ reports from Deloitte, BCG, PWC etc, I prefer the more measured views from people within organisations who are doing smart stuff and learning from it. One I particularly liked, as it chimed exactly with my own experience of implementing AI within small and large organisations, comes from IBM.

Let me introduce you to Nickle LaMoreaux, the CHRO (Chief Human Resources Officer) at IBM.

She sees HR as ‘client zero’ for lots of initiatives. I like this up front statement, as HR is so often simply reactive or sidelined into initiatives that are not focussed on business improvement.

It just seemed reasonable, coming from someone who is actually doing stuff, not just talking about it or doing surveys. She’s practical and has to deal with the bottlenecks, cultural resistance and hierarchies within a large organisation. We should listen.

What to do? The 3 Cs!

1. Consumer-grade experiences: delightful experiences enabled & enhanced by technology. 

I loved this. Go for time saving and increasing quality but if you want to effect change, go for something visible, that touches people. GenAI clearly does this as the billions of uses per month show that dialogue works. It is simple, engaging and delivers what people want. It gives them agency.

2. Cost efficiencies 

Cost efficiencies are so 2024, so she also went for a solid project and measured the efficiencies in terms of time saved and quality. “HiRo saved 60,000 hours for Consulting managers in a year”. That’s what I want to hear, a solid project with measurable outputs. She unashamedly wants to see a high return on investment. This is refreshingly businesslike.

3. Compliance  

Spot on. This is the one area where AI can be leveraged to reduce time and increase quality in processes and training. As she rightly says, it is a “perfect fit for AI”. Compliance has become a nightmare at the national and international level. AI can really help in terms of processes, support, learning and keeping everything up to date. This is one area where the data side of AI can really help.

Identify Some Quick AI Victories

Don’t procrastinate – that’s the easy choice. AI is the here and now. The tools are available and the use cases identifiable, so don’t give in to the ‘wait and see’ attitude. She explains how to proceed by winning some smaller battles, not turning it into a strategic war. This is correct. This is new, powerful technology; it has huge potency but also some vagaries. We need to proceed clearly but also with caution. What’s the point of creating an unnecessarily negative climate for future progress on the back of some failed and over-ambitious masterplan?

Choose “High-volume, repetitive tasks. Processes employees don’t enjoy. Moments that matter for employees.” This is so right. Large organisations are full of processes, which are tedious, bureaucratic, have bottlenecks, old technology solutions and are ripe for automation. They are often deeply embedded practices. Focus on these and the quick wins will come.

How to do it?

1. Start small and experiment 

This will vary across organisations but I doubt there’s a single organisation, small, medium or large that will not benefit from the use of AI across a range of uses. These need not be grandiose or abstract, such as skills recommendation prediction or large data analysis projects. They can be simple and manageable.

2. Learn as you go, fail fast & be agile 

Can’t emphasise this enough. This technology can be implemented fast against a goal. Don’t get bogged down in huge project plans – get on and do it, as the technology will get better as you progress. It is not that you may have to pivot in some way on the technology and approach – you WILL have to pivot.

3. Lead with use case, not technology

This often comes down to finding a pain point, bottleneck or hated process for a quick win. It doesn’t have to be big just impactful. Make sure your goal is clear. 

4. Cover off data and security issues

You have to establish trust and allay fears in the projects chosen. This often comes down to choosing a technology partner whose vision, goals and purpose align with your own. But some simple FAQs to vapourise the myths and calm fears are also wise.

5. Build advocates out of your employees

I have seen this for real. Young employees, let loose to use this technology and show their worth. People who do process know what’s needed to redefine, short-circuit and improve those processes. Give them the agency to suggest projects.

Clear goals

I’ve seen projects founder when it was not clear what the final goal was. AI projects tend to shapeshift, and that is fine in terms of swapping out the LLM models used and changing tactics – this is normal and easier than people think. But don’t founder on the rock of vagueness. Be clear about goals.

I’m not a fan of old-school SMART objectives, as what is needed is a solid goal and sub-goals. This FAST framework, from MIT, sums it up perfectly:

Conclusion

We are in the first phase of seeing the benefits of GenAI in organisations. Before we head off to climb that great single peak in the distance, take time to conquer a few foothills. Choose your hills carefully and once chosen, make sure you focus on them. Stay ambitious, measure that ambition and shout it from the hill top when you get there! Thanks Nickle.

Sunday, July 21, 2024

Collaborative Overload Harvard Business Review - why collaboration often sucks and what to do about it!

Neat study in Harvard Business Review on the disturbing impact of increased collaboration in the workplace on employees and organisational efficiency. 

I literally gave a sigh of relief when I saw and read this. We’ve all been there – in meetings where we know many should not be there. Everyone these days wants to schedule a Zoom meeting when an email would probably suffice. Overlong Zoom calls with far too many attendees, some paying little attention. Poorly chaired meetings, everything scheduled for an hour, sometimes longer. Too many presentations and not enough decision making.

I’ve long had the suspicion that collaboration, especially in meetings but also in learning, is over-egged and often results in poor productivity. That’s what my recent post on ‘quiet learning’ was about.

The study, based on data collected over two decades and research conducted across more than 300 organisations, found that time spent in collaborative activities by managers and employees has increased by 50% or more but here’s the rub, 20% to 35% of value-added collaborations come from only 3% to 5% of employees. These high levels of collaboration consume valuable time and resources, often leading to low performance and stress. The result is actually overburdened employees who become bottlenecks, leading to decreased personal effectiveness and higher turnover rates.

In truth, time and energy are finite, and knowledge can often be shared asynchronously, without over-long meetings. Smaller teams, more focus on goals, moving fast, shared resources – that’s the way to increase productivity.

They urge us to:

Leverage technology to make informational and social resources more accessible & transparent

Encourage employees to stop, filter & prioritise requests

Educate employees to Just Say No or allocate ‘limited’ time for requests

Promote use of informational and social resources over personal resources

Implement practical rules for email requests and meeting invitations

Use tools to monitor and report time spent on collaborative activities

Promote natural F2F collaborations by co-locating interdependent employees

Include collaborative efficiency in performance reviews, promotions, and pay raises

Neat... but it all goes a bit off beam with the idea of hiring Chief Collaboration Officers to manage and promote effective collaboration within organisations. David Graeber’s brilliant book ‘Bullshit Jobs’ comes to mind.

The study was done some time back and, if anything, I feel things have got a lot worse.

 

Recruitment is a cat and mouse game. AI gives the mice real edge! 30 ways candidates use AI to get new jobs…

I gave a talk on AI for recruitment in Scandinavia this year, showing how ‘recruitment’ was one of the first things to be hit by AI. For all the abstract talk on AI, let's get practical.

There was the famous interviewee who got a job as a space engineer by taking the online questions from the interviewer, applying speech-to-text along with insertion into ChatGPT, then coming back with answers, which he read in his own words. He had no knowledge of the domain but was offered the job! This was unusual, but candidates are almost invariably using AI to increase their chances.

In truth, anyone who recruits needs to know about what is happening here. My talk was largely about how recruiters can use AI to improve the process of recruitment but first you need to know how potential candidates are using AI. Believe me – they are!

So here’s my Cliffnotes for job seekers….

30 ways candidates use AI to get new jobs

General

Prompt for jobs you may not have thought of that match your quals/CV and experience

Write and refine speculative email/letter

Use Ad, Job description, org website, anything you can find in your prompts

CV/Letter

Personalise CV for every new job

Identify key skills for that job then adjust your CV

Rewrite CV with key points at top

Rewrite with keywords from ad/Job Description in CV

Make CV more concise/turn into bullet points

Summarise down to 1 or 2 pages

Continue to create/adjust/rewrite your CV to match Ad/Job Description

Summarise your CV for the covering letter

Create the covering letter

Critique/proofread your final CV/covering letter

In online applications rewrite entries before inputting

Create your own avatar and send with CV!

Interview preparation

Ask for top ten interview questions (interviewers are lazy)

Ask for likely questions from Ad/Job Description

Use CV to create great answers for each predicted question

Make your achievement answers more memorable

Create answers that showcase specific skills from your CV

Identify keywords from the Ad/Job Description you need to include in your answers

Create questions you may want to ask

Create credible answers for gaps in CV

Go to web site and get AI to summarise what ‘organisation’ does

Ask what challenges/opportunities the organisation faces, along with possible solutions

Interview practice

Prompt with your Job description and CV and get AI to interview/role play for the job

Try speech dialogue in ChatGPT4o for realistic spoken dialogue

Negotiation

Create a negotiation strategy for salary

Create follow up email if you haven’t heard

Create thank you email even when you don’t get job

PS

If you’re feeling uneasy with this list, then I recommend that famous quote from Catch-22…

From now on I'm thinking only of me. 

But, Yossarian, suppose everyone felt that way?

Then, said Yossarian, I'd certainly be a damned fool to feel any other way, wouldn't I?

Next post will be 30 things you can do as a recruiter…


Saturday, July 20, 2024

A plea for quiet learning for adults against the cacophony of the training room or e-learning...

David Foster Wallace wrote ‘Infinite Jest’, a huge sprawling novel that is free from the usual earnestness of the novel. You need to find the time and quiet to read its 1000+ pages. It requires effort. Here he explains why reading is now often cursory, if people read at all.

I’m not as fanatical as most about reading, especially the ‘reading Taliban’, kids spending hours and hours reading Harry Potter. I’m with Plato on this and think that being out doing stuff is often better when you’re very young. Nevertheless, he has a point about solitary silence. I want to apply this to learning.

All my life I’ve been told that learning is ‘social’, ‘collaborative’ and should be done in ‘groups’ or ‘cohorts’. Most education is assumed to benefit from being in a ‘classroom’, ‘lecture hall’ or ‘training room’. I’m not so sure. It strikes me that most learning is done afterwards in the quiet of your bedroom with homework, in the library as a student or when reading and reflecting on your own as an adult or doing your job.

Teaching and training tends to be fuelled by people who think that you need to be in their company to learn anything. This is rarely true. I much prefer being free from the tyranny of time, tyranny of location and the tyranny of transience. That means the tyranny of ‘courses’.

My least favourite, and the one I refuse to attend or have anything to do with, are those round table sessions where you choose a chair, get some vague question, discuss and feed back with flipchart sheets stuck on the wall. It's simply an exercise in groupthink. Nothing good comes from it as it's just a lazy, embedded practice.

I learn best when it is at a time and place I choose and not at some arbitrary pace set by a teacher, lecturer or trainer. I’m not wholly against the classroom, lecture or course, as it is clearly necessary to bring structure to young people’s lives for sustained attention in schools and for critical tasks. But for most learning and especially adult learning I’m not convinced.

One cognitive phenomenon that confirmed my views is the ‘transience effect’, perhaps the least known but most important effect in learning. When you watch a video, television or a movie, listen to a lecture or get talked to for a long period, you will be under the illusion that you are learning. In fact, due to your working memory’s inability to hold, manipulate and process the material enough to get it into long-term memory, you forget most of it. It’s like a shooting star – it shines bright in your mind at the time but your memories burn up behind you.

This is why I am suspicious of using a lot of non-recorded lectures, even video in learning. Even note taking is hard in a fixed narrative flow, as you’re likely to miss stuff while you take the notes and still don’t really have time to deeply process the information. I have no problem with recorded video or podcasts as I can stop and start, as I often do, to reflect and take notes.

What is annoying can be the cacophony of bad e-learning, with its noisy, cartoonish animations, visual and sound effects, especially in gamified content. I find it almost unbearable to page through this stuff, with its speech bubbles and often pointless interactions.  I posted this six years ago but you get my point…
https://youtu.be/BlCXgpozrZg

Being alone in the quiet (even music has a negative effect, especially if it has lyrics) actively reading, taking notes, stopping, reflecting and doing stuff works well because you are in the flow of your choosing, not someone else’s narrative flow. Agency matters as learning, the actual presence of knowledge and skills in your long-term memory, is always a personal experience. It requires your attention, your effort and your motivation.

For thousands of years the loudest thing we would have heard was the background noise of nature, something that we barely register now. The church bell was perhaps the loudest non-natural thing one would have heard for centuries. Now, it’s the din of traffic, sirens, the TV on, the radio on, the computer on… something’s always there. The cacophony of music in pubs, shops and public spaces drowns out thought or, worse, stimulates annoyance, even anger.



US-EU digital divide deepens as Meta and Apple hold back product – it’s a mess

I warned about this over a year ago in ‘Is the new Digital Divide in AI between the EU and the Rest of the World?’. We saw Italy stupidly ban ChatGPT, then a similar rush to get the Digital Markets Act and EU AI Act on top of the very odd GDPR legislation. As Taylor Swift says, ‘players gonna play, haters gonna hate, fakers gonna fake’. That’s what this is largely about. Technocrats and legislators feel the need to play their regulatory game, fuelled by an intrinsic dislike of technology; they rush it, and because they’ve rushed it they make up things to legislate against.

AI has not proved to be a threat to anything or anybody. We’ve had several huge elections, including the EU parliament and even the bogeyman of deepfakes did not materialise. Meanwhile, the US economy is shooting ahead of Europe. The real issue is one of productivity and economic growth, not moral hazard.

The market is dominated by the US and China; not a single EU company makes the list of leaders.

Google

Three out of the top four EU fines have been against Google. To be fair these were good anti-competition fines that were right, not the more recent laws. It is here that the EU could play a strong and worthy role. However, it has not corrected the outright theft of revenues from tax havens within its own countries, most notably Ireland, where tax revenues are literally stolen on sales from other countries.

Facebook

The negativity and legislation in the EU is now coming home to roost as Meta has decided not to release the open-source ‘multimodal’ version of its Llama models in the EU. This is a crushing blow to both research and start-ups who use these models for product. It was a reaction to deadlines set this month by the EU. Note that this was actually in response to the opacity of GDPR, not the new EU AI Act, which will bring on a whole other set of problems.

Meta have made this bold move because the EU has not provided clarity over GDPR in relation to the use of customer data to train its AI models. My guess is that this is about Facebook posts. You will not be able to use the coming multimodal version of the Llama 3 herd commercially, as it would break the licence and open you up to being sued by Meta.

Apple

Apple have already stated that it is likely that Apple Intelligence will NOT be released in the EU later this year. Your iPhone and Mac will therefore be severely restricted within the EU. In fact, Apple Intelligence is a clever attempt to protect your private data through edge computing. Apple's objection is not against the EU AI Act but the Digital Markets Act. Apple claim it puts personal data and privacy at more risk. This is the danger of premature legislation. You cut off potential research and solutions to the very problems you are trying to solve.

This is worth some reflection. The aim is to increase competition. That is admirable, but despite decades of legislation there is no evidence that restricting innovation in US companies has actually led to home-grown solutions and companies. Note that this also means that any product using multimodal Llama or Apple Intelligence cannot be sold in the EU. It has a ripple effect on consumers and companies within the EU. The overall dampening effect of the legislation seems to do as much harm to Europe as to the US. Meanwhile the Chinese make hay.

The threat for AI organisations, both public and private, is massive fines if they do not comply by August 2026 – the problem is, compliance with what? The legislation, with its huge punitive fines, is simply too odd and vague for these companies to take the risk.

X

Musk is suing the EU over their claim that the X blue tick breaches EU rules. He claims to have evidence that they asked for a secret deal. It's all very messy.

Conclusion

The EU AI Act was ratified by the European Parliament in March 2024 but a shadow lies over the whole process, as it needs a ton of guidelines. Legislation like this is necessarily vague. On top of this there is a raft of legislation beyond it that is similarly vague. This shadow will dampen research and commercial activity. I know this, as I get a ton of questions on it on almost every project.

As European influence diminishes in the world (it is only 5.8% of world’s population), the EU seems to be shooting itself in both feet, repeatedly. Rather than encourage research and the practical application of this technology to increase productivity and growth, it seems to want to play cops and robbers.

All of this is BEFORE the awful EU AI Act is enforced.


Thursday, July 11, 2024

Good discussion paper on the Role and Expertise of AI Ethicists: Bottom line – it’s a mess!


‘Who is an AI Ethicist? An Empirical Study of Expertise, Skills, and Profiles to Build a Competency Framework’ by Mariangela Zoe Cocchiaro et al.

Bottom line – it’s a mess! 

In less than two years, AI Ethicists have become common. You see the title in social media profiles and speaker bios, especially in academia. Where did they all come from, what is their role and what is their actual expertise?

Few studies have looked at what skills and knowledge these professionals need. This article aims to fill that gap by discussing the specific moral expertise of AI Ethicists, comparing them to Health Care Ethics Consultants (HC Ethicists) in clinical settings. As the paper shows, this isn’t very clear, leading to vastly different interpretations of the role.

It’s a mess! A ton of varied positions that lack consensus on professional identity and roles, a lack of experience in the relevant areas of expertise, especially technical, lack of experience in real-world applications and projects and a lack of established practical norms, standards and best practices. 

As people whose primary role is bridging the gap between ethical frameworks and real-world AI applications, relevant expertise, experience, skills and objectivity are required. The danger is that they remain too theoretical and become bottlenecks if they do not have the background to deliver objective and practical advice. There is a real problem of shallow and missing expertise, along with the ability to deliver practical outcomes and credibility.

Problem with the paper

The paper focuses on job roles as advertised, but misses the mass of people who are self-proclaimed, internally appointed or simply identified as doing the role without much in the way of competence-based selection. Another feature of the debate is the common appearance of ‘activists’ within the field, with very strong political views. They are often expressing their own political beefs, as opposed to paying attention to the law and reasonable stances on ethics – I call this moralising, not ethics.

However, it’s a start. To understand what AI Ethicists do, they looked at LinkedIn profiles to see how many people in Europe identify as AI Ethicists. They also reviewed job postings to figure out the main responsibilities and skills needed, using the expertise of HC Ethicists as a reference to propose a framework for AI Ethicists. Core tasks for AI Ethicists were also identified.

Ten key knowledge areas

Ten key knowledge areas were outlined, such as moral reasoning, understanding AI systems, knowing legal regulations, and teaching.

K-1 Moral reasoning and ethical theory  

● Consequentialist and non-consequentialist approaches (e.g., utilitarian, deontological approaches, natural law, communitarian, and rights theories). 

● Virtue and feminist approaches. 

● Principle-based reasoning and case-based approaches. 

● Related theories of justice. 

● Non-Western theories (Ubuntu, Buddhism, etc.). 

K-2 Common issues and concepts from AI Ethics 

● Familiarity with applied ethics (such as business ethics, ecology, medical ethics and so on).

● Familiarity with ethical frameworks, guidelines, and principles in AI, such as beneficence, non-maleficence, autonomy, justice and explicability (Floridi & Cowls, 2019). 

K-3 Companies and business’s structure and organisation 

● Wide understanding of the internal structure, processes, systems, and dynamics of companies and businesses operating in the private and public sectors. 

K-4 Local organisation (the one advised by the AI Ethicist) 

● Terms of reference. 

● Structure, including departmental, organisational, governance and committee structure.  

● Decision-making processes or framework. 

● Range of services.  

● AI Ethics’ resources include how the AI Ethics work is financed and the working relationship between the AI Ethics service and other departments, particularly legal counsel, risk management, and development.  

● Knowledge of how to locate specific types of information. 

K-5 AI Systems  

● Wide understanding of AI+ML technology’s current state and future directions: Theory of ML (such as causality and ethical algorithms) OR of mathematics on social dynamics, behavioural economics, and game theory 

● Good understanding of other advanced digital technologies such as IoT, DLT, and Immersive.  

● Understanding of Language Models – e.g., LLMs – and multi-modal models. 

● Understanding of global markets and the impact of AI worldwide.

● Technical awareness of AI/ML technologies (such as the ability to read code rather than write it). 

● Familiarity with statistical measures of fairness and their relationship with sociotechnical concerns.  

K-6 Employer’s policies 

● Informed consent. 

K-7 Beliefs and perspectives of the stakeholders 

● Understanding of societal and cultural contexts and values.  

● Familiarity with stakeholders’ needs, values, and priorities.  

● Familiarity with stakeholders’ important beliefs and perspectives.  

● Resource persons for understanding and interpreting cultural communities.

K-8 Relevant codes of ethics, professional conduct, and best practices  

● Existing codes of ethics and policies from relevant professional organisations (e.g. game developers, software developers, and so on), if any.

● Employer’s code of professional conduct (if available).

● Industry best practices in data management, privacy, and security. 

K-9 Relevant AI and Data Laws 

● Data protection laws such as GDPR, The Data Protection Act and so on. 

● Privacy standards.  

● Relevant domestic and global regulation and policy developments such as ISO 31000 on risk.  

● AI standards, regulations, and guidelines from all over the world.  

● Policy-making process (e.g., EU laws governance and enforcement). 

K-10 Pedagogy  

● Familiarity with learning theories.  

● Familiarity with various teaching methods. 

Five major problems

They rightly argue that AI Ethicists should be recognised as experts who can bridge ethical principles with practical applications in AI development and deployment. Unfortunately such people are thin on the ground. It is a confusing field with lots of thinly qualified, low-level commentators appointing themselves as ethicists.

  1. Few, in my experience, have any deep understanding of moral reasoning and ethical theories or applied ethics. 
  2. As for business or organisational experience, few seem to have held any real positions relevant to this role within working structures. 
  3. Another, often catastrophic, failing is the lack of awareness of what AI/ML technology is, along with the technical and statistical aspects of fairness and bias.
  4. A limited knowledge even of GDPR is often apparent, as well as of the various international dimensions to the law and regulations.
  5. As for pedagogy and teaching – mmmm.

Conclusion

To be fair much of this is new but, as the paper rightly says, we need to stop people simply stating they are ethicists without the necessary qualifications, expertise and experience of the practical side of the role. AI Ethicists are crucial for ensuring the ethical development and use of AI technologies. They need a mix of practical moral expertise, real competences in the technology, a deep knowledge of the laws and regulations, and the ability to educate others to navigate the complex ethical issues in AI. At the moment the cacophony of moralising activists needs to give way and let the professionals take the roles. Establishing clear competencies and professional support structures is essential for the growth and recognition of this new profession.


My favourite AI quote....

This is my favourite AI quote, by E.O. Wilson:

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

There’s a lot to unpick in this pithy and brilliant set of related statements.

By framing the problem in terms of our evolutionary, Paleolithic legacy, as evolved, emotional beings, he recognises that we are limited in our capabilities, cognitively capped. Far from being exceptional, almost all that we do is being done, or will be done, by technology. This we refuse to accept, even after Copernicus and Darwin, as we are attached to emotional thought, beliefs rather than knowledge, emotional concepts such as the soul, Romanticism around creativity and so on. We are incapable of looking beyond our vanity, of having some humility and getting over ourselves.

To moderate our individualism, we created Medieval institutions to dampen our human folly. It was thought that the wisdom, not of the crowd, but of middle managers and technocrats, would dampen our forays into emotional extremes. Yet these institutions have become fossilised, full of bottlenecks and groupthink that are often centuries old, incapable of navigating us forward into the future. Often they are more about protecting those within the institutions themselves than serving their members, citizens, students or customers. We lack trust in global institutions, political institutions, educational institutions, businesses and so on, as we see how self-serving they often become.

When Godlike (great word) technology comes along and threatens either ourselves or our institutions, we often react with a defensive, siege mentality. Generative AI in Higher Education is seen as an assault on academic integrity, using generative tools as an attack on writing, getting help as leading to us becoming more stupid. All sense of proportion is lost through exaggeration and one-sided moralising. No high horse is too high for saddling up and riding into the discussion.

Wilson’s final point is that this produces an overall crisis. With Copernicus it led to printing, the Reformation, Enlightenment and Scientific Revolution. With Darwin, Church authority evaporated. With the Godlike technology of AI, we have created our own small Gods. Having created small Gods, they help us see what and who we are. It is seen as an existential crisis by some, a crisis of meaning by others. At the economic level it is a crisis of tech greed, unemployment and inequality.

But it’s at the personal level where the Paleolithic emotions are more active. As Hume rightly saw, our moral judgements are primarily emotional. That is why many, especially those who work in institutions express their discontent so loudly. Technology has already immiserated blue collar workers, with the help of white collar institutions such as business schools. It is now feeling their collar. AI is coming for the white collar folks who work in these institutions. It is actually ‘collar blind’ but will hit them the hardest. 


Wednesday, July 10, 2024

Lascaux: archaeology of the mind - a LIM (Large Image Model) and a place of teaching and learning


Having written about this in several books, it was thrilling to finally get to Lascaux and experience the beauty of this early spectacular form of expression by our species. The Neanderthals had gone and this was the early flowering of Homo sapiens.

This is archaeology of the mind, as these images unlock our cognitive development to show that over tens of thousands of years we represented our world, not through writing but visual imagery, with startlingly accurate and relevant images of the world we lived in – in this case of large animals – both predator and prey – of life and death.

As hunters and gatherers we had no settled life. We had to survive in a cold Ice Age climate when there were no farms, crops and storage, only places small groups would return to after long nomadic journeys on the hunt. It was here they would converge in larger groups to affirm their humanity and, above all, share, teach and learn.

Cave as curriculum

After a period of disbelief, when it was thought that such images were fraudulent and could never have been created by hunter-gatherers tens of thousands of years ago, we had lots of perspectives: from Victorian romanticism of cave 'art' through to shamanic, drug-induced experiences, finally moving towards more practical, didactic interpretations.

The didactic explanations seem right to me and Lascaux is the perfect example. It was much used, purposeful and structured. A large antechamber and learning journeys down small branched passageways show narrative progression. Like early churches, it is packed with imagery. Movement is often suggested in the position of the legs and heads, perspective is sometimes astounding and the 3D rock surface is used to create a sculptural effect. You feel a sense of awe, amazement and sheer admiration.

Narratives are everywhere in these exquisite paintings. Working from memory, they created flawless paintings of animals standing, running, butting and behaving as they did in the wild. They dance across the white calcite surface, but one thing above all astounded me – they made no mistakes. They used reindeer fat and juniper, which does not produce sooty smoke, to light the cave, along with scaffolding and a palette of black (manganese) and a range of ochres from yellow to orange and red. Flints scored the shapes, while fingers, palms, ochre pencils, straws, spitting techniques and stencils were used to shape, outline and give life to these magnificent beasts.

Learning journey

Entering the large rotunda with its swirl of huge bulls, horses and stags, you get a sense of the intensity of the experience, the proximity to these animals, their size and movement. But you are also drawn to the scary, dark openings of two other exits.

The first has a foreboding warning – the image of a bear with its claws visible just next to the entrance. One can imagine the warning given by the teacher. Then into the hole of darkness of what is now called the Sistine Chapel of cave painting, a more constricted passage, just wide enough in places for one person to pass, with images much closer to you. At the end, a masterful falling horse on a pillar of rock, which you have to squeeze around, then into an even more constricted long passage with predatory lions. The narrative moves from observing animals in the wild to their death and finally to the possibility of your own death from predators.

Choose the other side passage and you get a low, crouching passage; at one point there is a round room, full of images, and at the back, after a climb, a steep drop into a hidden space where a dead man (the only human figure in the entire cave) lies prone, the charging bison’s head low and angry, its intestines hanging out. Beside the man lies a spear thrower, and the spear is shown across the bison’s body. Beside him is a curious bird on a stick.

What is curious are the dozens of intentional signs, clearly meaningful, often interpreted as the seasons, the numbers of animals in a group and so on. It is proto-writing, and the signs have a teaching and learning purpose.

The cave is a structured curriculum, an ordered series of events to be experienced, gradually revealed and explained to the observer in a dark, flickering, dangerous world.

Setting the scene

Let's go back to the cave opening again. As you enter, there is a strange, hybrid creature that seems to ask: what is this? What animal could it be? The point may have been that we see animals as first glimpses, often at a distance, and must learn to identify what they are – predator or prey? It has circular markings on its body, long straight horns and what looks like a pregnant belly. This seems like the spot where an experienced hunter would explain that variability in markings, horns and body shape, along with knowledge of colour and breeding seasons, matters to the hunter.

Expertise was rare, as people died young. The known had to be passed down the generations, not just by speech and action but permanently, as images that told stories. This was a way of preserving that rare commodity – cultural capital.

Basic skills

As you enter the huge ante-chamber, which could have held the entire hunter and gatherer group, you literally walk in and find yourself beneath a huge herd of animals. It would have been a surprise, not possible in the real world, a simulation. This is an introduction to many different species.

It has been carefully composed. You are in a representation of the real world, a simulation that focuses on what matters, what you must avoid as predators, and kill as prey. It needed a huge communal effort, as scaffolding had to be manufactured and built, materials gathered and skilled artists themselves trained and selected. This is an organised group, creating an organised venue for organised learning.

The effect of large animals coming at you out of the dark, within the confines of a cold cave, would have been terrifying, like being in a horror movie, the flickering lamps revealing a horned head here, a tail there. It is as if they understood the ideas of attention, simplicity of image and their impact on learning and memory.

Hunting

As hunters, late Palaeolithic people tended to hunt a specific species at any one time of the year. This matches the imagery, where one can stop at an image of one species (they had to enter difficult passages with small lamps) and move from one species to another sequentially. There are narrative structures within the images: breeding pairs, animals in motion, different seasonal coats. At the end you encounter a masterpiece – the falling horse, with a bloated stomach, dead.

Break-outs

In another long side cave, like a long break-out room, the images are entirely different, a bewildering set of outlines and scores that suggest a more direct telling of what it is to hunt. Like a huge blackboard, they have been drawn and overdrawn by what seem like more improvisational hands. Here, I think, they were explaining the details of hunting. This was the chalkboard lecture hall. It is low and requires one to crouch, or more likely to sit and be taught.

New teachers clearly overwrote the work of those who came before, as there were no board cleaners! There is a huge range of animals in these drawings – horses, bison, aurochs (bulls), ibexes, deer, a wolf and a lion. They are often partial images, especially heads, which suggests some specific points were being made about what you need to look for as a hunter. It is a series of drawings overwriting the earlier work, made over a long period by different people.

In this area there is a shaft; climb down and there is a black scene of a human figure lying prone beneath a wounded bison, its intestines hanging out, its head low as it charges. This is the only image of a person in the whole cave. Flint-knapping debris and ochre-covered flints were found here, indicating the teaching of tools for butchering. One can imagine this being a specific, final lesson – kill or be killed.

Sound and speech

What is missing are the sounds of the teachers and learners. But even here we have some clues. One image, called the Roaring Stag, is prominent. I have heard this in the Highlands of Scotland while camping in winter. The noise is incredible, like wolves all around you. It is likely that these sounds would have been simulated in the cave, an intense and frightening amplifier. You can imagine people in the dark suddenly frightened by the sound of rutting stags.

Communal knowledge

I wrote about this in my books on AI and 3D mixed reality, as these caves tell us something quite profound: that learning, for millions of years, was visual. We were shown things. Vision is our primary sense and, as learning was about the world we knew and the skills we had to master, images were our content. But we also have meaningful symbolic communications – not yet writing as we know it, but an account of sorts and a sense of number.

Additionally, this was the first example of a communally shared learning experience. What we learnt and knew was not owned by others. It was a shared dataset, brought together by the whole group, to be shared for mutual benefit. It took a huge communal effort to create the first LIM (Large Image Model). There were no arguments about who drew or owned what, no ethical concerns about the dangers of sharing our data, just the need to share to survive and thrive.

Conclusion

Altamira was my first illustrated cave, many years ago. I can still remember the shock of that experience, a visceral surprise. Lascaux is even more wondrous. These places reveal the awakening of our species, Homo sapiens, the ‘knowing man’, when we began to teach and learn, preserving and passing on our cultural knowledge. We became smarter and more cunning. The Neanderthals, who dabbled in such cave representations, were already long gone. We had separated the known from the knower, so that it could be passed on to many others. We were on the path to externalising ideas, refining them, reflecting upon them and using this knowledge to create new things, moving from tools to technologies: farming, writing, printing, the internet and AI. We became Homo technus.