Sunday, September 15, 2024

Art and AI - a radical jump?

I spent 10 years as the Deputy Chair of a large Arts Organisation, under two University Vice Chancellors and then the journalist Polly Toynbee. Only once in all that time did I ever hear the ‘theory’ of art being mentioned, in a bar with the brilliant CEO Andrew Comben. Yet it lies at the heart of debate around AI and art.

In Vienna last week for the ‘Secession’ exhibition, I also saw art in the Belvedere and Leopold galleries. Fabulous art from the turn-of-the-century Secessionists, who broke with the ‘Academy’ to forge radically new forms of expression. Klimt, then Schiele, were explicitly sexual, wildly expressionist, and within a few years had changed art forever. Reflecting on the impact of AI on art, there are parallels.

Past begets the future

I’m not convinced by arguments that see AI as stealing from the past. All content creation involves taking from the past to create the new. Schiele took from Klimt; Klimt took from Japanese and Byzantine mosaic art. No matter how radical the jump, all new content takes from old content; all content emerges, to a degree, from previous content.

This is the essence of GenAI, which takes a vast corpus of text or visual data and allows us to create new media, freshly minted worlds, simply by asking. When we prompt, now with Chain of Thought and reasoning, we co-create with AI to make something new. When we prompt to create an image, we create something that has never been seen before. We can also create new music and videos. I find creating art with the help of AI exhilarating.

Forward movements get backlash

Just as Klimt and Schiele were new and strange, breaking the rules, so AI seems strange and alien. Egon Schiele had his art banned by the Nazis as ‘degenerate’, but it was resurrected in the English-speaking world in the 1960s. There is a sense, obviously not nearly as strong, of a new Puritanism around AI and art, especially around the word ‘creative’. Human exceptionalism was knocked by Copernicus, then Darwin. We are not at the centre of the Universe, merely a rock orbiting the sun, and an evolved animal. AI poses another such challenge, showing that our cognitive capabilities are far from unique. A simple calculator put paid to that idea for arithmetic, and AI is now pushing into areas such as critical thinking, creativity and intelligence.

Late 18th and 19th century Romanticism promoted the idea that creation is a uniquely human endeavour, human exceptionalism, when all along it has been a drawing upon the past, deeply rooted in what the brain has experienced and takes from its memories to create anything new. Now that we can draw from the much deeper well of human culture, a vast collective memory of our cultural past, new acts of Postcreation are possible.

Just as the academy reacted badly to the shift then, so we react badly to the shift with AI. The ‘new’ becomes the ‘new old’. The history of art is one of taking from the past to help create the future. It was Schiele who said there is “no such thing as Modern Art”. He had a point.

What is Art?

What is rarely discussed when considering AI is aesthetics. What is Art? It is often assumed to be obvious. It is not. Theories of Art matter, as they have deep implications for discussions around AI-produced Art. Yet many would be surprised to find that most theories of Art can be used to support the AI production of art. Let’s consider the top ten:

1. Art as Expression

There are theories based on the artist: art as the expression of the artist's emotions, feelings, or inner experiences (Tolstoy, Collingwood). Since AI lacks consciousness and subjective experience, it cannot genuinely express emotions, and without emotional intent, AI-generated works fall short of being true art under this theory. Even here, some claim that the emotions of the human prompters or users influencing the AI could imbue the work with expressive qualities. CON AI

2. Art as Communication

A more general form of expressionism holds that art is a communications medium through which artists convey ideas, emotions, or narratives to an audience. AI may lack consciousness and cannot intend to communicate, but if viewers interpret meanings or emotions from AI art, a form of communication occurs. The human input in guiding the AI could also be seen as a conduit for human-to-human communication through the AI medium, an act of communication. PRO AI

3. Art as Experience

Art here is rooted in the experiential interaction between the artist, the artwork, and the audience (Dewey). It emphasises the continuity between art and everyday life, suggesting that art arises from and contributes to the experiences of individuals within their environment.

Under Dewey's theory, AI-produced art is art if it facilitates a meaningful experience for both the creator (prompter or user) and the audience. The interactivity and engagement prompted by AI art align with this experiential idea. PRO AI

4. Art as Representation

Art as an imitation or representation of reality (Plato, Aristotle) was long the commonest theory of art. It can be used to support AI-generated art, since AI generates art from large datasets, effectively imitating the styles, themes and techniques of human art. AI-produced art qualifies as art because it replicates or reinterprets aspects of reality. PRO AI

5. Aesthetic Theory of Art

Under the Aesthetic theory (Kant, Beardsley), Art is anything that provides an aesthetic experience to the viewer. The focus is on the viewer and the experience of the audience, so if AI-produced works arouse an aesthetic response, they fulfil the criteria of this theory and can be appreciated as art. PRO AI

6. Art as Formal qualities

Art here is defined by some through its formal qualities—composition, colour, line, shape, and other aesthetic elements (Bell, Greenberg). AI can create works with compelling formal properties that induce aesthetic appreciation. If art is appreciated solely for its formal aspects, regardless of the creator's identity or intent, AI-generated pieces can be considered art. PRO AI

7. Art as Cognition

Art serves as a means of understanding, knowledge acquisition and intellectual exploration (Goodman, Langer). These theorists argue that art contributes to cognition by presenting new perspectives and insights, functioning similarly to language in conveying ideas. AI art can certainly offer new perspectives or challenge perceptions, contributing to cognitive engagement, especially with Chain of Thought. PRO AI

8. Art as Historical 

There is also the Historical definition of Art (Levinson), where Art is seen as an artifact intended to be regarded in a way previous artworks were regarded. AI art often references or builds upon existing artistic styles and movements. If AI-generated works are intended (by users) to engage with art history, they fit within this historical continuum. PRO AI

9. Institutional Theory

The Institutional theory defines Art as whatever the art world—artists, critics, curators, galleries—recognises as art (Dickie, Danto). As AI-generated works gain acceptance in galleries, museums and auctions, they attain the status of art within the institutional framework. The theory supports the inclusion of AI art as long as it is embraced by the art community. PRO AI

10. Postmodern Theories

Postmodernism defies singular definitions and embraces plurality, challenging notions of originality and authorship. Its scepticism towards grand narratives and embrace of appropriation and pastiche align with AI art's remixing of existing data. AI art can become a postmodern exploration of creativity and authorship and aligns with postmodern themes of challenging originality and embracing plurality. PRO AI

AI-produced art supported by 9/10 theories

Only one theory, Expression, depends on authorship and intentionality, on the consciousness and intent of the artist. Even here, when a human is involved, it quickly opens itself up to the intention of the user of AI, as a tool, to communicate (Communication), understand (Experience), collaborate with and create art. Theories that focus on the spectator's experience (Representation, Aesthetic) also support the inclusion of AI art: if the artwork elicits an aesthetic or emotional response, its origin becomes secondary to its impact on the audience. With theories that place art on the shoulders of form, process and cognition (Formal, Cognitive), AI can certainly have its place. Similarly with historical, cultural and institutional acceptance (Historical, Institutional), which suggests that as long as art is part of a historical process, or the art world accepts AI-generated works, they are Art. The increasing presence of AI art in mainstream venues confirms its growing acceptance and challenges traditional gatekeeping.

AI art certainly raises questions about what constitutes creativity. Is AI merely recombining existing data, or is it generating new forms? The debate involves redefining creativity to potentially include generative processes, in collaboration or independently. 

Art and orthodoxy

As AI continues to advance and integrate into the art world, these discussions will shape our perception of art in the digital age. This may disturb some, but shifts in Art have often agitated the orthodoxy. The rise of AI art challenges us to discuss the role of technology in human culture and the value of human input in art. It challenges us to consider whether art is a uniquely human endeavour or whether machines can partake in artistic creation. With AI art, the artist’s role may shift from creator to curator or facilitator. This transformation aligns with Postmodern Theories that question traditional authorship and embrace collaborative or decentralised creation processes. It may even, in time, transcend these theories. Art’s destiny may be transcendence.

Conclusion

The many peoples, cultures and languages of the world can join in this communal effort, not to fix some utopian idea of a common set of values or cultural output, but to create beyond what just one group sees as art. This could be a more innovative and transformative era, a future of openness, a genuine recognition that the future is created by us, not determined wholly by our limited national cultural present but drawing upon all cultures and the past. AI is not the machine; it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue. This new aesthetic world, or new dawn, may be more communal, drawing from the well of a vast shared, public collective. We can have a common purpose of mutual effort that leads to a more co-operative, collaborative and unified endeavour, something much more profoundly communal.

Technological shifts have transformed art, with new tools and media. They have also democratised art making it easier to produce and more accessible. Entirely new forms of art have emerged, including photography, film, digital art, and interactive installations. Technology also changes how audiences engage with art, introducing interactivity and personalisation. 

Technological inventions have continually reshaped the landscape of art, pushing the boundaries of creativity and expression. Physical technologies introduced new materials and methods, allowing artists to explore uncharted aesthetic territories. Digital technologies have further expanded these horizons, enabling virtual experiences and global connectivity. As technology evolves, it challenges artists to adapt and inspires them to innovate, ensuring that art remains a dynamic reflection of human ingenuity and captures the zeitgeist of each era. The symbiotic relationship between art and technology not only propels artistic evolution but also invites society to reconsider the possibilities of creation and the essence of what art can be. An AI Warhol is bound to emerge. 

Friday, September 13, 2024

Has ChatGPT o1 just become a 'Critical Thinker'?


Critical thinking was a famous 21st century skill that everyone thought AI could never master. It just has. We are only just beginning to glimpse what many predicted: that AGI is getting closer.

What we have just witnessed is nothing less than a cognitive breakthrough from text generation to reasoning. Note that not all problems and tasks are to do with pure reason but the charge made against GenAI was that it could not plan and reason. That door has been blown wide open.

New scaling model

But what really matters is that this opens up a new future for AI, as it can be scaled. It was thought that we had hit a ceiling on scaling, or at least got little gain for huge scaling. It may well be that OpenAI have hit a ceiling on model scaling and have switched to this approach - we don't know. Inference (reasoning) can now be scaled, opening up a new world of real intelligence. Sure, there will be glitches, but that’s the point of releasing these models. We will now get hundreds of thousands of real use cases in the real world. The old days of releasing a perfect product are gone.

Beyond the chatbot

This shift has moved OpenAI beyond chatbots to agents. You’re as likely to ask it to DO something as to answer a simple question. It can do maths and science, but just as importantly it can do critical professional tasks and projects, the bread and butter of the professions, whether in health, law, finance, HR or L&D. So many more use cases are opened up by this functionality as it moves from single-shot or linear queries to solving complex problems.

Critical thinking


Critical thinking was one of the famous 21st century skills that everyone thought AI could never master. If we see critical thinking as a reasoned and critical approach to solving problems, it is here. In fact, ChatGPT o1's performance in maths and reasoning is extraordinary. It literally goes away to think, then comes back with potential solutions. Remember that this model is doing fantastically advanced maths at the level of the Maths Olympiad and problem solving on quantitative tasks. The same will be true of qualitative tasks and, more commonly, a mixture of both.

At a more specific level, these new models can determine an optimised L&D intervention based on information provided about the goals, learners, learning and resources. It would avoid the all-too-common solution of slabbing out a course.

Coworker

They can predict and plan the stages of projects, and share those plans for comment to allow projects to develop. In more qualitative tasks, such as those one finds in the real world of organisations, domain knowledge also matters. This is where things will develop quickly.

You have to see this new approach not as providing simple solutions to single prompts but as predicting and planning multi-stage tasks, with far more penetrative judgement. You can get it to do the market research or needs analysis, then scenario analysis to evaluate potential outcomes, then a risk assessment. It will also surface uncertainties and risks as decisions are made across the project, using risk analysis to manage it. Users will still be able to steer things, but in an insightful and informed manner.

A typical problem may be the introduction of AI into workplace learning. One would want to consider all the critical and non-critical learning tasks, choices available on learning delivery, based on the types of learning, nature of the learners, geographic and language spread and available resources. On top of this regulatory and legal issues would also have to be considered along with available resources, perhaps an estimated budget. This is the sort of project that will be executed by these new models with caveats along the way.

If one were to plan a course in education or in the workplace it could do an excellent job in planning an optimised approach that considered the critical points, tasks and pedagogic approach.

Skills assessment will be so much better, as identifying skills, skill gaps and solutions to the organisation’s needs is a complex task.

Automation

You can see how these agents will massively improve customer support, actually focusing on and solving customer problems, not working to limited scripts. And, by the way, the translation capabilities have also been improved. Performance support will be SO much better: when you get stuck, it will solve your problem in the workflow. These systems will give more targeted and realistic solutions to your need at that moment.

Reason

Reason is not everything and it almost always has a context and involves domain knowledge in the real world. That's why the coworker approach is likely for the foreseeable future. There may also be limits to the Chain of Thought approach. We do not yet know much about costs, context window, limits on the 'thinking' process. What we do know is that this is a breakthrough. There will be others.

Conclusion

Many of the old criticisms have been dramatically reduced. OpenAI's o1 model is now being put through its paces in professional services, where reflection and reason really matter. Before coming back to you, it goes into its own private chain of thought. This means that the longer it thinks, the better it is likely to be on reasoning tasks.

Everyone is rushing to bin their AI slides. Can't do maths? Can't plan? Can't reason? How many Rs in strawberry? Limitations are being eliminated week by week. This is a new paradigm for AI.

We are only just beginning to glimpse what many predicted: that AGI is getting very close. This is one step for mankind but a giant leap for AI.

 

Friday, September 06, 2024

AI is a provocation – that explains a lot of the bizarre reactions


Huge numbers are using ‘AI on the SLY’ in schools, Universities and workplaces. Institutions will try to preserve the problem to which they are a solution. The institutions are protecting themselves against their own learners and employees.

State of play

Even if we stopped right now, randomised controlled trials show large performance gains in real institutions for management, administration, ideas generation, translation, writing and coding using AI. But opinion among insiders (different from outsiders) has coalesced around the idea that AI will experience continuing rapid development for years. GPT5 is a real thing, we have not exhausted scaling, and many other techniques are in play, stimulated by huge investment and a recognition that this is now a massive shift in technology with huge impacts on all our lives in culture, economics, education, health, finance, entertainment and, above all, the nature of work itself. That’s why it will change why we learn, what we learn and how we learn.

Provocation

We would do well to ask why this has happened. I suspect it is because Generative AI is a brilliant ‘provocation’, not just because it is the fastest adopted technology in the history of our species, but because it is its greatest challenge. It destabilises the establishment, especially those who want to control power through knowledge. That’s why the reaction has been strongest among technocrats in Government and academia. Institutions just can’t deal with it.

They can’t deal with the fact that it behaves like us, because it is ‘us’. It has been trained on our vast cultural legacy. It throws traditional transfer-of-knowledge thinkers because they think they are in the game of teaching the ‘truth’ (as they see it), when it actually behaves like real people. They expect a search engine and get dialogue. Used to telling people what they need to know and do, they see the technology as a threat to their positions.

To be honest, it is pretty good at being traditionally smart – only 18 months in and it’s passing high-stakes exams in most subjects, now performing better than humans in many. And when it comes to the meat and potatoes of a largely ‘text-based’ education, it can do most of what we expect the average pupil or student to do, in seconds. We are meant to be teaching people to be critical, but anyone with a modest critical outlook, on their first use of GenAI, thinks ‘Holy shit… much of what I’m learning and do in my job can be done quickly, in most languages, at any time – and it is free.’ It makes learners think: is all of this essentially text-based learning worth my while? This is exactly what was uncovered in a recent Harvard student survey, where almost every student admitted to using AI, as well as having a rethink about their education and careers. LINK

A lot of what learners are doing in school, college or work has started to seem a little predictable, tired, often irrelevant. All of that time writing essays, doing maths you’ll never use. It takes 20 years of pretty constant slog to get our young people even remotely ready for the workplace. We send our kids to school, from 5-25, yet almost all of this time is spent on text - reading and writing oodles and oodles of text. Most vocational skills have been erased from curricula. No wonder they react by gaming the system as they are assessed on text only.

All of this effort playing the cat-and-mouse game around text consumption and production, rather than reflecting on the purpose of education, has led to a certain suspicion about this 20 years of schooling. It is hard to maintain enthusiasm for a process that has become odder and odder as the years go by. This is not to say that text is unimportant, only that it is not everything and, very often, a poor form of assessment.

Morality police

Another symptom of institutions not being able to cope with the provocation is the endless moralising by paid ‘AI and Ethics’ folk, and their assumption that they hold the keys to ethical behaviour when the rest of us do not. In fact, they are often no more qualified on the subject, show massive biases towards their own ‘values’ and often have a poor understanding of how the technology actually works. It’s a heady cocktail of pious moralising.

Worse still is their assumption that we, and by that I mean teachers, employees and learners, don’t really know what we’re doing and need to be educated on matters of morality. They speak like the morality police, a well-paid professional cadre, demanding that we obey their exhortations. We poor amoral souls need their help in behaving the right way. Sure, there are ethical issues, but that does not mean Iranian levels of policing.

Conclusion

If we really want our young people to become autonomous adults, and that is what I think education is about, then AI can help foster and support that autonomy. It already gives them amazing levels of agency, access and support. Let’s see how we can help them become more autonomous and use these tools to achieve their goals, not stuff that can be done in seconds by machines. Treat learners like machines and they’ll behave like machines. Treat learners like autonomous people and they’ll use the machines.


Founder v Manager mode - the meme

At a recent YC event, Brian Chesky’s talk challenged the conventional wisdom on scaling companies. It has become a massive meme. It struck a nerve with me and confirmed what I have experienced over 40 years in business. I've built and sold businesses, invested in and got involved with others. Recently, I was in a business that we grew from scratch and sold to Private Equity, and I saw all of this play out: advisors shoved down our throats, bad advice, mostly from people who did not know the sector or market. I got out quickly as I saw their behaviour on day 1 - riding roughshod over everyone, ignoring obvious points.

Back to Chesky. He shared how the advice he received 'hire good people and give them space' was disastrous for Airbnb. After studying Steve Jobs' approach, Chesky developed a more hands-on style that has been successful, as reflected in Airbnb’s strong financial performance.

Many founders echoed similar experiences, realising that the advice they received was tailored for managers, not founders. This highlighted two different modes of running a company: 'founder mode' and 'manager mode'. Unlike managers, founders need to stay deeply involved in key areas of their companies, even as they grow.

His talk revealed that there’s little formal knowledge about founder-led scaling. Founders have been left to figure it out themselves, but successful examples like Chesky’s show the importance of founders staying connected to their teams and not relying solely on delegation. This approach, while more complex than traditional management, often works better for fast-growing companies.

1. Don’t blindly follow conventional advice
The typical advice of "hire good people and give them space" doesn’t always work for founders. Be cautious when others tell you how to scale your company. This, for me, is the number one rule. Advisors are often stuck in their own world. I've never had a 'mentor' or 'coach'. I've seen success come from NOT taking the obvious advice. Advisors are so often just groupthink people, and you can find that stuff in seconds on ChatGPT.

2. Embrace 'Founder Mode'

Founders should run companies differently from professional managers. Stay involved in important details and don't feel pressured to delegate everything. Yip. You're not playing the game; you're trying to reinvent the game and get some edge. Oh, and joint CEOs never work!

3. Don’t switch to a manager mindset

Just because your company is growing doesn’t mean you should operate like a corporate manager. Your unique insight as a founder is key, so keep it in play. Perhaps the most important bit of advice - as soon as you descend into managerial mode, you lose the difference you're trying to make.

4. Have skip-level meetings

Don’t just talk to your direct reports. Get to know people further down the line. It helps maintain the company culture and keeps you in touch with what’s really happening. Yip, walk the floor - listen and act. Introduce your customers to as many people as possible. I used to have a stations of the cross tour, constantly taking customers round showing them the production process.

5. Develop your own style

Try out unconventional ideas, as Jobs did at Apple, to keep things fresh and keep the company agile. So important. Ring the changes and make it seem exciting. This does not mean silly team-building events in escape rooms. Be yourself and don't follow leadership-course BS about empathy being everything - it's not.

6. Watch out for professional fakers

Be careful of people who seem great at "managing up" but don’t actually bring much value. Make sure your team truly shares your vision. There's a ton of 'let me grow your business' and 'entrepreneur courses' around. They're largely BS.

7. Delegate carefully

As your company grows, you’ll need to delegate, but make sure you do it based on trust that’s been earned, not just because it’s expected. This is so right. Delegation is one thing, losing control or low performance is another.

8. Trust your instincts

Don’t let others make you doubt your gut feelings, even if professional managers or advisors disagree with you. As a founder, your perspective is valuable. I remember bold decisions, like forking code or switching out of our established sector - sometimes bold decisions have to be made.

9. Don’t misuse founder mode

Once the idea of “founder mode” becomes more popular, be careful not to use it as an excuse to avoid delegation. Also, watch out for non-founders trying to adopt it in the wrong way. When a PE company comes in you get a crop of largely useless 'advisors', often people from different sectors giving you sage advice - it's formulaic and never sage. In one case, in a hugely successful company, after hearing a series of these, I cashed out. The company is now run by spreadsheet bods.

10. Keep evolving

As your company scales, keep reassessing how you want to progress. You may need to adapt, but stay closely connected to the core vision of your business. I've seen this perseverance in people who love what they do and build over time. This is important. When AI, for example, is seen as a productivity amplifier, get on with it; don't wait for reports.

Don't be scared to get out when the money guys come in. If you're bought, take the money and go off and do something interesting. It gives you the freedom to do precisely that. Don't become a manager of someone else's business - you will hate it.




Thursday, September 05, 2024

Man who coined the phrase ‘Postmodern’ is often forgotten - Jean-François Lyotard

More fashionable names in Postmodern theory include Foucault, Derrida, Baudrillard and Barthes. But the man who coined the phrase ‘Postmodern’ is often forgotten. It was Jean-François Lyotard in The Postmodern Condition (1979), yet he often gets ignored, as his views contradict his hipper colleagues. 

As a far-left activist and academic in France, Algeria and the US (in the Critical Theory Department of the University of California, then Emory University), he explored the impact of postmodernity on a wide range of subjects: philosophy, epistemology, science, art, literature, film, music and culture.

Meta- and Mini-narratives

His alternatives to ‘meta-narratives’ are personal ‘mini-narratives’ that reduce knowledge to personal experience. Objective, empirical evidence is trumped by lived experience, so that the mini-narratives of individuals and groups are placed above those of science, general ethics or society as a whole.

We see in Lyotard an explicit epistemic relativism (belief in personal or culturally specific truths or facts) and the advocacy of privileging ‘lived experience’ over empirical evidence. We also see the promotion of a version of pluralism which privileges the views of minority groups over the general consensus of scientists or liberal, democratic ethics which are presented as authoritarian and dogmatic. This is consistent in postmodern thought.

In The Postmodern Condition: A Report on Knowledge (1979) Lyotard explores the transformation of knowledge in postmodern society, focusing on how the decline of meta-narratives and the rise of performative knowledge affect educational practices. The Differend: Phrases in Dispute (1985) elaborates on the idea of different discourses and the importance of acknowledging differences in understanding and communication, relevant to educational contexts that value diverse perspectives.

Like Foucault, Jean-François Lyotard’s views on teaching and learning reflect his broader critique of knowledge and society. With the decline of grand narratives and the rise of performative, context-dependent knowledge, he pushed for an educational approach that values diversity, adaptability, and critical engagement. He also encouraged a move towards more flexible, pragmatic, and technologically integrated forms of learning that respond to the complexities of the postmodern condition.

The grand narratives of the Enlightenment (meta-narratives) that once legitimised knowledge are in decline. Universal reason and progress no longer rule and this decline affects how knowledge is produced and transmitted. Learning is increasingly legitimised through its utility and efficiency, rather than through universal or absolute claims. This shift influences educational practices and the goals of teaching and learning.

Knowledge, for Lyotard, changes with the dissolution of dominant narratives. The Enlightenment narratives of objectivity and truth are no longer applicable. This, he thinks, has caused a crisis in knowledge, as it has been commercialised, creating tensions between rich and poor, private sector and state.

Lyotard oddly used "paganism" metaphorically to describe a stance that rejects universal principles in favour of a multiplicity of perspectives and values. In this context, it contrasts with monotheistic or universal approaches to truth and ethics. This concept is linked to his broader critique of universalism, advocating instead for a recognition of diversity and the co-existence of different, sometimes conflicting, ways of life. 

Paralogy

Knowledge, if more fragmented with a plurality of perspectives, is opposed to the fixity of a ‘canon of universal knowledge’. Building on the value of these multiple perspectives, he introduces the concept of ‘paralogy’: the generation of new, often contradictory perspectives and diversity in thought, a pluralistic approach to learning. Localised, context-specific narratives are preferred to overarching truths. Teaching and learning should, therefore, focus on fostering diverse perspectives and critical thinking.

Knowledge is contingent and context-dependent and cannot be derived from universal principles. We must now look to its pragmatic effectiveness in specific contexts, against the traditional notions of academic authority and curriculum design. Traditional educational institutions perpetuate outdated meta-narratives and hierarchical structures, and resist the pluralistic and performative approaches that Lyotard advocates, so he called for reform.

Teaching and learning are open, dynamic processes rather than rigid, predetermined paths and should embrace uncertainty and complexity, encouraging students to engage with multiple perspectives and to be critical of established norms. It should involve innovation and experimentation, allowing students to explore and create new forms of knowledge rather than merely replicating existing ones.

Performative Knowledge

Lyotard sees learning increasingly judged by its performance, practical application, and efficiency rather than its slavish adherence to old narratives and universal truths. Performative Knowledge is valued for its ability to produce measurable outcomes and skills that can be immediately applied in practical contexts. This performative emphasis shifts the focus of education towards marketable skills and competencies. This ‘perspective’ influences curricula to prioritise skills and knowledge that have clear, immediate applications and can be quantified, often at the expense of more traditional, humanistic educational goals.

Language games

As part of his critique of grand narratives and stated truths, he also had a view on language and its uses. While the term "language games" was originally coined by Ludwig Wittgenstein, Lyotard popularised it in the context of postmodernism to describe how different groups use language according to their own rules and contexts, leading to a plurality of meanings rather than a single, unified understanding. This concept highlights the idea that knowledge and meaning are contingent on social contexts and that different communities or disciplines may operate according to different linguistic norms.

Technology

Lyotard also recognises the significant impact of technological advancements on teaching and learning. Technology changes how knowledge is accessed, distributed, and valued. In a world characterised by rapid technological advancements, the production and dissemination of knowledge become highly efficient. Educational institutions must adapt to these changes by integrating new technologies into their teaching methods. The rise of digital learning aligns with Lyotard's idea that knowledge is becoming increasingly performative and accessible in new forms, which led him to embrace learning technologies.

Yet he is critical of claims that knowledge is truth, seeing it as a slave to 'meta-narratives'. Science, in particular, he sees as a meta-narrative that puts knowledge in the hands of power and politics, thereby shedding its claim to objectivity. Faith in science, as he explains in The Inhuman (1988), legitimises the digital capture of knowledge and therefore faith in technology.

Critique

Despite the claim for a plurality of perspectives, there is often a fall back to universal claims, theories and values. Also, his attack on science as a meta-narrative doesn’t really explain why the scientific method, with its principle of falsification, lacks legitimacy, or what scientific knowledge has been delegitimised. It fails to recognise that many of the meta-narratives postmodernists criticise have methods that allow them to examine even themselves. They are also sceptical of claims to absolute truth and at least have processes of self-correction.

It is as if the progress we’ve made since the Enlightenment didn’t exist, that there was no Reformation, no French Revolution, no secular progress, no progression towards liberal democracies and values. Postmodernism doesn’t have a monopoly on emancipation; many of the advances made in the 60s and 70s predated Postmodernism and were not caused by it. The Enlightenment was not just the well-spring but the theoretical basis upon which such progress was made, the very progress that allows the current generation of critical theorists to think and act for themselves.

Worse still, it destroys all possible methods of discussion, debate and disagreement, the foundations of liberal democracy; there is no arguing with it. All common ground and all methods of falsification have disappeared, or are interpreted as power plays. It has donned all the defensiveness of the meta-narratives it purports to despise.

Bibliography

Lyotard, J.F. (1984). The Postmodern Condition: A Report on Knowledge. Minneapolis: University of Minnesota Press.

Lyotard, J.F. (1988). The Differend: Phrases in Dispute (trans. Georges Van Den Abbeele). Minneapolis: University of Minnesota Press.

Lyotard, J.F. (1988). The Inhuman: Reflections on Time. Stanford: Stanford University Press.



UNESCO and AI - mostly rhetoric

I have been following the output of UNESCO on AI for some time, and have even debated against them (twice). It has been a dispiriting experience. Rather than useful effort and advice, it remains mired in abstract and often irrelevant frameworks. This is the world of conferences and reports, not the real world.


There is a stark contrast between US and EU, between the affirmative, voluntary and guidance approach of the US and regulatory approach of the EU.


The US is forging ahead in education, its companies and Universities now way ahead of the EU. Most of the technology comes from the US, with few European examples. The investment in the US dwarfs that of the EU. Meanwhile, the EU sinks into a vale of despondency, its Universities doing little, its innovation way behind.

The Chair, while talking about bias, becomes hopelessly biased seconds later and makes a big blunder by referring to the EU AI Act as the 'European' AI Act. That is quite simply wrong. Thankfully there are countries in Europe that are not in the EU or subject to this act. And China is notably sidelined, yet it has some excellent legislation that has been in place for a long time.

There is something odd, very Davos, about these people flying all over the world to discuss AI and ethics, especially when their core principle was, and I quote, 'Climate friendly AI'! In truth UNESCO is irrelevant here. The world is using this technology, paying no regard to the millions of words these aloof world bodies throw out on their websites.

This may sound harsh but these top-down entities have no really useful role to play here. Mired in the rhetoric of 'values' they mean THEIR 'values'. This is not a revolution led by UNESCO, UN, OECD or any other of these bodies. They join the bandwagon long after it left town.

This is a shame, an opportunity lost. Rather than push the really positive, innovative and exciting opportunities, they sit on sofas, reading from prepared scripts and screens, remote from the actual technology and its uses. It's wholly performative - exceptionally well paid people harping on about the poor. Indeed they simply duplicate, at great cost, the same old long reports, frameworks, documents and statements, which are largely ignored, as the real world moves on paying them little or no regard.

In truth this AI shift is bottom up, driven by product releases, users and use. That's why their Teacher Competences document is mostly repetitive rhetoric. It will have no real impact, as it is far too abstract. The group is loaded with AI and Ethics people, low on people who have any actual experience in the application of AI in learning. This means a ton of abstract talk about ethics. The word 'ethics' is mentioned on almost every page. 

Teachers are teachers, not experts on ethics. The idea that they need to be competent in judging ethical issues at the political and technical level is very odd. All empty theory, low on practice. I've seen competency frameworks all my adult life - they're usually empty exercises by a mixture of academics and people who have little real practical experience, and are often ignored or out of date by the time they hit the press.

The problem with competency frameworks is that they need real examples. Banging on about competences in ethics is completely misguided. That is not the role of the teacher. It's easy to conjure up little pyramids with these words but without practical guidance, it's yet another document.
You can read the entire document as a teacher, and to be honest, be none the wiser about what you actually need to do in your job.


Wednesday, September 04, 2024

Only a God will save us… some reflections on the enormity of AI

Greek dystopia

The Greeks understood, profoundly, the philosophy of technology. In Aeschylus’s Prometheus Bound, Prometheus gifts the powers of metallurgy, writing and mathematics to man, and Zeus punishes him with eternal torture. This warning is the first dystopian view of technology in Western culture. Mary Shelley subtitled Frankenstein ‘The Modern Prometheus’ and Hollywood has delivered on that dystopian vision for nearly a century. Art has largely been wary and critical of technology.

Yet we may not be thinking deeply enough about what AI brings. For all the chat about its power, we need to see it as a technological event that eclipses the invention of stone tools, writing, printing and the internet. It may be the culmination of all of these, as it already promises physical tools as robots, multimodal capabilities beyond the world of text, global dialogue with another intelligence at any time from any place on anything in any language. It seems to transcend other technologies, with implications beyond past technologies into future unknowns. These are a few reflections on these unknowns.

God as maker

But there was another more considered view of technology in ancient Greece. Plato articulated the philosophy of technology, seeing the world, in Timaeus, as the work of an ‘Artisan’, the universe as a created entity, a technology. Aristotle makes the brilliant observation in his Physics, that technology not only mimics nature but continues “what nature cannot bring to a finish”. They set in train an idea that the universe was made and that there was a maker, the universe as a technological creation.

Monotheism rose on the back of cultures in the Fertile Crescent of the Middle East, who literally lived on the fruits of their tool-aided labour. The spade, the plough and the scythe gave them time to reflect. Interestingly our first written records, on that beautifully permanent piece of technology, the clay tablet, are largely the accounts of agricultural produce and exchange. The rise of writing and efficient alphabets made writing the technology of various forms of capitalism and control, holding everything to account, even our sins. The great religious books shaped us for millennia, and still do.

The two-thousand year history of Western culture after the Greeks bought into the myth of the universe as a piece of created technology. As we entered the age of industrial design and production, Paley formulated it as a modern argument for the existence of God from design, using technological imagery, the watch, to specify and prove the existence of a designed universe and therefore a designer - we call (him) God. In Natural Theology; or Evidences of the Existence and Attributes of the Deity, he uses an argument from analogy to compare the workings of a watch with the observed movements of the planets in the solar system to conclude that it shows signs of design and that there must be a designer. God as watchmaker, technologist, has been the dominant, popular, philosophical belief for two millennia. 

Technology, in this sense, helped generate this metaphysical deity. It is this binary separation of the subject from the object that allows us to create new realms, heaven and earth, which gets a moral patina and becomes good and evil, heaven and hell. The machinations of the pastoral heaven and fiery foundry that is hell revealed the dystopian vision of the Greeks and continues in the more exaggerated form of Promethean, doomster AI ethics.

Technology is the manifestation of human conceptualisation and action, as it creates objects that enhance human powers, first physical then psychological. With the first hand-held axes, we turned natural materials to our own ends. With such tools we could hunt, expand and thrive, then control the energy from felled trees to create metals and forge even more powerful tools. Tools beget tools. 

Technology slew God

Technology may have suggested, then created God, but in the end it slew him. With Copernicus, who drew upon technology-generated data, we found ourselves at some distance from the centre of the Universe, not even at the centre of our own little whirl of planets. Darwin then destroyed the last conceit, that we were unique and created in the eyes of a God. We were the product of the blind watchmaker, a mechanical, double-helix process, not a maker, reduced to mere accidents of genetic generation, the sons not of Gods but genetic mistakes. Dawkins titled his book The Blind Watchmaker as an evolutionary counterpoint to Paley.

We have now resurrected a modern form of animism with AI and software, that we first used to our ends but then realised that we ourselves are animistic beings, driven by software in our brains. The separation of us from the natural world is no longer viable. Human exceptionalism was wounded by Copernicus and Darwin, now killed dead by AI.

Anchors lost, we are adrift, but we humans are a cunning species. We not only make things up, we make things and make things happen.

We are makers

Once God was dead, in the Nietzschean sense of a conceptual death, we were left with ourselves and technology. We got our solace not from being created forms but by creating our own forms. We became little Gods and began to create our own universe. We abandoned the fields for factories and designed machines that could do the work of many men. What we learned was scale. We scaled agricultural production through technology in the agricultural revolution, scaled factory production in the industrial revolution, scaled mass production in the consumer revolution. Then more machines to take us to far-off places – the seaside, another country, the moon. We now scale the very thing that created this technology, ourselves. We alchemists of AI have learned to scale our own brains.

Small Gods

Eventually we realised that even we, as creators, could make machines that could know and think on our behalf. God had died but Little Gods are emerging. We may return to that pre-agricultural age, as hunters and gatherers, hunting for meaning, gathering ideas and enthusiasms and making new worlds for ourselves. In an age of abundance we, once more, will have to reflect on the brief folly of 9-5 work and learn to accept that it was never our fate, only an aberration. Technology now literally shapes our conception of place and space, with film, radio, TV and the web, but we spiders may have got entangled in our own created web and the danger is that it begins to spin us.

Technology is not now a ‘black box’, something separate from us. It has shaped our evolution, shaped our progress, shaped our thinking - it will shape our future. Forget the simplistic ‘it’s about people not technology’ trope. There has always been a complex dialectic between our species and technology, and that dialectic has suddenly got a lot more complex with AI. That dialogue has just got very real: as with the invention of writing, then printing, the sum total of human cultural knowledge was gathered and used to train LLMs, small Gods. We now engage in dialogue with these small Gods. We are speaking to a created small God - US.

Only a God can save us

As Martin Heidegger said in his famous Spiegel interview, “Only a God can save us”. What I think this commentator on being, technology and the human condition meant, was that technology has become something greater than us, something we now find difficult to even see, as its hand has become ever more subterranean and invisible. It is vital that we reflect on technology, not as a ‘thing-in-itself’, separate from us, but as part of us. Now that we know there may be no maker God, no omnipotent technologist, we have to face up to our own future as makers. For that we need to turn to philosophy – Plato, Aristotle, Nietzsche and Heidegger are a good start….

The postscript is that AI may, in the end, be the way forward even in philosophy. In the same way that the brain has limits on its ability to play chess or GO, it may also have limits on the application of reason and logic. Philosophical problems themselves may need the power of AI to find solutions to these intractable problems. AI may be the God that saves us....