Saturday, August 31, 2024

ASU is all in on AI with a bottom up approach....

I was involved as an investor and board member in a company that rolled out adaptive learning at Arizona State University. It was a four-year, $2 million research project funded by the Gates Foundation. In short, it worked, but I wasn’t wholly impressed with ASU. Their investment arm bought the software and immediately sold it to Cambridge University at a profit, scalping all of the angel investors. Don’t imagine that US universities don’t play dirty in the capitalist game.

Nevertheless, they do have a refreshing approach to innovation. I’d just rather they got more focused with it all, as they churn through software and research projects like there’s no tomorrow. The rhetoric is, of course, also a bit pious and righteous… “ASU standing at the forefront of AI, propelling the university to develop unique and transformative applications that push the boundaries of what is possible … for today, tomorrow and future generations.” Easy, tiger!

What you can’t argue with is the fact that ASU remains the state's largest school, and one of the largest in the U.S., with an overall enrolment of 145,655. In a country that has seen 13 straight years of falling enrolment, it has bucked the trend with stellar growth, boosted by online delivery. For a university with substantial online numbers, AI is clearly part of its strategic, successful and sustainable growth plan.

5 tenets of AI

Rather than making AI a mission in itself, they have chosen a dissemination and democratisation approach guided by 5 ‘tenets’. I’ll paraphrase…..

  1. AI is an enduring part of the innovation landscape
  2. It brings a need to innovate in a principled way
  3. It should support human capabilities, rather than replace them
  4. It is progressing very fast, so we need to keep pace
  5. Accessibility really matters

They’re fine but almost statements of the obvious. What really matters is what they’ve done within the institution.

Technical Foundation

CreateAI

CreateAI is their platform for AI innovation at ASU, one that anyone within the university can use to build and interact with AI-powered solutions as effortlessly as possible. It takes the complexity out of AI development through user-friendly tools.

The advantage is its inclusiveness. It lowers the barrier to entry so that ideas can come from anywhere within the community. It also bridges research and reality, connecting cutting-edge AI research with practical applications. The platform makes advanced AI capabilities accessible and easy to implement, turning theoretical innovations into tangible solutions that can be used in real admin, research, teaching and learning contexts. And, of course, it’s all in a secure technical environment, safeguarding data and intellectual property, while encouraging bold experimentation.

MyAI Builder

MyAI Builder is an extension of the CreateAI platform and makes AI accessible to everyone. This tool allows users to easily create custom AI-driven chatbots within the secure ASU ecosystem, fostering innovation. In essence, MyAI Builder provides a simple, streamlined process to craft personalised AI experiences, such as chatbots powered by generative AI, in just three easy steps.

The benefits of MyAI Builder are clear. It democratises AI by breaking down barriers, allowing individuals without advanced technical expertise to create something quickly. I like this. This accessibility is in line with their tenets and empowers a much broader range of users to innovate and create solutions tailored to their specific needs. By providing a tool within a secure environment, MyAI Builder supports the scalable deployment of AI applications. Users can therefore develop and deploy custom AI experiences quickly and efficiently, driving innovation across the organisation. Creating a custom AI in just three steps not only accelerates development but also makes it easier for users to experiment and iterate on their AI projects. Above all it puts the tech in the hands of students, staff and faculty to do their own thing. It rewards personal agency.
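I have no visibility of MyAI Builder’s internals, but the pattern it wraps in a no-code interface, a custom persona layered on top of a hosted model with the conversation passed back in on each turn, can be sketched in a few lines. The model name, persona and API below are illustrative assumptions, not ASU’s actual stack.

```python
# Minimal sketch of the 'custom chatbot' pattern: a persona (system prompt)
# wrapped around a hosted model. Assumes the OpenAI Python SDK and an API key;
# the model name and persona are placeholders, not ASU's configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are a friendly study buddy for first-year biology students. "
    "Explain concepts simply and ask one follow-up question per reply."
)

def chat(history, user_message, model="gpt-4o-mini"):
    """Send the persona, prior turns and the new message; return the reply."""
    messages = [{"role": "system", "content": PERSONA}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=messages)
    reply = response.choices[0].message.content
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply

history = []
print(chat(history, "Why do cells need mitochondria?"))
```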

Reality

So how is it going? Well, with 530 proposals submitted and 250 projects activated across academic, research and work environments, it seems to have had real purchase across the institution. Some pretty cool projects have already come out of their innovation challenge: an AI-generated patient on which to practise behavioural health techniques, an on-demand study buddy to help with language learning and an AI simulation that allows you to debate with some of the world’s most influential philosophers. There’s also that US obsession with tools that teach students writing skills. In other words, it is going well, with massive engagement and real applications. How many are sustainable remains to be seen. The problem with this approach is the constant 'hackathon', where lots of projects are launched, none fully formed, so that the technology more often disappoints than impresses. Having built product in this area, I know it is not easy to push prototypes through to effective products.

Conclusion

While different from Yale, with far more focus on its core strength around inclusion and accessibility, it is still a huge commitment, with a good technical infrastructure platform and an open invitation to innovate across the whole institution. Very much a bottom-up, not top-down, approach. It is not an Ivy League school like Yale and has its eye on the many, not the few. That’s admirable. But President Michael Crow has his eyes on the main prize, doing what we started with our personalised software project through the Gates Foundation. “We have long dreamt about individualized, personalized learning without constraints.” That would be not only admirable but truly transformative in terms of access, cost and attainment for the many.


Friday, August 30, 2024

Yale spend $150 million on AI - why this leads the way for AI in Higher Education...

US Universities are engaging with AI in ways beyond anything I see in the UK, Europe and elsewhere. I’ve written about Harvard before but Yale seems like the strongest in intent, widest in access and deepest in enabling research, teaching and learning. Well funded at $150 million, they understand the need for the necessary IT infrastructure.

What really matters is top-down management intent. Scott Strobel has been the Provost of Yale since 2020. As a successful science researcher and Caltech graduate, he understands the technology and has sought, through his Yale Task Force on Artificial Intelligence, recommendations that were delivered in June. It has taken only two months to get a plan and funding in place. That is impressive.

He has no doubts about the role of AI in the Yale mission…

To fulfill the university’s mission to improve the world and prepare the next generation of society’s great leaders and thinkers, we must explore, advance, and harness AI for its benefits while providing ethical, legal, and social frameworks to address the challenges it poses.

Yale’s Four AI-related priorities: 

1. Secure access to Generative AI tools

They have their own AI platform, the Clarity Platform, which will be used to provide secure, walled-garden access to ChatGPT-4o for all faculty, students and staff. They are also open to using other models in the future. Other tools, such as Microsoft Copilot and Adobe Firefly, will also be secured. This is real action, dissolving any issues around the digital divide and data protection. It also avoids the obvious 'AI on the sly' problem (see later).

2. Addition of New faculty & AI seed grants 

There is a clear intention to enhance curricula with AI, backed by real grants through their Poorvu Center for Teaching & Learning to find new ways to include AI tools and content in curricula. This is the most impressive aspect of their intent: allowing AI to enhance teaching and learning. Harvard did this in their CS50 course, but this is a promise to apply it to ALL teaching and learning. In the midst of layoffs and college closures, Yale are hiring, with 20 new appointments in this area alone. It is built upon a belief that…

“Additional faculty expertise will strengthen Yale’s depth of knowledge and enhance the learning environment for students, who will be expected to understand, navigate, and make decisions about AI technologies throughout their lives and careers”

3. Enhanced Interdisciplinary collaboration

A campus-wide research symposium on AI will dig deep into how AI can be leveraged across all activities within the institution. This includes money on the table with a research seed grant program. Library AI-powered tools will also be funded to increase access to relevant digital services and resources.

4. Improvements in Computing infrastructure                        

None of the above will happen easily without beefing up infrastructure and compute, so 450 CPUs are being bought to expand existing compute capability, alongside demand-driven cloud GPU spend. This provides a secure and independent infrastructure to deliver all of the above.

AI on the SLY

Most schools and universities are obsessed by a single issue: AI cheating on essays. They have started by seeing AI as the enemy, all the while sprinting to produce AI courses to attract students. While the vast majority of those students are already using AI, the institutions and most faculty remain doggedly negative.

The recent Harvard survey showed 95% using ChatGPT and 30% paying for AI subscriptions. Students felt that AI lightens their academic load and reduces their need to ask staff for help. They see the issue clearly: 45% say AI could hurt future career prospects and 20% have already changed their course paths due to AI's influence. A fascinating insight from the data was that many now have a sense of purposelessness in their education. What do they want? Free access to a paid plan for AI and consistent rules of engagement. On top of this, courses on AI, more AI-aware career planning services and help in finding meaning in their education.

Yale have transcended these problems by accepting that AI is here to stay and recognising that this is an important feature of all learners' education.

Conclusion

There are MOSQUITO projects: they sound buzzy but are short-lived, often dying as soon as funding runs out. Then there are TURTLE projects: duller, but with substance, scalability and sustainability. They’re long-lived. You need strategic turtle projects, like Yale’s.

“Whether you are studying, teaching, researching, or working at Yale, I encourage you to explore the resources available now and engage with the opportunities to come,” says Strobel.

This is exactly what a leader in a world-leading institution should be doing. Consulting, doing your homework, bringing people with you and then being clear about the goal and sub-goals, in line with your mission.

Tuesday, August 27, 2024

Foucault - friend or foe in learning?

Michel Foucault, a prominent French philosopher and social theorist, had an enormous impact on critical theory and education as taught in universities. From the 1980s onwards his ideas infused everything, apart, arguably, from actual practice. As one of the structuralist Gang of Four with Lévi-Strauss, Barthes and Lacan, he is difficult to pigeon-hole, as his writing is often obtuse, abstruse and conceptually difficult. Despite this, he remains a towering figure in critical theory as expressed in the structuralist movement, with a huge influence in the humanities, feminism, gender, race and post-colonial studies.

As for his influence on education and training, while profound, it has been criticised for its potential negative consequences. His focus on power relations has led to an overly cynical view of education, overshadowing other important aspects like learning and critical thinking. Foucault's scepticism about objective truth also fosters relativism, undermining the authority of teachers and the value of expertise. This scepticism has permeated educational theory, encouraging a postmodern approach that questions the legitimacy of established knowledge and expertise, where all perspectives are seen as equally valid. His influence on curriculum design has also resulted in politically charged programmes that prioritise social critique over foundational knowledge. Yet there is much to think about in his work.

Philosophy

Foucault is an intellectual pioneer to some, a shameless fraud to others. His archaeology of culture uncovers power structures, ‘epistemes’ that dominate, define and control all knowledge. The individual, their movements, behaviours, interests, desires and even bodies are merely the subject of imposed, oppressive power relationships. Cultural relativism therefore emerges as individuals are subsumed and re-emerge as oppressors and the oppressed. Foucault also sees philosophy as in need of the decolonisation of even time, space and subjectivity, through the wholesale rejection of Eurocentric norms and language. This postmodern destruction of boundaries led to cultural relativism, to treating certain forms of language as epistemically constructive, and to power plays between groups, not individuals or universal principles. It places gender, race and other distinctions into cultural contexts where the application of power socially constructs and uses language to oppress certain groups.

Discipline and Punish

His early interest in mental illness and psychiatry led to the book Madness and Civilisation (1961). This fits into the Critical Theory tradition of seeing society as pathological. But it is in Discipline and Punish (1975) that the idea of ‘training’, in the wider sociological sense of the word, is exposed as stages of domination in society, moving into schools and systems of education. Learning becomes institutionalised through a shadow form of monastic enclosure, where the architecture of the school follows that of the Panopticon prison. Supervision and the serial delivery of classes in separate rooms, marching from one room to another, with teachers policing the formal restrictions of movement and behaviour, result in strictly timetabled control. Designed for prescriptive supervision, the building is a ’pedagogical machine’ that reduces the individual to a documented object. Examinations bring this form of supervision to a head, with the labelling of subjects before release.

Teaching and learning

Foucault’s views on teaching emphasise the ways in which traditional teaching and learning serve as mechanisms of social control and power. Education is deeply embedded in power relations and plays a crucial role in shaping what is considered knowledge and truth. These views emerge from his broader analyses of power, knowledge and discourse. His perspectives challenge traditional notions of education and emphasise the relationship between power structures and learning practices.

Knowledge is power and what is taught and learned in schools, Universities and the workplace is not neutral information but shaped by power relations within those organisations and society. Knowledge is produced and controlled through a nexus of power structures, and education serves to perpetuate these structures. Schools and universities, according to Foucault, are sites where power operates to shape what is considered true knowledge and who is authorised to teach it.

They are part of a broader system of social control that disciplines individuals. Schools function to normalise behaviours and maintain social order through the relentless administration of examinations, surveillance, and hierarchical observation, all used to monitor and regulate students. These practices create a disciplined and docile body of students who internalise norms and expectations.

His ‘Archaeology of Knowledge’ and ‘Genealogy of Education’ trace the historical development of educational practices to understand how they have been shaped by power dynamics. Only then can we uncover how certain norms and values have become entrenched. This understanding can reveal the contingent nature of what is often considered natural or inevitable in education. ‘Regimes of truth’ legitimise certain types of knowledge by promoting some, marginalising the rest. Curricula, textbooks, and academic disciplines are swayed by these regimes, which dictate what is taught and considered valuable knowledge.

Learning tries to form the identities of individuals by influencing how they understand themselves and their place in society, and uses disciplinary mechanisms to control and regulate individuals. The teacher is a figure who enforces the norms and values of the prevailing power structures, an authority figure who helps inculcate societal norms in students. Teachers contribute to the formation of subjects who conform to the expectations of the power structures within which they operate.

Critical pedagogy

On the other hand, he acknowledges the potential for education to be a site of resistance and transformation by fostering critical awareness and questioning of dominant discourses, which is why he is read as a critical theorist. Despite his critical view of educational institutions, he saw potential for ‘resistance’ within education and believed that, by understanding how power operates, individuals could challenge and subvert dominant discourses. Education can then become a site where individuals develop a critical awareness of how power shapes knowledge. This awareness can lead to questioning and transformation of existing power relations.

Critique

This shift to seeing education in terms of power relations has been influential. Yet in a democracy, where citizens vote on the major issues of the economy, health and education, the idea that everyone is deluded into playing the role of puppets, with no real agency, seems far-fetched. Critical thinking when expressed at this level seems to tip over into abstruse political theory disassociated from the reality, wishes and needs of most people. Additionally, it sets up a form of intellectual snobbery, where academics see themselves as the true arbiters of what is important and what is emancipatory.

This idea of education as activism is dangerous as it undermines the mechanisms through which democracy and stable institutions work. It puts education into the hands of a few, often against the view of the many. Foucault is not a Marxist but he is clearly influenced by Marx’s focus on social structures, class relations, and the critique of capitalism. What he does is replace economic forces with power at the institutional level.

Few solutions are offered in his critiques. This is a general problem in Critical Theory. Foucault’s idea of power is problematic in being relentlessly negative, the exercise of oppression, not liberation. It is all very well drawing parallels between prisons and schools, and there is some wisdom in being sceptical about the formalities of supervision and Victorian architecture. However, most want to see sensible behaviour management and the restrictions necessary for attention and education. To caricature school supervision as ideologically driven punishment is just that, a caricature.

Foucault’s idea of power, a core concept in critical theory and structuralism, is that it is always assumed to be a deficit or negative, a flow of oppression. Yet power, in both politics and education, can be used positively, to free and liberate. The problem with de-anchoring everything is that you also de-anchor yourself and your own theories, setting everything adrift.

Influence

His influence on modern thought, philosophy and critical theory in academia is undoubtedly enormous. His influence on educational and learning theory is, however, oft quoted but minimal and seldom applied. After his death in 1984 his reputation was strengthened as critical theory became a dominant force in the humanities, especially in degrees which critics jokingly call ‘Grievance Studies’. While recent theorists on feminism, gender studies, queer theory (Butler), race, post-colonialism (Said, Spivak) and even Fat Studies (Bacon) all draw on Foucault’s epistemic relativism, theorists in Critical Race Theory and Feminism, such as Angela Harris (Critical Race Theory) and Kimberlé Crenshaw (of Intersectionality fame), have at least been consistent in rejecting Foucault and Derrida as prime examples of oppressive white men and Eurocentric theory, which would have shocked them.

Bibliography

Foucault, M. Madness and Civilisation: A History of Insanity in the Age of Reason (1961). Abridged; translated by R. Howard. London: Tavistock, 1965.

Foucault, M. The Archaeology of Knowledge (1969). Translated by A.M. Sheridan Smith. London: Routledge, 2002.

Foucault, M. Discipline and Punish: The Birth of the Prison (1975).

Foucault, M. The History of Sexuality (1976–84). Vol. I: The Will to Knowledge; Vol. II: The Use of Pleasure; Vol. III: The Care of the Self; Vol. IV: Confessions of the Flesh.

 

Thursday, August 15, 2024

One thing often happens at keynotes and conferences. It surprised me….

I was ready to step on stage in front of 2,500 people in a huge theatre, the Grieghallen in Bergen, Norway. It really felt like ‘The Hall of the Mountain King’. I have given talks in many countries across Europe, the US, Asia and Africa since GenAI launched, and something keeps happening at these events. I had exactly the same experience at a talk I gave the next day.

Time and time again, someone with dyslexia, or with a son or daughter with dyslexia, has come up to me to discuss how AI has helped them. They describe the troubles they have had in an educational system that is obsessed with text. Honestly, I can’t tell you how often I’ve had these conversations.

They rightly want to tell their story, as it has often been one of struggle, in a system that often ignores them, where they have had to find their own way to overcome their problems, or see institutions ban the very tools they need to survive. It is always heartfelt.

Text on blackboards, text-based subjects, textbooks, text-based homework, text-based exams. I now wonder at the simple fact that we send our kids off to school aged 5 or so, to emerge at 20+ having done not much more than read or write text. Is it any wonder we have skills shortages? The net result may be causing problems: a text-trained graduate managerial class, high on report writing, bureaucracy and rules, but low on operational and social skills.

AI is welcomed by those with dyslexia, and other learning issues, helping to mitigate some of the challenges associated with reading, writing, and processing information. Those who want to ban AI want to destroy the very thing that has helped most on accessibility. Here are 10 ways dyslexics, and others with issues around text-based learning, can use AI to support their daily activities and learning.

Text-to-Speech & Speech-to-Text Tools

This two-way street uses AI to convert difficult-to-read text to speech, and speech to text where that is required in the system. Both are often mentioned. Text-to-speech cuts out the need to read, allowing dyslexics and others to listen rather than read, which reduces the cognitive load associated with decoding and dealing with written text. Its sibling, transcription, converts the spoken word into text and helps dyslexics write assignments, essays, emails or notes more easily by speaking their thoughts instead of typing. These are now built into smartphones. Printed texts are often photographed, OCR turns them into text, and text-to-speech then reads them aloud.
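To make that pipeline concrete, here is a minimal sketch of the photograph-to-speech route, using common open-source libraries (pytesseract for OCR, pyttsx3 for offline text-to-speech). It illustrates the pattern rather than recommending particular products, and the file name is hypothetical.

```python
# Sketch: photographed page -> OCR -> read aloud.
# Assumes Tesseract is installed, plus the pytesseract, Pillow and pyttsx3 packages.
from PIL import Image
import pytesseract
import pyttsx3

def read_page_aloud(image_path: str) -> str:
    """Extract text from a photographed page and speak it."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # slightly slower speech can aid comprehension
    engine.say(text)
    engine.runAndWait()
    return text

if __name__ == "__main__":
    print(read_page_aloud("photographed_page.jpg"))  # hypothetical file name
```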

Grammar and Spelling Assistants

Dyslexics often struggle with spelling and grammar, so AI-powered writing assistants, with real-time corrections and suggestions that make writing more accurate and less frustrating, are a boon. These have become normalised and built into contexts where text is required. There are also tools, like Grammarly, that take things a step further.

Comprehension Tools

AI can break down complex texts, summarise information and provide definitions or explanations for difficult words or concepts. This can make reading less daunting and more manageable for dyslexics. Apps like Rewordify simplify complex language, while any chatbot can provide quick summaries of long articles or papers.
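As a rough illustration of the simplification idea, the sketch below runs an off-the-shelf summarisation model over a dense passage. The model name is just a common open example, not a recommendation.

```python
# Sketch: condense a difficult passage with an off-the-shelf summarisation model.
# Assumes the transformers package; the model name is a common open example.
from transformers import pipeline

summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

passage = (
    "Photosynthesis is the physico-chemical process by which plants, algae and "
    "some bacteria convert light energy into chemical energy, fixing carbon "
    "dioxide into organic compounds while releasing oxygen as a by-product."
)

summary = summariser(passage, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```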

AI in Note-Taking

We have been involved in building AI note-taking features for Glean, a market leader in helping those with disabilities in learning. To be honest, any learner would benefit from AI-assisted note taking. AI can help dyslexics take notes more efficiently by transcribing what the teacher or lecturer says, summarising long texts or lectures, converting handwritten notes into digital format, and organising information in a way that’s easy to understand. It can also be used to generate retrieval practice quizzes, expand notes, find links to useful sources and so on. Glean is a good example, tools like Otter.ai can transcribe and summarise meetings or lectures, and apps like OneNote or Evernote can convert handwritten notes into searchable text and help organise information.

Visual and Multisensory Tools

Dyslexics often benefit from visual aids and multisensory learning techniques. AI can create interactive, visual learning experiences that help dyslexic learners grasp complex concepts without relying solely on text, and the creation of mindmaps, diagrams and so on can help in organising thoughts visually.

Translation

For dyslexic individuals who are learning a new language or need to translate text, chatbots now double as AI-powered translation tools that can simplify and assist in understanding and producing text in multiple languages. Google Translate and language learning apps like Duolingo use AI to provide real-time translations and language learning support, but AI chatbots can take entire books and do this in one go.

Chatbot assistants

AI-powered, voice-enabled digital assistants can help dyslexics manage daily tasks, set reminders, dictate and send messages, and control smart home devices through voice commands, reducing the need for reading and writing. These assistants can answer questions, engage in full dialogue and act as tutors. They can also handle tasks such as setting alarms, creating to-do lists and searching the web, all through simple voice interactions.

Self-Paced Learning

Self-paced learning allows the dyslexic learner to go at their own pace and not be stymied by fast-paced delivery. Pace, style, difficulty and level of language can all be adjusted using AI. It is astonishing how ‘academic’ text (often obtuse and badly written) is delivered to novices. It is rife in medical education. The personalised approach can make learning more accessible and effective.

AI-Powered Personalized Tutors

One step beyond self-paced material is AI-driven tutors which can provide one-on-one support, adapting lessons to suit the pace and learning style of dyslexic individuals, offering tailored explanations, practice exercises, and feedback. These tutors are now integrated into platforms and can help dyslexic students with personalized learning experiences. Khanmigo and Duolingo are good examples.

AI Accessibility Features in Devices

Smartphones, tablets and computers now come with built-in AI-powered accessibility features that can be customised for dyslexic users, including voice commands, screen readers and customisable display settings. Examples are Apple's VoiceOver, Android's TalkBack and Windows Narrator, all AI-driven features that enhance device usability for those with dyslexia.

Conclusion

It is often forgotten that huge numbers of disadvantaged learners leverage AI tools on their own. They are the original ‘AI on the SLY’ users. Individuals with dyslexia have been using these tools to overcome some of the challenges they face with reading, writing and processing information, making it easier to learn, work and communicate effectively. We have a lot to learn from them, especially about our almost fanatical obsession in education with the written word. So next time you hear someone who wants to ban AI in learning, think again. These learners have finally found solutions to the problem; do not throw them back into a world where they feel abandoned.


Monday, August 12, 2024

Heidegger's a fascinating thinker on teaching, learning and technology

Martin Heidegger is an enigma and a contradiction in philosophy, and for those interested in learning and technology. His Nazi beliefs and actions are unforgivable, his relationship with Hannah Arendt odd, his language at times impenetrable, yet he remains a hugely influential thinker.

After joining the Nazi Party in 1933, announced in his inaugural address as Rector of Freiburg, he went on to exclude Jewish faculty members, including Husserl, whose chair he had taken over in 1928. Another idiosyncrasy was his secret affair with Hannah Arendt, a Jewish student 17 years his junior, who went on to be one of the most important political theorists of the 20th century.

His break with the Western tradition of metaphysics makes him stand out, with his recentring or grounding of human experience in being, rather than in the metaphysical systems of Western thought. Although he was not a learning theorist but a philosopher, his thoughts on teaching and learning are to be found embedded in his philosophical work.

Dasein

In his great work Being and Time (1927), Dasein is a ‘being-in-the-world’, not like the Cartesian ego, self or subject, but within a process of being. Thinking and learning are just ways of being or engaging with the world; one must also react to and engage with the world. It follows that learning is a form of caring about (besorgen) the world, so not just thinking but interest in what is being learned. It is only if one cares that one learns, going forward to inquire and get involved with learning about the world.

One is thrown forward in life, with what one wants to be, one’s future potentialities and abilities to be. This is what drives one forward. Learners and teachers must be seen as being in the world, not subjects that have to learn about the world. One must see learners as having attitudes - being attracted, curious, vaguely interested, even bored, then see language or discourse as the shared form of being that leads towards goals in life that come through learning.

Teaching and learning

In What Is Called Thinking? (1954), teaching, learners and learning are seen within the context of deeper, more authentic thinking. To teach or learn is to avoid the superficialities of ordinary thinking. He takes the case of a cabinet-maker's apprentice, who does far more than just learn how to measure and use the tools. One must find the essence of the process in the activities and the essence of the wood itself. Learning is a deeper form of commitment and immersion within the world, not just memorisation or knowledge.

With this insight he reflects on the relationship between the teacher, learner and learning. In a wonderfully intense passage, he explains why teaching is harder than learning, as the teacher must not be the presenter of knowledge, a didact or pedagogue, but must let the learner learn within the world. Teaching is an exalted matter and not to be confused with titles, such as Professor.

Learning is far more than the basic accumulation of knowledge and practice, more even than doing. The learner must respond and relate to the deeper effects of their craft. Using a hammer ‘ready-to-hand’ does not involve consciousness in any rational sense; it may even hamper its proper use. What matters is a deeper engagement with the project and purpose.

Learning is an unveiling and involves uncovering truths through direct interaction with the world. It must be genuine, leading to a transformation in understanding. Instead of merely providing information, the teacher acts as a guide, creating conditions where students deeply connect with the subject matter. He describes teaching as ‘Leiten’ - leading students not by instruction but by guiding them towards personal comprehension and insight.

To be specific, true learning starts with questioning. For Heidegger, questioning is not just about finding answers but about engaging with the world in a meaningful way, leading to new understandings and ways of being. This is an aside but drawing from Heidegger’s ideas, dialogue-based AI can also be viewed as a learning space. Like a classroom, it facilitates questioning, exploration of topics, and genuine engagement, fostering a deeper connection with the subject matter.

Technology

In a typically Heideggerian analysis, there is far more to technology than any instrumental theory tends to suggest. Technology marks this era, as the last in metaphysical thinking, with technology replacing previous systems of belief. The technological age is different. We see technology, and importantly even ourselves, as a ‘standing reserve’ to be on call, ready, optimised and made efficient. 

He uses electricity as an example. It is there, almost invisible to us, but called on by us when needed. Our social or community norms are given to us; we have no choice in this, but they also change. Indeed, we become addicted to their easy availability and readiness. There is an ‘enframing’ with technology that puts it into a ‘standing reserve’, in advance of consumption. In that respect it is similar to the Nietzschean analysis of the world, where people separate the lived world from things that are seen as categorically, even metaphysically, separate. He sees technology as a system, like a metaphysical system, that distorts our thinking and actions.

In later life Heidegger wrote specifically about technology. His mistrust of modernism led him to see the technological dimension of the modern world as a reduction of humanity into a ‘resource’, reducing the possibility of living authentic lives. However, he avoids any trite dismissal or negativity around technology, as it is also a ‘prelude’ to thinking more authentically.

Critique

His devotion to Nazism for many years showed a philosophical and political commitment to the state, in both thought and action. There is no denying his belief that Dasein was compatible with Nazism, along with dubious theories about self-sacrifice and extreme personal acts of antisemitism.

His writing style is notoriously dense and obscure, making his work difficult to understand. But the main criticism is that his primary focus is on ontology, the study of being, and is therefore somewhat detached from practical, ethical and political concerns. Overall, his deconstruction of traditional metaphysical concepts has had a huge impact on postmodern thought. This influence, some argue, has contributed to a relativistic tendency, with a focus on the self’s authenticity over other philosophical concerns, undermining the possibility of the search for objective truth and ethical standards. As he became increasingly critical of modern technology and its impact on human existence, some argue that he failed to understand and appreciate the potential benefits of technological advancement.

Influence

Heidegger and Nietzsche are the two huge existentialist influences on post-structuralists such as Foucault, Derrida and Lyotard. Derrida, in particular, rejects but builds on Heidegger for his deconstructive approach to texts. That is not to say that the influence was entirely fruitful. Heidegger’s rejection of the language of Western philosophy, the subject, object, act and content, for the language of being (Sein), which is prior to the oppositional systems of appearance and reality, also led to the fragmentation, invention and playfulness with language that took these theorists not only further away from philosophy but also from any semblance of relevance or usefulness for teachers and learners. The dissolution of human nature in favour of just being-in-the-world, or feelings, has led to a de-anchoring that leaves many stranded in the process.

Bibliography

Heidegger, M., 1962. Being and Time. Translated by J. Macquarrie and E. Robinson.

Heidegger, M. and Krell, D.F., 1980. Basic Writings: Nine Key Essays, plus the Introduction to Being and Time. Tijdschrift Voor Filosofie, 42(1).

Blake, N., Smeyers, P., Smith, R.D. and Standish, P. eds., 2008. The Blackwell Guide to the Philosophy of Education (Vol. 6). John Wiley & Sons.


Saturday, August 10, 2024

Astonishing figures and insights from Harvard Undergraduate Survey on Generative AI


The survey looks at the influence of AI on the study habits, class choices and career prospects of Harvard students, with responses from 326 undergraduates.

Use

A striking 87.5% of students have embraced generative AI, with an overwhelming 95% using ChatGPT, while Claude and GitHub Copilot (a programming assistant) are each used by around 20%. A significant 30% of users are also investing in premium AI subscriptions, demonstrating a willingness to pay for the edge AI provides. 25% of students feel that AI has lightened their academic load, reducing their need to ask course staff for help or even complete the readings, though interestingly it is not keeping them away from lectures. 20% of students have already changed their course paths due to AI's influence, and more than half want Harvard to step up and offer more courses that delve into the profound future impacts of AI.

What are they using AI for?

Most commonly, to answer general questions

For one third, AI is replacing Wikipedia and Google search
Help with writing assignments (coming up with ideas, drafting, proof-reading)
Writing emails
Helping with programming assignments
Data processing

Beliefs and fears

None of the above surprises me, but it is in the detail where the most interesting stuff is to be found. Students are increasingly anxious about academic fairness, fearing that others may gain an unfair edge through AI, while those from less privileged backgrounds feel uneasy about the high costs of premium AI tools. Meanwhile, those without financial aid are twice as likely to pay for costly AI subscriptions, widening the gap between them and their peers who receive partial or full financial assistance.

45% of the students are concerned that AI could hurt their future career prospects. But the most fascinating belief to emerge from the data is that many are wrestling with a sense of purposelessness in their education as AI advancements accelerate at a breakneck pace. A staggering 40% believe AI might outstrip human abilities within the next 30 years, a belief that underscores their deep-seated fears about the long-term consequences of AI.

What do they want?

Facilitate access to AI with free access to a paid plan of ChatGPT or Claude

Establish and enforce consistent rules on AI use

Provide AI-aware career planning services

Offer courses exploring the future impacts of AI

Help students find meaning in education and beyond 

Conclusion

The impact on undergraduates is clear and cannot be ignored. They’re using it. That use will not go away. It is often said that engagement is a problem in learning. That appears to be true in this case but the lack of engagement is with faculty and administrators.

If we listen to learners we can guide its use towards good outcomes. The sly use of AI has clearly become normalised. This will continue for some time but it is now time for institutions to step up and get it integrated into their teaching. I can’t help but ruminate over the fact that students are already thinking about the futility of learning skills that will be done better by AI. It’s their future and they are concerned that their very expensive education may not be worth the effort.

Meanwhile we need to get going. This means creating the future, not just letting it happen to us. Think about what institutions need to do in this future. These students are giving us clear recommendations. Above all, take AI seriously. Use it to teach and learn. Get it into the curriculum so we all know what we are dealing with.

PS
On costs, the lifeboat seems to be righting itself. For 70%, the AI they use is free; another 4.6% are paying $0–$15, and 24.3% are paying what looks like the standard $20. This is modest for students at Harvard, and some good services are now free. Compared to the cost of textbooks ($1,000–$1,200 a year) and other services at Harvard, it seems reasonable. Interestingly, some universities, like the University of Michigan, are paying for all student licences. It will happen over time; these are early days.

Friday, August 09, 2024

Does Derrida's View of Language help us understand Generative AI?

Many years ago, I had lunch with Jacques Derrida in the staff club at the University of Edinburgh. He was as aloof and obscure as you would imagine. The French philosopher taught at the Sorbonne and the École Normale Supérieure, and his work gained huge attention in the United States, influencing literary theory, cultural studies and philosophy. Derrida also taught at various institutions there, including Yale University and the University of California, Irvine.

What is less known is that he was influenced by J.L. Austin’s philosophy of language as speech acts, but went in another direction with a purist focus on text in his books Of Grammatology (1967), Writing and Difference (1967), and Speech and Phenomena (1967), where he introduced his deconstructive methods.

His fascinating insights into language, especially his concepts of deconstruction, différance, the fluidity of meaning and dislocation from authorship, resonate powerfully with the way Large Language Models (LLMs) operate.

Big picture

But there is a bigger picture than just language here. His target is the Metaphysics of Presence, the philosophical tradition that seeks to establish a fixed, unchanging foundation of meaning or reality, privileging beliefs in a fundamental essence or truth that underlies appearances and change. This turns Structuralism in on itself, de-anchoring structures and denying the objectivity of science and reality. Like many in the Critical Theory tradition, he ‘deconstructs’ the large metaphysical and secular narratives through the deconstruction of their texts. His ideas challenge traditional assumptions about meaning, truth and interpretation, offering a complex and nuanced view of language and texts. This was somewhat prophetic about Generative AI, LLMs in particular, where we feel a sense of dislocation of text from its past and origins. LLMs produce texts, not in the sense of a truth machine, but as something that has to be interpreted.

Text not speech

Jacques Derrida, like Heidegger, introduced an entire vocabulary of new terms, which he invents or qualifies in his philosophy (différance, intertextuality, trace, aporia, supplement, polysemy etc). He had an unerring focus on texts, as he saw Western thought and culture as being over-dominated by speech (phonocentrism). This led him to elevate the written word, importantly and oddly, seen separately from its author. Rejecting the phenomenology of Husserl, with its focus on consciousness and sense-data, of which speech is a part, he saw traditional philosophy as being tied to the language of speech, as opposed to writing. This prioritisation of speech over writing was based on the assumption that speech is a more immediate, reliable, genuine and authentic expression of thought. Text freed us from this fixity of thought.

Generative AI has revived this focus on text as LLMs were introduced as text only tools and adopted on a global scale. Anyone with an internet connection could suddenly create, summarise and manipulate text in the context of dialogue. Derrida would have been fascinated by this development. He would have a lot to say about the way they are created, trained and delivered.

Texts

Knowledge is constructed through language and texts, which are ‘always’ open to interpretation and re-interpretation as there are no totalising narratives that claim to provide complete and final explanations. Everything is contested. So, Derrida encourages dialogue, critical engagement, and the deconstruction of traditional educational structures.

Teaching and learning should be a dialogical process, which aligns with AI dialogue if there is a mutual exchange of ideas, as the interaction allows for the exploration of different viewpoints and the questioning of assumptions. It should also involve critical engagement with texts and ideas, encouraging students to reflect on the underlying assumptions and power dynamics that shape knowledge.

He highlights the central role of language in shaping our understanding of reality, the fluidity and indeterminacy of meaning, and the interconnectedness of all texts. This idea is a foundational element of his broader philosophical project of deconstruction, which seeks to uncover and challenge the assumptions and oppositions that underpin traditional ways of thinking.

Deconstruction

He is best known for developing the concept of deconstruction, a critical approach that seeks to expose and undermine the assumptions and oppositions that structure our thinking and language. Derrida used deconstruction to show how texts and philosophical concepts are inherently unstable and open to multiple interpretations. He uses various techniques to analyse a text and reveal how its apparent meaning depends on unstable structures and oppositions, such as presence/absence or speech/writing. For him, text is not a truth machine but a human activity that is complex and ambiguous; a similar view could be taken of generative LLMs.

To say, as he did in Of Grammatology (1967), that “there is nothing outside of the text”, on first appearance, seems ridiculous. The Holocaust is not a text. What he meant was not the text itself but something beyond. What that beyond is remains problematic, as Derrida refused to engage in much interrogation of his terms. But by stating that there is nothing outside the text, Derrida aims to deconstruct traditional hierarchies that privilege speech over writing or reality over representation.

He argues that every aspect of our understanding and experience is mediated through language and texts. There is no direct, unmediated access to reality; everything is interpreted through the ‘text’ of language, culture, and context. This means that meaning is always dependent on the interplay of signs within a given text and the context surrounding it. He challenges the idea that words have stable, inherent meanings. Instead, he posits that meaning is generated through the relationships between words and their differences from each other.

Deconstruction in learning

Jacques Derrida had very distinctive and complex views on teaching and learning that emphasise the fluidity and uncertainty of knowledge. He pushes the interpretative, dynamic and contingent nature of knowledge. His ‘deconstructive’ approach sees critical engagement, dialogue and the continuous questioning of assumptions as fundamental. This pedagogical model sees the teacher facilitate interpretation and exploration, and education as an open-ended process that values ambiguity. It is a more fluid, reflective and inclusive approach to education that aligns with the complexities and uncertainties of contemporary knowledge.

Deconstruction in particular plays a crucial role in his views on teaching and learning. Deconstruction involves critically examining and unravelling texts and concepts to reveal hidden assumptions and contradictions. It challenges the foundational assumptions and binaries, such as true vs. false, author vs. reader, that underpin education. This encourages students to question and critically analyse accepted knowledge rather than passively absorbing it, an attitude we see frequently expressed in relation to LLMs.

Teachers need to guide students to interpret texts in a way that exposes multiple meanings and interpretations, in a process of engaging with texts and ideas in a way that reveals their complexity and ambiguity. That is because Derrida viewed knowledge as inherently unstable and contingent, not fixed and objective.

Instability of Meaning

Derrida brilliantly argued that meaning in language is never fixed and is perpetually ‘deferred’, with meaning coming from its production. Words derive meaning through their differences from other words and their ever-changing contextual usage. This constant flux and unsettled nature of meaning is mirrored in LLMs. These models generate text based on patterns learned from vast corpuses or datasets of text, with the meaning of any output hinging on the context provided by the input and the probabilistic associations within the model's structure. Just as Derrida proposed, the meaning in LLM outputs isn't fixed or stored as entities in a database to be retrieved; it is created afresh and shifts with different inputs and contexts. This relational nature of language, captured in LLMs, implies that meaning is always deferred, never fully present or complete, leading to his concept of "différance".

Différance

Derrida uses the term ‘différance’ in two senses, both ‘to defer’ and ‘to differ’, to indicate that not only is meaning never final but it is constructed by differences, specifically by oppositions. It suggests that meaning is always deferred in language because words only have meaning in relation to other words, leading to an endless play of differences. The concept of différance captures the essence of meaning as always deferred and differentiated, never fully present in a single term but emerging through a network of differences. This is akin to the text generation process in LLMs, where each word and sentence is produced based on its differences from, and deferrals of, other possible words and sentences. The model’s understanding is built on these differences to predict and generate new coherent text.
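A small experiment makes this concrete: the probabilities a model assigns to the next word depend entirely on the words around it. The sketch below uses GPT-2 via the transformers library as a small, open stand-in; it is illustrative, not a claim about any particular commercial model.

```python
# Sketch: next-token probabilities depend entirely on surrounding context.
# Uses GPT-2 via Hugging Face transformers as a small, open stand-in for an LLM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def top_next_tokens(prompt: str, k: int = 5):
    """Return the k most probable next tokens for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k)
    return [(tokenizer.decode(int(i)), round(p.item(), 3))
            for i, p in zip(top.indices, top.values)]

# The same fragment is continued differently depending on context.
print(top_next_tokens("She deposited the cheque at the"))
print(top_next_tokens("They had a picnic on the river"))
```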

Intertextuality

Intertextuality then emphasises the interconnectedness of texts. Derrida thought that language is inherently intertextual, with texts inextricably linked to and deriving meaning from other texts, creating a web of meanings that extends infinitely. This intertextuality means that no text can be understood in isolation, as its meaning is shaped by its references to other texts, which are themselves interconnected and interdependent.

This is an inherent quality of LLMs. However, text in a Large Language Model (LLM), like GPT-4 or Claude, is not stored in the traditional sense of having a database of phrases or sentences.  The text, as tokens, is interconnected in a highly abstracted form, through embeddings and neural network parameters. The model doesn’t store explicit sentences or phrases but instead captures the underlying statistical and semantic patterns of the language during training. These patterns are then used to generate contextually appropriate text during inference.
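To make ‘interconnected in a highly abstracted form’ slightly less abstract, here is a small sketch using an open embedding model (the sentence-transformers package, an assumed example rather than anything used inside GPT-4 or Claude). Sentences with related meanings sit close together in vector space even when they share no words.

```python
# Sketch: text is captured as positions in a vector space, not as stored sentences.
# Assumes the sentence-transformers package; the model name is a common open example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The pupil handed in her essay late.",
    "The student submitted the assignment after the deadline.",
    "Interest rates were cut by the central bank.",
]

embeddings = model.encode(sentences, convert_to_tensor=True)
print(util.cos_sim(embeddings[0], embeddings[1]).item())  # high: same meaning, different words
print(util.cos_sim(embeddings[0], embeddings[2]).item())  # low: unrelated topics
```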

Understanding a text involves recognising its relationship to other texts. Similarly, LLMs are trained on a vast and diverse corpus of texts, making their outputs inherently intertextual. The responses generated by LLMs reflect the influences of countless other texts, creating a rich web of textual references that Derrida described.

Trace

A ‘trace’ is the notion that every element of language and meaning leaves a trace of other elements, which contributes to its meaning. This trace undermines the idea of pure presence or complete meaning. For example, the word ‘present’ carries traces of the word ‘absent’, as our understanding of presence is inherently tied to our understanding of absence. This chimes with the way tokens are represented as embedding vectors, which carry ‘traces’, as mathematical relationships, of all the other text encountered in training.

Aporia

With ‘aporia’ we reach a state of puzzlement or doubt, often used by Derrida to describe moments in texts where contradictory meanings coexist, making a definitive interpretation impossible. LLMs, when they reach such a state, famously hallucinate or make an effort to resolve the dialogue between the user and the model. The model may even apologise for not getting something right first time. It expresses puzzlement, even doubt, about its own interpretations and positions. It is in an aporetic state.

Writing and the Supplement

Derrida focuses on écriture (writing) and the idea of the ‘supplement’, that which adds to and completes something else, but also replaces and displaces it. Derrida used this concept to show how what is considered secondary can actually be fundamental. This may have come to pass with LLMs, where new text is tied to and produced from old text but replaces it entirely. Every new word is freshly minted; there is no sampling or copying.

Writing does not just record knowledge but creates and transforms understanding as a supplement to speech. Teaching should therefore emphasise the active role of writing in shaping and reshaping knowledge. The ‘supplement’ represents the idea that meaning is never complete and always requires additional context or interpretation. This concept implies that learning is an ongoing process of adding new perspectives and insights rather than reaching a final, complete understanding. It helps deconstruct and reconstruct knowledge in meaningful ways.

We can see how Generative AI is a ‘supplement’ to forms of writing, whether a summary, rewriting, expansion or even translation, as an open process of development rather than a final product.

Absence of Authorial Intention

Derrida also challenged the idea of authorial intention, suggesting that the meaning of a text emerges from the interaction of the text with readers and other texts, not from the author's intended meaning. LLMs, devoid of intentions or understanding, generate outputs through statistical associations rather than deliberate meaning-making, so the focus falls on the interaction between the output of the LLM and the reader. LLMs de-anchor text from the authored data used in training. The meaning in LLM responses arises from patterns in the data rather than any inherent intention, aligning neatly with Derrida's de-emphasis on authorial intention.

Textual Play and Polysemy

Derrida highlighted the playful nature of language and the multiplicity of meanings (polysemy) that any word or text can have. LLMs exhibit this same playfulness and multiplicity in their responses. A single input can lead to various outputs based on slight contextual variations, and also across models, showcasing these models’ ability to handle and generate numerous forms of language.
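That multiplicity is literally built into how these models are sampled: raise the temperature and the same prompt yields different continuations each time. A short sketch, again with GPT-2 as an open stand-in rather than any commercial model:

```python
# Sketch: one prompt, many outputs - sampling with temperature produces variation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of a word is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,            # sample rather than take the single most likely token
    temperature=0.9,
    max_new_tokens=15,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```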

Criticism 

Avoiding the reality of even speech restricts debate to texts. Yet it is not clear that education is what he calls ‘phonocentric’, and his evidence for this is vague and unconvincing. His denial of oppositional thought, which he tries to deconstruct through reversal, denies biological distinctions like gender and the persistence of a subject in relation to objective reality. It becomes an excuse for avoiding debate by reducing the other person’s views to a vague text. It matters not what your intention was, only what was said.

He refuses to define or even defend concepts, and it is not clear that concepts such as ‘différance’, which he defines rather confusingly as both deferral and difference, are of any relevance in education and learning. As his writing moved further into wordplay, playing around with prefixes and salacious references to death and sex, it drove him further away from being in any way relevant to education, teaching and learning theory, apart from literary theory.

Deconstruction of texts is his method of instruction, but it is his only method of instruction. Ultimately it is an inward-looking technique that cannot escape its own gravity. No amount of debate can produce enough escape velocity to deny the results of deconstruction. His obsession with double entendres, puns, sex and death also detracts from his theorising. Derrida's writing style is often seen as dense and obscure, and some, like John Searle and Jürgen Habermas, criticised him for lacking clarity and precision in his arguments. Searle accused Derrida of "bad writing", and Habermas critiqued his approach as relativistic and lacking commitment to rational discourse.

With Derrida we are at the tail-end of critical theory, where the object of criticism is reduced to texts and methods at the level of the ironic. His impact on education has been almost nil, as there is little that had enough force or meaning to have impact. Having rejected all Enlightenment values, large narratives, even speech, Derrida’s postmodernism is its own end in itself. His reputation lives on in the self-referential pomposity that postmodernism created, mostly limited to academia, and even there only in a subset of the humanities, where spoof papers that mimic its vagueness and verbosity have been regularly accepted for publication.

Our exploration of Derrida and LLMs also comes up against the fact that Generative AI is now multimodal, not just text. It can engage in speech dialogue and generate images, audio, avatars, even video. Here, to find useful insights, one has to stretch Derrida to breaking point! 

Conclusion

The parallels between Derrida’s theories and LLMs reveal a fascinating intersection between philosophy and technology, illustrating how philosophical ideas about language can find new life in the digital age. He takes critical theory down from grand narratives, groups or individuals to language itself. Through ‘deconstruction’ he looks at the ambiguity of texts, de-anchored from reality, even from their authors. Derrida’s deconstructionist view of language and the functioning of Large Language Models both emphasise the fluid, dynamic and context-dependent nature of meaning. While Derrida’s theories stem from deep philosophical inquiry into language and meaning, the operational mechanics of LLMs echo these ideas through their probabilistic, context-sensitive text generation. Both challenge the notion of absolute, fixed meaning and highlight the complexity and interconnectedness inherent in linguistic communication, and the dislocation of texts from their origins and provenance.

Bibliography

Derrida, J., 2001. Writing and difference. Routledge.

Derrida, J., 1998. Of grammatology. Johns Hopkins University Press.

Derrida, J., 1982. Margins of philosophy. University of Chicago Press.

Pluckrose, H. and Lindsay, J.A., 2020. Cynical theories: How activist scholarship made everything about race, gender, and identity—and why this harms everybody. Pitchstone Publishing (US&CA).

 

Thursday, August 08, 2024

Benchmarks on factual accuracy on LLMs

At last, the paper we’ve all been waiting for. It provides a benchmark for factual accuracy against established knowledge bases.

STUDY

"WILDHALLUCINATIONS: Evaluating Long-form Factuality in LLMs with Real-World Entity Queries" generated a ton of stuff (118,785 generations), from 15 LLMs on 7,919 topics.

Models vary in performance across domains, but it is clear that queries on topics with Wikipedia pages do better.

RESULTS


Larger and more recent LLMs are better than older and smaller ones, with GPT-4o and GPT-3.5 showing the highest accuracy. Some models opt out or abstain from giving outputs on more challenging queries. Another interesting conclusion was that open-source models need to raise their game, as they performed worse than closed models.

What was clear is that results are better if the topic has a Wikipedia or similar page. Unsurprisingly, models tend to have lower factual accuracy on rarer or edge-case topics, where good, structured sources are less likely to be available.
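The paper’s pipeline is more sophisticated than this, but the general recipe for scoring long-form factuality can be sketched as: break a generation into atomic claims, check each claim against reference material, and report the supported fraction. A minimal Python illustration, where split_into_claims and is_supported are hypothetical stand-ins for the real claim-extraction and verification steps:

```python
from typing import List

def split_into_claims(generation: str) -> List[str]:
    # Placeholder: real systems use an LLM or parser to extract atomic claims.
    return [s.strip() for s in generation.split(".") if s.strip()]

def is_supported(claim: str, reference_text: str) -> bool:
    # Placeholder: real systems use retrieval plus an entailment or LLM judge.
    return claim.lower() in reference_text.lower()

def factual_precision(generation: str, reference_text: str) -> float:
    """Fraction of extracted claims supported by the reference material."""
    claims = split_into_claims(generation)
    if not claims:
        return 0.0
    supported = sum(is_supported(c, reference_text) for c in claims)
    return supported / len(claims)

# Toy usage: score a model answer against a Wikipedia-style reference passage.
reference = "Marie Curie won two Nobel Prizes. She was born in Warsaw."
answer = "Marie Curie won two Nobel Prizes. She was born in Warsaw. She invented radar."
print(factual_precision(answer, reference))  # 2 of 3 claims supported, ~0.67
```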

CONCLUSIONS

I'm not as obsessed by hallucinations as some. The obsession comes from the expectation that LLMs should be truth engines, when they are tools that help us with tasks. Search engines come up with false results, books are full of mistakes and teachers make mistakes, as many subjects are now taught by teachers who do not have a degree in that subject.

This is good news for school educators, as the topics covered invariably have good Wikipedia pages or similar. School-level curricula tend to be well covered in knowledge bases. I’d say this is also true for undergraduate courses, especially 101 courses, the area I’d focus on for AI support.

This is even truer for business applications, where ground truth matters less and the work is much more qualitative.

It is also important to remember that this paper shows things are getting a lot better, and that this will continue.

Strawberry Fields forever? Is GPT-5 really going to change the world?

Let me take you down
'Cause I'm going to strawberry fields
Nothing is real
And nothing to get hung about
Strawberry fields forever...

The Beatles had it sussed! Replace Strawberry Fields with AI: if the virtual intelligence is real, then the distinction between what mind and machine can do dissolves. Embrace this leap, don’t get hung up on cynicism. This will be forever, a species-changing event.

I’m taking a punt here, but I keep an eye on model leaks and something’s brewing. ‘Strawberry’ is the name of OpenAI’s reasoning project, and reasoning in some leaked models is getting very good. Playing around in Chatbot Arena, there are signs that models like ‘sus-column-r’ are reasoning quite well. You never really know, but even Sam Altman has been teasing with strawberry symbols.

With agents and reasoning it’s not just a game changer, it’s a new game. It moves these tools into real analysis and decision-making, in ways we haven’t seen so far.

Her

The strawberry reference is actually from the movie ‘Her’, the best movie ever made on AI. Directed by Spike Jonze, the film mentions strawberries in a scene where Theodore is talking to Samantha (Her). Their conversation is about a book Theodore is writing, in which a character remembers “perfectly ripe strawberries” from his childhood. Strawberries serve as a symbol of nostalgia and the longing for a simple, perfect moment from the past. They highlight the theme of human experiences and memories, which are central to the film’s exploration of relationships and the nature of human connection, even with artificial intelligence. The strawberries symbolise the poignant, sensory details that make memories so vivid and meaningful. 

It’s the movie equivalent of the famous moment in Marcel Proust’s Remembrance of Things Past, where a madeleine dipped in tea triggers a flood of memories for the narrator. 

"And at once the vicissitudes of life had become indifferent to me, its disasters innocuous, its brevity illusory—this new sensation having had on me the effect which love has of filling me with a precious essence; or rather this essence was not in me it was myself. I had ceased now to feel mediocre, accidental, mortal. Whence could it have come to me, this all-powerful joy? I sensed that it was connected with the taste of tea and cake, but that it infinitely transcended those savours, could not, indeed, be of the same nature. Whence did it come? What did it signify? How could I seize and apprehend it?"


How can Large Language Models (LLMs) like GPT-4 be improved at inference and reasoning?

You can use different and more focused data. Scale may help here, as larger datasets with targeted synthetic data, drawn from specific domains and contexts, may well produce more nuanced and reasoned output. If that data is selected or annotated for specific reasoning tasks, such as logical reasoning, mathematical problem-solving or common-sense reasoning, it should help the model learn those skills. But that is not enough. Reasoning needs memory over longer contexts, to use relevant information from other parts of a conversation or text. Attention also needs to focus on the input and on identified intentions and goals.

But the big need is fine-tuning the model on tasks that specifically require inference and reasoning. This is complex, going beyond simple questions and answers into problem-solving and decision-making. It also seems likely that actual databases of agreed knowledge would be useful. There is also the gnarly problem of integrating different modalities.
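As a rough illustration of what such reasoning-focused fine-tuning data might look like (the schema and fields are hypothetical, not any vendor’s actual training format), each example pairs a problem with explicit intermediate steps rather than just a final answer:

```python
import json

# Hypothetical fine-tuning examples that reward showing intermediate reasoning,
# not just final answers. The schema is illustrative only.
examples = [
    {
        "problem": "A train travels 120 km in 1.5 hours. What is its average speed?",
        "reasoning": [
            "Average speed = distance / time.",
            "120 km / 1.5 hours = 80 km per hour.",
        ],
        "answer": "80 km/h",
    },
    {
        "problem": "If all blips are blops and no blops are blurps, can a blip be a blurp?",
        "reasoning": [
            "Every blip is a blop.",
            "No blop is a blurp, so nothing that is a blop can be a blurp.",
            "Therefore no blip can be a blurp.",
        ],
        "answer": "No",
    },
]

# Write as JSONL, a common format for supervised fine-tuning datasets.
with open("reasoning_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```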

In other words, there is an ensemble of techniques that need to be orchestrated to do what we do, only better. Humans are actually quite poor at reasoning: we have inbuilt biases, limited short-term memory and fallible long-term memory. We don’t even have a simple calculator module; the brain can barely cope with times tables, and that takes years of training.

Conclusion

If this happens, and I think it is only a matter of time, we will have moved into another era of AI. Current LLMs are like us, too like us. The next generation of AI will be better than us. This has huge implications for productivity, employment and the future trajectory of our species.


Sunday, August 04, 2024

6 Level AI Maturity Model (AIMM)

While at Learning Pool, I contributed to the design of a data maturity tool for organisations. Reflecting on that experience, I see real potential for a similar approach in AI. Given the rapid advancements in AI and its proven impact on productivity, an AI Maturity Model (AIMM) could help organisations navigate and optimise their AI adoption journey. 

A maturity model has to be simple, show progression and be applicable to all organisations: schools, colleges, universities and the public sector through to large corporates.

Recent studies and data confirm that AI technologies significantly boost productivity. However, many employees resort to using AI covertly because their organisations either lack clear policies or impose unnecessary barriers. This scenario underscores the need for a structured approach to evaluate and enhance AI maturity within your organisation. 

Research also shows that many are having to use ‘AI on the sly’, as their organisation is either lax on policy or puts barriers in their way. Having an AI Maturity Model allows organisations first to identify where they are, then decide where they want to be, tactically or strategically. It also gives people within organisations clarity on where they stand in using the technology, and allows organisations to deal with issues such as lack of oversight and potential security risks, as well as to capture the benefits.

You can categorise maturity curves in many ways but, in general, the progression runs from covert use through to sanctioned, proactive encouragement and use of AI. The six levels below chart that progression, with a rough coded sketch after the list.


 1. AI on the Sly

Employees use AI tools unofficially, due to an absence of policies or to constraints. Here we have shadow, sporadic use of AI tools without organisational support. This is not ideal but has become very common.

 2. Informal Without Policies

Employees use AI tools openly, with no defined policy, restrictive measures or constraints. Here we have known, widespread use of AI tools without sanctioned organisational support.

3. Informal With Policies

AI tools are acknowledged and their use is permitted but not actively promoted. Basic AI policies are in place, AI tools are used for routine tasks, with limited organisational support.

4. Tactical Pilot Projects

AI is used in pilot projects to test its potential and viability in specific areas. These pilot projects have defined objectives often with initial investment in AI tools and training, with results monitored and analysed.

5. Strategic Implementation

AI adoption integrated into the organisation’s strategic initiatives. AI projects aligned with strategic goals, dedicated AI teams, regular training programs, funded initiatives with investment in infrastructure and tool use.

6. Deeply Embedded 

AI is embedded in the core business processes and is a key driver of organisational growth. There is a pervasive use of AI across functions, continuous AI innovation, significant investment in AI infrastructure and talent, with full IT infrastructure and support.
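As a sketch of how these six levels could feed a simple self-assessment, here is a minimal Python version; the level names follow the model above, while the yes/no questions and the mapping are invented for illustration:

```python
from enum import IntEnum

class AIMaturity(IntEnum):
    AI_ON_THE_SLY = 1
    INFORMAL_WITHOUT_POLICIES = 2
    INFORMAL_WITH_POLICIES = 3
    TACTICAL_PILOT_PROJECTS = 4
    STRATEGIC_IMPLEMENTATION = 5
    DEEPLY_EMBEDDED = 6

def assess(has_policy: bool, runs_pilots: bool, ai_in_strategy: bool,
           ai_in_core_processes: bool, use_is_open: bool) -> AIMaturity:
    """Very rough mapping from yes/no answers to a maturity level."""
    if ai_in_core_processes:
        return AIMaturity.DEEPLY_EMBEDDED
    if ai_in_strategy:
        return AIMaturity.STRATEGIC_IMPLEMENTATION
    if runs_pilots:
        return AIMaturity.TACTICAL_PILOT_PROJECTS
    if has_policy:
        return AIMaturity.INFORMAL_WITH_POLICIES
    if use_is_open:
        return AIMaturity.INFORMAL_WITHOUT_POLICIES
    return AIMaturity.AI_ON_THE_SLY

# Example: open but unmanaged use, no policy, no pilots.
level = assess(has_policy=False, runs_pilots=False, ai_in_strategy=False,
               ai_in_core_processes=False, use_is_open=True)
print(level.name)  # INFORMAL_WITHOUT_POLICIES
```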

Ethics and compliance

Ethics are largely captured within the regulatory and legal environment in which you work. Vague, exaggerated and sometimes irrelevant ethical debates, myths and beliefs can get you stuck at levels 1 and 2. At these levels users may be ignoring regulatory and legal issues, which is risky, although at this early stage in the market the risks are small. At level 3, policies force the issue and should be aligned with rules in whatever territories the organisation operates in – the US, UK, EU, China and others are all very different. In many ways, the further you progress the more likely you are to have access to IT and legal advice, and if you have a major provider, such as Microsoft, Google or OpenAI, on an enterprise basis, they do much of the hard work for you. 

Change management

On advice and procurement, be careful with arriviste consultants who are not practitioners; there are levels of expertise within AI. As for vendor products, go through your procurement process. This is not easy, as the technology is changing fast, so do NOT lock yourself into licence agreements beyond a year and keep your options open, which is hard at enterprise-level procurement.

As for internal advice and support, it is no different from any other change management issue. HR, L&D or Legal rarely drive such initiatives but they need to be consulted as part of the change management process. AI on the SLY is a way for employees to slip round HR and L&D, who may be behind the curve on productivity. 

The need for formal training tends to come in at level 4. There is a danger with premature training, such as prompt engineering, where the underlying technology changes so quickly that it negates what is taught. 

Conclusion

Developing an AI Maturity Model can provide organisations with a framework to assess their current AI capabilities, identify areas for improvement, and strategically plan their AI journey. By understanding where they stand on the AI maturity curve, organisations can make informed decisions to foster a culture of innovation and leverage AI for sustained growth. 






Thursday, August 01, 2024

AI ethical objections - use it, you see the light… says Gartner

Louis Brandeis, the US rights lawyer, famously said that “sunlight is said to be the best of disinfectants” and in another letter he qualified this statement as he had been thinking “about the wickedness of people shielding wrongdoers & passing them off (or at least allowing them to pass themselves off) as honest men.” So he came up with another version “If the broad light of day could be let in upon men’s actions, it would purify them as the sun disinfects.” What he meant was that what we now call ‘transparency’ keeps us honest.

Rather than organisations setting up barriers to the use of AI, they should let the light in, make the effort to understand what it is and how it can help, and encourage its use.

Gartner

Gartner VP analyst Svetlana Sicular, speaking at the Gartner conference in Sydney, said that myths, misconceptions and media clickbait lie behind much of the fear and negativity around AI. She claimed that worry about unemployment drops from 60% to under 14% among people who actually use the technology. The problem is one of perception, not reality. As she stated, “Once people are actually exposed to AI and asked to make use of it, their concern about job loss drops significantly... the problem boils down to a lack of exposure and understanding.”

Exposure not regulation

Media claims, in particular, keep the doomster fires burning, as they make good headlines and copy; in reality, journalists rarely understand the technology. Media portrayals bordering on sci-fi, along with misconceptions, fuel much of this fear, Sicular noted, which has led her to recommend exposure rather than regulation. 

She made the very good point, which I wholly agree with, that the experience of using GenAI is critical in managing expectations. Time and time again I’ve seen sceptics turned into evangelists when they go through the personal experience of seeing what it can do for them, especially at work. Personal agency seems to free people from negativity: that sense of learning and using something new genuinely excites you. Keeping GenAI in no-man’s land limits what can be achieved. 

Problem

The problem, as evidenced by Microsoft and Cache reports, is that organisations are not allowing people to use the technology. Bottlenecks in HR and senior management mean that huge numbers are using it on the sly. This means productivity gains are being lost and you potentially lose ground against the competition. 

Conclusion

We are seeing evidence in both education and the workplace that huge numbers of people are using GenAI on the sly, as there are barriers put up by their organisations, despite data showing productivity gains. This is to be expected. The technology is very new and very different. It invokes a sense of awe and wonder when used. That ‘aha’ moment is needed to shift the various biases that come into play when people are faced with a radically new technology like AI.