Sunday, July 16, 2023

This is the worst AI will ever be, so focused are educators on the present they can’t see the future

One thing many have not grasped about the current explosion of AI is that it is moving fast – very fast. Performance improvement is real, rapid and often surprising. This is why we must be careful not to fixate on what these models do at present. The phrase 'AI is the worst it will ever be' is relevant here. People, especially in ethical discussions, are often fixated on the past, old tools and the present, and do not consider the future. It took only 66 years to get from the first flight to the moon. Progress in AI will be much faster.

In addition to being the fastest-adopted technology in the history of our species, it has another feature that many miss – it learns, adapts and adds features very, very quickly. You have to check in daily to keep up.

Learning technology

The models learn, not just from unsupervised training on gargantuan amounts of data but also from reinforcement learning from human feedback. LLMs reached escape velocity in functionality when their training sets reached a certain size, and there is still no end in sight. Developments such as synthetic data will take them further. This simple fact, that this is the first technology to 'learn', and learn fast, at scale, continuously, across a range of media and tasks, is what makes it extraordinary.
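To make those two learning signals concrete, here is a minimal, illustrative sketch in Python (PyTorch), assuming nothing about any real LLM: a toy next-character predictor trained on raw text, followed by a crude reward-weighted update standing in for reinforcement learning from human feedback. Real systems use reward models and algorithms like PPO at vastly greater scale; every name and size below is hypothetical.

```python
# A toy sketch (not a real LLM) of the two training signals described above:
# next-token prediction on raw text, then adjustment from a human preference
# score. All names and sizes are illustrative.
import torch
import torch.nn as nn

text = "the models learn from data and from human feedback "
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
ids = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# 1. 'Unsupervised' pretraining: predict the next character.
x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)
for step in range(200):
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(vocab)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# 2. Human feedback: upweight a continuation a rater preferred (a crude
# stand-in for RLHF's reward model; real systems use PPO or similar).
reward = 1.0  # the rater liked this output
logprobs = torch.log_softmax(model(x), dim=-1)
chosen = logprobs.gather(2, y.unsqueeze(-1)).squeeze(-1)
rl_loss = -(reward * chosen).mean()
opt.zero_grad(); rl_loss.backward(); opt.step()
print(f"pretrain loss {loss.item():.3f}, feedback step applied")
```

The point of the sketch is simply that the same model keeps updating from new signals, which is exactly the continuous learning the post describes.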

 

Teaching technology

There is also a misconception around the word 'generative', the assumption that all it does is create blocks of predictable text. Wrong. Many of its best uses in learning are its abilities to summarise, outline, provide guidance, support and many other pedagogic features that can be built into software. This works, and it means tutors, teachers, teaching support, note-taking support, coaches and many other services that aid both teaching and learning will emerge. They are being developed in their hundreds as we speak.

 

Additive technology

On top of all this is the blending of generative AI with plug-ins, where everything from Wikipedia to advanced mathematics has been added to supplement its functionality. These are performance enhancers. Ashok Goel has blended his already successful teaching bot Jill Watson with ChatGPT to increase the efficacy of both. On top of this are APIs that give it even more potency. The reverse is also true, where generative AI supplements other tools. There is no end of online tools that have added generative AI to make themselves more productive, as it need not be a standalone tool.
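For those wondering what a 'plug-in' means mechanically, here is a hedged sketch of the pattern: the model does not run the tool itself; it emits a structured request, the host application executes it, and the result is fed back into the conversation. `call_llm` and `wikipedia_summary` are hypothetical stand-ins, not any vendor's real API.

```python
# Illustrative plug-in loop: model requests a tool, host runs it, result
# goes back into the chat. Both functions below are hypothetical stubs.
def wikipedia_summary(topic: str) -> str:
    # A real plug-in would call Wikipedia's public API here.
    return f"Stub summary for '{topic}'."

TOOLS = {"wikipedia_summary": wikipedia_summary}

def call_llm(messages):
    # Placeholder for a real chat-completion call. Here we pretend the
    # model asks for a tool whenever the user says 'look up'.
    last = messages[-1]["content"]
    if last.startswith("look up"):
        return {"tool": "wikipedia_summary",
                "arguments": {"topic": last.removeprefix("look up ").strip()}}
    return {"content": f"Answer using: {last}"}

def chat(user_text):
    messages = [{"role": "user", "content": user_text}]
    reply = call_llm(messages)
    if "tool" in reply:  # the model requested a plug-in call
        result = TOOLS[reply["tool"]](**reply["arguments"])
        messages.append({"role": "tool", "content": result})
        reply = call_llm(messages)
    return reply["content"]

print(chat("look up Jill Watson"))
```

The design point is separation of concerns: the model supplies intent, the host supplies capability, which is why any online tool can bolt generative AI on, and vice versa.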


Then there is translation between hundreds of languages, including computer languages, even translation from text to code, images, video, 3D characters and 3D worlds... it is astounding how fast this has happened, oiling productivity, communications, sharing and learning. Minority languages are no longer ignored.


All of the world's largest technology companies are now AI companies (all in the US and China). The competition is intense and drives things forward. This blistering pace means they are experimenting, innovating and involving us in that process. The prizes of increased productivity, cheaper and faster learning, and faster and better healthcare are already being seen, if you have the eyes to look.


People tend to fossilise their view of technology; their negativity means they don't update their knowledge, experience and expectations. AI is largely Bayesian: it learns as it goes, and it is not hanging around. People are profoundly non-Bayesian: they tend to rely on first impressions and stick with their fixed views through confirmation and negativity biases. They fear the future, so stick to the present.

 

Conclusion

Those who do not see AI as developing fast, even exponentially, use their fixity of vision to criticise what has already been superseded. They poke fun at ChatGPT 3.5 without having tried ChatGPT 4, any plug-ins or any of the other services available. It's like using Wikipedia circa 2004 and saying 'look, it got this wrong'. They poke the bear with prompts designed to flush out mistakes, like children trying to break a new toy. Worse, they play the GIGO trick, garbage in: garbage out, then say 'look, it's garbage'.


This is the worst AI will ever be, and it's way better than most journalists, teachers and commentators think, so we are in for a shock. The real digital divide is now between those with curiosity and those who refuse to listen. Anyone with access to a smartphone, computer, laptop or tablet – basically almost all learners in the developed world – has access to this technology. The real divide is between those in the know and those who are not, those using it and those who are not, and that is the increasing gap between learners and teachers. So focused are educators on the present they can't see the future.

Thursday, July 13, 2023

AI is now opening its eyes, like Frankenstein awakening to the world

The AI frenzy hasn't lessened since OpenAI launched ChatGPT. The progress, widening functionality and competition have been relentless, with what sounds like the characters from a new children's puppet show – Bing, Bard, Ernie and Claude. This brought Microsoft, Google, Baidu and Anthropic into the race, actually a two-horse race between the US and China.

It has accelerated the shift from search to chat. Google responded with Bard, the Chinese with Ernie's impressive benchmarks, and Claude has just entered the race with a 100k token context window and cheaper prices. They are all expanding their features, but one particular thing did catch my eye: the integration of Google Lens into Bard. Let's focus on that for a moment.

 

Context matters

Large Language Models have focused on text input, as the dialogue or chat format works well with text prompting and text output. They are, after all, 'language' models, but one of the weaknesses of such models is their lack of 'context'. This is why, when prompting, it is wise to describe the context within your prompt. The model has no world model; it knows nothing about you or the real world in which you exist, your timelines, actions and so on. This means it has to guess your intent just from the words you use. What it lacks is a sense of the real world, to see what you see.
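A minimal illustration of that advice, with arbitrary field names of my own invention: since the model cannot see your situation, state who you are, where you are and what your constraints are explicitly in the prompt.

```python
# Pack explicit context into the prompt, because the model has none.
# The field names here are arbitrary examples, not a standard schema.
def build_prompt(task: str, context: dict) -> str:
    lines = [f"- {k}: {v}" for k, v in context.items()]
    return "Context:\n" + "\n".join(lines) + f"\n\nTask: {task}"

prompt = build_prompt(
    task="Suggest a revision plan for next week.",
    context={
        "who": "second-year biology undergraduate",
        "where": "UK university, exams start in 10 days",
        "constraint": "evenings only, 90 minutes per day",
    },
)
print(prompt)
```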

 

Seeing is believing

Suppose it could see what you see? Bard, in integrating Google Lens, has just opened its eyes to the real world. You point your smartphone at something and it interprets what it thinks it sees. It is a visual search engine that can identify objects, animals, plants, landmarks, places and no end of other useful things. It can also capture text as it appears in the real world on menus, signs, posters and written notes, as well as translate that text. Its real-time translation is one of its wonders. It will also execute actions, like dialling telephone numbers. Product search from barcodes is there too, which opens up advertising opportunities. It even has style matching.

More than meets the eye

OK, so large language models can now see, and there's more than meets the eye in that capability. This has huge long-term possibilities and consequences, as this input can be used to identify your intent in more detail. The fact that you are pointing your phone at something is a strong signal of intent, that the object or place is of real, personal interest. That, with data about where you are, where you've been, even where you're going, all fills out your intention.

 

This has huge implications for learning in biology, medicine, physics, chemistry, lab work, geography, geology, architecture, sports, the arts and any subject where visuals and real-world context matter. It will know, to some degree, far more about your personal context, and therefore your intentions. Take one example: healthcare. With Google Lens one can see how skin, nails, eyes, retinas and eventually movements can be used to help diagnose medical problems. It has been used to fact-check images, to see if they are, in fact, relevant to what is happening in the news. One can clearly see it being useful in a lab or in the field, to help with learning through experiment or inquiry. Art objects, plants and rocks can all be identified. This is an input-output problem. The better the input, the better the output.

 

Performance support

Just as importantly, learning in the workplace is a contextualised event. AI can provide support and learning relevant to actual workplaces, airplanes, hospital wards, retail outlets, factories, alongside machines, in vehicles and offices - the actual places where work takes place - not abstract classrooms.


In the workplace, learning at the point of need for performance support can now see the machine, vehicle, place or object that is the subject of your need. Problems and needs are situated, and so performance support, providing help at that moment of need, as pioneered by the likes of Bob Mosher and Alfred Remmits, can be contextualised. Workplace learning has long sought to solve this problem of context. We may well be moving towards solving it.

 

Moving from 2D to 3D virtual worlds

Moving into virtual worlds: my latest book, out later this year, argues that AI has accelerated the shift from 2D to 3D worlds for learning. Apple may not use the words 'artificial' or 'intelligence', but its new Vision Pro headset, which redefines computer interfaces, is packed full of the stuff, with eye, face and gesture tracking. Here the 3D world can be recognised by generative AI to give more relevant learning in context, real learning by doing. Again, context will be provided.

 

Conclusion

Generative AI was launched as a text service but quickly moved into media generation. It is now opening its eyes, like Frankenstein awakening to the world. There is often confusion about whether 'Frankenstein' names the creator or the created intelligence. With generative AI it is both, as we created the models, but it is our culture and language that fill the LLM. We are looking at ourselves, the hive mind in the model. Interestingly, if AI is to have a world view, we may not want to feed it such a view; like LLMs, we may want it to create a world view from what it experiences. We are taking steps towards that exciting, and slightly terrifying, future.

Huw did we ever get to this?

I spoke to an interesting woman at the BBC once, where I gave a talk on the challenge of digital media to traditional TV. My talk was received like a turd left behind by a burglar, as they then saw the internet and YouTube as an irrelevant gadfly. But that's another story. At that event I met this woman, from Northern Ireland, who trained fledgling newsreaders and presenters. She told me she had informally called her course 'The Egos Have Landed', as she had repeatedly seen an odd phenomenon: young, and not so young, journalists and others catapulted into fame, thinking they were something more than autocue puppets. Their exposure turned them into monstrous narcissists who then started having opinions they thought mattered, all because they read from a teleprompter or chatted to each other on a studio sofa.

Savile was the King of such monsters, a prolific paedophile lauded, and worse, protected by BBC managers. Everyone knew; everyone laughed it off. Roll the credits on decades of paedophiles, from Rolf Harris to Stuart Hall and a string of Radio 1 DJs. They're an odd bunch. Kristian Digby, host of BBC1's To Buy Or Not To Buy, accidentally suffocated while attempting auto-erotic asphyxiation. We love a jolly frontman, as long as we don't hear about his not-so-jolly backroom behaviour. Schofield and Edwards are just the latest in a long line of friendly faces that mask disturbing behaviour. I'm little concerned with their behaviour, as the witch hunts are so unedifying.
The deeper malaise is old media trying hard to avoid extinction. They need more front, as that’s the only thing they have left. Witness the recent disastrous interview by the BBC with Andrew Tate or Cathy Newman being demolished by Jordan Peterson. Whatever your views on these two odd chaps, they themselves have a lot of ‘front’, they’re smart, articulate and part of the counter-culture that has challenged TV. They ran rings around their stumbling, formulaic, ex-journalist interrogators.
The problem is too much focus on the ‘presentation’ layer. Presenters are really just juiced up human PowerPoints. I see this in tech all the time, its obsession with UX, then along comes Google – just type into a box, or ChatGPT, the same. TV has to put horrifically expensive lipstick on pigs because we want the truth watered down and mouthed out to us by what is known in the trade as ‘talking heads’. Loose Women, Quiz Shows and Reality TV are packed with these D-list ‘presenters’. They never die, just reappear as banal commentators on endless third rate entertainment programmes, the graveyards for clowns.
I have no idea why we think that news ’readers’ are worth listening to, outside of being working journalists. They’re the teleprompt and interview folk, and usually not very good at the latter, as their skills are with the written not spoken word. I was once introduced by Jackie Bird, a famous TV presenter in Scotland, as ‘Douglas’, even though I could see the word on the autocue was ‘Donald’. She was basically a bad parrot.
The problem is that they now get paid huge sums to ‘present’ homilies, seem like wholesome figures, often castigating others for their moral turpitude. We expect them to be our moral guardians, clean, pure, sensible and decent, when in truth they’re worse than most, as they often turn into overpaid narcissists. Will we miss Huw or will we manage without paying him £410,000 a year to read an autocue and behave like an old letch hunting down young ‘talent’?
I feel sorry for old Huw. He seems so ordinary, unremarkable and absent of charisma. Just a drone voice over royal events and a dull, earnest newsreader. I can't think of a single interesting sentence he ever uttered. He does stand out as someone without any obvious talent or presence.
TV is in trouble, as it is being crushed by the timeshifted streamers, social media and a dozen other alternatives. This is merely a sign of the old v new.

Tuesday, July 11, 2023

Is Ethics doing more HARM than GOOD in AI for learning?


I put this to an audience of Higher Education professionals at an Online Learning Conference yesterday at Leeds University.

I have an amazing piece of technology I've invented. It will bring astonishing levels of autonomy, freedom and excitement to billions of people. But here's the downside: 1.4 million people will die horrible, bloody, sometimes mangled deaths every year, with another couple of million maimed and injured. This World War level of casualties will strike every year, and it is the price you have to pay. Would you say YES or NO?

Most rational souls would say NO. But let me reveal that technology – the automobile. We have come to an accommodation with it, as the benefits outweigh the downsides. AI may even bring us the self-driving car. My point is that we rush to judgement, as we are amateur ethicists and rely on gut feel, not reason.


This whole area, ethics, is oddly subject to a huge amount of bias, as it is such an emotive subject. It plays to people's fears and prejudices, so objectivity is rare. Add new technology to the mix, along with a pile of stories on social media, and you have a cocktail of wrong-headed certainty and exaggeration.

 

1. Deontological v Utilitarian

The offer I made at the start I have put to many audiences. It is never taken up, as we are Utilitarians (calculating benefits against downsides) when it comes to actual decisions on using technology, but dogmatic Deontologists (seeing morals as rules or moral laws) when it comes to thinking about ethics and technology.

I am a fan of David Hume's indirect Utilitarianism, refined by Harsanyi as preference Utilitarianism. For a good discussion of how this relates to ethical issues and AI, see Chapter 9 of Stuart Russell's excellent book Human Compatible (2019), where he attempts to translate it into achievable, controlled but effective AI. Curiously, Hume found himself cancelled by a few morally deluded students at the University of Edinburgh recently, and they removed his name from the building which housed the Department of Philosophy. This doubled down on the religious Deontologists who refused him a Professorship in the 18th century, when he was one of the most respected intellectuals in the whole of Europe. Both groups are deluded Deontologists. He remains, in my opinion, the finest of the English-speaking philosophers. This tension has existed in ethical thinking since the Enlightenment.

In truth, most of what passes for ethics in AI these days is lazy 'moralising', moral high horses ridden by people with absolute certainty about their own values and rules, as if they were God-given. More than this, they want to impose those rules on others. They call themselves 'ethicists', but it is thinly disguised activism, as there is no real attempt to balance the debate with the considerable benefits. It's an odd form of moral philosophy that only considers the downsides.

Google, Google Scholar, AI-mediated timelines on almost all social media, the filtering of harmful and pornographic material out of our inboxes, the protection of our bank accounts – all use AI. The future suggests that other huge near-term upsides in learning, healthcare and productivity are well underway.

There is a big difference between 'ethics' and 'moralising'. Even a basic understanding of ethics will reveal the complexity of the subject. We have thousands of years of serious intellectual debate around deontological, rights-based, duty-based, utilitarian and other ways of thinking about ethics. A pity we give it so little thought before passing judgement.

2. Duplicity

Thomas Nagel points out, in his book 'Equality and Partiality', that we often pronounce strong deontological moral opinions but rarely apply them to our own behaviour. We talk a lot about, say, climate change, but drive large cars and fly off regularly on vacation. We talk about the climate emergency in academia but fly off to conferences at the drop of a sunhat, don't deliver learning online and believe in spending €28 billion flying largely rich students around Europe through Erasmus. You may want all of your AI to be fully 'transparent'. That's fine, but then stop using Google and Google Scholar and almost every other online service, as they all use AI and it is far from transparent. My favourite example is those who are happy to 'probe' my unconscious in 'unconscious bias' training but decry the use of student data in learning on the grounds of privacy!

I'm just back from Senegal, where my fellow debating colleague Michael, from Kenya, berated the white saviours for denying Africa the opportunities that AI offers. Human reinforcement training pays above the average wage and gives young, aspiring workers a step into IT employment. It's bizarre, he says, for white saviours on 80k to see this as exploitation.

3. To focus on AI is to focus on the wrong problem

Rather than climate change, the possibility of nuclear war, a demographic time bomb or increasing inequalities – AI is getting it in the neck, yet it may just solve some of these real and present problems. In particular, it may well increase productivity, democratise education and dramatically reduce the costs of healthcare. These are upsides that should not be thwarted by idle speculation.

At its most extreme, this speculation that 'AI will lead to the extinction of the human species' seems to have turned into the Doomsday tail that wags the black dog, despite the fact that there is no evidence at all that this is possible or likely. Focus on what is likely, not the fear-mongering that caught your attention on Twitter.

4. New technology always induces an exaggerated bout of ethical concern

Every man, woman, their uncle, aunt and dog is an armchair ethicist, but this is hardly new. It was ever thus. Plautus made the same point about the sundial in the 3rd century BC:

The gods confound the man who first found out how to distinguish hours! Confound him too who in this place set up a sundial to cut and hack my days so wretchedly into small portions!

Plato thought writing would harm learning and memory in the Phaedrus, the Catholic Church fought the printing press (we still idiotically teach Latin in schools), travelling in trains at speed was going to kill us, rock 'n' roll spelled the end of civilisation, calculators would paralyse our ability to do arithmetic, Y2K was going to cause the world to implode, computer games would turn us into violent psychopaths, screen time would rot the brain, then the internet, Wikipedia, smartphones, social media… now AI.


As Stephen Pinker rightly spotted, a predictable combination of negativity and confirmation bias leads to a predictable reaction to any new technology. This inexorably leads to an over-egging of ethical issues, as they confirm your initial bias.

 

5. Fake distractive ethics

Curiously, much of the language and many of the examples in the layperson's mind have come from shallow and fake news, which is actually a real concern in AI, with deep fakes. Take the famous NYT article where the journalist claimed ChatGPT had told him to leave his wife. On further reading, it shows he had prompted it towards this answer. If some stranger in a bar dropped you the line that his marriage was on the rocks, you'd put a significant bet on him being right to leave his wife. ChatGPT was actually on the money. It was a classic GIGO, Garbage In: Garbage Out, poke-the-bear story. Then there was that AI-guided missile that supposedly turned back and hunted down its launcher – never happened, complete fake. The endless stream of clickbait 'look, it can't do this', mostly using ChatGPT 3.5 (a bit like using Wikipedia circa 2004), flooded social media. This is the worst AI will ever be, but hey, let's not consider the fact that first-release technology almost always leads to dramatic improvement. Think long-term, folks, before using short-term clickbait to make judgements.

 

6. Argument from authority

Then there is the argument from authority. 'I'm a Professor,' say people in strongly worded letters to the world, 'therefore I must be right.' Two things matter here: domain experts often have a lousy track record, and they often lack expertise in philosophy, moral philosophy, the history of technology, politics and economics. To be fair, experts in AI are worth listening to, as they understand what is often a difficult and opaque technology. Generative AI, in particular, is difficult to comprehend, in terms of what it is, how it works and why it works. It confounds even AI experts. But they are not experts on politics, ethics or regulation.

 

The letters that appeared in both 2015 and 2023, pushed by Tegmark's Future of Life Institute (whose role is ethical oversight), use the argument from authority. We're academics, we know what's right for you, the masses. The 2023 letter demanded that we immediately stop releasing AI for six months until the regulators caught up – a ridiculous and naive request that showed the signatories' political, economic and social naivety. I dislike this 'letter writing' lobbying. For a start, it included the names of people who demanded to be taken off the list, as they had not given permission, and some have since rescinded their support. Authority alone is never enough.

 

Conclusion

This tsunami of shallow moralising is almost perfectly illustrated in Higher Education, where most of the debate around ethics has focused on plagiarism, when the actual problem is crap assessment. There is little consideration of the huge upsides and benefits for teachers and students alike. Learning, in my view, is the biggest beneficiary of this new form of AI, healthcare second. Hundreds of millions are already using it to learn.

In climbing into personal pulpits, we may fail to realise the benefits in learning. Personalised learning, allowing any learner to learn anything, at any time, from any place, is becoming a reality. Functioning, endlessly patient tutors that can teach any subject at any level in any language are on the horizon: universal teachers with a degree in every subject, driven by good learning science and pedagogy. The benefits for inclusion and accessibility are enormous, as is its potential to teach in any language.

It is not that there are no ethical problems, just that objective ethical debate is harmed when it becomes enveloped in a culture of absolute values and intolerant moralising. For every ethical problem that arises, there seem to be glib answers that are simple, confidently pronounced and often wrong.

I wrote this because I feel we are now in the position, in some countries and sectors, especially education, of getting bogged down in a swamp of amateur moralising on AI, suppressing the benefits. This has already happened in the EU, where the atmosphere is one of general negativity, with the EU seeing its role as regulator, not creator. But the EU is only 5.7% of the world's population. Google has not released Bard in the EU, OpenAI has set up shop in London, and when Italy banned ChatGPT it spooked investors. We are in danger of throwing the baby out with the bathwater – and the bath. In practice, learners are using this tech anyway; they are bypassing institutional inertia and high-horse ethical posing. Eric Atwell, at Leeds University, noted that all of his AI students ticked 'not interested' when it came to taking a module on ethics. They have a point. They know they'll get a lot of moralising and not much in the way of ethics. It is unethical not to be using AI in learning.

Indeed, ethics may be doing more harm than good by making AI less useful and efficient. Guardrailing and alignment may well be reducing the effectiveness of generative AI by placing too many constraints on output.

Leeds leads the way on HE and online learning events

A solid Online Learning Summit at Leeds University: open debate and discussion, some great people as speakers, as well as expertise in the audience. I could only be there for one of the two days, but it was worth the trip to Leeds, my second in a week. I like Leeds.

Irrepressible Neil Mosley 

First up, the irrepressible Neil Mosley, a knowledgeable, productive and honest broker of information on online learning in HE. Knows his stuff. He outlined the growth in the UK HE online learning market. I say growth, but at 400k it is, in reality, a bit lacklustre, a point also made by both myself and Paul Bacsich. One could conclude that this is little more than a bit of an earner on the side, especially for foreign student income, rather than the strategic execution we see in the US. Taken by surprise by Covid, they seem to be retreating back into the old model, and necessary expansion is slight. There is no real strategic intention to reduce costs and scale with online offers, as it is often an attempt to milk the lucrative 'Masters Degree' market.

His characterisation of the ‘partnerships’ market was good:

OPMs (Online Programme Management)

Ex-MOOC platform companies

Short Course Companies

Service Companies (learning design etc)

The whole MOOC movement made lots of mistakes, and MOOCs have now turned into 'courses'. The disaster that was FutureLearn, an organisation that simply ripped cash out of UK universities, distracted them from the real task of online learning and collapsed because it had no business expertise. Hiring your CEO from BBC Radio condemned it to a long decline into irrelevance. The OU was meant to open up HE to a wider audience and could have led the charge into online learning, but the old boys' club took over and has thwarted it at every turn.

Neil then looked at growth in the numbers and types of courses:

Degrees

MOOCs

Premium Short Courses

Micro-credentials

 

Micro-credentials

It is worth bringing in a later panel at this point, on 'micro-credentials', which must be one of the most disastrous bits of HE marketing ever… such a stupid word, an explicit recognition that what you offer is a trite piece of paper, badge or some such nonsense. It is a stupid, demeaning and diminishing term. The audience knew this, but the panel seemed happy with it because it could be 'translated' – the worst response to any question on the day. This is what happens when you get people who know nothing about marketing talking about marketing. Not for the first time did the audience show real insight and expertise.

This rose by any other name stinks. A blatant attempt to, yet again, steal market share from those who do skills training well: FE and private providers. HE is hopeless at skills stuff but smells the cash and has been down lobbying the DfE; the panellist from Staffordshire admitted as much. The other panellist, from Wales, seemed to live on EU Erasmus grants, which have, rightly in my view, dried up. I did like the woman from Mexico, who was blunt and honest about her very different context. Once again, money gets sucked up from actual skills delivery to pretend skills delivery in HE. They can't do this, and they justify the immoral move by tagging on the term 'Lifelong Learning'. It doesn't wash. HE is NOT in the lifelong learning sector, never was and never will be. There was also some baloney about 'badges' from a 'badges' man who, we were told, was some sort of lackey in the Royal Household. They will learn the hard way and fail to make money. Paul Bacsich made much the same point. I like Paul – he's been around the block several times and has a good nose for this kind of waste, which I remember him describing as 'doomed to succeed'.

Learning Engineering

I enjoyed Aaron Kessler's talk on Learning Engineering, although I'm not a fan of the term 'engineering' here, as it is being used analogously. I feel that learning is a wide and messy business and doesn't always fit neatly into this paradigm. The insistence on using learners in the design process suffers, I think, from the obvious fact that they don't know what they don't know and are often delusional about good learning theory and practice. But the talk was sound, as it stated what is obvious: that process matters, implementation is hard and evaluation harder. The push towards data was also, rightly, emphasised. Once again, an audience member pointed out that most don't have the luxury of complex, abstract models, as they have tight deadlines (great point). Aaron very kindly gave me a copy of the 'Learning Engineering Toolkit' book, which has some pretty good stuff. I tackled the same subjects in my 'Learning Experience Design' book. We're all in the same boat here, rowing in the same direction.

Ethics and AI

My contribution was a short talk on ethics and AI. I made the point that most 'ethical AI' is not ethics at all but 'moralising'. It is a complex issue, diminished when barely disguised activism enters the room. Lots of moral high horses are being ridden into the debate, clouding expertise. The fact that HE has focused almost entirely on plagiarism as the moral issue says how far behind we are in our thinking about the use of AI in HE. The problem is not AI but crap assessment. My message was a bit depressing, as I now think the UK and EU are way behind on both AI and AI for learning. The US and China are streaking ahead as we wallow in bad regulation. Eric Atwell, who teaches AI at Leeds, very kindly summed up my talk by agreeing with every last thing I had said! This was gratifying, as I find a great deal of good sense comes from practitioners, as opposed to arrivistes who have jumped on the ethical bandwagon. Adam Nosel made some good points about coaching and the need to maintain the human and social elements, as did Andrew Kirkton on some of the nitty-gritty issues in HE.

Podcasts
I had breakfast with Bo from Warwick, who is doing some great work on podcasting in her institution. It is a subject close to my heart. We are stuck in a traditional paradigm in learning design, ignoring one of the most important mediums of our day. Not to use podcasting in learning is mad, as hundreds of millions listen to learning podcasts every day, of their own volition. We know a lot about how to do these well, and Bo was on point here. Good to see young experts get a voice at this event.

Conclusion
These were merely my impressions, written on the train back to Brighton, not an exhaustive summary, and even though I disagreed with some of it, that is the point. Margaret Korosec, Jo-Anne Murray, Megan Parsons and the rest of the team did a great job here, encouraging honest, open and sometimes uncomfortable debate. That's the point. This is about moving forward, learning something new and moving on. To do that we need to look outwards. I'd have loved to have seen some people from FE here, as well as private providers (there were some). But this was only the first event. It was a shame I couldn't stay for the second day; the tapas meal was fun, Leeds I love, and I met and spoke to some great people. I look forward to the second.