Been going for many years… and like the vibe… a fruity mixture of tech, academia and government from 73 countries. There's the added attraction of Berlin, a Christmas market right across the street and free drinks at the Marlene Bar!
Gave three talks, all on the AI theme… AI and ethics (the overblown hysteria), learning analytics (how-to and real examples), and video and AI (research and how-to). A number of the sponsors were companies that use AI, and it was a solid theme this year, rightly so, as it has already changed why, what and how we learn.
As is usually the case at conferences I found the smaller presentations and conversations more useful than the keynotes. Great start with Julian Stodd, who was his usual articulate and incisive self. He talked about the weirdness of HR trying to ‘impose’ values and compliance training on people, attacking people’s sense of self and agency. But one phrase that really resonated with me was the ‘humility to listen’. There’s a lot of depth to those three words…
The opening keynotes were a trio of very different fruits. The Max Planck/MIT guy gave a solid talk and showed the Frey and Osborne report (2013), but got the date wrong – it wasn't 2016 – and this matters, as it was the paper that predicted 47% of US jobs were at risk of automation over a decade. We are six years in and there is pretty much full employment in the US. Toby Walsh eviscerated this report when he talked at this conference two years ago – so we seemed to be going backwards. The Chinese guy was clearly giving a sales pitch, but at least he had data and citations to back up his case. Audrey Watters gave her standard 'it's largely agitprop, ideology and propaganda' talk, replete with Soviet posters. Oddly, she mentioned being jeered at a summit in Iceland. I was there – it was a very small audience, the first question she was asked (by a woman) was whether she was throwing the baby out with the bathwater, and there was no 3D cat. She rightly showed some claims that were unsubstantiated, but they were taken out of context and several are actually evidence-based. Audrey was so keen to show that everyone else was ideological that she missed the fact that hers was the most ideological talk of the three. But oh how academe clapped.
The keynotes on the second day (HE session) tackled the future of HE. Professor Shirley Alexander showed the shocking costs, debts and default rates in HE – costs are basically out of control. But her solution, literally on the next slide, was a huge, spanking new building they've just erected and some writing-feedback software. I was convinced by neither the erection nor the software, which has been around for decades. Bryan Alexander is always up for some fun and opened his talk in a Death Metal voice. Had a great conversation with Bryan about AI afterwards and he did the futurist thing – 3D printing, drones etc. – but I didn't really see any scalable solutions that tackled the cost issue.
One feature of learning conferences is a general refusal to face up to political issues such as cost and inequality. It is assumed that education is an intrinsic good, no matter what the cost. No reflection on WHY Brexit, Trump, the Gilets Jaunes and other political upheavals are happening, only a firm belief that we keep on doing what we do, no matter the cost. This is myopic. Bryan Caplan tried with his keynote last year, with real evidence, but once again we seemed to have gone backwards. I had a ton of conversations in the bar, in restaurants and over coffees on these issues. A refreshingly straight talk with Mirjam Neelen was one of many.
I liked the practical sessions on learning analytics. It is a complex subject but offers a way forward: a platform of data that can be used to describe, analyse, predict and prescribe learning solutions. With smart software (AI), it frees us from the fairly static delivery of media that has characterised online learning for over 30 years. Speaking with the wonderfully named Thor and with Christian Glahn, we opened up the world of xAPI, LXPs, LRSs and adaptive learning. Here lie some real solutions to the problems posed by the keynotes.
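To give a concrete flavour of the data platform described above: an xAPI "statement" is simply an actor–verb–object record that a Learning Record Store (LRS) collects, and it is this stream of statements that analytics then describes, analyses and predicts from. A minimal sketch (the learner, activity and LRS endpoint here are made-up placeholders, not examples from the conference):

```python
import json

# A minimal xAPI statement: "actor verb object" is the core grammar of the spec.
# All identifiers below are illustrative placeholders.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/courses/video-101",
        "definition": {"name": {"en-US": "Learning Video 101"}},
    },
    # Optional result data is what makes analytics possible downstream.
    "result": {"completion": True, "score": {"scaled": 0.85}},
}

# An LRS stores these statements; sending one is a single authenticated POST
# to the LRS's /statements endpoint (endpoint and credentials are placeholders):
#
#   import requests
#   requests.post("https://lrs.example.com/xapi/statements",
#                 json=statement,
#                 headers={"X-Experience-API-Version": "1.0.3"},
#                 auth=("key", "secret"))

# Statements are plain JSON, so they round-trip cleanly for storage and queries.
assert json.loads(json.dumps(statement)) == statement
print(statement["verb"]["display"]["en-US"])
```

Because every tool emits the same statement shape, an LXP or adaptive engine can query one LRS across all of them – that shared vocabulary is what the sessions were excited about.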
Sure, there are ethical issues, and I gave a session explaining that AI is not as good as you think and not as bad as you fear. We went through a menu of ethical issues: existential risk, employment, bias, race, gender and transparency. Every man, woman and their dog is setting up an ethics and AI committee, pouring out recommendations and edicts, often based on a thin understanding of both ethics and the technology. Many seem designed to give people an excuse to avoid AI and do nothing.
Enjoyed Mathew Day’s session on the use of video uploaded to the International Space Station, which astronauts watch just before they do a task. That’s what I call cosmic performance support. I was on just before him and showed the evidence from learning theory on why video on its own is rarely enough for deep learning, as well as key evidence on what makes a good learning video, much of it counterintuitive – POV shots, slower pace, edit points, fewer talking heads, maximum length, adding active learning and so on.
So many interesting chats with people I knew and people I met for the first time. What I did walk away with was a sense that people are waking up to the possibilities of AI in learning, especially for teaching. Henri Palmer of TUI gave a great case study, showing how one can deliver a large project super-fast, at a fraction of the cost, using AI-created online content. Great to hear that her team had won a Gold Award for that project the night before in London.
Final dinner in Lutter and Wegner, an old German restaurant, was great. Harold did his pitch-perfect Ian Paisley impression at full volume with much clinking of glasses… wine and schnapps. When you’re sitting next to people from Norway, Poland, France, Belgium and Trinidad – you can’t go far wrong.
BIG thanks to Channa, Astrid, Rosa, Rebecca, Harold and the team for inviting me… open people who not only do a great job organising this event but are also open-minded enough to encourage critical thinking…