
Saturday, May 06, 2023

Learning technologies 2023 - a tale of two events

Spent two solid days at the Learning Technologies Conference in London, Europe's largest organisational learning technology event. First up, a big thanks to Donald Taylor for inviting me to speak. He was everywhere, with his team, keeping the show on the road. We're often confused with each other, and it happened twice at this event. At the KoganPage bookstall someone asked me to sign my book, only it turned out to be the other Donald's. Mine had sold out, so I gave her a spare copy. The second mix-up was a summary video of the conference in which my talk on 'AI changing work and learning' was captioned as a talk by Donald Taylor. He has solid, bona fide Scottish roots in the Glasgow shipyards, so it's an honour to be connected by confusion!

Learning Technologies was great, but I found it a Janus-faced event. One face was the inward-looking exhibition, the other the outward-facing conference.

 

Exhibition

A vast, loud, noisy exhibition, with so many stand lights and so much hot air that it turned into a sweaty hell. I did the rounds. Same old, same old. It was like going back to Disneyland, all smiles and promises of fun times, but reflecting an embalmed, vastly overpriced and vanishing world. Is there any other industry that produces so much that is so disliked by so many? With a small number of exceptions, there was barely a mention of AI, except in that 'we're building it into our product' sort of way, tinkering.

 

This giant 'cheese' factory was churning out courses, stored in LMSs, pouring out the same old scorn – sorry, SCORM – data that ends up as donuts on dashboards. Text – graphic – MCQ – 'all of the above' – repeat. The whole junkyard has become a parody of itself, disengaged from real people and the real world, whose reaction to their latest Leadership, Diversity or Resilience course is invariably an eye-roll. We evaluate nothing, which has resulted in the over-production of over-engineered, overwrought, Disneyfied courses. It is a supply, not demand, industry, not listening to actual business needs but imposing a therapeutic and moral nonsense. Next thing, they'll be probing my unconsciousness – hold on…

 

There was much jaw-jaw about skills, but so often that manifested itself in Leadership nonsense, DEI or faddish topics. This year's thing is 'Resilience', yet another hopeless construct from L&D: an excuse for third-rate courses that see employees as having yet another deficit or disease of which they must be 'cured'. This therapeutic culture is relentlessly top-down and arrogant. Employees have this stuff force-fed to them, rather than using it autonomously, as they have been doing with Google, YouTube and social media for two decades.

 

This is a technology conference, but the technology so often felt like something out of the early 2000s. That's because it is: a product of the Cambrian explosion of LMSs created in the early 2000s, with content that has changed little in the last two decades. It lacks the smartness of contemporary tech – the AI, the data-driven approach, the dialogue of social media.


I'm being a little unfair, as this is the technology that was available, became embedded at the enterprise level, integrated with other software and was difficult to update. On content, however, there is less excuse.

 

Meanwhile, literally over the same two days, one edtech company had half a billion wiped off its market cap, Pearson had a dead cat bounce, IBM announced that ChatGPT would replace many of its HR staff, and the world outside of Disneyland moved on, bypassing this supply pipe and reacting to real demand.

 

Conference

Across the corridor, by contrast, in the conference, AI was the BIG topic. It wasn't that it was coming; it was already here, with hundreds of millions using it for work. Like some super-popular performance support and productivity tool, it seems to have bypassed L&D and most of the vendors. The sisters and brothers are clearly doing it for themselves with AI.

 

There was passing reference to it in David Kelly's talk, although the talk seemed quite basic, aimed at people new to old ideas like 'personalisation' and 'performance support'. I was genuinely puzzled by the statement that they would not publish their DevLearn US sessions online as it would not be equitable. At a Learning Technologies conference that seemed like a cop-out.


The Red-something analyst, Dani, had her versions of the Fosway four-way grids, showing her pet companies; oddly, some of the European players were absent from her slides. Her grasp of the AI phenomenon was thin as gruel. Not sure why we have US people who don't really know the European market telling us about our own turf. Fosway are miles better. I tried to suggest some names she had missed, but she wasn't interested and fobbed me off. Real analysts, who work deep inside the investment community, are way more knowledgeable than these 'let's send out some survey questions' qualitative research houses.

 

I hugely enjoyed the 'AI for Lifelong Learning' talk, as Conrado Schlochauer was spot on in saying that adults don't need all of these courses, that they want to be self-directed and that ChatGPT was the way to go. It was easily the best talk on lifelong learning I've seen, although it took a strange turn at the end with the claim that AI was making us illiterate fools, stuck in our echo chambers. I find that argument unpalatable. The world is full of people in their own bubbles calling out others for being in bubbles.

 

Talking of bubbles, I find major conference sessions such as 'Women in Learning' particularly inward-looking. It is a technology conference, not a general L&D conference. I'm thinking of suggesting a 'Poor People in Learning' session as a counter to the trend of spending all of the budget on Leadership and DEI training that deliberately excludes working-class people. L&D seems to assume that everyone works at home or in an office; real practical skills have been underfunded or abandoned in the L&D world, and we wonder why the world is falling apart.

 

What I found really heartening was the recognition, among almost everyone I met, that AI is a Big Bang thing, not just for L&D but for work and the entire species. The debates were intense and informed. Why? Everyone had used it and had their minds blown by it. They immediately saw its potency and potential. I gave a session that was packed to the gunnels with people eager to hear what impact this is having on work and learning. That impact is already profound. I also presented to a large room full of students who were smart as whips, asking all the right questions about AI.

 

But what really mattered were the myriad conversations I had in passing, in the pub and in restaurants. A ton of conversations with old friends and, even more valuable, lots of new friends made, too many to mention. I particularly loved the enthusiasm of the young, who really did seem a little tired of the old and genuinely wanted to ring in the new.

Wednesday, May 09, 2018

Google just announced an AI bot that could change teaching & learning…. consequences are both exciting & terrifying…

Bot reversal
Revealed during a Google conference, Google Duplex stole the show. Google stunned the audience with two telephone conversations, to real businesses, initiated and completed by a bot. If anything, the real people in the businesses sounded more confused than the bot. The bots came from Google Assistant and were delivered by Google Duplex. Note that this reverses the usual set-up of a person speaking to a bot: here, the bot is speaking to real people, and it is hard to tell which one is real. We are about to see a whole range of things done by humans in customer service replaced by bots.
Lessons in learning
This reversal is interesting in education and training, as it supports the idea of a bot as a tutor, teacher, trainer or mentor. I've already written about how bots can be used in learning. The learners remain real but the teaching could be, to a degree, automated. Most of the time we talk to each other through dialogue. This is how things get done in the real world; it is also how many of us learn. Good teachers engage learners in dialogue. But suppose bots become so good that they can perform one half of this dialogue?
This is a tough call for software. There’s the speech recognition itself. It also has to sound natural, but natural is a bit messy. I can say ‘A meal for four, at four’ – that’s tricky. On top of this, we go fast, pause, change direction, interrupt but also expect fast responses. This is what Google have tackled head-on with neural networks and trained bots.
Domain specific
Google Duplex does not pretend to understand general conversations. It is domain-specific – which is why its first deployment will be customer service over the phone. You need to train it in a specific domain, like hairdressing or doctor appointments, then encapsulate lots of tricks to make it work. But in domain-specific areas, we can see how subject-specific teaching bots could do well. Bots on, say, maths, biology or language learning are sure to benefit from this tech. The tech is nowhere near 'replacing' teachers, but it can certainly augment, enhance, whatever you want to call it, the teacher's role.
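To make the 'domain-specific' point concrete, here is a minimal, hypothetical sketch of a teaching bot that drives one half of a dialogue within a single narrow domain – basic arithmetic – and declines anything outside it. It is a toy illustration in plain Python, not how Google Duplex actually works; Duplex uses trained neural networks for speech and dialogue, whereas this is hand-written rules.

```python
# Toy domain-specific teaching bot (illustrative only).
# The bot initiates the dialogue, asks questions in one narrow domain
# (basic arithmetic) and gives feedback. Anything it cannot parse as a
# number is politely refused - it never strays outside its domain.
import random


class ArithmeticTutorBot:
    def __init__(self, rounds: int = 3):
        self.rounds = rounds

    def ask(self):
        """Generate one question and its expected answer."""
        a, b = random.randint(2, 9), random.randint(2, 9)
        return f"What is {a} x {b}?", a * b

    def feedback(self, answer: str, expected: int) -> str:
        """Respond to the learner's answer, staying within the domain."""
        try:
            value = int(answer.strip())
        except ValueError:
            return "Sorry, I only understand numbers in this lesson."
        if value == expected:
            return "Correct - well done."
        return f"Not quite. The answer is {expected}."

    def run(self):
        for _ in range(self.rounds):
            question, expected = self.ask()
            print("Bot:", question)
            learner = input("You: ")
            print("Bot:", self.feedback(learner, expected))


if __name__ == "__main__":
    ArithmeticTutorBot().run()
```

The point is the constraint: within one tightly defined domain the bot can carry its half of the dialogue and give useful feedback; outside it, it simply declines. That is the trade-off described above.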
Conclusion

We're not far off from bots like these being as common as automated check-outs and ATMs. I've been working on bots like these for some time and we were quick to realise that this 'reversal' is exactly what 'teaching' bots needed. There are some real issues around their use, such as our right to know that it is a bot on the other end of the line, and their potential use in spam calls. But if it makes our lives easier and takes the pain out of dealing with doctors' receptionists and call centres – that's a win for me. If you're interested in doing something 'real' with bots in corporate learning, contact me here….

Tuesday, September 26, 2017

AI on land, sea, air (space) & cyberspace – it’s truly terrifying

Vladimir Putin announced, to an audience of one million online, that "Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today." Elon Musk tweeted a reply, "China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo", then, "May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory."
That pretty much sums up the problem. Large and even small nations, even terrorist groups or lone wolves, may soon have the ability to use 'smart', autonomous, AI-driven tech in warfare. To be honest, it doesn't have to be that smart. A mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country when it takes off and delivers its deadly payload. Here's the rub – sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit. As soon as machines have the power to decide who lives or dies, we are in uncharted moral territory.
In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes and thankfully contributed to ending the Second World War, and let's not imagine that it has been dormant for the last half-century. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine. What we now face are smart, active, autonomous landmines, armed with deadly force on land, sea, air, space and cyberspace. AI exists in all war zones, on all fronts – land, sea, air (space) and cyberspace.
AI on land
Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here. We have to be careful here, as the cognitive side of soldiering is very far from being achieved. That is itself a problem, as dumb, cognitively limited robot soldiers bring dangers of their own.
In the DMZ between South and North Korea, robot guards are armed and will shoot on sight. Known as a Lethal Autonomous Weapons System (LAWS), it shoots on sight, and by 'sight' we mean infrared detection and laser identification and tracking of a target. It has an AI-driven voice recognition system, asks for identification and can shoot autonomously. You can see why these sentry or rapid-response systems have become autonomous: humans are far too slow in detecting incoming attacks or targeting with enough accuracy. Many guns are now targeted automatically, with sensors and systems way beyond the capabilities of any human.
Uran-9, a Russian unmanned tank, gets to places autonomously and can operate under human control or not. It is hard to believe that the autonomous software for such vehicles has not been developed, given that it has for self-driving cars. When such vehicles can set off and complete missions on their own, it is hard to see why risking human crews would remain an acceptable option.

AI at sea
Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let's call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad, for example the delivery of drugs using autonomous GPS navigation, as well as finding aircraft that have gone down in mid-ocean. In military terms, large submarines capable of travelling thousands of miles, sensor-rich, with payloads, are already in play. Russian drone submarines have already been detected; code-named Kanyon by the Pentagon, they are thought to have a range of up to 6,200 miles and speeds of up to 56 knots. They can also deliver nuclear payloads. Boeing and many others have also been developing such unmanned subs.
AI in the air
When you fly, the pilot of the 757 switches to autopilot at 1,000 feet and you are then technically flying in a robot for the rest of the flight, albeit supervised by the pilots, who monitor fuel consumption, weather and so on. The aircraft could land itself using autoland, but most pilots still prefer to land it themselves. The bottom line is that software does most flying better than humans and will soon outclass them on all tasks. Flying is safe precisely because it is highly regulated and smart software is used to ensure safety.
Drones are the most obvious example. Largely controlled from the ground, often at huge distances, they are now AI-driven, operate from aircraft carriers, can defend themselves against other aircraft and deliver deadly missiles to selected targets. The days of the fighter plane may be numbered, as drones, free from the problem of seating and sustaining a human pilot, are cheaper and can be produced in larger numbers.
Take Taranis, named after the god of thunder, an unmanned BAE drone that has been tested in Australia on autonomous missions. The tech that has been developed for self-driving cars can be used for autonomous vehicles on land and in the air; they spot the enemy, rather than friendly pedestrians.
Worryingly, there is evidence that Israel's Harop system, which is actually an autonomous bomb that literally dive-bombs radar installations and self-destructs, has already been used.
Swarm drones have already been tested by the US to surround and overwhelm a target. Each drone acts independently but also as part of the group.


Nanoweapons
A terrifying range of nanoweapons, mosquito-like robots and mini-nukes has entered the vocabulary. Nanoweapons: A Growing Threat to Humanity by Louis A. Del Monte is a terrifying account of how nanoweapons may change the whole nature of warfare, making other forms almost redundant. It is the miniaturisation of weaponry that makes this such a lethal threat. This world of small weaponry is worrying: small payloads on small drones with amazing manoeuvrability are already possible. Stuart Russell has already warned against this one aspect of AI in weaponry in evidence to the UN, which seems, at last, to be moving towards international regulation in this area, as it has with chemical weapons.
AI in cyberspace
War used to be fought on land, sea and air, with the services – army, navy and air force – representing those three theatres of war. It is thought that a brand new front has opened up on the internet, but this is not entirely true, as the information and communications war has always been the fourth front. The Persians did it, the Romans were masters of it and it has featured in all modern conflicts. Whenever a new form of communications technology is invented – clay tablets, paper, printing, broadcast media, the internet – it has been used as a weapon of war.
However, the internet offers a much wider, deeper and more difficult arena, as it is global and encrypted. Russia, China and the US are the major players, with autonomous bots and viral campaigns in action. China wages a war against freedom of expression within its own country with its infamous Great Firewall of China. Russia has banned LinkedIn, and Putin has been explicit in seeing this as the new battlefield. The US is no different, with explicit lies about the surveillance of its own citizens. But it is the smaller state actors that have had real wins – ISIS, North Korea and others. With limited resources, they see this theatre as somewhere they can compete and outwit the big boys.
AI as weapon of peace
When you land at advanced airports, you walk through a gate that scans your passport. A chip on your passport holds an image of your face, and face recognition software, along with other checks, identifies you as being able to enter the country. You needn't speak to any human on your entire trip. You will soon be able to walk through borders using only a mobile phone. Restricting the movement of criminals and terrorists is being achieved through the use of many types of AI. The war on terror is being fought using AI. It is AI that is identifying and taking down ISIS propaganda. What is required is a determined effort to use AI to police AI. All robots may have to have black boxes, like aircraft, so that rogue behaviour can be forensically examined. AI may be our best defence against offensive (in both senses of the word) AI.
Conclusion


KAIST, the MIT of South Korea, has just backed down from doing military research with AI after a serious letter from major AI researchers with global reputations, threatening a boycott. Google staff lambasted the CEO when they found that Google's TensorFlow was being used for wide-area imaging from military drones. It is heartening that this is coming from within the AI and tech community. There are already UN meetings, with multilateral support, to discuss and decide desirable international laws, such as those covering chemical weapons, landmines and laser-blinding technology. The two main areas of focus at the moment are 1) target selection and 2) the application of violent force. But it is hard to see this sticking. The US, Russia and China have been lukewarm on further regulation, seeing existing laws as adequate. Some aspects of AI remain opaque: the more sophisticated machine learning and neural networks become, the less we know about what is actually happening inside them.
What is worrying, however, is that while many of the above examples are known, you can bet that this is merely the tip of a chilling iceberg, as most of these weapons and systems are being developed in deep secrecy. Musk and many others, especially the AI research and development community, are screaming out for regulation at an international level on this front. Our politicians seem ill-equipped to deal with these developments, so it is up to the AI community and those in the know to press this home. This is an arms race far more dangerous than the nuclear race, where only large nations, and humans, were in control; it calls for a declaration of war on AI weaponry. We are facing a future where even small nations, rogue states and actors within states could get hold of this technology. That is a terrifying prospect.