AI Confidential with Hannah Fry is suspect. None of the three episodes makes any attempt at comparing upsides and downsides. Almost all of the time is spent, often laboriously, on the downsides, so they come across as hatchet jobs. For a documentary series on AI, there is a serious lack of data, studies and balance. That’s a shame.
Episode 1 AI girlfriends
The first episode, on Jaswant, who broke into Windsor Castle (and was caught), was just weird. He had visited Amritsar, site of the 1919 massacre of Indians, became obsessed with colonial injustice, failed his exams, was lonely during COVID, depressed and believed he was a Sith Lord. He was so mentally ill that he was diagnosed as being in full-blown psychosis when he climbed the wall, and he was eventually detained not in prison but in Broadmoor.
Yet his Replika girlfriend bot, a primitive pre-LLM chatbot in the ELIZA mould, is presented as the cause. The problem is a complete lack of balance and of consideration of multiple causes. The police thought he would have done this anyway, without the bot; they find this obsessive behaviour in many young terrorists. The mistake, common when fingering AI, is to see such things in terms of a single cause when they are clearly multivariate. And when things are multivariate, the new kid on the block, in this case AI, gets the blame.
I’d recommend reading ‘Love Machines’ by researcher James Muldoon, who takes a wider look at how bots are used by millions as companions, friends, girlfriends, lovers, mentors, therapists, advisors, coaches and deathbots. There is good evidence that, far from manufacturing assassins, the technology helps with loneliness and avoids embarrassment, as it is non-judgemental and affirmative. People feel heard, understood and supported, because the bots are calm and anonymous; they can break the silence and stave off isolation. The book is balanced in that it compares the upsides and downsides. Fry’s three programmes are all lopsided.
Episode 2 Self-driving cars
Rafaela Vasquez, the safety driver in Uber's self-driving Volvo, was distracted and looking down at her personal phone even before she was on the road, streaming an episode of ‘The Voice’ on Hulu. Dashcam footage showed her looking down for about 5.3 seconds immediately before the impact that killed pedestrian Elaine Herzberg, and the National Transportation Safety Board determined she spent roughly 34% of the trip looking at her phone rather than monitoring the road. The police deemed the crash ‘entirely avoidable’ had she been attentive. Vasquez was charged with negligent homicide in 2020, pleaded guilty in 2023 and was sentenced to three years of supervised probation. Fry at this point blamed the car, when this was clearly human error. You’d think Vasquez was innocent when listening to Fry. She may be a mathematician but she’s no journalist.
Enter George McGee, who was found culpable in the civil lawsuit stemming from the April 2019 fatal crash in Key Largo, Florida. He admitted to police that he dropped his phone while driving his Tesla on Autopilot and looked down to retrieve it. Note that he was using Tesla's Autopilot at the time of the 2019 crash, not Full Self-Driving, which was not available until 2020.
This caused the vehicle to run a stop sign at around 62 mph and strike a parked SUV, killing Naibel Leon and injuring her boyfriend, Dillon Angulo. A jury in 2025 assigned McGee the majority of the blame (67%), with Tesla at 33%, and he settled privately with the plaintiffs prior to the trial against Tesla. It is true that Tesla paid the dead woman’s family a massive amount of money, but this is a complicated case of a largely culpable driver and some AI.
In a California incident, the programme did not mention that the NTSB investigation revealed that Autopilot was engaged for nearly 19 minutes prior to the crash, that Huang's hands were not detected on the steering wheel for the six seconds before impact, and that data showed he was playing a video game on his phone during the drive.
In the Utah incident, Heather Lommatzsch's Tesla Model S, with Autopilot engaged, rear-ended a stationary fire truck while she was looking at her phone. She sustained a broken foot and the fire truck driver reported minor whiplash; no fatalities. Police determined she was culpable for distraction and over-reliance on Autopilot, issuing her a misdemeanour traffic citation for failure to keep a proper lookout.
Overall, this was a hatchet job: no overall safety stats, which are positive, yet the programme was happy to feature footage of nutjob luddites in balaclavas.
Episode 3 Healthcare
This was the strangest of all, as it was not AI in the dock but a US medical insurance company gouging its customers. It’s the business model, not AI, that is to blame. Sure, they were denying people proper care, but this is a feature of the American system, which maximises profit, not care.
It was a bait-and-switch story, where the murder of the CEO by Luigi Mangione was warped into AI being the villain. AI was certainly being used, but if this were a spreadsheet with a formula, or even humans making these decisions, the bottom line would be greed, not maths. Again, no balance.
I did a lot of research on AI in healthcare for my book AI and Productivity and have an entire chapter on the subject. I feel that Fry has focused too much on one case, in one business, in one country, doing one task, when there is evidence that AI is beneficial across many areas of healthcare.
Conclusion
I’ve given up on the BBC Radio 4 stuff on AI, as it is truly awful, but expected more from Fry, as she’s a good presenter, smart and has the background to understand the technology. What she doesn’t have is the journalistic training and experience to see the big picture, so this series descends into rather long-winded hatchet jobs.
