Dispatches on Channel 4 put 12 households of supposedly undecided voters to the test, force-feeding them AI deepfakes. Split into two groups, they go through a mock election at the end.
The programme sets out its stall from the start – they’re out to confirm what they already believe. This is not research, it’s TV’s odd version of confirmation bias. The three experts are the usual suspects and, guess who, the famously fragile interviewer Cathy Newman. They are zealots for deepfake danger – they are willing this to happen. A dead giveaway is the past examples of deepfakes they pull up – all left-wing figures – Hillary Clinton and Sadiq Khan. One of the three experts is, of course, a Labour Party comms expert!
Here’s the rub. Feeding participants a limited diet of information that massively increases the ratio of fake to real news renders the whole experiment useless. The presence of Channel 4 camera crews and lighting, as participants are sat down to watch the fakes, adds to the impression that this is authoritative content. They are deliberately set up to receive this ‘leaked’ news. It completely destroys any notion of ecological validity in the experiment. In fact, you’re almost forcing them into believing what you’re telling them. The programme becomes a case study in bad research and investigation – it becomes fake news in itself, a ridiculously biased experience masquerading as supposedly authoritative journalism.
Actual research
Hugo Mercier’s book Not Born Yesterday debunks the foundations of this moral panic about deepfakes, with research showing that their impact is marginal, that they are quickly debunked and that few actually believe them. He argues that humans are not as gullible as often portrayed; instead, we are selective about the information we believe and share. He explores how social dynamics and evolutionary history have shaped human reasoning and belief systems, making us more resistant to deception than commonly assumed. Most of us weren’t born yesterday. Language didn’t evolve to be immediately believed as true.
Brendan Nyhan, a world-class researcher who has studied the impact and implications of deepfakes extensively for many years, is clear. His research focuses on the potential threats posed by deepfakes to democratic processes and public trust. Nyhan argues that while deepfakes represent a significant technological advance, their real-world impact on public perception and misinformation may be more limited than often suggested. He emphasises that the most concerning scenarios, where deepfakes could substantially alter public opinion or significantly disrupt political processes, are rare, and that the actual use and effectiveness of deepfakes in altering mass perceptions have been relatively limited so far.
Deepfakes touch a nerve
They are easy to latch on to as an issue of ethical concern. Yet despite the technology being around for many years, there has been no deepfake apocalypse. The surprising thing about deepfakes is that there are so few of them. That is not to say it cannot happen. But it is an issue that demands some cool thinking.
Fakes have been around for a long time. Roman emperors sometimes had their predecessors' portraits altered to resemble themselves, rewriting history to suit their narrative or to claim a lineage. Fakes in print and photography have been around for as long as those media have existed.
In my own field, learning, fakes have circulated for decades. Dale’s Cone is entirely made up, based on a fake citation, with fake numbers put on a fake pyramid. Yet I have seen a Vice Principal of a university, and no end of keynote speakers and educationalists at conferences, use it in their presentations. I have written about such fakery for years, and a lesson I learnt long ago is that we tend to ignore deepfakes when they suit our own agendas. No one complained when a flood of naked Trump images hit the web, but if it’s from the Trump camp, people go apeshit. In other words, the debate tends to be partisan.
When did recent AI deepfake anxiety start?
Deepfakes, as they're understood today, refer specifically to media that's been altered or created using deep learning, a subset of artificial intelligence (AI) technology.
The more recent worries about AI-created deepfakes date from 2017, when the term ‘deepfake’ (a portmanteau of ‘deep learning’ and ‘fake’) was first used for AI-generated images and videos. That year, a Reddit user called ‘deepfakes’ started posting videos with celebrities’ faces superimposed on other bodies.
Since then, the technology has advanced rapidly, leading to more realistic deepfakes that are increasingly difficult to detect. This has raised significant ethical, legal and social concerns around privacy, consent, misinformation and the potential for exploitation. Yet there is little evidence that they are having any effect on either beliefs or elections.
Deliberate deepfakes
The first widely known political AI deepfake surfaced in April 2018: a video of former U.S. President Barack Obama, made by Jordan Peele in collaboration with BuzzFeed and Peele’s production company, Monkeypaw Productions. In the video, Obama appears to make a series of controversial statements. It was actually the voice of Peele, a comedian and impressionist, with AI used to manipulate Obama’s lip movements to match the speech. We also readily forget that it was Obama who pioneered the harvesting of social media data to target voters with political messaging.
The Obama video was actually created as a public service announcement to raise awareness about the potential misuse of deepfake technology in spreading misinformation and the importance of media literacy. It wasn't intended to deceive but rather to educate the public about the capabilities and potential dangers of deepfake technology, especially concerning its use in politics and media.
In 2019, artists created deepfake videos of UK politicians, including Boris Johnson and Jeremy Corbyn, in which they appeared to endorse each other for Prime Minister. These videos were made to raise awareness about the threat of deepfakes in elections and politics.
In 2020, a notable deepfake video showed Belgian Prime Minister Sophie Wilmès giving a speech in which she linked COVID-19 to environmental damage and the need to act on climate change. The video was actually created by an environmental organisation to raise awareness about climate change.
In other words, many of the most notable deepfakes have been for awareness, satire, or educational purposes.
Debunked deepfakes
Most deepfakes are quickly debunked. In 2022, during the Russia-Ukraine conflict, a deepfake video of Ukrainian President Volodymyr Zelensky circulated in which he appeared to ask Ukrainian soldiers to lay down their arms. Deepfakes like this are usually quickly identified and debunked, but it shows how dangerous misinformation can be at sensitive times such as a military conflict.
More recent images of Donald Trump were explicitly stated to be deepfakes by their author. They had missing fingers, odd teeth, a long upside-down nail on his hand and weird words on hats and clothes, so they were quickly identified. At the moment deepfakes are easy to detect and debunk. That won’t always be the case, which brings us to detection.
Deepfake detection
As AI develops, deepfake production becomes easier, but so do the AI and digital forensics techniques for detection. Models can be trained to tell the difference by analysing facial expressions, eye movement, lip sync and overall facial consistency. There are subtleties in facial movements and expressions, blood-flow giveaways, as well as eye blinking, breathing, pulses and other movements that are difficult to replicate in deepfakes. Another approach checks for consistency in lighting, reflections, shadows and backgrounds. Frame-by-frame checking can also reveal flickers and other signs of fakery. Then there’s audio detection, with a whole rack of its own techniques. On top of all this are forensic checks on origins, metadata and compression artefacts that can reveal creation, tampering or an unreliable source. Let’s also remember that humans can be used to check, as our brains are fine-tuned to spot these tell-tale signs, so human moderation still has a role.
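To make the frame-by-frame idea concrete, here is a minimal sketch in Python using OpenCV. It simply scores how much each frame differs from the previous one and flags unusual spikes. The video file name and the three-sigma threshold are illustrative assumptions, not a real detector:

import cv2
import numpy as np

def flicker_scores(video_path):
    # Mean absolute difference between consecutive greyscale frames.
    # Sudden spikes can hint at splices or per-frame tampering.
    cap = cv2.VideoCapture(video_path)
    scores = []
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        a = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        b = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(float(np.mean(cv2.absdiff(a, b))))
        prev = frame
    cap.release()
    return scores

scores = flicker_scores("clip.mp4")  # hypothetical input file
mean, std = np.mean(scores), np.std(scores)
# Flag transitions whose change is far above the typical level
suspects = [i for i, s in enumerate(scores) if s > mean + 3 * std]
print(f"{len(suspects)} suspicious frame transitions")

A real system would layer face tracking, lighting checks and audio analysis on top of a crude signal like this.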
As deepfake technology becomes more sophisticated, the challenge of detecting it increases, but these techniques are constantly evolving, and companies often use a combination of methods to improve accuracy and reliability. There is also a lot of knowledge-sharing across companies to keep ahead of the game.
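In practice, combining methods often amounts to little more than a weighted vote over the separate signals. A toy sketch in the same vein, where all the signal names, scores and weights are invented for illustration:

# Hypothetical per-signal fakeness scores in [0, 1] from separate detectors
signals = {"face_consistency": 0.7, "lip_sync": 0.4, "audio": 0.9, "metadata": 0.2}
weights = {"face_consistency": 0.3, "lip_sync": 0.2, "audio": 0.3, "metadata": 0.2}

# Weighted combination; a platform would tune weights on labelled data
fake_score = sum(weights[k] * signals[k] for k in signals)
verdict = "likely fake" if fake_score > 0.5 else "likely genuine"
print(f"combined score {fake_score:.2f}: {verdict}")

Real platforms learn such weightings from labelled data rather than setting them by hand, but the principle that no single detector is trusted on its own is the same.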
So it is easier to detect deepfakes than many think. There are plenty of tell-tale signs that AI can use to detect, police and prevent them from being shown. These techniques have been honed over time, which is why so few ever actually surface on social media platforms. Facebook, Google, X and others have been working on this for years.
It is also the case, as Yann LeCun keeps telling us, that deepfakes are largely caught, quickly spotted and eliminated. AI does a good job of policing AI deepfakes. That is why the platforms have not been caught flat-footed on the issue.