Last August, a fake video of footballer Kylian Mbappé being scolded by his father as a child racked up more than 7 million views in just a few days. And one only needs to glance at the comments to see it: many internet users fell for this viral deepfake.
And this example is not unique: according to the World Economic Forum, the amount of online content identified as "deepfakes" grew by 900% between 2019 and 2020. Some experts even predict that 90% of online content could be artificially generated by 2026.
So what are the new challenges in detecting and exposing such content? And more importantly, what are the best practices for maintaining journalistic credibility in the age of deepfakes? Here's our analysis.
Deepfakes: A New Disinformation Threat
Deepfakes are videos, images, or audio recordings created using artificial intelligence to falsify the appearance or speech of a person. Most often, deepfakes involve superimposing the face and voice of a public figure onto an existing video to generate a misleading visual sequence.
The typical example is a video of Barack Obama loudly proclaiming, "Donald Trump is a total and complete dipshit." This deepfake, created in 2018 by comedian Jordan Peele in collaboration with BuzzFeed News, aimed to alert the public to the potential dangers of these new types of content.
Deepfakes spread like wildfire on the web and, unlike this example, are often generated by individuals with dubious intentions. Last July, for example, someone posted on X (formerly Twitter) a fake interview with Elon Musk praising a cryptocurrency platform. It was in fact manipulated content created to deceive users and redirect them to an online scam (source: Franceinfo).
Journalistic Credibility Tested by Deepfakes
Deepfakes can deceive even the most seasoned journalists. Ben Nimmo, a researcher specializing in information defense at the Atlantic Council's Digital Forensic Research Lab, emphasizes this point: "They could lead journalists to make mistakes" (source: Africa Check).
This significantly shapes the public's perception of the media and threatens journalistic credibility, as public trust in the media weakens year after year. According to the 35th Kantar-La Croix Media Trust Barometer, only 49% of French people said they trusted the media in 2022, down from 56% in 2018.
Restoring trust in the media is more than ever a challenge in the age of deepfakes. Especially since they sometimes create areas of uncertainty that are very difficult to clarify.
A recent supposed phone interview with Donald Trump on the television channel Real America's Voice, for example, raised doubts. The tone of voice and irregularities in the former American president's speech call into question the authenticity of the exchange, which many observers attribute to a deepfake. A former member of Trump's campaign team voiced his suspicions on Twitter: "I don't know who did this interview, but it doesn't sound like Donald Trump" (source: Slate).
As the former president has not clarified the situation, doubt continues to hang over the truthfulness of this interview and, consequently, over the seriousness of the journalists who organized it.
Hunting Down Deepfakes: How to Expose the Deception
Concrete techniques exist, as explained in this article by Libération. The simplest is to spot visual inconsistencies in videos by carefully observing eye blinks, the direction of lighting, or lip movements.
In the case of audio content, identifying deepfakes is harder, but clues can hide in speech rate, breathing, or intonation. Factuel, AFP's fact-checking service, explains this in a video debunking the fake voicemail from Emmanuel Macron to the leader of the military junta in power in Guinea.
Automatic detection tools also exist: Deepware, Truepic, and Microsoft Video Authenticator analyze videos, audio, or images and deliver a verdict on the authenticity of the content. Intel's recently introduced FakeCatcher tool claims a 96% accuracy rate.
Impressive as that figure is, it highlights the core problem with these techniques and tools: they are never 100% reliable, and constant advances in artificial intelligence make detecting deepfakes increasingly difficult.
Source Verification in the Age of Deepfakes
Faced with suspicious information, journalists must exercise even greater caution and meticulously verify their sources. News professionals must also now train continuously to keep pace with rapidly evolving disinformation threats and, where necessary, collaborate with experts in advanced technologies to evaluate certain sources.
To leave nothing to chance, many media outlets have set up fact-checking units with advanced skills in detecting deepfakes. This is the case, for example, at Le Monde, which has developed recognized expertise in video analysis and verification.
Herein lies the ambivalent impact of deepfakes on journalists' work. While these new forms of content undoubtedly pose a threat to the profession, they also offer journalists an opportunity to strengthen their fact-checking role and establish themselves as true defenders of the truth.
Ingrid de Chevigny