AI is now being blamed by some for past real-life gaffes. This creates a liar's dividend and risks destabilising the concept of truth itself.


In a recent edition of Creator Briefing (#130) we noted that 2024 is the biggest global election year in history, with more than 50 elections being held. We forecast that these elections will be the first on record significantly influenced by … influencer marketing.

We also talked about political deepfakes, noting that 143 deepfake video advertisements impersonating UK Prime Minister Rishi Sunak were created and paid to run as promotions on Meta’s platform between 8 December 2023 and 8 January 2024, according to research from Fenimore Harper Communications.

This week I want to touch on what one academic has termed ‘the liar’s dividend’: how Donald Trump is now attempting to pass off some of his past real-life gaffes as examples of AI deepfakery.

Trump recently rubbished a video ad that compiled some of the former president’s well-documented public gaffes. Taking to Truth Social, he wrote: “The perverts and losers at the failed and once disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials.”

Liar’s dividend

AI creates a “liar’s dividend,” says Hany Farid, a professor at the University of California, Berkeley, who studies digital propaganda and misinformation. In an interview with the Washington Post, Prof Farid continued: in the age of AI, “when you actually do catch a police officer or politician saying something awful, they have plausible deniability.”

Libby Lange, an analyst at Graphika, the misinformation-tracking organisation, was also interviewed by WaPo. She told the paper that AI “destabilizes the concept of truth itself.”

Lange concludes with some chilling words: “If everything could be fake, and if everyone’s claiming everything is fake or manipulated in some way, there’s really no sense of ground truth. Politically motivated actors, especially, can take whatever interpretation they choose.”

Scott Guthrie is a professional adviser within the influencer marketing industry. He is an event speaker, university guest lecturer, media commentator on influencer marketing and active blogger. He works with brands, agencies and platforms to achieve meaningful results from influencer marketing.

