TikToks featuring realistic-looking virtual influencers such as Miquela, Shudu and Imma will have to be labelled to show they are not human, under newly revamped community guidelines published this week by the video-sharing app.
From April 21st, "synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as 'synthetic', 'fake', 'not real', or 'altered'," according to the new rules.
Regulators and politicians around the world are increasingly investigating image manipulation and the portrayal of body image on social media.
ASCI, India's advertising regulator, became the first national watchdog to address virtual influencers in its disclosure guidelines, back in 2021. Its rules state: "a virtual influencer must additionally disclose to consumers that they are not interacting with a real human being. This disclosure must be upfront and prominent."
There have long been advocates within the influencer marketing industry who believe virtual humans should be watermarked. In their view, these marks would convey three pieces of information to protect consumers:
- Disclosure - that the influencer is virtual, not a real human.
- Ownership - the details of its owner.
- Motivation - the motivations driving the creation of its content.
In the UK Dr Luke Evans MP has introduced a Body Image Bill in Parliament which calls for commercial images featuring digitally altered bodies to be labelled.
Both the cost and the level of skill required to create synthetic media have collapsed. Synthesia, an AI video creation platform, enables marketers and educators to create videos free of charge in which life-like avatars narrate typed-in text.
Synthetic media is media that has been algorithmically created or modified; its most nefarious form is the deepfake. The outputs range from fun 3D avatars consumers can use in gaming to sinister manipulations of politicians' speeches.
Writing in the Routledge textbook Influencer Marketing: Building Brand Communities and Engagement, I suggested the need for global regulation to protect consumers from the potential harms these technological advances might bring.
In 2019, for example, a digitally altered video of Nancy Pelosi, the speaker of the United States House of Representatives, in which she appeared to slur drunkenly through a speech, was widely shared on social media. US president Donald Trump was amongst those who shared the deepfake, posting the clip on Twitter with the caption: "PELOSI STAMMERS THROUGH NEWS CONFERENCE."
Under TikTok's new community guidelines, the Pelosi deepfake would not be allowed. The guidelines forbid "synthetic media that contains the likeness of any real private figure". Whilst the platform allows more latitude for public figures, it warns against these well-known people being made the subject of abuse.
TikTok is also seeking to protect people from being misled about political or financial issues. “We do not allow synthetic media of public figures if the content is used for endorsements or violates any other policy. This includes prohibitions on hate speech, sexual exploitation, and serious forms of harassment.”