As conversations around the U.S. TikTok ban continue to intensify, the video-sharing platform has updated its content moderation policies to address rapidly advancing (and impressively realistic) AI technologies, according to The Verge.
The updated Community Guidelines include new restrictions on posting AI deepfakes, which have gained widespread popularity across the platform in recent months. While previous rules on the topic were limited to bans on content that could “distort the truth of events” or “cause significant harm,” the updated policy includes a new section for “synthetic and manipulated media,” which notes that all realistic AI-generated content must be “clearly disclosed.” Specifically, users must state that a video is a deepfake either in the caption or with a sticker.
The app notes that it will ban manipulated content that features “the likeness of any real private figure,” as well as content that falsely shows a public figure endorsing products or violating other guidelines. For clarity, TikTok defines a public figure as a person with “a significant public role, such as a government official, politician, business leader, or celebrity” who is over the age of 18.
TikTok’s latest policy update arrives while its parent company, ByteDance, faces continued pressure from the U.S. government, which has reportedly threatened to ban the app over potential national security risks if ByteDance does not sell its stake. TikTok is already banned on government devices in the U.S., U.K., New Zealand and Canada.