The video-sharing social-networking service TikTok says it has “a zero tolerance stance” on accounts linked to anti-Semitism and other forms of bigotry.
“In addition,” it said in an Aug. 20 post, “we remove race-based harassment and the denial of violent tragedies, such as the Holocaust and slavery. We may also take off-platform behavior into consideration as we establish our policies, such as an account belonging to the leader of a known hate group, to protect people against harm.”
The post was published six days before the museum at Auschwitz said that young people were portraying themselves on TikTok as victims of the Holocaust in a “hurtful and offensive” manner.
TikTok told JNS that the platform “blocked the #holocaustchallenge earlier this week to discourage people from participating. We do not condone content like this and are redirecting searches for it to our Community Guidelines to further educate users about our policies and the supportive, inclusive community we are working to foster on TikTok.”
Last Monday, a TikTok video surfaced of US Army Second Lt. Nathan Freihofer, an influencer on the video-sharing social-networking service with almost 3 million followers, making a Holocaust joke.
Counter Extremism Project senior research analyst Josh Lipowsky told JNS that while social-media companies such as TikTok have been taking steps to combat bigotry, more needs to be done.
He said that TikTok’s “comprehensive and specific hate-speech policies … are a step in the right direction for social media,” as “Twitter, Facebook, and other platforms have also instituted policies forbidding hate speech and symbols based on race, religion, sex, and other protected criteria.”
“These policies are all great on paper, but we need to see the tech companies proactively enforcing [them] to protect their users,” he stressed. “TikTok and other social-media companies have to do more than just pay lip service to fighting extremism on their platforms. They need to quickly and uniformly enforce their policies to protect against the abuse of their platforms. These are private platforms and the companies that own them have a responsibility to the public to ensure that extremists are not abusing their sites.”
Lipowsky said “some may argue that this is an infringement of free speech, but the fact is these are private companies that have every legal right to limit how their services are used.”
Furthermore, he added, “they have the moral responsibility to ensure that their platforms are not being subverted for the promulgation of hate speech. If extremists manipulate these services into platforms for hate and recruitment to extremist ideologies, then the tech companies are ultimately responsible for those consequences.”