While news channels across the world broadcast the first images of destruction in Ukraine, doctored and misleading footage went viral on TikTok less than a day after Russia invaded.
One video of several military planes in the sky, captioned “this is crazy”, carried hashtags for Ukraine and Russia. But the footage, now removed from TikTok, was not from the current conflict. The video was old and the planes, American.
Another TikTok showed paratroopers who were supposedly descending towards the war zone. But a reverse image search showed the footage first surfaced in 2016. It reached 20 million views before it was taken down.
The popular app, used by more than 1 billion people, has been amplifying videos, allegedly from Ukraine, that show old conflicts and scenes from movies. Some people are even seeking to profit by showing misleading 'livestreams' that can attract many thousands of dollars in donations.
Some social users have pointed out how easy it can be to publish misinformation.
Now TikTok is facing fresh scrutiny of its algorithm as it deals with a wave of war misinformation.
So, what is the newbie on the social media scene doing about it?
TikTok’s algorithm presents unique challenges
Although misinformation has ramped up on all social media platforms after the invasion of Ukraine, TikTok's coveted algorithm presents a unique issue, said Tama Leaver, Professor of Internet Studies at Perth's Curtin University, who is researching TikTok’s policing of its app.
The algorithm offers a highly personalised experience through its "For You" page.
If a person positively interacts with a video - say by watching, sharing or liking it - TikTok then shows it to more people who it thinks share similar interests.
The process then repeats itself, and if this positive feedback loop happens enough times, the video can go viral.
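The feedback loop described above can be sketched in code. This is a purely illustrative simulation, not TikTok's actual algorithm, whose details are not public; the function name, engagement rate and amplification factor are all assumptions chosen to show how modest per-round growth compounds.

```python
# Hypothetical sketch of the recommendation feedback loop -- not TikTok's
# real system. All names and numbers here are illustrative assumptions.

def simulate_feedback_loop(initial_audience: int,
                           engagement_rate: float,
                           amplification: float,
                           rounds: int) -> int:
    """Each round, the viewers who engage positively (watch, share, like)
    prompt the recommender to push the video to a larger, similar audience."""
    audience = initial_audience
    for _ in range(rounds):
        engaged = audience * engagement_rate      # positive interactions
        audience += int(engaged * amplification)  # shown to more users
    return audience

# Even a modest engagement rate compounds quickly over repeated rounds,
# which is how a video can go viral before moderators catch it.
print(simulate_feedback_loop(1_000, 0.10, 5.0, 10))
```

Under these assumed numbers the audience grows by roughly half each round, so ten rounds takes a thousand initial viewers to tens of thousands; the point is the compounding, not the specific figures.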
"It's much more likely that people are going to see it on TikTok because TikTok will have amplified it before it knows to pull it down," Professor Leaver said.
As with other social media apps, Professor Leaver said, the more people report a video for misinformation, the more likely it is to rise in the moderation queue - a catch-22.
“It is relying on human intervention, it's usually relying on a significant number of people reporting this as problematic before it does actually end up in a queue for review,” Professor Leaver told The Feed.
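The report-threshold behaviour Professor Leaver describes can be sketched as follows. The threshold value and function are assumptions for illustration only, not TikTok's actual moderation system.

```python
# Illustrative sketch of a report-threshold review queue, as described by
# Professor Leaver. The threshold and function are assumed, not TikTok's.

REPORT_THRESHOLD = 100  # assumed number of user reports needed

def should_queue_for_review(report_count: int) -> bool:
    """A video enters the human-moderation queue only once enough users
    have reported it -- the catch-22 being that a viral video may reach
    millions of views before that point."""
    return report_count >= REPORT_THRESHOLD

print(should_queue_for_review(5))    # few reports: not yet reviewed
print(should_queue_for_review(150))  # enough reports: queued
```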
In the comments section of misleading videos on TikTok, some users are quick to point out that footage is inauthentic. But comments from others confirm they didn't come to the same conclusion.
What do we know about the newest social media giant’s response to the Ukraine invasion?
TikTok tightened its misinformation policy in 2020 ahead of the US election, banning "harmful" misinformation that threatened the community and the larger public.
In response to the Ukraine invasion, TikTok added resources to moderate content in Russian and Ukrainian, monitoring for misinformation, hate speech and incitement to violence. The move is in line with measures taken by Meta, formerly Facebook.
"We continue to closely monitor the situation, with increased resources to respond to emerging trends and remove violative content, including harmful misinformation and promotion of violence,” a spokesperson from TikTok told The Feed in a statement.
“We also partner with independent fact-checking organisations to further aid our efforts to help TikTok remain a safe and authentic place.”
In a new development, on Monday 7 March, TikTok announced it was suspending livestreaming and new content uploads from Russia.
The move came in response to Russian President Vladimir Putin signing into law a bill introducing jail terms of up to 15 years for publishing "fake news" about the Russian army.
Just months ago TikTok was found to have kept COVID-19 misinformation on its app for months, according to NewsGuard, an organisation that monitors online misinformation.
But war misinformation, as opposed to misinformation about COVID-19, presents special issues.
"Grainy footage is one of the markers of authentic material from a conflict zone – but this also means that it’s easier to doctor," said Professor Leaver.
Human moderators sometimes won’t know that what they’re looking at is false, he added. And it’s a problem felt by all platforms, not just TikTok.
“We’ve seen a very significant amount of misinformation, which is information that is incorrect, but also disinformation (produced deliberately to deceive) ... and sometimes it’s very hard to tell whose interest that disinformation is serving," said Professor Leaver.
To slow down the spread of misinformation and disinformation in relation to the invasion of Ukraine, he added, TikTok might need to intervene manually in the algorithm and stop related content from being recommended as often.
"There are upsides and downsides, but slowing the rate of recommendation for videos to do with conflict is one of those low-hanging fruits where they could intervene in the algorithm without breaking it."
The downside is that this would also slow the spread of legitimate footage.
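The intervention Professor Leaver suggests amounts to downranking conflict-related content. A minimal sketch, assuming a hypothetical damping factor and scoring function (TikTok has not disclosed any such mechanism):

```python
# Hypothetical sketch of "slowing the rate of recommendation" by damping
# the score of conflict-tagged videos. Factor and names are assumptions.

CONFLICT_DAMPING = 0.25  # assumed multiplier: slows spread without blocking

def recommendation_score(base_score: float, is_conflict_related: bool) -> float:
    """Return a downranked score for conflict-tagged videos, buying
    moderators time to review them before they go viral."""
    return base_score * CONFLICT_DAMPING if is_conflict_related else base_score
```

A multiplier rather than a hard block reflects the trade-off in the article: legitimate footage still spreads, just more slowly.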
While Professor Leaver said more could always be done - like getting more human moderators - the public should also be fair to the company which only really gained mainstream popularity last year.
An example of misinformation spreading on TikTok. The footage is from a film. Credit: TikTok
“It's a much younger platform. Facebook has had the best part of 20 years to get this right. And it took probably 15 years of getting it wrong," he said.
“It's trying very hard to catch up and do the right thing in many places. But at the same time, the TikTok algorithm is its secret weapon."
TikTok is owned by a Chinese company. Does that make a difference?
On Friday, Prime Minister Scott Morrison singled out China for failing to call out the Russian invasion of Ukraine and impose sanctions.
Mr Morrison said it was concerning China was easing trade restrictions with Russia at a time when most nations were imposing sanctions.
Reporting from China’s state-run media outlet, The Global Times, on Sunday confirmed a neutral stance from the country.
Despite this position, Professor Leaver told The Feed he isn’t convinced that TikTok’s Chinese parent company, ByteDance, is affecting its response to moderating misinformation and disinformation on the app.
“For the most part, TikTok and Douyin, which is the Chinese version of TikTok, are actually managed a bit separately.”
Professor Leaver said the way the algorithm of Douyin is policed in mainland China is different, and there is currently no evidence of that spilling into TikTok.
“If politically, China is more sympathetic to Russia, I don't think that will have a lot of impact on content moderation," he said.
"At the moment, TikTok is desperately trying to prove to the English-speaking world that it can be trusted just as much as, if not more than, Instagram.”
Peter Lewis, the director of the Australia Institute's Centre for Responsible Technology, sees it a little differently.
“I think you'd be naive to think there weren't contextual issues at play," he said.
“It’s not a deep state conspiracy but I also think that the pressure to moderate the content from external politics will be less than you might be seeing in the US at the moment.”
Other social media platforms are also taking steps
Other tech giants have also taken steps in the space to combat misinformation and disinformation.
Twitter, Meta (formerly Facebook), YouTube and Google have now all banned the Russian state-owned media outlet Russia Today and other Russian channels from receiving money for advertisements that run with their videos.
In a statement, Meta said it had established a “special operations” team with Ukrainian and Russian speakers who are monitoring the platform around the clock to respond to issues in real time.
Meta also expanded its third-party fact-checking capacity in Russian and Ukrainian.
Facebook has also blocked access to a number of accounts in Ukraine, some of which are associated with Russian state-backed media outlets. TikTok did the same thing on Tuesday.
Nick Clegg, the vice president of global affairs at Meta, acknowledged on Twitter some Ukrainians’ suggestions to ban Facebook and Instagram from Russia entirely, but said doing so “would silence important expression at a crucial time.”