As you can see in this example, live-stream hosts will now have the option to mute specific viewers for a period of the broadcast – or the entire stream, if they so choose.
As explained by TikTok:
“Now, the host or their trusted helper can temporarily mute an unkind viewer for a few seconds or minutes, or for the duration of the LIVE. If an account is muted for any amount of time, that person’s entire comment history will also be removed. Hosts on LIVE can already turn off comments or limit potentially harmful comments using a keyword filter. We hope these new controls further empower hosts and audiences alike to have safe and entertaining livestreams.”
The added capacity to remove all of a muted user's previous comments is a significant addition, which could help hosts manage live-stream interaction and reduce unwelcome distractions flooding the comment stream.
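As a purely illustrative sketch (not TikTok's actual implementation, and all class and method names here are hypothetical), the described behaviors – timed mutes, removal of a muted user's comment history, and a keyword filter – could be modeled like this:

```python
import time

class LiveModeration:
    """Hypothetical sketch of the moderation behaviors described above:
    timed mutes, comment-history removal on mute, and a keyword filter."""

    def __init__(self, blocked_keywords=None):
        self.blocked_keywords = set(blocked_keywords or [])
        self.muted_until = {}   # user -> unix timestamp, or None for the whole stream
        self.comments = []      # (user, text) pairs, in posting order

    def mute(self, user, seconds=None):
        # seconds=None mutes the user for the duration of the LIVE.
        self.muted_until[user] = None if seconds is None else time.time() + seconds
        # Muting also removes that user's entire comment history.
        self.comments = [(u, t) for u, t in self.comments if u != user]

    def is_muted(self, user):
        if user not in self.muted_until:
            return False
        until = self.muted_until[user]
        return until is None or time.time() < until

    def post_comment(self, user, text):
        # Reject comments from muted users or those matching the keyword filter.
        if self.is_muted(user):
            return False
        if any(k in text.lower() for k in self.blocked_keywords):
            return False
        self.comments.append((user, text))
        return True
```

For example, muting a viewer mid-stream would both block their new comments and scrub their earlier ones from the feed.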
That's always been a problematic element of live-streaming. Twitter was forced to update its rules around live-stream interaction back in 2018, after various investigations showed that women and young people, in particular, tended to attract all manner of offensive remarks and comments during their broadcasts.
And as noted, with TikTok exploring live-stream commerce via various partnerships with big-name brands, it also needs to provide a brand- and consumer-safe environment in order to maximize appeal. With this in mind, having the capacity to quickly cut off inappropriate commenters, and negate their impact, could be a valuable addition.
TikTok also added a new live-stream moderators option back in July, to provide extra management options in this respect.
The announcement comes within a broader overview of TikTok’s latest Community Guidelines Enforcement report, which outlines all of the actions TikTok took due to platform rule violations between April and June this year.
TikTok notes that it removed more than 81 million videos in the period, equating to less than 1% of all videos uploaded to the platform – which would suggest that TikTok is now seeing around 90 million videos uploaded every day. That makes sense, given the app is now up to a billion users, and it adds some extra perspective on the scale of the platform's growth.
“Of those videos, we identified and removed 93.0% within 24 hours of being posted and 94.1% before a user reported them. 87.5% of removed content had zero views, which is an improvement since our last report (81.8%).”
TikTok also notes that its alerts prompting users to reconsider potentially offensive comments, which it added back in March, are having an impact.
“The effect of these prompts has already been felt, with nearly 4 in 10 people choosing to withdraw and edit their comment. Though not everyone chooses to change their comments, we’re encouraged by the impact of features like this and we continue to develop and try new interventions to prevent potential abuse.”
Twitter and Instagram have also implemented similar prompts, which, based on this data, could go some way toward reducing angst in replies.
User safety is a major focus for TikTok, as the app's appeal to younger audiences could, if left unchecked, facilitate unwanted exposure and connection. The platform has come under scrutiny in several regions for its failure to protect young users from harm, and given concerns around its previous moderation processes, which were defined by Chinese regulations, TikTok knows that it's under heavy scrutiny on this front, and that it needs to work hard to maintain trust.
That's why measures like this are important; they'll also, ultimately, help the app maximize advertiser interest by providing a safer, more welcoming environment.