Last year, we began testing a new reporting mechanism for misleading information in the US, South Korea and Australia.
Today, we are expanding this pilot to Brazil, Spain and the Philippines. https://t.co/H80pMVLTJY
— Secure Twitter (@TwitterSeguroBR) January 17, 2022
Launched in August last year, Twitter’s latest effort to combat misinformation focuses on audience trends and perceptions as a means to determine common issues on the platform, and what people feel compelled to report, pointing to the content that they don’t want to see.
The process adds an additional ‘It’s misleading’ option to your tweet reporting tools, providing another means to flag concerning claims.
This is obviously not a foolproof way to detect and remove misleading content – but as noted, the idea is less about direct enforcement and more about surfacing broader trends, based on how many people report certain tweets, and what they report.
As Twitter explained as part of the initial launch:
“Although we may not take action on this report or respond to you directly, we will use this report to develop new ways to reduce misleading info. This could include limiting its visibility, providing additional context, and creating new policies.”
So essentially, the concept is that if, say, 100, or 1,000 people report the same tweet for ‘political misinformation’, that’ll likely get Twitter’s attention, helping the platform identify what users don’t want to see, and want action taken against, even if it’s not actually in violation of the current rules.
So it’s more of a research tool than an enforcement option – which is a better approach, because enabling users to dictate removals by mass-reporting in this way could definitely lead to misuse.
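As a rough illustration of that aggregate-analysis idea, counting reports per tweet and category surfaces the items users flag most often. The data, category names, and review threshold below are all invented for the sketch; Twitter’s internal tooling is not public.

```python
from collections import Counter

# Hypothetical user reports as (tweet_id, reported_category) pairs
reports = [
    ("t1", "political misinformation"),
    ("t2", "health misinformation"),
    ("t1", "political misinformation"),
    ("t1", "political misinformation"),
    ("t3", "political misinformation"),
]

# Count how many times each (tweet, category) pair was reported
counts = Counter(reports)

# Flag pairs whose report volume crosses an assumed review threshold
REVIEW_THRESHOLD = 3
flagged = [pair for pair, n in counts.items() if n >= REVIEW_THRESHOLD]

print(flagged)  # → [('t1', 'political misinformation')]
```

The point of a threshold like this is research, not removal: a spike in reports marks a tweet for human review or trend analysis, rather than triggering automatic enforcement.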
That has, in some ways, been borne out in its initial testing – as explained by Twitter’s Head of Site Integrity, Yoel Roth:
“On average, only about 10% of misinfo reports were actionable – compared to 20-30% for other policy areas. A key driver of this was “off-topic” reports that don’t contain misinfo at all.”
In other words, a lot of the tweets reported through this manual option were not an actual concern, which highlights the challenges of using user reports as an enforcement measure.
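The actionability gap Roth describes is a simple ratio over report outcomes. The figures below are invented to mirror the proportions he cites, not real Twitter data:

```python
# Hypothetical report outcomes per policy area: (actionable_reports, total_reports)
report_stats = {
    "misinformation": (100, 1000),  # ~10% actionable, per Roth's figure
    "other_policy":   (250, 1000),  # other areas: roughly 20-30%
}

# Share of reports that led to action, per policy area
rates = {area: actionable / total
         for area, (actionable, total) in report_stats.items()}

for area, rate in rates.items():
    print(f"{area}: {rate:.0%} of reports actionable")
```

A misinformation rate sitting well below other policy areas is what signals that many of these reports are off-topic noise rather than genuine violations.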
But Roth notes that the data they have gathered has been valuable either way:
“We’re already seeing clear benefits from reporting for the second use case (aggregate analysis) – especially when it comes to non-text-based misinfo, such as media and URLs linking to off-platform misinformation.”
So it may not be a great avenue for direct action on each reported tweet, but as a research tool, the initiative has helped Twitter determine more areas of focus, which contributes to its broader effort to eliminate misinformation within the tweet ecosystem.
A big element of this is bots, with various research reports indicating that Twitter bots are key amplifiers of misinformation and politically biased information.
In early 2020, at the height of the Australian bushfire crisis, researchers from Queensland University detected a massive network of Twitter bots that had been spreading misinformation about the fires and amplifying anti-climate change conspiracy theories in opposition to established facts. Other examinations have found that bot profiles, at times, contribute up to 60% of tweet activity around some trending events.
Twitter is constantly working to better identify bot networks and eliminate any influence they may have, but this expanded reporting process may help to identify additional bot trends, as well as providing insight into the actual reach of bot pushes via expanded user reporting.
There are various ways in which such insight could be of value, even if it doesn’t result in direct action against offending tweets. And it’ll be interesting to see how Twitter’s expansion of the program improves the initiative, and how it also pairs with its ongoing ‘Birdwatch’ reporting program to detect platform misuse.
Essentially, this program won’t drive a sudden influx of direct removals, eliminating offending tweets based on the variable sensibilities of each user. But it will help to identify key content trends and user concerns, which will contribute to Twitter’s broader effort to better detect these movements, and reduce their influence.