Last week, the Australian High Court upheld a ruling which, in some circumstances, could see Australian media outlets held liable for user comments left on their respective Facebook Pages.
The finding has sparked fresh concerns about limiting journalistic free speech and impeding reporting capacity. But the complexity of the case goes deeper than the initial headline. Yes, the High Court ruling does provide more scope for media outlets to be held legally accountable for comments made on their social media pages, but the full nuance of the ruling is more specifically aimed at ensuring incendiary posts are not shared with the clear intent of baiting comments and shares.
The case stems from an investigation in 2016, which found that inmates of a youth detention center in Darwin had been severely mistreated, even tortured, during their confinement. Within the subsequent media coverage of the incident, some outlets had sought to provide more context on the victims of this torture, with a handful of publications singling out the criminal records of said victims as an alternate narrative in the case.
One of the former inmates, Dylan Voller, claims that the subsequent media depictions of him were both incorrect and defamatory, which led to Voller seeking legal damages for the published claims. Voller himself had become the focus of several articles, including a piece in The Australian headlined “Dylan Voller’s list of jailhouse incidents tops 200”, which highlighted the many wrongs Voller had reportedly committed that led to his incarceration.
The case as it relates to Facebook comments, specifically, came about when these reports were republished to the Facebook Pages of the outlets in question. The core of Voller’s argument is that the framing of these articles within the Facebook posts themselves prompted negative comments from users of the platform – framing which, Voller’s legal team has argued, was designed to provoke more comments and engagement on these posts, and therefore garner more reach within Facebook’s algorithm.
As such, the essence of the case boils down to a critical point. In simplified terms, it’s not that publications can now be sued for people’s comments on their Facebook posts. Rather, it relates to how the content is framed in those posts, and whether a definitive link can be shown between the Facebook post itself and any defamatory comments, and resulting community perception, that harm an individual (it’s not clear that the same rules would extend to an entity, as such).
Indeed, in the original case notes, Voller’s legal team argued that the publications in question:
“should have known that there was a ‘significant risk of defamatory observations’ after posting, partly due to the nature of the articles”.
As such, the complexities here extend far beyond the topline finding that publishers can now be sued for comments posted to their Facebook Pages. The real takeaway is that those publishing content to Facebook on behalf of a media outlet need to be more careful in the actual wording of their posts. If subsequent defamatory comments can be linked back to the post itself, and the publisher is then found to have incited that response, legal action can be sought.
In other words, publishers can re-share whatever they like, so long as they stick to the facts, and don’t share intentionally incendiary social media posts around any such incident.
Case in point, here’s another article published by The Australian on the Dylan Voller case, which, as you can imagine, has also attracted a long list of critical and negative remarks.
But the post itself is not defamatory, it’s merely stating the facts – it’s a quote from an MP, and there’s no direct evidence to suggest that the publisher has sought to bait Facebook users into commenting based on the article shared.
Which is the real point in question here – the ruling puts more onus on publishers to consider whether the framing of their Facebook posts is designed to lure comments. If a publisher is seen to be inciting negative comments, it can be held liable for them – but there has to be definitive evidence showing both damage to the individual and intent within the social media post itself, not the linked article, before that could lead to prosecution.
Which may actually be a better way to go. Over the past decade, online algorithms have significantly distorted media incentives, rewarding publishers who share anger-inducing, emotionally charged headlines to spark comments and shares, which then ensures maximum reach.
That’s extended to misinterpretations, half-truths and downright lies designed to trigger that user response, and if there’s a way that publishers can be held accountable for such, that seems like a beneficial approach – certainly compared with proposed reforms to Section 230 in the US, which would more severely limit press freedoms.
Again, this ruling relates to Facebook posts specifically, and to wording designed to trigger an emotional response in order to lure engagement. Proving a definitive link between a Facebook update and any personal damages will remain difficult, as it is in all defamation cases. But maybe this finding will prompt Facebook Page managers at media outlets to be more factual in their updates, as opposed to comment-baiting to trigger algorithm reach.
As such, while it does open up media outlets to increased liability, it could actually be a path forward for instituting more factual reporting, and holding publishers to account for triggering online mob attacks based on their angling of a case.
Because it’s clear that this is happening – the best way to attract engagement on Facebook is to trigger an emotional reaction, which prompts people to comment and share.
If a Facebook post is found to be clearly designed to prompt such a reaction, and that reaction causes reputational damage, holding the publisher accountable seems like a positive step – though inevitably it does come with increased risk for social media managers.