Social media has a moderation problem and it’s bigger than Trump

By Tom Woods

Even by today’s hyperpartisan standards, the state of public discussion surrounding President Trump’s ban from a number of social media platforms has been depressingly poor. On the American right, outlets such as Fox News have predictably parroted Trump’s claim that inciting violence falls within the bounds of “free speech”. Somewhat more disappointingly, however, many on the left have leapt to over-simplified legalistic arguments that fail to cut to the core of the issue. The argument usually runs that, as Twitter, Facebook and the numerous other platforms that have banned Trump are private companies, they have the right to remove his speech, or indeed anyone else’s. Anyone with a basic understanding of US law knows that this is constitutionally sound, given that the First Amendment only prohibits restrictions on speech imposed by state actors. But, deployed in isolation, this argument is lazy and misses the point.

The central question here is not whether these platforms were justified in banning Trump; it is whether they should have had the power to do so in the first place. Private companies are not required to build the firm checks and balances demanded of governmental institutions into their internal operations. As a result, regardless of one’s opinion on the decisions to ban Trump, they generally were not taken according to consistent policy logic or rigorous, democratic internal rules. Instead, they were arbitrary and marked significant U-turns against prior company policy. At the highest level of internal policymaking, the lines between free speech and incitement to violence are shockingly malleable, resting on the subjective opinions of a few executives. It is striking that more people are not asking whether social media companies deserve the power to regulate public discourse in such a manner, especially given how poorly managed and regulated they are. These companies have time and again proven themselves both incapable of and unwilling to effectively manage the online space: putting inadequate resources into moderation, censoring content along racial lines, and failing to crack down on misinformation. In light of this, regulation emerges as a necessity to ensure that the internet can remain a zone for free and open discussion while preventing the spread of hatred and violence.

Censorship on these platforms is frequently portrayed as a clash between conservatives, who claim to have been “deplatformed”, and their political adversaries. Right-wing firebrands such as Adriana Cohen have complained, with some justification, that their content has been removed or unfairly restricted, making it harder for people to access. However, the issue does not just concern the right: many of society’s most marginalised groups have seen their content disproportionately blocked under poorly enforced rules. Just a few months ago, black plus-size model Nyome Nicholas-Williams was banned from Instagram after posting artistic nude photos which did not violate the firm’s terms of service, a decision that reeked of arbitrariness and a systemically racist company culture. Censorship has also resulted from the weakness of the AI and algorithms that sites use to police themselves. TikTok’s issues here are well-documented, but discriminatory algorithms also plague Instagram and Facebook.

Big tech’s posturing against the “free speech” platform Parler in recent days has also been problematic. After it emerged that much of the planning for the recent attempted insurrection at the Capitol was co-ordinated on the site, Amazon Web Services withdrew its hosting services from it. This was partially justified: Parler failed to remove speech inciting violence before the insurrection and has been notably slow to moderate content. However, the move was strangely hypocritical, almost suggestive of an ulterior motive, given that Amazon Web Services recently agreed to extend its relationship with Twitter, whose tackling of misinformation has also been shockingly weak. Indeed, Twitter and similar sites were the primary facilitators of the mainstreaming of conspiracy theories such as the ludicrous QAnon, and firm action against the hate and misinformation these groups spread came only after intense public scrutiny from the Western media. In this, their response resembled their handling of alleged Russian interference in the 2016 election. In countries where publics enjoy fewer freedoms and journalistic scrutiny is weaker, social media platforms have been even poorer at regulating speech, willingly ignoring incitement to violence and passively aiding violent regimes. This has been painfully evident in the Philippines, where threats made on Facebook against journalistic and political opponents of the country’s strongman leader Duterte have gone untouched. At best, this shows negligence on Facebook’s behalf. At worst, it suggests that the company’s business partnerships with Duterte’s government, which include the laying of undersea cables, have led it to turn a blind eye to his supporters’ seditious manipulation of public discourse.

Of course, many of the challenges facing these firms are unprecedented and mistakes will be made, particularly in the largely uncharted area of content-policing through algorithms. However, given the urgent social responsibility of tackling dangerous content, not enough effort is being put behind moderation policy. Facebook, for instance, has demonstrated a lack of social purpose in the way it resources its moderation. Last year, an NYU report suggested that the firm should double the number of moderators it hires. Facebook failed to do so, despite analysis clearly showing that, as one of the world’s most profitable companies, it can afford to. Content moderation workers are also given meagre pay and support for their gruelling jobs, which involve trawling through some of the most vile, hate-filled content on the internet and have left numerous workers with PTSD. Even if content moderation is to remain privately guided, pressure needs to be put on companies to direct more funds into it.

The point here is that these companies, despite some effort and genuine commitment to change, have demonstrated that they do not deserve anywhere near as much power as they currently wield. Regulation must be enforced to limit the power of executives, establish strict rules that draw a clear boundary between right and wrong, and punish firms that fail to enact necessary measures. Policymakers need to tread lightly, however. Legal experts have warned that numerous proposed solutions to these issues could actually worsen the position of free expression online and entrench the power of big tech. Approaches will also vary massively from country to country, given the greater legal constraints on lawmaking in countries such as the USA.

Section 230 of the Communications Decency Act (CDA 230), the US statute stipulating that online platforms cannot be held responsible for third-party content on their websites, has recently become a bogeyman for both left and right in the US. Both Trump and Biden railed against the law on the campaign trail, for seemingly good reason: repealing it would allow them to regulate the aspects of social media that they despise and compel the big tech giants to obey strict rules. However, its repeal could also unintentionally damage smaller and independent sites. By forcing all sites to follow strict rules, a repeal would open the door to thousands of lawsuits against small and independent firms, which, even if won, would still cost them dearly in legal fees. The result, as Stanford Law professor Daphne Keller argues, would be these small firms limiting third-party expression on their platforms out of fear of legal trouble, leaving the field to the larger firms who could afford to comply with new regulations.

Another potential quagmire facing any regulation by the US government is the First Amendment, which prohibits Congress from passing laws abridging free expression, with very few exceptions. It may thus be possible for US lawmakers to combat the very worst incitement to violence and terrorist content on these platforms, but, as misinformation and politically radical content fall under protected speech, compelling platforms to make fundamental changes would be much harder.

The institution to have made the greatest progress in addressing these issues is the EU, through its Digital Services Act (DSA) and Digital Markets Act (DMA). These proposed measures, unhampered by the constitutional free-speech protections that prevent effective action in the US, may go a long way towards tackling the aforementioned problems in Europe. Alongside careful boundaries on what constitutes prohibited speech, provisions are being introduced to ensure that smaller sites are protected from the vast legal pressure that could follow a hastily introduced law. Proposed measures include fines of up to 5% of a company’s total annual turnover and thorough processes by which users can appeal the “censorship” of content they produce. However, the strictest obligations will only apply to the very largest platforms (those with over 45 million users), which will presumably be able to handle the immense scrutiny that follows the introduction of these regulations. In Germany, similar laws have already had some effect in tackling online hate speech, and Google, Facebook, and Twitter have begun transparently reporting to the German government how much content they take down. However, again, the most challenging issue to tackle is disinformation.

The issue of what content providers can and cannot do will not be solved in any meaningful sense for many years. The EU hopes to have pushed the DSA and DMA through by 2023, whilst other jurisdictions such as the US seem to be lagging even further behind on regulation. Regulation needs to be firm in weakening the grip of social media tycoons on online discourse. However, care must be taken to ensure that the cure is not worse than the disease and does not place further power in the hands of these corporations. These are complex, thorny issues, and it will take some time for the best remedies to emerge. For now, the only thing that will effectively incentivise reform on the part of big tech is further public scrutiny and lively debate that raises awareness of these issues.

The views expressed in this article are the author’s own and may not reflect the opinions of The St Andrews Economist.
