By Saum Chaudhuri

Social Media Moderation

Who watches the watchmen? This question lies at the heart of the ongoing debate about social media content moderation. As the digital age unfolds, an increasing number of people source their news, wholly or in part, from social media. The rise of these platforms has brought many positive changes: the democratization of information has amplified once-marginalized voices, fueling global movements like Black Lives Matter, #MeToo, and the Arab Spring. Yet these platforms also present unique challenges, from hate speech to scams to misinformation, and calls for tighter moderation grow louder in response.


History of Online Moderation


In 1994, the brokerage firm Stratton Oakmont, best known for its portrayal in Scorsese’s The Wolf of Wall Street, sued the early online service Prodigy for libel over a user's post on its forum. Stratton Oakmont ultimately prevailed because Prodigy exercised editorial control over its platform, which the court held made it a publisher of its users' content. Recognizing the potentially disastrous implications of this ruling for a growing tech sector, Congress enacted Section 230 of the Communications Decency Act in 1996. This legislation empowered companies to moderate content as they saw fit while shielding them from legal repercussions stemming from user-generated content. Platforms like YouTube gained protection, but victims can still pursue individual content creators, as in the defamation cases against right-wing conspiracy theorist Alex Jones.

With this legal obstacle removed, American social media giants flourished without constraint. As of October 2023, an estimated 78% of the global adult population uses social media, and most major platforms are American- or Chinese-owned.

Recent Events


The internet has long grappled with problematic content, including harassment. However, the swift rise of smartphones and social media in the early 2010s transformed the internet landscape and caused a sharp uptick in such problems. The 2016 U.S. presidential election spotlighted the threat of disinformation, as Russian agents purchased political ads on platforms like Facebook. The COVID-19 pandemic exacerbated misinformation further, as people spent significantly more time communicating online. Faced with mounting scrutiny, social media firms intensified their moderation efforts.

Yet criticism arose from across the political spectrum regarding content decisions, with contentious topics ranging from vaccine misinformation to the Hunter Biden laptop story to the January 6th Capitol riot. More recent challenges include AI-generated content and wartime propaganda. An Associated Press poll found that 76% of US social media users have encountered disinformation on these platforms.


Internationally, insufficient moderation and the amplification of hate speech have had devastating consequences. Meta, which owns three of the world's five largest social media platforms by user count, has faced particular scrutiny: Facebook has been implicated in exacerbating the Rohingya genocide in Myanmar and the Tigray conflict in Ethiopia.

Governmental Responses


Governments abroad have formulated distinct laws addressing social media moderation. Many nations have enacted stringent controls on permissible content, and those in violation face imprisonment or surveillance. AI-enhanced automated monitoring tools are increasingly employed to detect dissent. Countries like China, Iran, and Russia are at the forefront of suppressing oppositional voices. Even in the UK, over 9,000 individuals, many without criminal records, are under close surveillance.


Conversely, Americans exhibit a stronger inclination toward safeguarding online freedom of expression. Apart from legislation targeting sex trafficking and child protection, Section 230 remains largely intact. Efforts by the US government to broaden oversight of social media have met legal challenges, as illustrated by the short-lived Disinformation Governance Board. Intriguingly, both Trump and Biden have advocated for amending Section 230, albeit with contrasting motivations.


The complexities surrounding social media moderation underscore the global challenges of governing a digital realm. While the democratization of information has empowered voices and driven societal change, it has also paved the way for misinformation, hate speech, and harmful narratives. As tech giants grapple with their responsibilities and governments formulate their responses, the ultimate question persists: how do we ensure both freedom of expression and the safety of users on these platforms? The path forward is unclear, but one thing is certain: the decisions made on this matter will shape digital, and thus global, discourse for years to come.

