
Social Media Moderation: Who's Responsible?

  • Emory Business Ethics
  • Sep 18, 2020
  • 5 min read

Updated: Sep 20, 2020

Moderation in social media is a tricky topic. Some people support social media platforms taking down posts and comments that violate a site's guidelines, while others say that this is a form of censorship that violates their First Amendment rights. So who is responsible for moderating social media content? And should it be moderated at all?



Context


SOCIAL MEDIA PLATFORMS CONTRIBUTE TO ONLINE CENSORSHIP


Even though the First Amendment protects free speech, it only restricts the government from punishing individuals based on their speech. Because social media companies are private entities outside the government's scope, they have the ability to decide what speech they will allow on their sites.

In a recent statement against strict censorship on social media, Mark Zuckerberg said he believed that Facebook "should enable as much expression as possible."


STATISTICS ON ARTIFICIAL INTELLIGENCE AND SOCIAL MEDIA MODERATION


99.2% of comments removed from YouTube were flagged by artificial intelligence (AI) systems, as were 95% of the comments removed from Facebook. AI may be free from human error, but biases can still be built into it by its programmers, even unintentionally.
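To make this concrete, here is a minimal, purely hypothetical sketch of how automated comment flagging might work. It is not any platform's actual system; the blocked-term list and threshold are invented for illustration, and they show exactly where a programmer's choices can quietly encode bias.

```python
# Hypothetical sketch of automated comment flagging -- not any
# platform's real system. The blocked-term list and threshold are
# illustrative programmer choices, which is precisely where
# unintentional bias can enter.

BLOCKED_TERMS = {"scam", "clickbait", "hoax"}  # invented example list
SPAM_THRESHOLD = 0.5                           # invented example cutoff

def spam_score(comment: str) -> float:
    """Toy score: the fraction of words that match the blocked list."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in BLOCKED_TERMS)
    return hits / len(words)

def should_flag(comment: str) -> bool:
    """Flag a comment for removal or human review."""
    return spam_score(comment) >= SPAM_THRESHOLD

print(should_flag("total scam hoax"))            # True: flagged
print(should_flag("masks help slow the virus"))  # False: passes
```

Even in this toy version, deciding which terms go on the list and where to set the cutoff is a human judgment call, which is how unintentional bias can enter an otherwise automatic process.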


WHAT IS A SOCIAL MEDIA ALGORITHM?


Social media algorithms sort the posts in a user's feed based on different factors, prioritizing content that the algorithm predicts the user would like to see, as determined by the user's past behavior.

They exist both because brands are willing to pay a premium to push advertisements and because they can sift information efficiently for a user. Algorithms involve data science and machine learning, parsing data and ranking posts on company-specific criteria, as the sketch below illustrates.
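As a purely illustrative example, here is a toy ranking function in the spirit of that description. The signals and weights (likes, comments, follow affinity, recency decay) are assumptions invented for this sketch, not any company's real criteria.

```python
# Hypothetical sketch of engagement-based feed ranking -- not any
# company's real algorithm. Every signal and weight here is an
# invented assumption for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    hours_old: float

def rank_score(post: Post, followed_authors: set) -> float:
    """Toy score: weight engagement, boost followed authors, decay with age."""
    engagement = post.likes + 2.0 * post.comments    # comments weighted higher
    affinity = 1.5 if post.author in followed_authors else 1.0
    recency = 1.0 / (1.0 + post.hours_old)           # newer posts score higher
    return engagement * affinity * recency

# Example: a fresher post from a followed author outranks a more-liked
# but older post from a stranger.
feed = [
    Post("alice", likes=120, comments=10, hours_old=2.0),
    Post("bob", likes=300, comments=5, hours_old=24.0),
]
ranked = sorted(feed, key=lambda p: rank_score(p, {"alice"}), reverse=True)
print([p.author for p in ranked])  # ['alice', 'bob']
```

The point of the sketch is that a handful of programmer-chosen weights determines what rises to the top of a feed, which is why these systems attract so much scrutiny.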


SOCIAL MEDIA ALGORITHMS IN USE


Two topics that have involved heavy use of these algorithms are the COVID-19 pandemic and the 2020 election (and politics in general).

At the beginning of the pandemic, Paige Williams, a writer for The New Yorker, posted helpful guidance on how to stop COVID-19 from spreading, but it was censored because Facebook's algorithm flagged the information as spam. In addition, when people used Facebook to organize sewing masks to donate to healthcare workers, their accounts were deleted because the algorithm viewed this as an attempt to profit from the virus.

Regarding politics, a Pew Research Center survey found that 90% of Republicans believed that social media companies censored certain political viewpoints; 59% of Democrats, and nearly 75% of U.S. adults overall, believed the same. However, a 2019 review of political pages on Facebook found that conservative and liberal pages performed equally well.



Laws


FIRST AMENDMENT


The First Amendment protects several ideals: freedom of expression, the marketplace of ideas, and the public forum doctrine.

To clarify, freedom of expression protects people and private entities from state and government agents. The marketplace of ideas holds that more speech is the best antidote to false speech; however, with so many false posts going viral nowadays, that may not be as safe an assumption as once thought. Finally, public forums, such as streets, parks, and sidewalks, have traditionally been open to all expression; there is no consensus on whether social media qualifies as a public forum.


COMMUNICATIONS DECENCY ACT OF 1996


This is regarded as the "most important internet legislation created," largely because of Section 230, which states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In other words, platforms and internet service providers cannot be held responsible for content their users create.

An amendment could be proposed to the act, which was passed before the invention of social media. For now, the government cannot force social media companies to respond to the fake-news problem, and the act also protects algorithms that target messages to vulnerable audiences.



The Debate: Who Should Be Held Responsible for Verifying Information?


CONSUMERS/INDIVIDUALS


One viewpoint is that the responsibility can be placed on the individual browsing social media. Verifying information is a difficult task, but with the help of independent fact-checkers it is possible. People can also come together to challenge social media companies when they feel those companies are not doing enough about false information on their platforms. One example is the Stop Hate for Profit campaign, which pressured many nonprofits and other organizations to stop advertising on Facebook; after Facebook's market cap plummeted by $72 billion, the company was forced to agree to the campaign's terms.


THE GOVERNMENT


Another argument is that the government should step in to protect democratic processes by censoring false information. However, there would need to be a way to prevent the government from simply using this power to influence people politically. One possible solution would be state-approved, licensed professional journalists working under a unified set of rules that must be followed to retain the license.


THE COMPANY'S RESPONSIBILITY


The last argument holds that the social media company itself should be responsible for handling content on its platform. This is also the most common approach to moderating social media. Because companies own the rights to all content on their platforms, they can reserve the right to moderate, flag, and remove it. Examples of moderated content include flagging false information, removing inappropriate content (such as sex and violence), and restricting the viewing of unsafe behaviors (like stunts and anti-mask activity).



Social Media Moderation: Facebook vs. Twitter


FACEBOOK


Facebook has an older and more conservative user base, and the platform is geared toward family-friendly content. Its approach to moderation is more lenient, with some level of fact-checking; the company prefers to maintain free speech, a stance supported by Mark Zuckerberg, as mentioned earlier.


TWITTER


Twitter has a different audience than Facebook, with a younger, more liberal user base. It also hosts a great deal of political content, especially given that President Trump uses Twitter as his main means of communicating with the public. Its approach to moderation is more aggressive, with more ways to flag a tweet and have it reviewed.



What's Next?


EXECUTIVE ORDER ON PREVENTING ONLINE CENSORSHIP


The current presidential administration proposed this order, seeking to intervene against alleged bias toward one side of the political spectrum. However, it is in danger of regressing into improper censorship itself.


SENATOR JOSH HAWLEY'S (R-MO) BILL


This bill would require social media companies to periodically demonstrate to the FTC that they are providing a forum free of political censorship. However, there could be unintended effects if this bill or the executive order above passes: both discourage the free flow of content online, and moderators could err on the side of caution, excluding anything remotely offensive to anyone.



Questions for Discussion


Should social media content be moderated?


Who holds the responsibility to check content? Is it individuals, the government, the company, or someone else?


Referencing the Facebook vs. Twitter case, which approach is more ethical? Or would a combination of the two be best?


If social media companies moderate content, what is the line between creating a good experience and censoring users?



 
 
 
