Censorship or Civic Responsibility: Reporting on how internet companies limit content


Freedom of speech operates under different rules on the world’s most popular online platforms. YouTube removed more than 5 million videos between October and December 2017 before anyone had viewed them, because the uploads were deemed to violate its policies. Facebook announced in June 2017 that it deletes roughly 66,000 posts a week for content deemed to be hate speech or worse (Adweek).

Whether internet companies should restrict certain content posted by users has triggered a debate between those who preach civic responsibility and those who worry about having a gatekeeper over which speech is “appropriate.”

The WikiTribune community is reporting on the future of speech online. This news stub is dedicated to explaining the challenges of, and possible solutions to, hate speech, bias and misinformation on some of the largest platforms.

Challenges

Conduit for hate speech: Myanmar is an example of the real-life consequences of unfettered hate speech and misinformation on social media, according to human rights groups. The hermit nation gained widespread internet access only in 2011, and social media has since been used by nationalist figures to stoke ethnic conflict, particularly against Muslim minorities (TechCrunch).

Perception of bias: Many American conservatives believe that internet companies are too aggressive in moderating their platforms. Senator Ted Cruz grilled Facebook CEO Mark Zuckerberg during Zuckerberg’s congressional testimony over what Cruz saw as conservative viewpoints being disproportionately blocked by the social media company.

Conservative pundit Dennis Prager sued YouTube, owned by Alphabet, for restricting the reach of his organization’s video content, arguing the videos were targeted because they expressed conservative viewpoints. A district judge dismissed the lawsuit, ruling that as a private company YouTube is permitted to prioritize certain content on its platform without violating freedom of speech.

Misinformation: The ability to profit from ads attached to articles has incentivized some individuals to produce fake stories that are likely to be shared and clicked on (Wired).

  • Conspiracy Theories: 

What are other challenges?


Solutions

Artificial Intelligence: With millions of posts every week, tech companies see A.I. as the future of identifying and removing inappropriate content. Zuckerberg raised the need for more sophisticated A.I. in his congressional hearings as a way to avoid biases among human moderators, as well as to quickly identify hate speech that could otherwise take days to remove by hand.

Wikipedia as news source: YouTube announced it will place information from Wikipedia next to videos deemed to be “conspiracy theories” as a way to combat misinformation on the platform (CNN). As Google, also owned by Alphabet Inc., already does, YouTube would draw on Wikipedia’s evolving articles to provide quick, neutral information. Wikimedia, the parent group of Wikipedia, said it has not authorized the platform for such uses.

Preferred news sources: 

What are other responsible solutions?

