Censorship or civic responsibility: Reporting on how internet companies limit content


Freedom of speech operates under different rules on the world’s most popular online platforms. YouTube deleted over 5 million videos between October and December 2017, often before anyone had viewed them, because the poster was deemed to have violated policy. In 2017, Facebook said it deletes roughly 66,000 posts a week for content deemed to be hate speech or worse (Adweek).

Whether internet companies should restrict certain types of content posted by users has triggered debate between those who preach corporate civic responsibility and those who worry about gatekeepers deciding what speech is “appropriate.”

The WikiTribune community is reporting on free speech online. This news stub is dedicated to explaining the challenges of, and possible solutions to, hate speech, bias and misinformation on the largest internet platforms.


Conduit for hate speech: Myanmar provides an example of the real-life consequences of unfettered hate speech and misinformation on social media, according to human rights groups. The country only gained widespread internet access in 2011, and nationalist figures have since used social media to stoke ethnic conflict, particularly against Muslim minorities (TechCrunch).

Perception of bias: Some believe internet companies are too active in moderating their platforms. U.S. Senator Ted Cruz grilled Facebook CEO Mark Zuckerberg during the Facebook founder’s April 2018 congressional hearings over what Cruz saw as the company disproportionately blocking politically conservative viewpoints.

American conservative pundit Dennis Prager sued YouTube, owned by Alphabet, for restricting the reach of his organization’s video content. Prager argued his content was targeted because it expressed conservative viewpoints. A judge dismissed the lawsuit, ruling that as a private company YouTube may prioritize certain content on its platform without violating freedom-of-speech laws.

Misinformation: The ability to profit from advertisements attached to articles has incentivized some individuals to produce fake stories designed to be shared and clicked on (Wired).

  • Conspiracy Theories: 

What are other challenges?



Artificial intelligence: With millions of posts every week, tech companies see AI as the future of identifying and removing inappropriate content. Zuckerberg raised the need for more sophisticated AI at his congressional hearings, both as a way to avoid bias among human moderators and to quickly identify hate speech that could otherwise take days to remove by hand.

Wikipedia as news source: YouTube announced it will place information from Wikipedia next to videos deemed to be “conspiracy theories” as a way to combat misinformation on the platform (CNN). As Google, also owned by Alphabet Inc., already does in search results, YouTube would draw on Wikipedia’s evolving articles to provide quick, neutral information. The Wikimedia Foundation, which operates Wikipedia, said it has not authorized the platform for such uses.

Preferred news sources: 

What are other responsible solutions?



