The European Commission’s president proposed that Google, Facebook and Twitter remove online extremist content within an hour or face fines, in his annual State of the Union address to the European Parliament.
The EU gave the firms three months in March to show they were acting faster to take down radical posts, but EU regulators concluded they hadn’t done enough.
The Commission’s new proposal, which will need backing from EU member states and the European Parliament, is that these tech companies would be fined up to four percent of their annual global turnover if they systematically fail to remove online extremist content within an hour of being notified (New York Times). The proposal would also require internet platforms to provide annual transparency reports to show they are trying to tackle abuse.
This would go further than the current voluntary code of conduct on hate speech, which Facebook, Microsoft, Twitter and YouTube joined in 2016. That code requires participants to remove, where necessary, hateful online content within 24 hours of being notified, and does not give governments the right to take down content. According to the latest review of the code, the companies removed 70 percent of content reported to them as illegal hate speech within 24 hours (European Commission). Other companies have since announced they plan to join the voluntary code (The Verge).
What are the rules for removing online extremist content within other countries?
Germany – In October 2017, the country introduced a law giving social media sites 24 hours after notification to remove hate speech in straightforward cases, or face fines of up to €50 million ($57 million), according to TechCrunch.