Social media firms fail to protect young people from bullying, says poll


Social media firms should do more to tackle cyberbullying on their platforms, according to an inquiry into children’s and young people’s mental health carried out by the Children’s Society, a charity aligned with the Church of England.

A poll of young people commissioned by the Children’s Society and children’s mental health charity YoungMinds suggested Facebook, Snapchat and Twitter were failing to protect young people from harassment and online bullying. Around 46 percent of respondents said they had received threatening, intimidating or nasty messages via social media, email or text, according to the Children’s Society. Meanwhile, 14 percent said they had experienced cyberbullying in the previous month.

Social media companies were also urged to do more to prevent online child grooming – an adult gaining the trust of a child with the intention of manipulating and sexually exploiting them. The call follows news that 1,316 grooming offences were recorded in the first six months after a new child-grooming law came into force in England and Wales, The Times reported in January.

Children’s charity the NSPCC, which campaigned successfully for the new law, is urging ministers to do more to force social media companies to crack down on online grooming (The Times). Of the 1,316 recorded grooming offences, 31 percent took place on Facebook, 18 percent on Snapchat and 14 percent on Instagram. The rest occurred on WhatsApp, via text, in person or on other online platforms such as Xbox Live.

Before the new law that prohibits online sexual communication with a child, police could not intervene until groomers attempted to meet with victims in person.

But a new voluntary code for social media firms, part of the government-led internet safety strategy, “doesn’t go far enough,” the NSPCC said. Government and social networks are not working together properly to stop this crime, said the NSPCC’s head of child safety online, Tony Stower.

The charity said artificial intelligence and algorithms already used to detect extremist content could be repurposed to flag suspected groomers to moderators, and even to block some messages from being sent at all.

This follows an announcement from Danish authorities that they plan to charge more than 1,000 people who shared, on Facebook, an explicit video of two teenagers engaging in sexual activity. They will be prosecuted for distributing child pornography, even though most of them were 15 or 16 when they shared the video.
