Infringement, Censorship and Safety: Big Tech and Accountability

01 Dec 2022

It was announced this week that the UK government has decided to axe a section of the Online Safety Bill that would have forced tech companies to take down content that is not illegal by law but potentially harmful by subject matter. This ‘U-turn’ is believed to have taken place amid debate around freedom of speech and censorship.

Many argue that this is a ‘weakening’ of the bill; as Labour’s Shadow Culture Secretary Lucy Powell put it, “prevention of harm with an emphasis on freedom of speech undermines the very purpose of the bill”. The bill now places more emphasis on adults censoring content and on users filtering out harmful content themselves, which by default removes a heavy amount of accountability from tech companies. The suicide prevention charity Samaritans said that increased user controls should not replace tech company accountability, yet balancing this without infringing on freedom of speech is quite tricky.

YoungMinds (a mental health charity for children and young people) found a shocking statistic: 89% of young people who have previously suffered from mental health issues reported that social media encouraged intrusive thoughts and harmful behaviours, with over 50% admitting to actively seeking out disturbing content in order to feel discomfort.

With “legal but harmful” being scrapped from the Online Safety Bill, what restrictions will be put in place to prevent young children from seeing such damaging content? As many of us are aware, social media tailors content based on the types of videos and photos you have looked at before. The same study found that 22% of young people automatically received disturbing content more than once a week because of previous searches.

Mental health struggles can often “hide in plain sight”. The tragic death of Molly Russell back in 2017 highlights the horrendous impact social media can have on young people. Unfortunately, this case does not stand alone, nor is it likely to be the last.

The government will be introducing an update to the bill to replace “legal but harmful” and boost child safety. Companies will be required to enforce age verification to prevent children from accessing harmful or adult content, and companies that fail to do so will be penalised. While Culture Secretary Michelle Donelan insists that safety will be “strengthened”, how confident are we that the age verification process will eliminate this problem? Many children have access to their parents’ laptops or devices, and will there be loopholes for young children actively seeking harmful content?

How do we balance “freedom of speech” and child safety on social media, and is it possible?

Take our poll and let us know

https://www.linkedin.com/posts/katie-morgan-williams-a51046202_following-my-recent-article-discussing-the-activity-7004047908559855616-nHds/?utm_source=share&utm_medium=member_desktop

Get in touch

Katie Morgan-Williams

Consultant