UK, in a bid to promote free speech, will allow ‘legal but harmful’ online content.

The British government has dropped a plan to require tech companies to remove legal but harmful online content, after harsh criticism from politicians and civil rights organizations.

On Tuesday, the U.K. defended its decision to water down the Online Safety Bill, an ambitious but divisive effort to combat online bullying, fraud, racism, and other harmful content.

The European Union and the United States are pursuing similar initiatives, although the U.K.’s was one of the most comprehensive. The measure originally gave regulators broad authority to impose sanctions on social media and digital media giants such as Google, Facebook, Twitter, and TikTok.

A requirement that the largest platforms remove “legal but harmful” content had raised concerns that it would result in censorship and compromise the right to free speech.

That portion of the plan has now been dropped by the Conservative government of Prime Minister Rishi Sunak, who took office last month, on the grounds that it could “over-criminalize” online content. The change is intended to allow the bill to pass Parliament by mid-2023, after being stuck there for the past 18 months.

Digital Secretary Michelle Donelan said the change removes the risk that “tech businesses or future governments could exploit the legislation as a license to restrict valid ideas.”

She told Sky News, “It was the formation of a quasi-legal category between unlawful and legal. A government shouldn’t be acting in that way. It’s perplexing, and it would create different sets of rules online and offline.”

Instead, the bill requires businesses to set out clear terms of service and stick to them. Companies may allow adults to post and view objectionable or harmful content, as long as it is not illegal. However, platforms that pledge to prohibit racist, homophobic, or other offensive content and then break that pledge risk fines of up to 10% of their annual revenue.

The law also mandates that businesses assist users in avoiding legal but potentially harmful content, such as the glorification of eating disorders, misogyny, and some other forms of abuse, by providing warnings, content moderation, or other tools.

Companies must also show how they enforce user age restrictions designed to prevent children from viewing harmful content.

The bill still criminalizes certain online behaviors, such as cyberflashing, the act of sending someone unwanted graphic photos, and epilepsy trolling, the act of sending flashing images that can trigger seizures. It also makes it an offense to assist or encourage self-harm, a move that followed a campaign by the family of Molly Russell, a 14-year-old who took her own life in 2017 after viewing online material encouraging self-harm and suicide.

Her father, Ian Russell, said he was relieved that the bill was at least moving forward. However, he said it was “very difficult to understand” why safeguards against dangerous material had been weakened.

Donelan emphasized that children would still be protected and that “legal but harmful” material would only be allowed for adults.

The material that Molly Russell saw will not be permitted as a result of this bill, she said.
