
UK’s Internet Watchdog Finalizes First Set of Rules for Online Safety Law
The UK’s online safety regulator Ofcom has announced the release of its first set of guidelines under the country’s new Online Safety Act. The measures aim to better protect children and adults from harmful content, including terrorist material, hate speech, intimate image abuse, and other forms of digital violence.
In a statement, Rachel Dawes, Ofcom’s Executive Director for Online Safety, emphasized the importance of implementing these rules to ensure a safer online environment. “We want to make sure that we’re not just blocking things but actually helping people to have safe online experiences,” she said.
Under the new guidelines, social media platforms and other online services will be required to implement stricter age verification measures to prevent minors from accessing inappropriate content. The regulator has also emphasized the need for stronger filters and checks to ensure that harmful material is removed from online spaces.
Additionally, Ofcom aims to take a more proactive approach in addressing online threats, including crisis response protocols for emergency events like last summer’s riots. The agency will also explore the use of AI-powered tools to tackle illegal harms and provide guidelines on reporting suspicious content.
The regulator has also made it clear that it intends to push for further measures to safeguard children’s online experiences, including stricter rules around account creation and more robust monitoring of user-generated content.
While these guidelines mark an important step forward in addressing the complexities of online safety, many have expressed concern that progress is too slow. “We need to take swift and decisive action to protect the most vulnerable members of our society,” said a spokesperson for a UK-based advocacy group.
As Ofcom works to implement the new rules, the agency has signaled that it intends to stay ahead of emerging technologies, such as generative AI, and the online threats they may pose.
Further guidelines are expected by April, with many of the measures coming into effect over the course of the following year.
Source: techcrunch.com