
UK’s internet watchdog finalizes first set of rules for Online Safety law
The UK’s internet regulator Ofcom has finalized its first set of rules to enforce the country’s new Online Safety Act, which aims to protect children and vulnerable adults from harmful content online.
According to Ofcom, the rules will require social media platforms, messaging services, and other online services to implement a range of measures to ensure user safety. These measures include:
* Age verification: Platforms will be required to verify the age of users through a process that will not rely solely on self-declaration. This means that platforms may need to introduce new technologies or methods to determine a user’s age.
* Filtering and moderation: Ofcom has emphasized the importance of effective filtering and moderation processes to prevent harmful content from being shared online. Platforms will be required to implement robust measures to detect and remove such content, including child sexual abuse material (CSAM).
* Crisis response protocols: The regulator is planning to introduce crisis response protocols for emergency events such as last summer’s riots.
* Blocking accounts of those who have shared CSAM: Ofcom has proposed blocking the accounts of individuals who have shared child sexual abuse material.
* Guidance on using AI to tackle illegal harms: The regulator will provide guidance on how AI can be used effectively to combat illegal activity online, such as the spread of CSAM and other harmful content.
In addition to these measures, Ofcom’s policy statement also highlights its commitment to ensuring that online services prioritize the well-being of children and vulnerable adults. This includes introducing stricter rules around intimate image abuse, hate speech, and terrorism-related content.
Ofcom is set to work on further measures throughout 2024, including introducing new age verification requirements for online services and setting children’s accounts to private by default so they cannot be contacted by strangers.
Ofcom Chief Executive Melanie Dawes emphasized that these changes will be implemented in phases, with the regulator working closely with industry stakeholders to ensure compliance.
Source: techcrunch.com