After years of scrutiny, debate and delays, the online safety bill is being introduced in Parliament today.
The bill is a wide-reaching piece of legislation that attempts to cover as much of the internet as possible, from cyber-bullying to online pornography to fraud.
Criticism that the bill is confusing, convoluted and too far-reaching has followed it at every step of its journey to Parliament.
The bill’s main change concerns how social media platforms, search engines and other websites handle harmful content, giving Ofcom the authority to determine whether an online service is hosting illegal activity.
Tech bosses who impede Ofcom investigations could face up to two years in jail under the new laws.
Digital secretary Nadine Dorries said that “tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platform”.
Much of the bill’s focus is on blocking online content deemed to constitute harassment or harm.
Tech firms will also need to prevent scam adverts from appearing on their platforms.
Companies that fall foul of the Online Safety Bill could be hit with fines of up to 10% of global revenues.
However, some in the tech industry are concerned that the bill’s provisions on digital identity and online fraud protection are not specific enough.
Martin Wilson, CEO of Digital Identity Net, believes that “fraud, which is a huge and rapidly growing problem in the UK, needs to be at the front and centre of this bill”.
Wilson told UKTN: “It’s a major problem, particularly in the payments space where there was a 71% increase in authorised push payment (APP) fraud in the first half of 2021.”
According to Wilson, the introduction of user verification to the bill was a step towards tackling online fraud.
However, he said “it isn’t clear what verified accounts look like and its suggestions on how people can verify their identity are flawed”.