
Big Tech’s push to self-regulate harmful content in New Zealand is ‘weak attempt to preempt regulation’, critics say

Big technology companies including Facebook’s parent Meta and Google have signed a pact to self-regulate how they handle harmful content shared across digital platforms in New Zealand.

The move comes as regulators across the globe grapple with ways to make the internet a safer and less hostile place for users.

Indonesia last week pushed a strict new online content law into force, while the European Union (EU) recently reached political agreement on new legislation under the Digital Services Act, which includes provisions for expediting the removal of illegal content. The U.K., meanwhile, is proposing an Online Safety Bill to regulate online content and speech, though the bill is currently on pause.

The Aotearoa New Zealand Code of Practice for Online Safety and Harms first emerged in draft form last year, accompanied by a call for public feedback. It’s essentially a self-regulatory framework designed to make Big Tech more proactive in removing “harmful content” from the internet — however, critics argue that it’s merely a “weak attempt to preempt regulation,” both in New Zealand and further afield.

Lobbying

The new framework is spearheaded by a number of organizations, including Netsafe, a not-for-profit body focused on promoting and giving guidance on online safety. While it is a non-governmental organization, Netsafe does receive support from various government departments, and it’s also responsible for administering New Zealand’s Harmful Digital Communications Act (HDCA), which was passed in 2015.

Also on board is NZTech, a membership-based lobby group supported by hundreds of companies, including Amazon and Google.

With today’s news, Meta, Google, TikTok, Amazon, and Twitter have all signed up to adhere to the new code of practice, which sets out a commitment to curb a broad range of content, including bullying and harassment, disinformation, and hate speech. As signatories, the companies are tasked with publishing an annual report on their progress in adhering to the code, and they may even be “subject to sanctions” for breaching their commitments, though it’s not clear what these sanctions might be.

It is worth noting that while the official announcement says the code “obligates” the tech companies to reduce harmful content, it isn’t legally binding, which is why many are skeptical of its real impact. Moreover, with New Zealand’s Department of Internal Affairs (DIA) currently conducting a review of online content regulation, it seems the intention of the new code of practice may be to influence any new regulations that do come to the fore.

“We’ve long supported calls for regulation to address online safety and have been working collaboratively with industry, government, and safety organisations to advance the Code,” Nick McDonnell, head of public policy, New Zealand and Pacific Islands at Meta, said in a statement. “This is an important step in the right direction and will further complement the government’s work on content regulation in the future.”

A “Meta-led effort”

Mandy Henk, CEO at Tohatoha NZ, a not-for-profit that lobbies for a more “equitable internet,” said that the new code “looks to us like a Meta-led effort to subvert a New Zealand institution so that they can claim legitimacy without having done the work to earn it,” according to a blog post Tohatoha published earlier today.

“In our view, this is a weak attempt to preempt regulation — in New Zealand and overseas — by promoting an industry-led model that avoids the real change and real accountability needed to protect communities, individuals and the health of our democracy, which is being subjected to enormous amounts of disinformation designed to increase hate, and destroy social cohesion,” Henk wrote.

Indeed, Henk also called the code’s legitimacy into question, given that NZTech will be in charge of establishing and administering it.

“NZTech is a technology industry advocacy group that lacks the legitimacy and community accountability to administer a Code of Practice of this nature,” Henk wrote.

There has been a growing sense around the world that social media self-regulation isn’t working, and in the U.S. there have been increasing calls for Congress to enforce regulations at a federal level. In New Zealand and other jurisdictions, a similar conflict is emerging: continue to let Big Tech self-regulate, or usher in tighter controls enshrined in enforceable laws?

“We badly need regulation of online content developed through a government-led process,” Henk wrote. “Only government has the legitimacy and resourcing needed to bring together the diverse voices needed to develop a regulatory framework that protects the rights of internet users, including freedom of expression and freedom from hate and harassment.”

source: TechCrunch