Press Story

  • Deep global cooperation is crucial, but it needs to be built on strong national supervision of AI risks, new report says

  • Over-reliance on self-regulation puts unwarranted trust in powerful corporations to ‘mark their own homework’ and risks runaway market dominance

  • Governments need to set out their own bold strategy for AI, to make sure the future of tech delivers for public good, not just for profits

With the eyes of the world on the UK’s global AI summit next week, IPPR is warning that policymakers could miss the moment, as the summit’s focus on self-regulation will lead to AI being monopolised by a few global players.

Just as governments failed to overhaul financial regulation until after the 2008 crisis, and are only belatedly responding to the challenges of the social media revolution, so they now risk being too slow and unambitious in planning how to get the most from AI while better managing the risks.

Ahead of the conference at Bletchley Park, the US government has had leading AI firms agree to ‘voluntary commitments’, while some EU governments also seem to be leaning towards putting much of the onus on corporate good governance. The summit is reportedly aiming to agree a global scientific body to evaluate AI risks. However, a new report from IPPR warns that failing to back this up with strong national supervisory institutions with statutory powers will be viewed as a historic mistake.

Without government supervision and regulation, the market dominance of a few businesses within AI – like Google, Microsoft and Amazon – will likely lead to significant social and economic harms, including higher costs, lower investment, products that are not built fully in customers’ interests, stifled innovation from small businesses and an increased risk of misuse.

The report from IPPR sets out an alternative vision: one that encourages innovation but also introduces supervision and thoughtful regulation, ensuring that small businesses can play a part in the AI revolution and that the technology delivers for the public good, not just for the profits of a small number of firms.

As part of this, IPPR is calling on the government to establish an Advanced AI Monitoring Hub to supervise AI companies in the UK, accessing and analysing the deployment of AI to detect emerging harms and risks early on. Such a national body would collaborate with equivalent bodies in other countries, as financial services regulators do. This body would be more ambitious and better resourced than the government’s proposed AI Safety Institute.

Additionally, the summit should look beyond the issue of regulating risks and lean into developing an industrial strategy for AI. The report says attendees of the conference should discuss how AI can be deployed to tackle big societal challenges, such as improving public health, accelerating environmental policy, boosting science, augmenting rather than displacing jobs and enhancing the delivery of public services.

Policy makers have extensive levers to incentivise this – including subsidies and taxes as well as regulatory powers – and establish a public digital infrastructure to create a mission-driven industrial strategy for AI that delivers for the public good, the report says.

Carsten Jung, senior economist at IPPR, said:

“Regulators and the public are largely in the dark about how AI is being deployed across the economy. But self-regulation didn’t work for social media companies, it didn’t work for the finance sector, and it won’t work for AI. We need to learn lessons from our past mistakes and create a strong supervisory hub for all things AI, right from the start.

“Well-designed supervision and regulation can be successful at tackling risks while also giving firms certainty to invest and innovate.

“There should also be a more purpose-driven strategy for AI. We shouldn’t just passively anticipate technological developments and hope for the best. Like in other areas of the economy, we should specify what good looks like and make sure firms deliver accordingly.”

Bhargav Srinivasa Desikan, senior research fellow at IPPR, said:

“We are on the cusp of a new technological revolution. But it can go one of two ways: either AI follows the same path as social media, hoarding power in the hands of a very small number of global companies and delivering profit over purpose, or it can deliver for the public good.

“This is not the time to bury our heads in the sand on how to manage the growing opportunities and risks of AI - we have the tools to make it work. As a computer scientist and AI researcher, I believe that the entire scientific community will benefit from a purpose-driven strategy of public infrastructure and strong technical regulation.”

ENDS

Carsten Jung and Bhargav Srinivasa Desikan, the report’s authors, are available for interview.