Governments vow to safety test AI. Now for the tricky bit: how?

Governments from around the world have pledged to work together to test new AI models before they’re unleashed on the public. How will they do that? That’s far from clear.

The UK government issued a statement following this week’s AI Safety Summit at Bletchley Park, saying that “governments” had reached “a shared ambition to invest in public sector capacity for testing and other safety research”. It didn’t specify which governments were party to this accord, although it distinguished the agreement on AI testing from the broader Bletchley Declaration, which was signed by all 28 attending governments and the EU.

The FT reports that the UK, US and Singapore will be among the countries involved in testing AI, and that firms including ChatGPT creator OpenAI, Google DeepMind, Amazon and Microsoft will submit their products for testing.

“Until now the only people testing the safety of new AI models have been the very companies developing it,” said UK prime minister Rishi Sunak in a statement. “We shouldn’t rely on them to mark their own homework, as many of them agree.

“Today we’ve reached a historic agreement, with governments and AI companies working together to test the safety of their models before and after they are released.”

Testing AI models

It’s not yet clear how the UK government or its international counterparts will find or fund the capacity required to test AI models, particularly if validation is required before these models are released.

Take OpenAI’s GPT models, for example. OpenAI has released five major versions in five years, and it’s only one of the many companies now ploughing billions into AI development. Governments would need enormously well-resourced testing teams to keep up with that pace of innovation. And there are countless AI projects being developed by companies that aren’t subject to the agreement.

And there’s another problem: the agreement isn’t legally binding. That means AI firms won’t face any punishment if they push products out without government approval.

Sunak himself has admitted that developing legislation to force companies to submit products for testing could take many years.

In the meantime, the US and UK are pressing ahead with plans to develop AI Safety Institutes, while the EU is progressing with its own plans. Will they be able to put the safety brakes on the rapid evolution of AI, or is it all mere posturing? We’ll only find out in a few years’ time.

Barry Collins

Barry has 20 years of experience working on national newspapers, websites and magazines. He was editor of PC Pro and is co-editor and co-owner of BigTechQuestion.com. He has published a number of articles on TechFinitive covering data, innovation and cybersecurity.
