This week saw the inaugural Global AI Safety Summit, bringing together senior representatives from the US, Europe, China and beyond to discuss the frontier of AI and what should be done to minimise the risks and make the most of the opportunities.
Coinciding with the event, a raft of global and national agreements, policies and initiatives has been announced. Here, we provide an overview of what they mean and why businesses should care.
What has been announced?
One way to make sense of the various announcements over the past week is to view them across three distinct levels:
- The high-level statement of intent from 28 countries (including, most notably, the US and China) to develop safe and responsible frontier AI, in the form of the Bletchley Declaration: The short Declaration recognises the challenges and risks posed by frontier AI across borders and commits signatories to continued global cooperation to build a shared understanding of these challenges and collaborate on risk-based policies, while also recognising that the exact nature of legislation and regulation will differ between nation states.
- The more in-depth International Guiding Principles for Advanced AI Systems, drafted and agreed by the smaller group of allied G7 nations, and the Statement on Safety Testing agreed by the G7 plus Australia, the Republic of Korea and Singapore: The Guiding Principles go a step further by setting out what governments expect of organisations developing and using the most advanced AI systems, building on the previously agreed OECD AI Principles. The Statement on Safety Testing secures the agreement of the major frontier AI developers to allow governments to rigorously assess frontier AI models before and after they are deployed. While the agreement is voluntary, the UK Prime Minister has commented that ultimately “binding requirements” would probably be necessary.
- The application of the G7 and OECD principles at a domestic level by national governments: The White House this week published its Executive Order on Safe, Secure, and Trustworthy AI, setting out an ambitious programme to establish the standards, tools and tests required to manage AI risks, and to use its procurement and existing legislative levers to drive uptake among developers and users. Vice President Kamala Harris has also announced the establishment of a US AI Safety Institute that will be charged with operationalising existing AI standards. This is not to be confused with the similarly named UK AI Safety Institute, which will develop the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance, though the White House has emphasised that both Institutes will work closely together through information sharing and research collaboration.
What will this mean in practice?
There will be regulation. Governments globally have either announced or are developing plans to embed these safety and security principles in domestic regulation. That said, governments are taking different stances on the exact shape of these regulations and the timescales they are working to. For example:
- The White House’s Executive Order marks a real step change in the US’s approach to safeguarding AI, with Vice President Harris calling for legislation and arguing that “history has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritise profit over the wellbeing of their customers, the security of our communities and the stability of our democracies.” It remains to be seen, however, how far the White House’s ambitions can be achieved in the absence of the legislation it would like to see.
- The EU AI Act is expected by some to be passed by the end of this year, despite significant discussion and pushback against its plans to explicitly prohibit the development of very high-risk AI systems.
- The UK has this week reaffirmed its commitment to a principles-based, sector-specific approach to regulation within its current legal framework, while also implying that it needs more time to research and understand the risks before legislating.
- Australia has indicated its intention to take inspiration from what others, including the US, are doing on regulation, and so is unlikely to be a first mover.
Governments don’t want AI developers to assure their own systems, or, as the UK Prime Minister put it, “we should not rely on them marking their own homework.” Beyond the voluntary Statement on Safety Testing, safety testing also features in the Bletchley Declaration, while the G7 guidance emphasises that it should include both “internal and independent external testing measures” as well as “a combination of methods such as red teaming.” Indeed, third-party assurance of high-risk AI systems looks set to be required across all major economies’ domestic regulatory frameworks.
There are positive signs of global cooperation on what is, ultimately, a global issue. The Bletchley Declaration should be applauded not only for its diverse range of signatories but also for its commitment to keep the conversation going through two further AI Safety Summits in South Korea and France next year. While we must be realistic about what can be achieved across so many nation states, clarity of mission and measurable targets that leaders can track progress against will help to ensure the continued success of these international gatherings.
Why should businesses care?
Policymakers are laser-focused on setting guardrails for AI developers. As regulations emerge, however, responsibilities are also likely to be placed on the users and deployers of AI systems. Whether you are a developer or user of AI, you should therefore consider how the evolving (and likely differing) regulatory frameworks will impact your organisation, ensuring that privacy, information security and ethics are considered from the outset and across international borders.
For further reading on this topic, NCC Group’s recent Whitepaper – Safety, Security, Privacy & Prompts: Cyber Resilience in the Age of Artificial Intelligence – seeks to set a baseline of understanding for some of the key AI concepts, threats and opportunities, supporting business decision makers’ thinking and strategies in this fast-paced, exciting new technological era.