
News Reaction: UK Parliament Committee Adopts NCC Group's Recommendations on Large Language Models Security Risks

02 February 2024

Today, the UK Parliament Communications and Digital Committee shared its final report following an inquiry into the UK's approach to regulating generative AI and large language models (LLMs): "UK will miss AI goldrush unless Government adopts a more positive vision".

The Committee launched its inquiry in July last year to consider how the UK can best respond to the opportunities and risks posed by LLMs and other forms of generative AI.

In September, NCC Group's Chief Scientist Chris Anley was invited as an 'expert witness' to give evidence to the Committee. During the session, Chris highlighted that LLMs provide cyber attackers with "moderate" but "noteworthy" efficiency and capability gains. He also drew the Committee's attention to the need to consider the cyber security of the generative AI models themselves, noting that, with the right inputs, malicious actors could cause models to leak sensitive data or trick them into returning false outputs.

The Committee's final report setting out its recommendations to the Government reflects NCC Group's insights into the threat landscape and urges the Government to "work with industry at pace to scale existing mitigations in the areas of cyber security."

It also calls on the Government to:

  • Set out options for encouraging developers to build systems that are safe by design rather than focusing on retrospective guardrails.
  • Develop mandatory safety tests for high-risk, high-impact models, as well as accredited standards and auditing practices for the broader LLM landscape (noting that these must not be tick-box exercises).
  • Publish an AI risk taxonomy and risk register aligned with the National Security Risk Assessment.
  • Build the relevant skills within Government, including more training for officials to improve technical know-how, expanded systems of secondments from industry, academia, and civil society, and focused support for regulators.
  • Explore the options and feasibility of acquiring a sovereign LLM capability designed to the highest ethical and security standards.

Commenting on the report, NCC Group's Chief Scientist Chris Anley says,

"Large Language Models offer substantial benefits, but those benefits come with associated risks. The Committee has very eloquently described the risks and formulated reasonable, actionable recommendations to mitigate them.

This is an extremely well-considered report, covering a wide range of evidence from many contributors, and we strongly support the Committee's recommendations."

What's next?

The UK Government is now required to consider and respond to each of the Committee's recommendations, setting out how it will implement them or explaining why it will not.

NCC Group is passionate about sharing our insights from operating at the 'front line' of cyber security with policymakers so that they can make informed decisions about emerging technologies. We look forward to continuing to engage with the UK Government and policymakers worldwide to support a more secure and resilient digital future for all.

Contact


NCC Group Press Office

press@nccgroup.com