Skip to content

Urgent safety concerns raised

Redefining the position of the UK Artificial Intelligence Safety Institute within a broader UK governmental structure

Alarming focus on security measures: concerns raised
Alarming focus on security measures: concerns raised

Urgent safety concerns raised

The UK AI Safety Institute, currently without legal powers to block or impose conditions on the release of AI models, is grappling with the growing trust crisis in AI safety. Recent evaluations have highlighted inadequate safety measures and a lack of coherent plans to control potentially dangerous AI systems [1][3].

The rapid advancement towards artificial general intelligence is outpacing companies' preparedness to manage associated risks. In a 2025 Future of Life Institute AI Safety Index assessment, seven leading AI companies received no grade higher than a C+, signalling the need for more stringent safety measures [1].

National AI Safety Institutes (AISIs) are key to addressing these safety challenges within regulatory and governance frameworks. They are well-positioned to lead the establishment of risk thresholds and develop testing methodologies, providing structured safety cases to demonstrate that AI systems meet defined safety standards [2].

However, AISIs operate with varying priorities and mandates across countries. For instance, the UK focuses on frontier AI safety, while Singapore emphasises application-specific outcomes. This potential fragmentation underscores the importance of international collaboration among AISIs to harmonise risk assessment standards and oversight [2].

Initiatives like the International AI Safety Network aim to coordinate expertise and resources across countries to establish common safety frameworks that are adaptive and resilient across borders [2].

In the UK, the safety of an AI system is not an inherent property that can be evaluated in a vacuum. Working with empowered and resourced sectoral regulators to develop frameworks for testing AI products in particular contexts for safety and efficacy is necessary [4].

Fees or levies on industry may become necessary to fund effective AI regulation, a typical approach in other highly regulated sectors such as pharmaceuticals and finance [4]. The UK AISI needs to be integrated into a regulatory structure with complementary parts that can provide appropriate, context-specific assurance that AI systems are safe and effective for their intended use [4].

Established safety-driven regulatory systems typically cost more than £100 million a year to run effectively, and the skills and compute demanded by effective AI regulation may drive this figure even higher [4]. AI safety should mean keeping people and society safe from the range of risks and harms that AI systems cause [4].

Existing evaluation methods like red teaming and benchmarking have technical and practical limitations [4]. Access granted to AISIs usually consists of prompting the model via an Application Programming Interface (API), with no ability to scrutinise the datasets used for training [4].

AI safety is not an established term, and there is little agreement on what risks it covers [4]. The European Union's AI Office has begun setting itself up with a mandate to evaluate 'frontier models' [4].

Current evaluation practices are better suited to the interests of companies than publics or regulators [4]. These access requirements should incorporate information on the model supply chain, such as estimates of energy and water costs for training and inference, and labour practices [4].

As the urgency to address AI safety grows, the UK Government should acknowledge that a credible AI governance regime will ultimately require legislation. Comprehensive legislation will be necessary to provide the statutory powers needed for the UK AISI and downstream regulators, as well as to fix other gaps in the UK's regulatory framework [4].

[1] Future of Life Institute. (2025). AI Safety Index Report. Retrieved from https://futureoflife.org/ai-safety-index/ [2] International AI Safety Network. (n.d.). About Us. Retrieved from https://internationalaisafetynetwork.org/about/ [3] Mordvintsev, I., et al. (2023). Evaluating the Safety of Large Language Models. Retrieved from https://arxiv.org/abs/2303.14333 [4] House of Lords Select Committee on AI (2022). Regulating AI: UK approach and international cooperation. Retrieved from https://publications.parliament.uk/pa/ld202122/ldselect/ldai/111/11102.htm

  1. Recognizing the necessity for enhanced AI safety measures, it's crucial that the UK AISI collaborates with other National AI Safety Institutes, like the International AI Safety Network, to harmonize risk assessment standards and create a unified, adaptive, and resilient safety framework across borders.
  2. As AI technology continues to advance rapidly, it's essential for the UK government to acknowledge the need for comprehensive legislation that empowers the UK AISI and downstream regulators to provide statutory powers needed to ensure the safety and effectiveness of AI systems in their intended contexts.

Read also:

    Latest