EU AI Act's High-Risk Standards Delayed, Potentially Pushing Back Regulation

The EU's first AI regulation is behind schedule. This could delay important safety and transparency standards for high-risk AI systems.

The EU Commission has tasked the European standardization organizations with creating harmonized European standards (hEN) for the technical design of high-risk AI systems. The AI Act, the first EU-level AI regulation, is set to introduce these standards. However, the development process is behind schedule, which could delay the entry into force of the requirements for high-risk AI systems.

The AI Act follows a risk-based approach, sorting AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories. Notably, it also regulates General Purpose AI (GPAI) for the first time. The joint technical committee for AI (JTC 21), in which DIN represents Germany, is developing the standards. Despite these efforts, the work is significantly delayed, raising concerns about whether the AI Act's timeline can be met.

The AI Act aims to foster innovation while mitigating AI risks and protecting fundamental rights, and it is poised to create a uniform regulatory framework across the EU. High-risk AI systems, such as those used in critical infrastructure or law enforcement, will face stringent transparency and reliability requirements. The delayed development of the hEN, however, may push back the implementation of these crucial standards.
