EU AI Act's High-Risk Standards Delayed, Potentially Pushing Back Regulation
The European Commission has tasked the European standardization organizations with developing harmonized European standards (hEN) for the technical design of high-risk AI systems. The AI Act, the first EU-level AI regulation, relies on these standards to operationalize its requirements. However, their development is behind schedule, which could delay the point at which the requirements for high-risk AI systems become applicable.
The AI Act takes a risk-based approach, categorizing AI systems as prohibited, high-risk, limited-risk, or minimal-risk; notably, it also covers general-purpose AI (GPAI) for the first time. The standards are being drafted by the joint technical committee on artificial intelligence (JTC 21), with DIN representing Germany. Despite these efforts, the work is significantly behind schedule, raising concerns about whether the AI Act's timeline can be met.
The AI Act, which aims to foster innovation while mitigating AI risks and protecting fundamental rights, is poised to establish a uniform regulatory framework across the EU. High-risk AI systems, such as those used in critical infrastructure or law enforcement, will face stringent transparency and reliability requirements. The delayed development of the hEN, however, may push back the timely implementation of these crucial standards.