Human Decision-Making Lacks a Unifying Control Mechanism; Similarly, a Single Regulator for AI Decision-Making Is Unwarranted, According to the Center for Data Innovation
The Center for Data Innovation, a think tank focused on technology policy, has outlined ten principles for AI regulation that prioritise innovation while addressing real risks. The principles aim to create a regulatory environment that fosters safety and accountability without imposing undue burdens on AI developers and deployers.
The principles emphasise national coordination, voluntary industry involvement, proportionate attention to risk, and safeguards for innovation incentives. Daniel Castro, director of the Center for Data Innovation, has proposed them to ensure that AI regulation is smart, risk-based, and transparent.
1. **Avoid a Fragmented Regulatory Landscape**: The Center for Data Innovation advocates for a unified national framework to prevent a patchwork of varying state rules that could hinder innovation and divert investment.
2. **Focus on Risk-Based Regulation**: Regulatory efforts should be proportionate to the actual risks posed by AI applications, with a particular focus on high-risk systems.
3. **Promote Transparency and Accountability**: Developers should be required to disclose relevant information about AI systems, enabling oversight without stifling innovation.
4. **Incorporate Third-Party Risk Assessments**: Independent evaluation of AI systems is encouraged to build a robust evidence base on risks and safety practices.
5. **Encourage Voluntary Industry Collaboration**: Voluntary partnerships between regulators and industry are favoured over heavy-handed mandates to ensure flexible, adaptive regulation aligned with technological progress.
6. **Respect Innovation Incentives**: Regulation should preserve incentives for research, entrepreneurship, and experimentation, avoiding overregulation that slows development or drives innovation offshore.
7. **Implement Safe Harbours for Developers and Evaluators**: AI creators and evaluators should be protected from excessive liability or enforcement actions when they comply in good faith with safety and transparency measures.
8. **Integrate Existing Regulatory Frameworks**: Instead of creating siloed, AI-specific rules, AI considerations should be incorporated into existing laws and sector regulations to streamline compliance and leverage established regulatory expertise.
9. **Ensure Human Oversight and Control**: Human judgment and professional responsibility should be maintained in AI deployment to uphold ethical standards and prevent unintended harms.
10. **Balance Privacy, Security, and Fairness**: Fundamental rights, including data privacy, safety, and non-discrimination, should be upheld, while avoiding regulations that excessively constrain innovation in pursuit of these goals.
These principles are designed to address real risks while preserving the agility, creativity, and competitive advantages that drive AI innovation. They align with a broader expert consensus that fragmented, ambiguous rules raise compliance costs and discourage experimentation.
However, Hodan Omaar, a senior policy analyst at the Center for Data Innovation, has expressed concerns about a proposal that would create a national AI regulator charged with licensing companies that build AI. Omaar argues that regulating all AI under one agency would be as ill-advised as regulating all human decision-making under one agency: like human decision-making, AI spans too many distinct domains and contexts to be governed by a single control mechanism.
Omaar has also published a report card on U.S. AI policy, highlighting areas where Congress can support the responsible deployment of AI and create an innovation-friendly regulatory environment. Its recommendations include increasing the technical expertise of federal regulators, developing sector-specific AI strategies, and focusing on risk-based regulation.
In conclusion, the Center for Data Innovation's principles seek to strike a balance between risk management and innovation support, advocating smart, risk-based, and transparent AI regulation that holds developers and deployers accountable without imposing unnecessary burdens on them.