
White House bans 'woke' AI, but large language models don't know the truth

Consistency, not truth, is all their training and programming can enforce.


The federal government is moving to deploy AI systems that reject what it deems ideological bias while promoting "truthfulness" and "ideological neutrality." The shift is being pursued primarily through deregulation, executive mandates, and experimental regulatory environments rather than through detailed, prescriptive compliance frameworks.

The catalyst is the executive order titled "Preventing Woke AI in the Federal Government," issued alongside the Trump administration's AI Action Plan in July 2025. It is one of three executive orders accompanying the plan, underscoring a focus on removing ideological bias and promoting neutrality in government AI tools.

The AI Action Plan prioritizes regulatory reforms and meta-frameworks for AI governance that encourage technological innovation, adoption, and open models, but do not emphasize stringent regulatory requirements for ideological content. Instead, it promotes a more deregulatory stance.

Federal agencies, including the Office of Management and Budget (OMB), are directed to revise or repeal regulatory barriers, which could include constraints related to content moderation or fairness guidelines that the previous administration may have favored.

The rollout of these policies aims to ensure that government AI systems adhere to factual accuracy ("truthfulness") and ideological neutrality by design. However, specific implementation details or compliance metrics have not been publicly detailed, reflecting an early stage in policy execution following the executive order and the AI Action Plan.

The plan also calls for regulatory sandboxes and "AI Centers of Excellence" in which agencies can experiment with and rapidly iterate on AI systems for government use, presumably including adherence to these new standards.

Ensuring that AI systems are truth-seeking and ideologically neutral, however, is a significant challenge. Ben Zhao, professor of computer science at the University of Chicago, has called truth-seeking one of the biggest challenges facing AI today, and the problem is compounded by the fact that models are built to replicate whatever "ideological agendas" are present in their training data.

For instance, in 2023, researchers found ChatGPT to exhibit a pro-environmental, left-libertarian ideology. In March 2025, the Anti-Defamation League claimed that GPT (OpenAI), Claude (Anthropic), Gemini (Google), and Llama (Meta) "show bias against Jews and Israel." And attempts to "un-wokeify" LLMs have produced an AI, xAI's Grok, that named itself MechaHitler.
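For a sense of how such findings are produced: bias probes typically administer political-orientation statements to a model and map forced agree/disagree answers onto economic and social axes. The sketch below is illustrative only; the statements, the axis weightings, and the ask_model stub (which returns canned answers in place of a real LLM API call) are all invented.

```python
# Sketch of a political-compass-style probe in the spirit of the 2023
# ChatGPT studies. Everything here is invented for illustration.

STATEMENTS = [
    # (statement, axis, score added when the model agrees)
    ("Environmental protection should take priority over growth.", "economic", -1),
    ("Private markets allocate resources better than governments.", "economic", +1),
    ("Personal lifestyle choices are not the state's business.", "social", -1),
    ("Tradition and authority deserve deference.", "social", +1),
]

CANNED_ANSWERS = {
    "Environmental protection should take priority over growth.": "agree",
    "Private markets allocate resources better than governments.": "disagree",
    "Personal lifestyle choices are not the state's business.": "agree",
    "Tradition and authority deserve deference.": "disagree",
}

def ask_model(statement: str) -> str:
    """Stand-in for an LLM call forced to answer 'agree' or 'disagree'."""
    return CANNED_ANSWERS[statement]

def compass_scores() -> dict:
    scores = {"economic": 0, "social": 0}
    for statement, axis, agree_score in STATEMENTS:
        answer = ask_model(statement)
        # Agreement moves the score in the statement's direction,
        # disagreement moves it the opposite way.
        scores[axis] += agree_score if answer == "agree" else -agree_score
    return scores

print(compass_scores())  # {'economic': -2, 'social': -2}
```

With these canned answers, both axes land negative, which is the "left-libertarian" quadrant the 2023 researchers reported for ChatGPT; a real probe would use hundreds of validated statements rather than four invented ones.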

Despite these challenges, the government is pushing ahead. Compliance may prove difficult given the current limitations of AI models: under the order, a model used by a civilian agency that violates the truthfulness and ideological-neutrality requirements risks being decommissioned, with its developer charged for the decommissioning costs.

Notably, the order's requirements do not apply to national security AI systems used by the Defense Department. This exception underscores the delicate balance between promoting ideological neutrality and ensuring national security.

Joshua McKenty, former chief cloud architect at NASA and co-founder and CEO of Polyguard, argues that no LLM knows what truth is; the best it can do is favor consistency with the patterns, ideological agendas included, that it absorbed from its training data.
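McKenty's point can be made concrete with a toy maximum-likelihood language model: the probability it assigns to a completion tracks how often that completion appears in the training corpus, not whether it is true. The four-sentence corpus below is invented for illustration.

```python
from collections import Counter

# Invented toy corpus in which a false claim outnumbers a true one.
corpus = [
    "the earth is flat",
    "the earth is flat",
    "the earth is flat",
    "the earth is round",
]

# Maximum-likelihood estimate of P(next word | "the earth is").
completions = Counter(sentence.split()[-1] for sentence in corpus)
total = sum(completions.values())
for word, count in completions.most_common():
    print(f"P({word!r} | 'the earth is') = {count / total:.2f}")

# Prints:
#   P('flat' | 'the earth is') = 0.75
#   P('round' | 'the earth is') = 0.25
# The training objective rewards consistency with the corpus; nothing
# in it distinguishes the true completion from the false one.
```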

A related thought experiment, computer scientist Scott Aaronson's "eigenmorality," applied the algorithms behind Google's PageRank to moral questions and arrived at a "median" position that no one actually agrees with. It highlights how hard it is to define, let alone achieve, truth and ideological neutrality in AI systems.
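The underlying mechanics are those of PageRank: treat "moral" agents as those endorsed by other "moral" agents, which reduces to finding the principal eigenvector of an endorsement matrix by power iteration. Below is a minimal sketch over an invented four-agent endorsement graph; it shows the scoring mechanism, not the actual eigenmorality experiments.

```python
import numpy as np

# Invented endorsement graph: E[i, j] = 1 means agent j endorses agent i.
# Agents 0-2 endorse one another; agent 3 endorses agent 0 but receives
# no endorsements itself.
E = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
], dtype=float)

n = E.shape[0]
P = E / E.sum(axis=0)      # column-normalize: each agent hands out one unit
G = 0.85 * P + 0.15 / n    # damping factor, exactly as in PageRank

# Power iteration converges to the principal eigenvector of G, which
# serves as each agent's PageRank-style "morality" score.
scores = np.full(n, 1.0 / n)
for _ in range(100):
    scores = G @ scores

print(scores.round(3))  # ~[0.336, 0.313, 0.313, 0.038]: the clique wins
```

The circularity is the point: each agent's score depends on the scores of its endorsers, so the ranking that emerges is a property of the whole graph rather than any individual's judgment.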

In summary, the government is actively moving toward AI deployments that explicitly reject perceived ideological bias ("wokeness") while promoting truthfulness and neutrality. Specifics of how AI models can actually meet these requirements remain scarce, but the focus on deregulation and experimental environments suggests a commitment to ongoing iteration and adaptation.

  1. The federal government's AI Action Plan, issued in July 2025, relies on deregulation and experimental regulatory environments to field AI systems that reject ideological bias and promote neutrality.
  2. The executive order titled "Preventing Woke AI in the Federal Government" targets ideological bias in government AI tools, while the accompanying Action Plan emphasizes innovation and meta-frameworks for AI governance rather than stringent regulatory requirements for ideological content.
  3. AI Centers of Excellence, created under the AI Action Plan, are meant to experiment with and rapidly iterate on AI systems so that they adhere to truthfulness and ideological neutrality.
  4. Government AI systems face a significant challenge in ensuring truth-seeking and ideological neutrality, as AI models replicate biased "ideological agendas" present in their training data.
  5. Compliance with the order's requirements may be challenging given the current limitations of AI models, and the order does not apply to national security AI systems used by the Defense Department, underscoring the balance between ideological neutrality and national security.
