Tech Sector Worked to Minimize AI's Pervasive Bias. Now, Trump Plans to Halt 'Woke AI' Initiatives
Tech companies are navigating a fresh wave of scrutiny over their diversity, equity, and inclusion (DEI) efforts in AI products, following a shift in political priorities. The White House and Republican-led Congress are now focusing on eliminating alleged "woke AI," which they view as a problem that requires immediate attention and resolution.
Last month, the House Judiciary Committee sent subpoenas to Amazon, Google, Meta, Microsoft, OpenAI, and more than a dozen other tech companies as part of this crackdown. Previous initiatives aimed at "advancing equity" in AI development and at reducing "harmful and biased outputs" are now under investigation.
The U.S. Commerce Department's standard-setting branch has also revised its appeal for collaboration with outside researchers. Instead of focusing on AI fairness, safety, and "responsible AI," it is instructing scientists to concentrate on minimizing "ideological bias" in a way that will "enable human flourishing and economic competitiveness."
Google, for instance, had made strides in addressing biases in its AI image tools, adopting Ellis Monk's Monk Skin Tone Scale to better portray the diversity of human skin tones. But now experts like Monk, a Harvard University sociologist, are worried that the new climate could hinder future initiatives and funding to make technology work better for everyone.
The Trump administration has cut science, technology, and health funding for grants touching on DEI themes, but its influence on the commercial development of chatbots and other AI products remains indirect. However, the House Judiciary Committee is investigating whether the Biden administration coerced or colluded with tech companies to censor lawful speech, adding to the industry's unease.
AI bias has long been a concern, with studies showing self-driving car technology has trouble detecting darker-skinned pedestrians, and face-matching software for unlocking phones misidentifying Asian faces. Even more alarmingly, Google's own photos app sorted a picture of two Black people into a category labeled as "gorillas."
The recent controversy surrounding Google's Gemini AI chatbot has intensified the political debate. Studies had found that AI image generators, when asked to depict people in various professions, skewed toward lighter-skinned faces and men. Google added technical guardrails to reduce those disparities, but Gemini's image generator was then criticized for overcorrecting and producing historically inaccurate depictions, helping popularize the term "woke AI."
In the face of this ongoing controversy, it remains to be seen how tech companies will respond and adapt to these new political pressures on their DEI initiatives in AI development.
- The White House and the Republican-led Congress are focusing on eliminating "woke AI," a problem they view as requiring immediate attention and resolution in technology policy and legislation.
- Subpoenas have been sent to tech giants like Amazon, Microsoft, and OpenAI, among others, as part of the House Judiciary Committee's crackdown on the diversity, equity, and inclusion (DEI) efforts in AI products.
- The U.S. Commerce Department's standard-setting branch is instructing scientists to concentrate on minimizing "ideological bias" in technology, enabling human flourishing and economic competitiveness.
- Google's AI image tools were designed to address biases, adopting the Monk Skin Tone Scale to better portray the diversity of human skin tones, but experts like Ellis Monk are worried about the potential impact of the new political climate on future initiatives.
- The House Judiciary Committee is investigating whether the Biden administration coerced or colluded with tech companies to censor lawful speech, adding to the industry's unease.
- Studies have shown that self-driving car technology has trouble detecting darker-skinned pedestrians, and face-matching software misidentifies Asian faces, demonstrating the longstanding concern of AI bias.
- Google's own photos app sorted a picture of two Black people into a category labeled as "gorillas," intensifying the public controversy surrounding AI bias.
- The recent controversy surrounding Google's Gemini AI chatbot further intensified the political debate over how technology companies address DEI issues in AI development.

