
Microsoft Names the Developers It Is Suing for Misusing Its AI Tools

Microsoft has amended a lawsuit it filed last year, naming the four individuals it accuses of misusing its AI technologies to generate deepfake images of celebrities.


Microsoft is escalating its AI-safety enforcement by amending a lawsuit it filed last year against four developers who allegedly bypassed the safety guardrails on its AI tools to create celebrity deepfakes. The suit, originally filed in December, gained traction after a court order let Microsoft seize a website connected to the operation, which helped it identify the individuals involved.

The four developers are said to be part of a global cybercrime network Microsoft calls Storm-2139: Arian Yadegarnia, aka "Fiz," of Iran; Alan Krysiak, aka "Drago," of the UK; Ricky Yuen, aka "cg-dot," of Hong Kong; and Phát Phùng Tấn, aka "Asakuri," of Vietnam. Microsoft says it has identified other participants but is withholding their names so as not to interfere with an ongoing investigation.

Microsoft claims the group bypassed the security controls on its AI tools, effectively "jailbreaking" them to generate whatever imagery they wanted. The group then allegedly sold access to these capabilities, which others used to create deepfake nudes of celebrities, among other exploitative material.

After Microsoft filed the lawsuit and seized the group's website, the actors reportedly panicked. "The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another," Microsoft wrote in a blog post.

Celebrities such as Taylor Swift are frequent targets of deepfake pornography. Microsoft had to adjust its text-to-image tools in January 2024 after fake explicit images of Swift spread across the web. The growing ease of producing such images with generative AI has also fueled a wave of deepfake scandals in high schools across the US.

Within the AI community, debates about deepfakes center on safety and how serious the risks really are. Some argue that keeping models closed-source and restricting access would deter the worst abuses, while others favor an open-source approach to spur innovation. Either way, AI-generated misinformation and low-quality content are already flooding the web.

While many fears surrounding AI are exaggerated or speculative, the misuse of AI to create deepfakes is a tangible, present-day harm, and legal measures offer one way to address it. In the US, individuals have already been charged with using AI to produce deepfakes of minors, and the NO FAKES Act, introduced in Congress last year, would make it a crime to generate images based on someone's likeness without consent. The UK already penalizes the distribution of deepfake pornography and plans to outlaw its production as well, while Australia has made both the creation and sharing of non-consensual deepfakes a criminal offense.

How effective these legal measures will be at curbing deepfakes globally remains an open question. Arrests for deepfake abuse have been made, but enforcement is complicated by the cross-border nature of digital crime and the absence of comprehensive international rules. Regional legislation, such as the No AI FRAUD Act and the Deepfakes Accountability Act in the US, the Artificial Intelligence Act in the EU, and Australia's restrictions on non-consensual deepfakes, reflects varying approaches, and each can be hard to enforce given its narrow scope and the rapid evolution of deepfake technology.

International cooperation, public digital literacy, and ongoing technological advancements are essential for addressing deepfake abuses.

  1. Microsoft's legal action against the Storm-2139 group, including named developers such as Yuen, highlights growing concerns about AI safety as deepfake technology continues to evolve.
  2. In anticipation of increasingly capable AI and the cybercrimes it enables, such as deepfake creation, tech companies are developing strategies to promote the ethical use of AI and strengthen safety guardrails.
  3. Stronger emphasis on AI safety and legal regulation could, over time, significantly reduce abuses like those attributed to the Storm-2139 group.
  4. The Storm-2139 case, involving the misuse of AI tools for deepfake production, underscores the need for robust regulatory frameworks to address cybercrime and protect individuals' privacy.
