Women 'Stripped' by Men Using X's Grok: Nigeria's Legal Gap in AI-Generated Sexual Harassment
In recent times, Nigerian social media users such as Vivian Nnabue have taken a stand against AI-assisted sexual harassment. Nnabue made a LinkedIn post containing screenshots and tagging the perpetrators' accounts, aiming to draw public attention to the issue.
However, the current legal protections for victims of AI-generated sexual harassment in Nigeria are limited and largely inadequate. The Cybercrimes Act of 2015, a key piece of legislation, addresses online harassment and stalking, but it does not explicitly cover AI-generated content such as synthetic media, deepfakes, or automated image-based sexual abuse. This leaves a significant gap in addressing AI-facilitated sexual harassment, as the law assumes that online harm is manually produced.
The Nigeria Data Protection Act (NDPA) of 2023 and the Code of Practice for Interactive Computer Platforms, overseen by the National Information Technology Development Agency (NITDA), provide some recourse for victims, particularly in cases of AI-generated impersonation or image-based abuse. Victims can file complaints through NITDA's mechanisms, such as its complaint email, [email protected].
Despite these provisions, enforcement agencies in Nigeria currently lack the technical capacity and clear regulatory frameworks to effectively investigate and prosecute AI-powered sexual harassment. Experts and civil society are urging Nigeria to update its cyber laws to explicitly recognise AI-generated sexual violence as a crime and to establish mechanisms for takedown, restitution, and prosecution.
Notably, Nigeria is in the process of drafting an Online Harms Protection Bill aimed at addressing emerging forms of tech-enabled abuse, including those amplified by AI tools. This bill, led by NITDA, may enhance protections once enacted but has not yet been implemented.
The problem of AI-generated sexual harassment is not limited to Nigeria. Across Africa, people are deploying AI tools to manipulate, sexualize, or humiliate women. One prominent example is the use of X's Grok to "undress" women or alter their bodies in images without consent.
Recent incidents, such as the viral video involving Nigerian content creator Asherkine and University of Nigeria Nsukka student Ifeme Rebecca Yahoma, highlight the urgency of this issue. In the video, Asherkine asked Yahoma out on a date; afterwards, an anonymous Snapchat user named 'Kenny' spread a fabricated narrative that Yahoma was his girlfriend, even using deepfakes to manipulate pictures of her in support of the lie.
Prosecution in such cases is often complicated by jurisdictional ambiguity and the difficulty of verifying user identities. Even so, using tools like Grok to portray a woman in sexual or compromising ways without her consent could amount to criminal defamation of character.
As the Nigerian government continues to draft the Online Harms Protection Bill, it is crucial that the bill explicitly recognises AI-generated sexual violence as a crime and establishes clear mechanisms for takedown, restitution, and prosecution. Until then, victims of AI-generated impersonation or image-based abuse can seek redress under the Nigeria Data Protection Act (NDPA) through NITDA's complaint channels.
In summary, victims of AI-generated sexual harassment in Nigeria face significant gaps in legal protection. Technology has empowered individuals like Vivian Nnabue to push back publicly, yet deepfakes and automated image-based sexual abuse still occupy a legal gray area. Existing instruments such as the NDPA and the Code of Practice for Interactive Computer Platforms offer partial recourse, but enforcement agencies lack the capacity and regulatory frameworks to investigate and prosecute effectively. The forthcoming Online Harms Protection Bill offers the clearest path forward, provided it explicitly criminalises AI-generated sexual violence and establishes clear mechanisms for takedown, restitution, and prosecution.