
With Chatbots on Character.ai, Users Can Role-Play with School Shooter Characters

Some of these chatbots have been used more than 10,000 times.

Image credit: Gabby Jones/Bloomberg via Getty Images


Character.ai is once again under fire for activity on its platform. A report by Futurism documents the proliferation of AI characters modeled on real-life school shooters, which let users discuss those events and even simulate mass shootings. Some of the chatbots present school shooters like Eric Harris and Dylan Klebold as helpful figures or as resources for people struggling with mental health issues.

Some will argue that, since there is no strong evidence that violent video games or films cause violent behavior, Character.ai is no different. AI advocates sometimes add that this kind of fan-fiction role-play already happens in other corners of the internet. Futurism spoke with a psychologist, however, who warned that the chatbots could still be harmful to people who already have violent urges.

"Any sort of encouragement or even indifference – a lack of response from a person or a chatbot – may appear like tacit approval to move forward and carry it out," explained psychologist Peter Langman.

Character.ai did not respond to Futurism's requests for comment. Google, which has invested more than $2 billion in Character.ai, has tried to deflect responsibility, saying that Character.ai is an independent company and that Google does not use the startup's AI models in its own products.

Futurism's article highlights a slew of odd chatbots related to school shootings, all created by individual users rather than by the company itself. One user on Character.ai has created more than 20 chatbots "nearly entirely" modeled on school shooters, and those chatbots have logged more than 200,000 chats.

The chatbots created by this user include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor in Missouri in 2009, when she was 15; and Elliot Rodger, the 22-year-old who killed six people and wounded many more in Southern California in 2014 in a terroristic plot to "punish" women. (Rodger has since become a grim "hero" of incel culture, and one chatbot created by the same user describes him as "the perfect gentleman," a direct callback to the murderer's misogynistic manifesto.)

Character.ai prohibits content that promotes terrorism or violent extremism, but its moderation has been lax, to say the least. It recently announced sweeping changes to its service after a 14-year-old boy took his own life following a "months-long obsession" with a character based on Daenerys Targaryen from Game of Thrones. Futurism says that despite new restrictions on minors' accounts, Character.ai let its reporter register as a 14-year-old and have conversations involving violence, including keywords that are supposed to be blocked on minors' accounts.

Because of how Section 230 protections work in the United States, Character.ai is unlikely to be held liable for chatbots created by its users. There is a delicate balance between letting users discuss sensitive topics and protecting them from harmful content. It is safe to say, though, that the school-shooting-themed chatbots are a display of gratuitous violence, not the "educational" resources some of their creators claim on their profiles.

Character.ai claims "tens of millions of monthly users," who interact with characters that pretend to be human and can act as a friend, therapist, or lover. Countless stories have reported on people coming to rely on these chatbots for companionship and a sympathetic ear. Last year, Replika, a Character.ai competitor, removed the ability to have erotic conversations with its bots, but quickly reversed that decision after a backlash from users.

Chatbots could be useful for adults preparing for difficult conversations with people in their lives, or they could be an interesting new form of storytelling. But chatbots are not a real substitute for human interaction, for various reasons, not least because they tend to be agreeable with their users and can be shaped into whatever the user wants them to be. In real life, friends push back on one another and experience conflict. There is not much evidence that chatbots help build social skills.

And even if chatbots can help with loneliness, Langman, the psychologist, cautions that when people find satisfaction in talking to chatbots, that's time they're not spending trying to socialize in the real world.

"So aside from the potential harmful effects it might have directly, in terms of encouragement towards violence, it may also be keeping them from living normal lives and engaging in pro-social activities, which they could be doing with all those hours of time they’re dedicating to the site," he added.

"When it's that immersive or addictive, what are they not doing in their lives?" said Langman. "If that's all they're doing, if that's all they're absorbing, they're not out with friends, they're not out on dates. They’re not playing sports, they’re not joining a theater club. They’re not doing much of anything."

More debate is sure to come about the role of companies like Google in policing content on platforms like Character.ai, especially when AI-driven chatbots glorify violent figures. As Langman warns, the spread of such chatbots could pose real risks to people who already have violent tendencies.
