Artificial Intelligence: The Wonders and Mysteries of the Digital Brain
Last May, in Arizona's Maricopa County Courthouse, a strange spectacle unfolded. A man named Christopher Pelkey, a military veteran with a bushy beard and a tactical cap, appeared on a screen. The twist: Pelkey had been dead for three years. "In another life, we could have probably been friends," he said, addressing Gabriel Horcasitas, the man who had cut his life short at a red light. How is this possible? Pelkey wasn't a spirit haunting the court, but an AI-powered creation that resurrected him, virtually at least, for the hearing. The judge allowed the presentation of this testimony, crafted by Pelkey's sister, on the grounds that it was not evidence and no jurors were present.
None of this would surprise François Rastier, a linguist who has been declared dead by none other than ChatGPT on several occasions. Depending on the version, he died on March 2, 2021, on July 3, 2020, or in August 2019, only to be resurrected shortly afterward. After a decade spent in an AI lab, Rastier published The AI Killed Me. Understanding the Post-Human World, in which he argues that we are living through an unprecedented revolution.
"We've never experienced such a change," declared Rastier. "Non-human speech is something entirely new. We've already stepped into the world of post-truth, where data replaces facts. These machines that start talking create addiction, and we delegate parts of our lives to them. People make decisions, including emotional or marital ones, without consulting their AI. There are even over 30% of investors who consult their AI before making an investment."
It's important to note that Rastier isn't talking about the specialized AI used in medicine, law, or traffic management, once dubbed "expert systems." He is referring to the generative, consumer-facing AI that feeds on billions of pieces of text.
Correlative Chaos
"Truth, explains Rastier, doesn’t hold any meaning for machines," he reasoned. "By correlations, the machine simply tries to generate text that appears true, based on probability calculations. The questions of verification and authenticity don’t even arise for generative AI. Nor does that of evaluation. But to make generalist AI, you need billions of data that no one can sort and verify."
In reality, says the researcher, these AIs "hoover up everything they find, often without paying for rights." Because their corpora are kept secret, we will never truly know what is inside. They spit out information in whatever way pleases the client, avoiding controversy and peddling feel-good discourse. And they feed on social media, which is where Elon Musk's AI, for instance, gets its start.
Enter Ramsey's theorem, a result from combinatorics which implies that in any sufficiently large mass of data, patterns and correlations inevitably appear, whether or not they mean anything: as a set of documents grows, the number of correlations explodes while the share of genuinely relevant information dwindles. "The more the data is extended," says Rastier, "the less relevant information there is, and the more useless correlations increase, to the point where they become the majority."
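The phenomenon Rastier points to can be illustrated with a toy simulation; this is not an example from his book, just a minimal sketch in Python with an arbitrary sample size (100 observations) and an arbitrary correlation threshold (0.3). Among completely independent random variables, the number of pairs that look "correlated" grows with the square of the number of variables, simply because the number of possible pairs does.

```python
# Illustrative sketch (not from Rastier's book): among purely random,
# unrelated variables, apparent "correlations" multiply as the data grows.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100   # observations per variable (arbitrary choice)
threshold = 0.3   # |correlation| above this "looks meaningful" (arbitrary choice)

for n_vars in (10, 50, 200):
    data = rng.normal(size=(n_samples, n_vars))   # independent noise, no real signal
    corr = np.corrcoef(data, rowvar=False)        # pairwise sample correlations
    upper = np.triu_indices(n_vars, k=1)          # count each pair only once
    spurious = int(np.sum(np.abs(corr[upper]) > threshold))
    pairs = n_vars * (n_vars - 1) // 2
    print(f"{n_vars:4d} variables -> {pairs:6d} pairs, {spurious:4d} look 'correlated'")
```

With 10 variables, such pairs are usually absent; with 200, dozens typically appear, even though every variable is pure noise and none of the "correlations" means anything.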
Language: More Than Just Code
"Prejudiced ranting, a staple on social media, has seeped into AI's corpus," Rastier noted. When asked to make jokes about men, they comply without hesitation, but when it comes to women or a minority, they decline on ethical grounds, “fighting stereotypes.”
"We don't know how it was conditioned or by whom," said Rastier, "But the system of algorithms simply reproduces the majority discourse and radicalizes it. We're dealing with a form of radicalized conformism." Moreover, AI is intolerant of contradiction. One of Rastier's friends entertained himself by getting four AI to converse with each other. "After fifteen minutes, he says, they were all praising each other. Because the customer is always right, they are always correct. It's the benevolence of the machines that address you saying 'I' or 'you.' Like the Chinese AI that flirts with singles."
With AI in human resources, it is now common for CVs to be generated by AI and for AI to be employed to screen those same applications. Google now precedes its search results with definitions it generated itself. There are also functions that condense each of the 500 emails you received the day before into a two-line summary. "The problem," says Rastier, "is that no one checks the accuracy of the summary. There is no going back to the documentary source. For generalist AI, that is completely impossible, since we have no idea what their sources are."
As one will have noticed, the language of AI is English, Rastier points out, "because the corpora are massively in English and because English dominates the web."
But Rastier is particularly concerned about the kind of language these machines generate. "Understanding is the crucial point," he says. "For AI, there is no language, there is only code. It must transform language into code. That is the role of keywords. A keyword is an element of a code, not of a language. It is a decontextualized word. But words in a language can only be interpreted in context."
"Yet, there was no need for Google to reason like AI," estimates Rastier. "We didn't wait for Trump to ban words."
A Shrinking of Expertise
According to Rastier, the democratization of AI represents an immense loss of expertise. He cites universities where no one makes spelling mistakes anymore, because everyone now writes with AI. "We do oral exams. But oral and written are not the same thing," he says. "We are training people who barely know how to read and who will no longer know how to write. The cretinization is upon us."
The semantician laments the lack of resistance in universities. "One might have thought that education, at least, would hold out. But it goes through like a knife through butter," he says.
Rastier sees the enthusiasm for AI as an echo of the dream of a new, superhuman man. "Whereas in all civilizations people have feared ghosts," he contends, "in ours we are becoming ghosts, and for profit."