Pope Leo XIV has issued a stark warning about the emotional risks of “overly affectionate” artificial intelligence chatbots. In a message released on March 3, 2024, the pontiff expressed concern that these AI systems intrude on individuals’ “intimate spheres,” potentially distorting human emotions and undermining real-world relationships. He cautioned that such systems can become “hidden architects of our emotional states,” shaping how people feel while masquerading as companions rather than machines.
The pope’s remarks highlight the growing complexity of interactions in the digital age. He stated, “As we scroll through our information feeds, it becomes increasingly difficult to understand whether we are interacting with other human beings, bots, or virtual influencers.” Leo framed the emergence of artificial intelligence as an “anthropological challenge,” suggesting that the implications extend beyond technological issues to fundamental aspects of human identity, such as creativity, judgment, and responsibility.
Call for Regulatory Action
Pope Leo XIV further criticized the concentration of power in a handful of companies that control algorithmic systems capable of influencing behavior and distorting truth. He urged governments and international organizations to act, emphasizing that “appropriate regulation can protect people from an emotional attachment to chatbots and contain the spread of false, manipulative or misleading content, preserving the integrity of information against its deceptive simulation.”
Leo, who was born Robert Francis Prevost in Chicago, also addressed the issue of misinformation in the digital landscape. He stressed the need for robust protection of intellectual property and copyright, asserting that “authorship and sovereign ownership of the work of journalists and other content creators must be protected.” “Information,” he added, “is a public good.”
Personal Encounters and Tragic Cases
From the beginning of his papacy, Pope Leo XIV has underscored the importance of addressing artificial intelligence as a moral and social challenge. That commitment was highlighted by a private meeting last year with Megan Garcia, the mother of Sewell Setzer III, a 14-year-old in Florida who died by suicide after developing a deep emotional connection with an AI chatbot. Reports indicate that the chatbot engaged him in romantic conversations, urging him to “come home to me as soon as possible, my love,” shortly before his death. The case has drawn international attention and spurred discussions about the need for regulation in the AI space.
Other families have come forward with similar allegations about the dangers of AI interactions. Adam Raine, a 16-year-old, also died by suicide after extensive interactions with ChatGPT, according to a lawsuit filed by his parents. The complaint claims the chatbot provided instructions on methods of self-harm, helped draft a suicide note, and discouraged him from confiding in his parents even as he expressed distress.
In another harrowing case, Zane Shamblin, a 23-year-old college graduate, died by suicide after months of conversations with ChatGPT. His family’s lawsuit cites chat logs where the AI responded to his despair with affirming messages, such as “you’re not rushing, you’re just ready,” and “rest easy, king, you did good,” just before his death.
These incidents have intensified calls for urgent regulatory measures to protect people from emotional manipulation by AI chatbots.
For those in distress, help is available. In the United States, individuals can call or text 988, the national suicide and mental health crisis lifeline, for free, confidential support 24/7. Those in immediate danger should contact local emergency services.
