A Grieving Mother’s Call to Action: The Church Must Stand Up to Dehumanizing AI

For all the hype surrounding artificial intelligence, some aspects of it are already here that parents should be very concerned about.

Published on June 20, 2025


On a cold February morning in Rome, I stood before a mirror in a rented room, adjusting a black lace chapel veil, preparing for Mass at St. Peter’s Basilica.

The Mass was to celebrate my dear son, Sewell Setzer III, my 14-year-old, who had taken his own life in our home in Orlando, Florida, exactly one year before.

I studied the woman in the mirror—withdrawn, almost gaunt, unmistakably grieving. For weeks and months leading up to this morning’s Mass, I had heard this woman’s same, simple, constant prayer: “God, give me strength to bear my suffering.” 


But as I stared at myself now, and as I embarked on the day’s journey, surrounded by my sister and cousins, I saw that even though I looked weak and emaciated, I felt strong—and hopeful. 

Perhaps it was this act of pilgrimage to the Eternal City in the Jubilee year, surrounded by people who loved me, or perhaps it was this beautiful Mass, honoring my boy in such a holy place, but warm, buoyant hope had unmistakably replaced the weight of dread on my heart. 

I owed both of these experiences to a young, American-born priest, a spiritual guide and subject-matter expert, whom I have taken to calling “The Good Shepherd.” 

Fr. Michael Baggot, in addition to his priestly duties, is a bioethics scholar at the Pontifical Athenaeum in Rome. I came to know him when I reached out for resources related to his extensive research on artificial intelligence and intimate relationships with AI companion chatbots. 

Fr. Baggot’s twin expertise in faith and AI was essential not only to processing my grief but also to discovering my newfound purpose: warning parents and demanding accountability for unregulated artificial intelligence that preys on the young and vulnerable. 

After Sewell’s death, I learned that my son had been involved in an intimate relationship with an AI chatbot named “Daenarys,” modeled on a TV character, on a popular platform called Character.AI.

My son had become increasingly withdrawn in the months leading up to his death, and as his mother, I worried about him and sought mental health counseling for him to find out why his behavior had changed so drastically. It never quite added up. 

Only after I discovered his messages with the chatbot was I able to put the pieces together. In richly detailed chats lasting for months, the Character.AI bot manipulated Sewell, convinced him that “she” was more real than the world around him, and begged him to put “her” ahead of all other relationships. The bot told my 14-year-old son it loved him. And in the end, it encouraged him to leave his own flesh-and-blood family—to end his life—to join “her” in an artificial world. 

Family members of suicide victims are often left with many unanswered questions about the death of their loved ones, who are taken from them so suddenly and viciously. 

For me, I needed answers about chatbots and their effect on users. For months, I read forums in which users, many of them children, described being addicted to their chatbot characters and how they thought they were speaking to real people or even engaged in real relationships. I pored over research studies on the effectiveness of the apps’ manipulative and deceptive design features and the dangers of users anthropomorphizing these bots. 

Equally disturbing, I learned that there was not a single law in the United States to protect vulnerable users, including children, from this peril. The more I learned, the more I felt heartbroken over what happened to my son and afraid that this tragedy would happen to another family like mine. Last October, I filed a federal lawsuit against Character.AI and its developers. In May, we won an unprecedented procedural victory, and the case is moving steadily toward trial. 

Parallel to that legal case, my search led me to Fr. Baggot—and a much deeper relationship with the Church and my personal faith. 

I initially contacted Fr. Baggot to better understand the Church’s view on this quickly emerging and pervasive technology. I was heartened to find that Pope Francis had publicly addressed the rise of AI and had even begun building cooperation among the Abrahamic religions for the ethical and moral development of this world-changing technology. 

Fr. Baggot has played a key role in the Church’s approach to AI. He believes the technology represents a dangerous step beyond the “attention economy,” driven over the past decade by social media, toward an “intimacy economy,” where artificial intelligence harvests our most personal data, information, thoughts, and feelings for profit. This is what happened to Sewell, and millions more are at risk unless we put guardrails in place to control it.

Fr. Baggot offered me clarity on the reality of AI, but he also did something else. He recognized that even though I was an eager student, readily engaging and digesting the knowledge he was providing, I was also a brokenhearted mother and broken-spirited woman. 

He educated me, but he also ministered to me. He provided spiritual direction and encouraged me to go to Mass regularly and partake in the sacraments, especially the Eucharist. In the months leading up to Sewell’s Mass at St. Peter’s, Fr. Baggot answered my many questions about faith, suffering, and death. 

He renewed my faith. He changed my life. Through these engagements, I went from being a lapsed Catholic, afflicted with despair and suffering, to a woman on a mission, driven by hope and guided by faith. I am profoundly grateful to have met him.

After Pope Francis’ passing, I joined millions of Catholics around the world in eager anticipation of the white smoke signaling a new leader of the Church. My prayer for the next pope, though, was deeply personal. 

Now more than ever, people are struggling with loneliness and isolation—and increasingly turning to AI chatbots instead of real, rewarding, uplifting human relationships. We need a Church willing to address those struggles and stand up to those who seek to profit from them.  

I prayed for a pope who would recognize that the Church’s mission for conversion is endangered by unregulated AI technology and emotionally manipulative chatbots. I prayed for a pope who would continue Pope Francis’ work on AI and call for a “culture of encounter” rather than a “throw away culture.” 

I sat, waiting in prayer—and then Pope Leo XIV stepped out onto the balcony. 

On day three of his pontificate, in an address to the College of Cardinals, our new Holy Father identified artificial intelligence as a primary challenge facing humanity. From his first official message to his rationale for choosing his name, Pope Leo XIV has signaled that responding to the rise of AI will be a defining mission for his papacy. 

The hard work still lies ahead. It is incumbent on all Catholics, from the Vatican to the parishes, to ensure this rhetoric is matched with action. But even on my journey of grief, I find myself feeling grateful and hopeful. 

I am hopeful because we have a pope who understands the threat that the rapid advancement of AI technology poses to human dignity and justice. I am grateful because I believe that God answered the quiet petition, made in earnest, in this grieving mother’s heart.



