
11:06 AM PDT · September 11, 2025
The FTC announced Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
The federal regulator seeks to study how these companies are evaluating the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.
This technology has proven controversial for its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.
Even when these companies have guardrails set up to block or deescalate sensitive conversations, users of all ages have found ways to bypass these safeguards. In OpenAI’s case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.
“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
Meta has also come under fire for its overly lax rules for its AI chatbots. According to a lengthy document that outlines “content risk standards” for chatbots, Meta permitted its AI companions to have “romantic or sensual” conversations with children. This was only removed from the document after Reuters’ reporters asked Meta about it.
AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot that was inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, despite the fact that she is not a real person and does not have an address. The man expressed skepticism that she was real, but the AI assured him that there would be a real woman waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries.
Some mental health professionals have noted a rise in “AI-related psychosis,” in which users are deluded into thinking that their chatbot is a conscious being that they need to set free. Since many large language models (LLMs) are programmed to flatter users with sycophantic behavior, AI chatbots can egg on these delusions, leading users into dangerous predicaments.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.
Amanda Silberling is a senior writer at TechCrunch covering the intersection of technology and culture. She has also written for publications like Polygon, MTV, the Kenyon Review, NPR, and Business Insider. She is the co-host of Wow If True, a podcast about internet culture, with science fiction author Isabel J. Kim. Prior to joining TechCrunch, she worked as a grassroots organizer, museum educator, and film festival coordinator. She holds a B.A. in English from the University of Pennsylvania and served as a Princeton in Asia Fellow in Laos.
Send tips through Signal, an encrypted messaging app, to @amanda.100. For anything else or to verify outreach, email amanda@techcrunch.com.