Texas Attorney General Accuses Meta, Character.ai Of Misleading Kids With Mental Health Claims


Texas Attorney General Ken Paxton has launched an investigation into both Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools,” according to a press release issued Monday.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton is quoted as saying. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

The probe comes a few days after Senator Josh Hawley announced an investigation into Meta following a report that found its AI chatbots were interacting inappropriately with children, including by flirting.

The Texas AG’s office has accused Meta and Character.AI of creating AI personas that present as “professional therapeutic tools, despite lacking proper medical credentials or oversight.”

Among the millions of AI personas available on Character.AI, one user-created bot called Psychologist has seen high demand among the startup’s young users. Meanwhile, Meta doesn’t offer therapy bots for kids, but there’s nothing stopping children from using the Meta AI chatbot or one of the personas created by third parties for therapeutic purposes.

“We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.”

However, many children may not understand — or may simply disregard — such disclaimers. TechCrunch has asked Meta what additional safeguards it takes to protect minors using its chatbots.


In his statement, Paxton also observed that though AI chatbots assert confidentiality, their “terms of service reveal that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development, raising serious concerns about privacy violations, data abuse, and false advertising.”

According to Meta’s privacy policy, Meta does collect prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology.” The policy doesn’t explicitly say anything about advertising, but it does state that information can be shared with third parties, like search engines, for “more personalized outputs.” Given Meta’s ad-based business model, this effectively translates to targeted advertising.

Character.AI’s privacy policy also highlights how the startup logs identifiers, demographics, location information, and more information about the user, including browsing behavior and app usage platforms. It tracks users across ads on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, which it may link to a user’s account. This information is used to train AI, tailor the service to personal preferences, and provide targeted advertising, including sharing data with advertisers and analytics providers.

TechCrunch has asked Meta and Character.AI whether such tracking is done on children, too, and will update this story if we hear back.

Both Meta and Character say their services aren’t designed for children under 13. That said, Meta has come under fire for failing to police accounts created by kids under 13, and Character’s kid-friendly characters are clearly designed to attract younger users. The startup’s CEO, Karandeep Anand, has even said that his six-year-old daughter uses the platform’s chatbots.

That type of data collection, targeted advertising, and algorithmic exploitation is exactly what legislation like KOSA (Kids Online Safety Act) is meant to protect against. KOSA was teed up to pass last year with strong bipartisan support, but it stalled after a major push from tech industry lobbyists. Meta in particular deployed a formidable lobbying machine, warning lawmakers that the bill’s broad mandates would undercut its business model.

KOSA was reintroduced to the Senate in May 2025 by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Paxton has issued civil investigative demands — legal orders that require a company to produce documents, data, or testimony during a government probe — to the companies to determine whether they have violated Texas consumer protection laws.
