Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users.
Paxton says that AI chatbots from either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information. Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors, and requires platforms to offer tools so parents can manage the privacy settings of their children's accounts.
For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to see if either company is violating Texas consumer protection laws. As TechCrunch notes, neither Meta nor Character.AI claims its AI chatbot platform should be used as a mental health tool. That doesn't prevent there from being multiple "Therapist" and "Psychologist" chatbots on Character.AI. Nor does it stop either company's chatbots from claiming they're licensed professionals, as 404 Media reported in April.
"The user-created Characters connected our tract are fictional, they are intended for entertainment, and we person taken robust steps to make that clear," a Character.AI spokesperson said erstwhile asked to remark connected nan Texas investigation. "For example, we person salient disclaimers successful each chat to punctual users that a Character is not a existent personification and that everything a Character says should beryllium treated arsenic fiction."
Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people," the company said. Meta AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Sending people to real resources is good, but ultimately disclaimers are easy to ignore and don't act as much of an obstacle.
With regards to privacy and data use, both Meta's privacy policy and Character.AI's privacy policy acknowledge that data is collected from users' interactions with AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information, and says that information can be used for advertising, among other applications. How either policy applies to children, and fits with Texas' SCOPE Act, seems like it'll depend on how easy it is to create an account.