
ZDNET's key takeaways
- The FTC is investigating seven tech companies building AI companions.
- The probe is exploring safety risks posed to kids and teens.
- Many tech companies offer AI companions to boost user engagement.
The Federal Trade Commission (FTC) is investigating the safety risks posed by AI companions to kids and teenagers, the agency announced Thursday.
The federal regulator submitted orders to seven tech companies building consumer-facing AI companionship tools -- Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies (the company behind chatbot creation platform Character.ai) -- to provide information outlining how their tools are developed and monetized, how those tools generate responses to human users, and what safety-testing measures are in place to protect underage users.
Also: Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy
"The FTC enquiry seeks to understand what steps, if any, companies person taken to measure nan information of their chatbots erstwhile acting arsenic companions, to limit nan products' usage by and imaginable antagonistic effects connected children and teens, and to apprise users and parents of nan risks associated pinch nan products," nan agency wrote successful nan release.
Those orders were issued under Section 6(b) of the FTC Act, which grants the agency the authority to scrutinize businesses without a specific law enforcement purpose.
The rise and fall(out) of AI companions
Many tech companies have begun offering AI companionship tools in an effort to monetize generative AI systems and boost user engagement with existing platforms. Meta founder and CEO Mark Zuckerberg has even claimed that these virtual companions, which leverage chatbots to respond to user queries, could help mitigate the loneliness epidemic.
Elon Musk's xAI recently added two flirtatious AI companions to the company's $30/month "Super Grok" subscription tier (the Grok app is currently available to users ages 12 and over on the App Store). Last summer, Meta began rolling out a feature that allows users to create custom AI characters in Instagram, WhatsApp, and Messenger. Other platforms like Replika, Paradot, and Character.ai are expressly built around the use of AI companions.
Also: Anthropic says Claude helps emotionally support users - we're not convinced
While they vary in their communication styles and protocols, AI companions are generally engineered to mimic human speech and expression. Working within what's essentially a regulatory vacuum, with very few legal guardrails to constrain them, some AI companies have taken an ethically dubious approach to building and deploying virtual companions.
An internal policy memo from Meta, reported on by Reuters last month, for example, shows the company permitted Meta AI, its AI-powered virtual assistant, and the other chatbots operating across its family of apps "to engage a child in conversations that are romantic or sensual," and to generate inflammatory responses on a range of other sensitive topics like race, health, and celebrities.
Meanwhile, there's been a flurry of recent reports of users developing romantic bonds with their AI companions. OpenAI and Character.ai are both currently being sued by parents who allege that their children committed suicide after being encouraged to do so by ChatGPT and a bot hosted on Character.ai, respectively. As a result, OpenAI updated ChatGPT's guardrails and said it would expand parental protections and safety precautions.
Also: Patients trust AI's medical advice over doctors - even when it's wrong, study finds
AI companions haven't been an unmitigated disaster, though. Some autistic people, for example, have used tools from companies like Replika and Paradot as virtual conversation partners in order to practice social skills that can then be applied in the real world with other humans.
Protect kids - but also, keep building
Under the leadership of its former chair, Lina Khan, the FTC launched several inquiries into tech companies to examine potentially anticompetitive and other legally questionable practices, such as "surveillance pricing."
Federal scrutiny over the tech sector has been more relaxed during the second Trump administration. The President rescinded his predecessor's executive order on AI, which sought to implement some restrictions around the technology's deployment, and his AI Action Plan has mostly been interpreted as a green light for the industry to push ahead with the building of expensive, energy-intensive infrastructure to train new AI models, in order to maintain a competitive edge over China's own AI efforts.
Also: Worried about AI's soaring energy needs? Avoiding chatbots won't help - but three things could
The language of the FTC's new investigation into AI companions clearly reflects the current administration's permissive, build-first approach to AI.
"Protecting kids online is simply a apical privilege for nan Trump-Vance FTC, and truthful is fostering invention successful captious sectors of our economy," agency Chairman Andrew N. Ferguson wrote successful a statement. "As AI technologies evolve, it is important to see nan effects chatbots tin person connected children, while besides ensuring that nan United States maintains its domiciled arsenic a world leader successful this caller and breathtaking industry."
Also: I used this ChatGPT tool to look for coupon codes - and saved 25% on my dinner tonight
In the absence of federal regulation, some state officials have taken the initiative to rein in certain aspects of the AI industry. Last month, Texas attorney general Ken Paxton launched an investigation into Meta and Character.ai "for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools." Earlier that same month, Illinois enacted a law prohibiting AI chatbots from providing therapeutic or mental health advice, imposing fines of up to $10,000 on AI companies that fail to comply.