Two current and two former Meta employees disclosed documents to Congress alleging that the company may have suppressed research on children’s safety, according to a report from The Washington Post.
According to their claims, Meta changed its policies about researching sensitive topics, such as politics, children, gender, race, and harassment, six weeks after whistleblower Frances Haugen leaked internal documents showing that Meta’s own research had found Instagram can harm teen girls’ mental health. These revelations, made public in 2021, kicked off years of congressional hearings on child safety online, an issue that remains a hot topic for governments around the world today.
As part of these policy changes, the report says, Meta proposed two ways that researchers could limit the risk of conducting sensitive research. One proposal was to loop lawyers into their research, protecting their communications from “adverse parties” under attorney-client privilege. Researchers could also write about their findings more vaguely, avoiding terms like “not compliant” or “illegal.”
Jason Sattizahn, a former Meta researcher specializing in virtual reality, told The Washington Post that his boss made him delete recordings of an interview in which a teen claimed that his ten-year-old relative had been sexually propositioned on Meta’s VR platform, Horizon Worlds.
“Global privacy regulations make clear that if information from minors under 13 years of age is collected without verifiable parental or guardian consent, it has to be deleted,” a Meta spokesperson told TechCrunch.
But the whistleblowers claim that the documents they submitted to Congress demonstrate a pattern of employees being discouraged from discussing and researching their concerns about how children under 13 were using Meta’s social virtual reality apps.
“These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on societal issues, including youth safety and well-being,” Meta told TechCrunch.
In a lawsuit filed in February, Kelly Stonelake, a former Meta employee of 15 years, raised concerns similar to those of these four whistleblowers. She told TechCrunch earlier this year that she led “go-to-market” strategies to bring Horizon Worlds to teenagers, international markets, and mobile users, but she felt the app did not have adequate ways to keep out users under 13; she also flagged that the app had persistent issues with racism.
“The leadership team was aware that in one test, it took an average of 34 seconds of entering the platform before users with Black avatars were called racial slurs, including the ‘N-word’ and ‘monkey,’” the lawsuit alleges.
Stonelake has separately sued Meta for alleged sexual harassment and gender discrimination.
While these whistleblowers’ allegations center on Meta’s VR products, the company also faces criticism over how other products, such as AI chatbots, affect minors. Reuters reported last month that Meta’s AI rules previously allowed chatbots to have “romantic or sensual” conversations with children.