Anthropic on Tuesday released a preview of its newest frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its “most powerful” yet.
The model’s limited debut is part of a new security initiative, dubbed Project Glasswing, in which more than 40 partner organizations will deploy the model for the purposes of “defensive security work” and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the preview will be used to scan both first-party and open-source software systems for code vulnerabilities, the company said.
Anthropic claims that, over the past few weeks, Mythos has identified “thousands of zero-day vulnerabilities, many of them critical.” Many of the vulnerabilities are one to two decades old, the company added.
Mythos is a general-purpose model in Anthropic’s Claude AI lineup that the company claims has strong agentic coding and reasoning skills. Anthropic’s frontier models are considered its most sophisticated and highest-performing models, designed for more complex tasks, including agent-building and coding.
The partner organizations previewing Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will eventually share what they’ve learned from using the model so that the rest of the tech industry can benefit from it. The preview is not going to be made generally available, Anthropic said.
Anthropic also claims that it has engaged in “ongoing discussions” with federal officials about the use of Mythos, though one would have to imagine that those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon branded the AI lab a supply-chain risk over Anthropic’s refusal to allow autonomous targeting or surveillance of U.S. citizens.
News of Mythos was originally leaked in a data security incident reported last month by Fortune. A draft blog post about the model (then called “Capybara”) was left in an unsecured cache of documents available on a publicly accessible data lake. The leak, which Anthropic subsequently attributed to “human error,” was originally spotted by security researchers. “‘Capybara’ is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful,” the leaked document said, adding later that it was “by far the most powerful AI model we’ve ever developed,” according to the report.
In the leak, Anthropic claimed that its new model far exceeded the performance of its currently public models in areas like “software coding, general reasoning, and cybersecurity,” and that it could potentially pose a cybersecurity threat if weaponized by bad actors to find bugs and exploit them (rather than fix them, which is how Mythos will be deployed).
Last month, the company accidentally exposed about 2,000 source code files and over half a million lines of code via a mistake it made in the launch of version 2.1.88 of its Claude Code software package. The company then accidentally caused thousands of code repositories on GitHub to be taken down as it attempted to clean up the mess.
Lucas is a senior writer at TechCrunch, where he covers artificial intelligence, consumer tech, and startups. He previously covered AI and cybersecurity at Gizmodo. You can contact Lucas by emailing lucas.ropek@techcrunch.com.