Pentagon AI Deals Exclude Anthropic: What Military AI Safety Means Now

On May 1, 2026, the United States Department of Defense announced agreements with eight leading technology companies to deploy artificial intelligence tools across its classified networks. The list included OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, Oracle, and Reflection AI. One name was conspicuously absent: Anthropic. The company that had, until recently, been the only AI provider approved for classified Pentagon work was shut out of the most consequential military AI procurement event in American history. The reason was not technical inferiority. Military personnel have consistently described Anthropic's Claude models as superior to competing offerings. The reason was that Anthropic refused to let the Pentagon use its AI for autonomous lethal warfare and mass surveillance of American citizens without enforceable safety restrictions.
This is not a routine procurement dispute. Anthropic's exclusion from the Pentagon's AI deals marks the first time the United States government has blacklisted a domestic technology company, branding it a "supply chain risk," a designation previously reserved for foreign adversaries, because that company insisted on maintaining safety guardrails for its AI. The standoff has triggered parallel lawsuits, a federal court injunction, a White House intervention, employee protests at Google, a unionization vote at DeepMind, and a national conversation about whether AI companies or the government should determine the limits of military AI use. Meanwhile, the very model at the center of the dispute, Anthropic's Mythos, has demonstrated cybersecurity capabilities so advanced that the National Security Agency is already using it despite the Pentagon's own blacklist.
This blog provides the investigative analysis that standard news coverage has not assembled in one place. It examines the full timeline of the Pentagon's AI procurement battle, the technical reality of what military AI safety guardrails actually mean, the paradox of Mythos and Project Glasswing, the employee revolt at Google that the Pentagon's terms of service have sparked, and the strategic implications for every organization building, deploying, or depending on AI systems. At KriraAI, we track these developments because the decisions being made in Washington right now will define the operational, ethical, and regulatory environment in which every enterprise AI system operates for years to come.
What Happened: The Pentagon's AI Procurement Pivot
The story begins with a contract. In July 2025, Anthropic was awarded a $200 million Department of Defense contract that expanded its existing work with the department. Anthropic's Claude models were already embedded in Palantir's Maven Smart System, an AI platform that U.S. armed forces have used in operational planning. Claude was the only AI model approved for use on the Pentagon's classified networks. Former DOD official Brad Carson, now president of Americans for Responsible Innovation, told CNBC that military personnel viewed Claude as the most reliable AI product available, with the most user-friendly outputs for mission planning.
The conflict escalated in late 2025 when Pentagon CTO Emil Michael, a former Uber executive who had taken on oversight of the department's AI portfolio, began reviewing the terms of Anthropic's contracts. The Defense Department demanded that Anthropic make Claude available for "all lawful purposes," with no restrictions on specific use cases. Anthropic's CEO Dario Amodei refused to remove usage restrictions that prevented Claude from being deployed in fully autonomous weapons systems or used for mass domestic surveillance. Amodei argued that current AI technology is not yet reliable enough to engage targets without a human in the loop, and that mass surveillance of American citizens would violate constitutional principles.
The Blacklisting and Its Aftermath
The dispute became public in late February 2026. Defense Secretary Pete Hegseth declared Anthropic a "supply chain risk" in a post on X, and President Trump ordered all federal agencies to "immediately cease" using Anthropic's technology, with a six-month phase-out period for agencies like the DOD. The supply chain risk designation required the Pentagon and its contractors to discontinue use of Anthropic's commercial AI services across all defense-related operations. Anthropic became the first American company ever to receive a designation historically reserved for companies associated with foreign adversaries like China and Russia.
Anthropic sued the Trump administration in March 2026. On March 27, U.S. District Judge Rita Lin granted a preliminary injunction barring federal agencies from enforcing the ban. In a 43-page ruling, Judge Lin wrote that nothing in the governing statute supports the notion that an American company may be branded a potential adversary for expressing disagreement with the government. However, a separate federal appeals court in Washington, D.C., denied Anthropic's request to block the Pentagon's blacklisting in early April, acknowledging that Anthropic would likely suffer irreparable harm but characterizing the company's interests as primarily financial.
The May 1 Deals
Against this backdrop, the Pentagon announced on May 1 that it had signed agreements with eight companies to deploy AI technology on Impact Level 6 and Impact Level 7 classified networks. IL6 handles secret data, while IL7 covers the most highly classified systems. The Pentagon described these agreements as transforming the military into an "AI first fighting force." More than 1.3 million DOD personnel already use GenAI.mil, the Pentagon's central AI platform, for tasks ranging from research to document drafting. Google had already deployed its Gemini 3.1 Pro model on GenAI.mil in late April.
Pentagon CTO Emil Michael, appearing on CNBC the day of the announcement, took a direct shot at Anthropic. He stated that it was irresponsible to be reliant on any one partner, adding that the partner in question did not want to work with the Pentagon in the way the Pentagon wanted to work with them. The following day, at a Senate Armed Services Committee hearing, Defense Secretary Hegseth called Amodei "an ideological lunatic who shouldn't have sole decision making over what we do." He compared Anthropic's position to Boeing giving the military airplanes and then dictating who could be targeted.
The Pentagon's fiscal 2026 defense budget included $13.4 billion dedicated to AI and autonomy. The One Big Beautiful Bill Act allocated significant additional funding for Pentagon AI and offensive cyber operations. The financial stakes for any company excluded from this market are enormous.
The AI Dimension News Coverage Is Missing
The standard narrative frames this as a simple standoff: Anthropic wanted safety guardrails, the Pentagon said no, and competitors swooped in to take the business. That framing misses the deeper technological and strategic reality. What is actually happening is a three-dimensional collision between the rapid advancement of AI capabilities, the military's demand for operational flexibility, and the absence of any established governance framework for AI in warfare. Understanding each dimension reveals why this dispute matters far beyond the companies involved.
What "All Lawful Purposes" Actually Means for AI
The Pentagon's demand that AI companies agree to "all lawful purposes" deployment sounds reasonable at first glance. Companies like Boeing and Lockheed Martin do not get to dictate how their weapons are used after delivery. But AI systems are fundamentally different from conventional weapons platforms in ways that make this analogy misleading.
A fighter jet has fixed capabilities defined by its physical design. An AI model's capabilities are defined by its training, its prompting, and the systems it is connected to. The same model that drafts briefing documents can, with different prompts and system integrations, be used for target identification, pattern-of-life analysis, predictive surveillance, or autonomous decision support that effectively removes the human from the loop even while technically keeping a human "in" it. The distinction between an AI tool that assists a human analyst and one that makes operational decisions faster than any human can review them is not a policy distinction. It is an architecture and deployment distinction that depends on how the tool is integrated into military workflows.
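That architectural distinction is easy to sketch. The following minimal Python example is purely illustrative (all names are hypothetical, not any vendor's actual API): the model call is identical in both deployments, and only the presence or absence of a human approval gate between model output and action changes what kind of system it is.

```python
from dataclasses import dataclass
from typing import Callable

def model(prompt: str) -> str:
    # Stand-in for any LLM call; same function in every deployment.
    return f"recommended_action_for({prompt})"

@dataclass
class Deployment:
    """Same model; the surrounding integration decides whether a human gates the action."""
    require_human_approval: bool
    approve: Callable[[str], bool]  # human review step; ignored if approval not required

    def run(self, task: str) -> str:
        recommendation = model(task)
        if self.require_human_approval and not self.approve(recommendation):
            return "no action: human reviewer rejected recommendation"
        return f"executed: {recommendation}"

# Identical model, two different systems:
assistant = Deployment(require_human_approval=True, approve=lambda r: False)
autonomous = Deployment(require_human_approval=False, approve=lambda r: True)

print(assistant.run("analyze sensor data"))   # human gate blocks execution
print(autonomous.run("analyze sensor data"))  # same model output drives action directly
```

The point of the sketch is that "human in the loop" is a property of the integration, not of the model: nothing in the model's code distinguishes the assistant from the autonomous system.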
Anthropic's concern was not hypothetical. Claude models were already being used through Palantir's Maven Smart System, an AI platform that the U.S. armed forces have used in operations against Iran. The question of whether AI is making targeting decisions or merely "assisting" them becomes increasingly blurred as these systems process intelligence data at speeds that effectively constrain the range of human decisions to whatever the AI has flagged.
The Military AI Safety Guardrails Paradox
The Pentagon insists that AI is not making lethal decisions. When pressed by Senator Jacky Rosen at the Senate hearing about whether there would always be a human in the loop, Hegseth responded that the U.S. follows the law and humans make decisions. He did not directly answer whether a human would always be in the loop. This distinction matters. "Humans make decisions" and "there will always be a human in the loop for lethal targeting" are two different commitments. The first is a statement about current practice. The second is a commitment about future architecture.
The eight companies that signed the Pentagon's AI deals agreed to "all lawful use" provisions. What none of them have disclosed publicly is whether their contracts include any specific restrictions on autonomous weapons or mass surveillance applications. Whether the tech giants will take up the same ethical concerns that landed Anthropic on the blacklist remains unclear, as multiple news outlets have noted. The companies have been largely silent about the terms of their new contracts.
For KriraAI and every other organization building AI systems for enterprise deployment, this creates a critical precedent. If the standard for government AI procurement is unrestricted use with no vendor-imposed guardrails, that standard will eventually cascade into expectations for commercial and civilian AI deployment. The norms being set in classified Pentagon networks will shape the governance frameworks that all AI systems operate under.
The Mythos Paradox: Too Dangerous to Release, Too Powerful to Ignore
The most technically significant dimension of this story is not the procurement dispute itself but the AI capability that has complicated it beyond anyone's initial expectations. In early April 2026, Anthropic announced Claude Mythos Preview, a frontier model with cybersecurity capabilities that the company described as a step change in what AI can do. The technical reality behind that claim is striking.
What Mythos Can Actually Do
According to Anthropic's red team blog and technical disclosures, Mythos Preview is capable of identifying and exploiting zero-day vulnerabilities in every major operating system and every major web browser. The model found vulnerabilities that had survived decades of human security audits, including a 27-year-old bug in OpenBSD, an operating system specifically designed for security. The model did not just find individual bugs. In one documented case, Mythos wrote a browser exploit that chained together four separate vulnerabilities, constructing a complex JIT heap spray that escaped both renderer and operating system sandboxes.
Beyond open-source software, Mythos proved capable of reverse engineering closed-source binaries, reconstructing plausible source code from stripped executables, and then identifying exploitable vulnerabilities in the reconstructed code. Anthropic reported finding remote denial-of-service attacks, firmware vulnerabilities enabling smartphone rooting, and local privilege escalation chains on desktop operating systems, all generated by the model.
Anthropic restricted access to Mythos to approximately 40 organizations, contending that its offensive cyber capabilities were too dangerous for wider release. Only 12 of those organizations were publicly announced. Anthropic committed up to $100 million in usage credits for defensive security work and donated $4 million to open source security organizations through the Linux Foundation.
Project Glasswing and the Industry Response
Anthropic launched Project Glasswing alongside the Mythos announcement, assembling a consortium of major launch partners including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. The initiative's goal is to use Mythos for defensive security work, scanning critical software infrastructure for vulnerabilities before attackers can exploit them. The fact that Apple, Google, and Microsoft, companies that are otherwise fierce competitors and in some cases actively competing with Anthropic in the AI market, agreed to participate in a consortium led by a company the Pentagon has blacklisted speaks to the severity of the cybersecurity capabilities involved.
Microsoft's Executive Vice President of Cybersecurity Igor Tsyganskiy stated that the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Google called Glasswing an important cross-industry cybersecurity initiative. Cisco's security team, which analyzes over 400 trillion network flows daily, reported that Mythos was already helping strengthen its code.
The NSA Paradox
Here is where the story becomes almost surreal. Despite the Pentagon officially designating Anthropic a supply chain risk, the National Security Agency, which operates under the Department of Defense, has been using Mythos. Axios reported in mid-April 2026 that the NSA was testing Mythos Preview, with Bloomberg confirming that NSA officials were using the model to probe cybersecurity vulnerabilities in Microsoft products and were impressed by its speed and efficiency.
Pentagon CTO Emil Michael, appearing on CNBC on May 1, attempted to reconcile this contradiction by describing Mythos as a "separate national security moment" from the supply chain risk designation. He stated that the government needs to ensure its networks are hardened because Mythos has capabilities that are particular to finding cyber vulnerabilities and patching them. But the logical tension is difficult to escape: the Pentagon is simultaneously arguing in court that using Anthropic's technology threatens national security while its sister agency is actively deploying Anthropic's most powerful model for national security purposes.
Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent in mid-April to discuss Mythos and Anthropic's broader plans. Both sides described the meeting as productive. President Trump subsequently told CNBC that "it's possible" there would be a deal between Anthropic and the DOD, calling the company "very smart" and saying it "could be of great use."
The Employee Revolt: Google, DeepMind, and the Ethics of Military AI
The Pentagon's AI deals have not only created a crisis for Anthropic. They have triggered the most significant employee uprising in the technology industry since the original Project Maven protests at Google in 2018, with implications that extend far beyond any single company.
From Project Maven to Classified Networks
In 2018, approximately 4,000 Google employees signed a petition opposing Project Maven, a Pentagon initiative using AI to analyze drone surveillance footage. Several employees resigned. Google did not renew the contract and published a set of AI principles pledging not to develop weapons or surveillance technology that violates international norms. That era is over.
In February 2025, Google quietly removed the passage from its AI principles that pledged to avoid using AI in weapons or surveillance technologies. A blog post co-authored by DeepMind CEO Demis Hassabis cited the global competition for AI leadership as justification. Human Rights Watch and Amnesty International both condemned the reversal. In December 2025, the Pentagon launched GenAI.mil powered by Google's Gemini chatbot, available to all defense personnel.
When news of Google's classified AI deal leaked in late April 2026, over 600 Google employees signed an open letter to CEO Sundar Pichai demanding that he refuse the Pentagon's terms. More than 100 DeepMind employees signed a separate internal letter demanding that no DeepMind research or models be used for weapons development or autonomous targeting. Google's chief scientist Jeff Dean wrote on X that mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression.
The DeepMind Unionization
On May 5, 2026, Google DeepMind workers in the United Kingdom voted 98 percent to unionize through the Communication Workers Union, becoming the first frontier AI lab workforce to organize collectively. The union demands include ending military AI use, restoring Google's scrapped weapons pledge, and creating an independent ethics body. The CWU announced plans for in person protests and research strikes that would include employees abstaining from work on core products such as the Gemini AI assistant.
One DeepMind researcher told Fortune that the classified deal fundamentally clashes with DeepMind's values. The researcher said the team had taken pride in doing AI for good for a very long time, and that the things they had worked to improve might suddenly be used, with insufficient oversight, in very different ways that harm people. Another researcher expressed concern that the level of independence agentic AI systems can achieve makes giving away a powerful tool while simultaneously giving up any control over its usage particularly dangerous.
But the structural conditions that enabled employee leverage in 2018 no longer exist. The classified AI market is now worth tens of billions of dollars. The Pentagon has demonstrated willingness to retaliate against companies that refuse its terms. Google's competitors have already signed equivalent deals. Google responded to the employee letter by stating that it "proudly" works with the U.S. military, a tone that would have been unthinkable in 2018. And in 2024, Google fired 28 employees who protested Project Nimbus, the $1.2 billion contract providing cloud computing and AI to the Israeli government. The pattern suggests that internal dissent on military AI ethics will be tolerated less, not more, as the financial stakes increase.
AI Autonomous Weapons Policy: The Governance Vacuum
The Anthropic Pentagon dispute has exposed a governance vacuum that extends well beyond any single company's contractual terms. There is currently no binding international law specifically governing autonomous weapons. There is no U.S. federal legislation establishing enforceable standards for AI in military operations. The norms being established right now, through procurement contracts and corporate terms of service, are filling a space that democratic deliberation has not yet addressed.
What Exists and What Does Not
The Department of Defense Directive 3000.09, issued in 2012 and updated in 2023, requires "appropriate levels of human judgment" over the use of force. But "appropriate" is not defined with the specificity needed for AI systems that can process intelligence data, generate targeting recommendations, and present options for human approval faster than any human can meaningfully evaluate them. The concept of "human in the loop" becomes increasingly hollow when the loop is defined by the speed and framing of AI generated recommendations.
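The hollowing-out of the loop is, at bottom, a throughput problem, and the arithmetic is simple to illustrate. The numbers below are hypothetical, chosen only to show the shape of the issue, not drawn from any cited source: whenever the generation rate exceeds human review capacity, the backlog of unreviewed recommendations grows without bound.

```python
# Back-of-envelope sketch with illustrative (hypothetical) numbers:
recs_per_hour = 600        # AI-generated recommendations arriving per hour
review_minutes_each = 4    # minutes of genuine human evaluation per item
analysts = 10              # reviewers available

# Total items the team can meaningfully review per hour.
review_capacity_per_hour = analysts * 60 // review_minutes_each

# Items added to the unreviewed backlog every hour.
backlog_growth_per_hour = recs_per_hour - review_capacity_per_hour

print(review_capacity_per_hour)   # 150 items/hour reviewable
print(backlog_growth_per_hour)    # 450 unreviewed items accumulate each hour
```

At these illustrative rates, most recommendations can never receive genuine evaluation, so "appropriate levels of human judgment" degrades in practice into approving whatever the system has already framed.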
International discussions at the United Nations Convention on Certain Conventional Weapons have failed to produce any binding instrument on autonomous weapons. The only constraint on how AI is used in military operations comes from existing international humanitarian law, which requires distinction between combatants and civilians, proportionality in the use of force, and precaution in attack. How these principles apply to AI assisted targeting is a matter of legal interpretation, not settled doctrine.
The Corporate Terms of Service as Governance
In the absence of legislation, the terms of service that AI companies negotiate with military customers have become the de facto governance framework for military AI. This is precisely what the Anthropic dispute illustrates. Anthropic attempted to write safety restrictions (no autonomous weapons, no mass surveillance) into its contract. The Pentagon rejected those restrictions and retaliated. Every other major AI company accepted the Pentagon's "all lawful purposes" standard.
This means that the boundary between what AI can and cannot do in military operations is now determined entirely by what is "lawful," a category that itself depends on interpretations of existing law that have not been tested against modern AI capabilities. The AI autonomous weapons policy landscape in 2026 is defined more by what companies agree to in classified procurement contracts than by any democratic process.
For enterprise organizations working with KriraAI and other AI providers, the implication is significant. The governance standards being set in military procurement will inevitably influence civilian AI governance. If the norm becomes unrestricted deployment with no vendor-imposed guardrails, commercial and civilian AI deployments will face pressure to adopt the same posture. If the norm becomes responsible deployment with defined restrictions, the opposite will follow. The military AI market is setting the standard for the entire AI industry.
AI Defense Contracts 2026: The Competitive Landscape
The Pentagon's May 1 announcement reshaped the competitive dynamics of the AI defense market in ways that will play out for years.
Who Won and What They Got
The eight companies cleared for classified network deployment are:
OpenAI, which announced its Pentagon deal hours after Hegseth declared Anthropic a supply chain risk. CEO Sam Altman later conceded the timing "looked opportunistic and sloppy."
Google, which deployed Gemini 3.1 Pro on GenAI.mil in late April and agreed to "all lawful purposes" deployment despite the employee backlash.
Microsoft, leveraging its Azure government cloud infrastructure and existing defense relationships.
Amazon Web Services, extending its GovCloud presence into AI model hosting on classified networks.
Nvidia, providing the GPU infrastructure that powers AI inference on classified systems.
SpaceX, through its xAI subsidiary and its satellite communications infrastructure.
Oracle, added to the deal later on May 1 after the initial announcement listed only seven companies.
Reflection AI, a newer startup backed by Nvidia whose Asimov coding agent is being positioned as a government preferred alternative to Claude Code for classified software development.
The inclusion of Reflection AI, a startup, alongside Microsoft, Google, and OpenAI is an extraordinary endorsement and a direct signal that the government is actively building alternatives to Anthropic in environments the company cannot currently access.
Anthropic's Strategic Position
Anthropic is not without leverage. Mythos gives the company something no competitor currently offers: cybersecurity capabilities that the NSA itself has found indispensable. Anthropic's $200 million DOD contract predates the blacklisting. A federal court has enjoined enforcement of the most severe aspects of the ban. The White House has opened separate negotiations. And the company's broader commercial business, including its partnerships with Amazon and its relationships with enterprise customers, remains intact.
But the financial cost is real. The classified AI market represents tens of billions of dollars in current and future spending. Every month Anthropic is excluded, its competitors embed their technology deeper into military workflows, creating switching costs that will make it harder to displace them even if the legal and political situation changes. The Pentagon's explicit goal of avoiding "vendor lock-in" paradoxically creates vendor lock-in for every company except Anthropic.
Project Glasswing Cybersecurity: Implications for Every Organization
Project Glasswing is not just a response to Mythos. It represents a new model for how the technology industry might handle AI capabilities that are simultaneously transformative for defense and dangerous for offense.
The Dual-Use Problem at Scale
The cybersecurity capabilities Mythos demonstrates are the most vivid example yet of the dual-use problem that defines frontier AI. The same model that can find and fix vulnerabilities can find and exploit them. The same capability that strengthens cyber defense can enable cyber offense. Anthropic's decision to restrict Mythos access to approximately 40 organizations, commit $100 million in defensive usage credits, and assemble a cross-industry consortium is one approach to managing this tension. Whether it is sufficient is an open question.
Over 99 percent of the vulnerabilities Mythos has found have not yet been patched. The model is identifying bugs faster than the software industry can fix them. The estimated global cost of cybercrime is around $500 billion annually, and Mythos-class capabilities will change the economics of both attack and defense in ways the industry has not yet fully processed.
For organizations building AI systems, the Glasswing model offers a template that KriraAI and other responsible AI providers should study closely. It demonstrates that AI companies can create genuine value by restricting access to dangerous capabilities while deploying them for defensive purposes. It also demonstrates that industry cooperation on shared security challenges is possible even among fierce competitors.
What This Means for Enterprise Cybersecurity
The practical implication for enterprise organizations is straightforward: AI-powered vulnerability discovery is now operating at a scale and speed that will require fundamental changes to cybersecurity posture. If a model like Mythos can find zero-days in every major operating system and browser, the assumption that traditional security auditing provides adequate protection is no longer valid. Organizations need to prepare for a world where AI-powered attackers can discover and exploit vulnerabilities in hours rather than months.
This is where the intersection of the Pentagon dispute and the Glasswing initiative becomes directly relevant to business strategy. The companies that participate in Glasswing will have early access to AI-powered security scanning capabilities. The organizations that do not will be defending against AI-powered attacks without AI-powered defenses. KriraAI helps enterprises understand and navigate exactly this kind of asymmetric technology landscape, where access to frontier capabilities determines competitive and security outcomes.
What Comes Next: Three Scenarios for Military AI Governance
The Pentagon's AI procurement dispute with Anthropic will not resolve quickly. Three scenarios represent the range of plausible outcomes, each with distinct implications for the AI industry.
Scenario One: The Offramp
Reports indicate the White House is drafting an administrative offramp to bring Anthropic back into the federal fold. Amodei's meeting with White House Chief of Staff Susie Wiles was described as productive by both sides. President Trump's public comments about Anthropic being "very smart" suggest an appetite for resolution. Under this scenario, Anthropic would negotiate modified terms that preserve some safety restrictions while satisfying the Pentagon's demand for operational flexibility. The blacklisting would be reversed, Mythos would be officially integrated into government security operations, and the industry would establish a precedent for negotiated guardrails rather than unrestricted access.
Scenario Two: Permanent Exclusion
Under this scenario, the Pentagon maintains its position. Anthropic remains excluded from classified work. Competitors entrench their positions in military AI. The precedent is set that any AI company attempting to impose safety restrictions on government use will be retaliated against. The effect on the broader AI industry would be chilling: no company would risk imposing safety guardrails on government customers after seeing what happened to Anthropic. The governance vacuum would persist, with corporate terms of service providing no check on military AI deployment.
Scenario Three: Legislative Action
The Anthropic dispute could catalyze congressional attention to military AI governance. The gap between what Congress has authorized ("all lawful purposes") and what the technology enables (systems that can process information faster than humans can review) creates a space for legislation that establishes specific standards for AI in military operations. The AI autonomous weapons policy debate that has stalled at the international level could advance at the national level if the Anthropic case demonstrates the inadequacy of the current framework.
The most likely outcome is some combination of the first and third scenarios: a near term political resolution that brings Anthropic back into the fold, followed by longer term legislative and regulatory efforts to establish standards that do not depend on individual companies' willingness to impose their own restrictions.
Conclusion: The Precedent Being Set Right Now
Three insights from this analysis deserve emphasis. First, the Pentagon's AI procurement dispute with Anthropic is not a business story about contract terms. It is a governance story about who determines the boundaries of AI use in warfare, and the answer being established right now is that the government, not the companies that build the technology, will set those boundaries without negotiation. Second, the Mythos paradox, where the NSA uses Anthropic's most powerful model while the Pentagon maintains its blacklist, reveals that capability will ultimately override political positioning. AI systems that provide genuine strategic advantage will be adopted regardless of bureaucratic designations. Third, the employee revolt at Google and the DeepMind unionization vote demonstrate that the people who build frontier AI systems increasingly view themselves as stakeholders in how those systems are used, even as their structural leverage to influence those decisions has diminished.
These developments collectively signal that AI governance is moving from the realm of corporate policy and academic debate into the arena of national security imperatives, federal court decisions, and labor organizing. The frameworks that emerge from this period will shape how AI is deployed in military, commercial, and civilian contexts for a generation. Every organization building or deploying AI needs to understand these dynamics not as distant policy debates but as immediate operational realities.
KriraAI builds production AI systems for enterprises with the understanding that the technology landscape is shaped as much by governance, policy, and institutional dynamics as by technical capability. The events of the past three months, from the Anthropic blacklisting to the Mythos breakthrough to the Google employee revolt, demonstrate that building effective AI requires understanding the full context in which that AI operates. KriraAI helps organizations navigate exactly this complexity, ensuring that AI systems are designed for the real world with all its legal, ethical, and strategic dimensions. If your organization is working to understand how the rapidly evolving military and governance landscape for AI affects your technology strategy, we invite you to explore how KriraAI can help you build AI systems that are ready for the world taking shape right now.
FAQs
Why did the Pentagon designate Anthropic a "supply chain risk"?
The Pentagon designated Anthropic a "supply chain risk" in February 2026 after the company refused to agree to "all lawful purposes" terms for its Claude AI models. Specifically, Anthropic insisted on maintaining restrictions that would prevent Claude from being used in fully autonomous weapons systems operating without meaningful human oversight, and from being deployed for mass domestic surveillance of American citizens. The Pentagon viewed these restrictions as unacceptable limitations on operational flexibility. Defense Secretary Pete Hegseth compared them to Boeing selling the military aircraft while dictating who could be targeted with them. Anthropic CEO Dario Amodei argued that current AI technology is not reliable enough for fully autonomous lethal operations and that mass surveillance of Americans violates constitutional principles. The blacklisting made Anthropic the first American company to receive a supply chain risk designation, a label previously reserved for companies associated with foreign adversaries. A federal court subsequently issued an injunction blocking the most severe aspects of the ban, finding that the government's rationale was likely unconstitutional.
What is Claude Mythos Preview, and why does it matter?
Claude Mythos Preview is a frontier AI model developed by Anthropic that demonstrates unprecedented capabilities in finding and exploiting software vulnerabilities. According to Anthropic's technical disclosures, Mythos can identify and exploit zero-day vulnerabilities in every major operating system and every major web browser. The model has found thousands of high-severity vulnerabilities, including bugs that had survived decades of human security audits. In one documented case, Mythos chained together four separate vulnerabilities into a sophisticated browser exploit that escaped both renderer and operating system sandboxes. Anthropic restricted access to approximately 40 organizations and launched Project Glasswing, a defensive security consortium with partners including Apple, Google, Microsoft, and Nvidia, committing up to $100 million in usage credits. The National Security Agency has been testing Mythos for vulnerability discovery despite the Pentagon's blacklisting of Anthropic, creating a significant contradiction in the government's position. Mythos represents a watershed moment for cybersecurity because it demonstrates that AI models can now discover and exploit software flaws faster than the global software industry can patch them.
Which companies signed the Pentagon's AI agreements, and what did they agree to?
Eight companies signed agreements with the Pentagon on May 1, 2026, to deploy AI technology on Impact Level 6 and Impact Level 7 classified networks: OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX (through its xAI subsidiary), Oracle, and Reflection AI. Reflection AI is a newer startup backed by Nvidia whose inclusion alongside industry giants represents a notable endorsement. These AI systems will be deployed through GenAI.mil, the Pentagon's central AI platform, which more than 1.3 million Department of Defense personnel already use. All eight companies agreed to "all lawful purposes" terms of use, the same condition that Anthropic refused to accept. The agreements cover deployment in the Pentagon's most highly classified network environments. Anthropic was the only major AI company excluded from the deal. The Pentagon described these agreements as accelerating the transformation toward an "AI first fighting force."
How do the Pentagon's procurement terms affect the broader AI industry?
The Pentagon's procurement standards are establishing norms that will cascade throughout the AI industry. When the world's largest technology buyer demands unrestricted AI deployment with no vendor-imposed safety restrictions, that standard influences how AI companies approach all their customers. If the precedent is that AI companies cannot impose ethical boundaries on how their technology is used without risking government retaliation, the incentive structure for the entire industry shifts toward compliance over caution. For enterprise organizations, this means that the governance frameworks for AI in business operations, healthcare, finance, and other sectors will be shaped by whatever standards emerge from the military AI procurement market. Organizations working with AI providers like KriraAI should actively monitor these developments and ensure their own AI governance frameworks do not assume that vendor-imposed safety restrictions will persist in a market environment where such restrictions have been penalized.
What is Project Glasswing?
Project Glasswing is a cybersecurity initiative launched by Anthropic in April 2026 that brings together 12 major technology companies and over 40 additional organizations to use Claude Mythos Preview for defensive security work. The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. The initiative is a direct response to Mythos Preview's demonstrated ability to find exploitable zero-day vulnerabilities at industrial scale, including bugs in every major operating system and web browser. The fact that direct competitors such as Apple, Google, and Microsoft are cooperating under Anthropic's leadership reflects the severity of the cybersecurity threat that Mythos-class capabilities represent. Anthropic committed up to $100 million in model usage credits and $4 million in donations to open source security organizations. The project also prioritizes open source software security, recognizing that open source code constitutes the vast majority of modern software infrastructure. Glasswing represents a new model for managing dual-use AI capabilities through industry cooperation rather than either unrestricted release or total suppression.
Founder & CEO
Divyang Mandani is the CEO of KriraAI, driving innovative AI and IT solutions with a focus on transformative technology, ethical AI, and impactful digital strategies for businesses worldwide.