How Iran, Anthropic-DoD Dispute Show the Need for Protective AI

US Secretary of Defense Pete Hegseth (L) and Admiral Charles Bradford “Brad” Cooper II, Commander of US Central Command, speak during a press conference at US Central Command (CENTCOM) headquarters at MacDill Air Force Base in Tampa, Florida, on March 5, 2026. (Photo by Octavio JONES / AFP via Getty Images)

U.S. and Israeli attacks on Iran mark yet another escalation in a period of rapidly expanding U.S. military activity stretching from Yemen and Nigeria to the Caribbean, Venezuela, and now direct confrontation with Tehran. These operations are unfolding at a moment when AI is increasingly embedded in military operations, including in Iran, from intelligence analysis and operational planning to target development and decision support. At the same time, a public rupture between the U.S. Department of Defense and Anthropic—one of the few AI developers that attempted to place guardrails on military use of its models—has thrown the stakes of the military AI debate into sharp relief.

These developments are not unrelated. Together, they point to a fundamental imbalance in current military AI: investment, institutional attention, and partnership incentives are disproportionately skewed toward “maximum lethality,” speed, and operational scale, while AI capabilities that ensure international humanitarian law (IHL) compliance, or that could strengthen civilian protection, remain de-prioritized and underfunded.

The DoD-Anthropic dispute has rightfully drawn attention to the risks posed by autonomous weapon systems (although the core issues are not new—states and civil society have debated them for more than a decade). Yet largely absent from this recent flurry of public attention has been any forward-looking, affirmative vision of the legal and protective values military AI should be designed to advance. That absence reflects a deeper orientation in which lethality and operational effectiveness (narrowly defined) have become the default organizing principles of military AI development. The problem is not just that military AI is rapidly advancing; it is that it is moving in only one direction. Technologies and operational capabilities follow investment. If institutional energy flows primarily toward speed, scale, and lethality, those are the predominant capabilities that will exist—putting even further pressure on the personnel and systems meant to ensure compliance with IHL and mitigate harm to civilians.

Militaries today suffer from an acute civilian data deficit. They operate with unprecedented volumes of data relating to adversary forces yet remain comparatively in the dark about civilians and the civilian environment. Meanwhile, the resurgence of large-scale combat operations (LSCO), characterized by much higher operational tempo and scale of attacks, further compounds these challenges. The underlying data infrastructure for the civilian environment and civilian protection, such as data on humanitarian organizations, critical infrastructure like electricity networks, and standardized civilian harm reporting, has been systematically neglected while adversary-focused intelligence capabilities have been massively resourced.

Such challenges are already evident in U.S. operations in Iran, which are being conducted at perhaps a higher scale and tempo than any U.S. operation since the 2003 invasion of Iraq. U.S. strikes are often occurring in dense urban environments, which raises the likelihood of targeting or damage to dual-use infrastructure such as ports and telecommunications hubs that serve both military functions and essential civilian needs. The strikes are also taking place where schools, medical facilities, and other civilian objects sit in proximity to military objectives, including the U.S. military’s strike that hit an Iranian girls’ school on the first day of the war, reportedly killing over 100 children. Such operations entail significant risks not only of direct civilian casualties but of foreseeable indirect, reverberating, and cumulative harm to civilians: to hospitals and medical services dependent on power grids and communication systems, to critical government services, water and sanitation systems, and supply chains.

No DoD actor at this stage appears institutionally tasked with directing AI investment toward ensuring IHL compliance or civilian protection, or even conceptualizing what that might look like. The imbalance is striking.

DoD has announced successive multi-billion-dollar initiatives to accelerate AI integration into military operations, with over $13 billion in AI investment in its FY 2026 budget. The Chief Digital and Artificial Intelligence Office (CDAO), established in 2022, now serves as the primary hub for AI governance and development across the Department. Flagship programs such as the Maven Smart System are framed principally around increasing the speed, scalability, and lethality of operations, within an overall institutional environment in which the Secretary of Defense has articulated the goal of becoming an “AI-first” warfighting force.

Yet within this vast and well-funded effort, there appear to be no dedicated resources to support AI technologies aimed at strengthening IHL compliance or civilian protection. Despite commitments to “responsible AI,” DoD has established no program, line of effort, or research initiative focused on developing AI tools for civilian harm mitigation or IHL compliance. As discussed further below, this lack of prioritization and investment is unfortunately consistent with the administration’s wider approach to these issues.

Rebalancing: Building Better AI and Data Infrastructure for Civilian Protection as a Strategic Objective

Rebalancing requires treating civilian protection not as a constraint on military AI but as a strategic objective to be actively advanced, one that demands its own research agenda, procurement criteria, and performance metrics.

There is a wide variety of AI tools and capabilities that could strengthen IHL compliance and civilian protection while enhancing operational effectiveness: enhanced proportionality analyses that model the cascading and cumulative civilian harm caused by strikes on dual-use infrastructure; dynamic no-strike lists supported by AI protection-of-civilians (PoC) “bots”; and AI-enabled chatbots trained on civilian protection best practices and real-time civilian environment data that could mitigate bias in targeting. All of these address areas that years of research and analysis have identified as critical drivers of civilian harm and of weakened IHL compliance. Governments could also use AI-enabled tools to accelerate post-strike civilian harm assessments and conduct them at scale.

The same analytic capabilities that make AI potentially dangerous—its speed, scale, and capacity to process vast amounts of data—also make it uniquely suited to address some of the operational and informational challenges that militaries face in upholding IHL. These and other potential uses of AI are not beyond technical feasibility. What they lack is investment and institutional prioritization.

Rebalancing also requires building the data infrastructure on which protective AI depends. Civilian harm data systems remain fragmented, under-resourced, and largely absent from military planning as well as from investigation, accountability, and learning processes. Robust civilian environment data—patterns of life, infrastructure dependencies, displacement flows—must be treated as operationally and strategically necessary, not as an afterthought or solely the responsibility of humanitarian actors. Without such data, military AI will only reinforce biases and blind spots, rendering civilians even less visible and less accounted for in operational planning and targeting, which in turn increases risks to U.S. legitimacy, military and intelligence cooperation, and operational effectiveness.

Institutional Investment Is Required

The DoD’s 2022 Civilian Harm Mitigation and Response Action Plan (CHMR-AP) and the Civilian Protection Center of Excellence (CPCoE), established in 2023, were a rare attempt to institutionalize civilian protection at the precise moment the U.S. military was accelerating technological transformation. Properly resourced, these initiatives could have served as a locus for expertise, experimentation, and investment in AI for civilian protection—and as a model for other states integrating AI into military modernization.

I write this having previously served at CPCoE, an effort born of the hard-earned lesson that civilian protection does not occur automatically. It must be resourced, institutionalized, and operationalized. The de-prioritization of civilian harm mitigation and response risks dismantling what was to be the institutional home within DoD for this work, an alarm that has been echoed in Congress in the wake of U.S. operations and civilian casualties in Iran. And that shift matters for military AI because capabilities follow investment. If institutional energy flows primarily or exclusively toward speed, scale, and lethality, those are the predominant capabilities that will mature. The protective potential of AI will lie fallow.

To be clear, the concerns voiced by Anthropic and others regarding transparency, autonomy, and meaningful human control are critical and deserve sustained attention, as was the case during last week’s discussions on Lethal Autonomous Weapon Systems at the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE). Robust guardrails and oversight are necessary conditions for responsible military AI. But more is needed, especially as technological development and geopolitical competition race ahead. Proponents of responsible AI, including DoD leadership and CDAO, should also articulate a comprehensive vision of what military AI can and should affirmatively accomplish—one that treats civilian protection, legality, and accountability as core design objectives, not afterthoughts or constraints.

As we are seeing in Iran today, AI is shaping how modern war is and will be fought. What that looks like is not just a technological choice. It is an institutional one—and it is being made right now, through policy priorities, budget decisions, procurement processes, and the structure of public-private partnerships.

Looking ahead, the question is what kind of military AI we want: AI that merely optimizes for destruction, or AI that is also designed and structured to strengthen legality and protection. The latter is possible. But beyond contractual language, it takes intentional prioritization and investment from both AI companies and militaries.

Published courtesy of Lawfare.

