AI in Defence: Why Governance, Not Automation, Won the Contract

By Laurel Papworth | March 4, 2026

A recent standoff between Anthropic and the U.S. Department of Defence wasn’t really about patriotism, politics, or even procurement — it was about where judgement lives when systems start making decisions at scale. Anthropic refused certain uses of its model Claude, drawing a clear ethical boundary around surveillance and autonomous targeting. OpenAI stepped in and won the contract not by dismissing those concerns, but by translating them into infrastructure: cloud control, audit trails, decision hierarchies, safety layers, contractual enforcement. The deeper question isn’t which company was right. It’s what happens to accountability when optimisation engines sit inside institutions that carry legal and moral weight. Automation doesn’t remove responsibility; it rearranges it, reassigns it, outsources it to humans who may not have agency within the system. And if leaders don’t design the escalation paths, oversight structures, and governance mechanisms deliberately, the system will optimise first and explain later. This case is less about defence and more about the future architecture of power.

YouTube video: OpenAI vs Anthropic for the US Department of Defence/War contract

My take on the military AI contract.

Hello, my name is Laurel Papworth, and I want to walk through the recent issue between Anthropic and the United States Department of Defence, and then OpenAI’s response, which ultimately secured the contract.

Anthropic refused to allow certain uses of its model.

Specifically, they drew a line around mass surveillance of American citizens and fully autonomous agentic warfare. In other words, you cannot simply hand decision-making over to the system and say, “Do whatever you want.” They framed this as a matter of democracy, privacy, and accountability.

It’s a position many people would find understandable. But while Anthropic articulated the problem clearly, they didn’t offer a structural solution that met the Department’s operational demands. The Department responded strongly, indicating that if Anthropic maintained those constraints, its models could be excluded not just from Defence but potentially from other government departments.

On the same day, OpenAI stepped in and signed an agreement to operate within classified environments. What’s interesting is that OpenAI’s “red lines” were not dramatically different. They also stated no mass domestic surveillance, no autonomous weapons targeting, and no high-stakes automated decision-making. The difference was architectural.

OpenAI proposed a layered safeguards approach.

First, the models would run on OpenAI’s infrastructure (essentially within Microsoft Azure OpenAI Cloud) rather than being installed privately or at the edge. That means the provider retains oversight of inputs, outputs, telemetry, and execution context. There are audit logs. There is real-time monitoring. There are dashboards. It is not an invisible black box sitting inside a private facility.
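
To make that oversight concrete, here is a minimal Python sketch of the audit-trail pattern: every call to a hosted model is logged with who asked, what was asked, what came back, and in what execution context. The field names, log format, and stand-in model function are illustrative assumptions, not OpenAI’s actual API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One immutable row in the provider-side audit trail."""
    request_id: str
    timestamp: float
    user: str          # a named individual, never "system"
    prompt: str
    response: str
    context: dict      # execution context: environment, purpose, etc.

def audited_call(model_fn, user: str, prompt: str, context: dict,
                 log_path: str = "audit.jsonl") -> str:
    """Call a model and append an audit record before returning the output.

    model_fn is any callable str -> str; swap in a real hosted-API client.
    """
    response = model_fn(prompt)
    record = AuditRecord(
        request_id=str(uuid.uuid4()),
        timestamp=time.time(),
        user=user,
        prompt=prompt,
        response=response,
        context=context,
    )
    with open(log_path, "a") as f:  # append-only: no silent rewrites
        f.write(json.dumps(asdict(record)) + "\n")
    return response

# Example with a stand-in model:
echo_model = lambda p: f"[model output for: {p}]"
print(audited_call(echo_model, user="analyst.jsmith",
                   prompt="Summarise report 7",
                   context={"env": "classified-enclave", "purpose": "analysis"}))
```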

Second, there must be a clear personnel escalation hierarchy: what corporate settings call an escalation chart, and what in military settings resembles the chain of command. Accountability is explicitly human. The AI may optimise toward a goal, but it does not hold judgement, shame, blame, or legal responsibility. You cannot sue a model. Responsibility remains with named individuals inside institutional structures.

This is the part many organisations underestimate. Automation does not eliminate accountability; it relocates it.

In high-stakes environments (particularly Defence), decisions escalate. An operator reports to a supervisor, who reports upward. Legal and ethical oversight sits within that chain. AI outputs must be tied back into those same authority structures.
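
As a toy illustration of that principle, the sketch below routes an AI recommendation up a chain of named humans until it reaches someone with the authority to decide. The names, risk levels, and the chain itself are invented for the example; a real chain comes from the org chart or chain of command.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Approver:
    name: str        # a named individual, not a role alias
    authority: int   # maximum risk level this person may sign off on

# Illustrative chain, ordered from operator upward.
CHAIN = [
    Approver("operator.alee", authority=1),
    Approver("supervisor.bkhan", authority=2),
    Approver("legal.review.cdoe", authority=3),
]

def escalate(risk_level: int) -> Optional[Approver]:
    """Walk the chain upward and return the first human authorised to decide.

    The model never appears in the chain: it can recommend, not approve.
    """
    for approver in CHAIN:
        if approver.authority >= risk_level:
            return approver
    return None  # no one can approve: the action is blocked, not auto-run

decision_maker = escalate(risk_level=2)
print(decision_maker.name if decision_maker else "BLOCKED: outside the chain")
```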

Third, OpenAI emphasised what is often called the safety stack: prompt filters, behavioural constraints, output sanitisation layers, anomaly monitoring. These are standard components in system cards and AI integration documentation, but here they were contractually reinforced.
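
Here is a hedged sketch of what such a safety stack can look like in code: a prompt filter in front of the model, an output sanitiser behind it, and an anomaly monitor watching both. The prohibited terms, redaction rule, and threshold are placeholders; real deployments define these in system cards and contracts.

```python
import re

PROHIBITED = re.compile(r"\b(domestic surveillance|autonomous targeting)\b", re.I)

def prompt_filter(prompt: str) -> str:
    """Layer 1: refuse prompts that enter contractually prohibited domains."""
    if PROHIBITED.search(prompt):
        raise PermissionError("Prompt touches a prohibited domain; escalate to a human.")
    return prompt

def sanitise_output(text: str) -> str:
    """Layer 2: redact obvious sensitive tokens before output leaves the system."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)  # SSN-shaped strings

def anomaly_check(prompt: str, response: str) -> None:
    """Layer 3: flag suspicious interactions for human review, not auto-blocking."""
    if len(response) > 10_000:
        print(f"ANOMALY: oversized response for prompt {prompt!r}; logged for review.")

def safe_call(model_fn, prompt: str) -> str:
    clean_prompt = prompt_filter(prompt)
    response = sanitise_output(model_fn(clean_prompt))
    anomaly_check(clean_prompt, response)
    return response

print(safe_call(lambda p: f"[output for: {p}]", "Summarise logistics report 12"))
```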

Critically, the provider retained discretion. If the model is pushed into prohibited domains, OpenAI can shut the service down. There are audit requirements. There are compliance reviews. There are penalties for breach. These are not soft “please comply” clauses; they are enforceable conditions.
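
Provider discretion of that kind is essentially a circuit breaker. The sketch below shows the shape of the idea, with an invented breach limit; the actual enforcement terms live in the contract, not in a few lines of Python.

```python
class ServiceGate:
    """Provider-side circuit breaker: repeated policy breaches suspend service."""

    def __init__(self, breach_limit: int = 3):
        self.breaches = 0
        self.breach_limit = breach_limit
        self.active = True

    def record_breach(self, detail: str) -> None:
        self.breaches += 1
        print(f"COMPLIANCE: breach recorded ({detail}); total={self.breaches}")
        if self.breaches >= self.breach_limit:
            self.active = False  # provider discretion: shut the service down
            print("SERVICE SUSPENDED pending compliance review.")

    def call(self, model_fn, prompt: str) -> str:
        if not self.active:
            raise RuntimeError("Service suspended; contractual review required.")
        return model_fn(prompt)

gate = ServiceGate(breach_limit=2)
gate.record_breach("prompt touched prohibited domain")
gate.record_breach("audit log tampering detected")
# gate.call(...) now raises RuntimeError until a compliance review clears it.
```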

That has implications. Cloud control concentrates power with the provider. Contractual enforcement is not self-executing; it requires monitoring. Safety mechanisms can be bypassed if humans decide to override them. And humans themselves can become the weakest link (not necessarily through malice, but through rubber-stamping, fatigue, or poor system design).

What this case surfaces is a larger tension many organisations are approaching. There is a strong push toward full automation. But systems optimise toward defined goals. Humans hold competing tensions: speed versus compliance, cost versus reputation, short-term gain versus long-term stability. AI does not naturally hold those tensions in the way institutions must.

Delegating routine tasks (even weaponised processes) does not mean delegating responsibility.

Technical constraints, institutional controls, contractual enforcement, and operational transparency form a governance bundle. Without that bundle, optimisation can drift beyond intended boundaries.
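
One way to see the bundle is as a single deployment config in which all four pillars must be present before the system is allowed to run. Every key name below is an illustrative assumption, not a real contract schema.

```python
# A sketch of the "governance bundle" as one deployment config.
GOVERNANCE_BUNDLE = {
    "technical_constraints": {"prompt_filter": True, "output_sanitiser": True},
    "institutional_controls": {"escalation_chain": "org-chart-v7", "human_signoff": True},
    "contractual_enforcement": {"audit_required": True, "breach_limit": 3},
    "operational_transparency": {"audit_log": "audit.jsonl", "realtime_dashboard": True},
}

def bundle_complete(bundle: dict) -> bool:
    """Optimisation may only proceed when all four pillars are configured."""
    pillars = ("technical_constraints", "institutional_controls",
               "contractual_enforcement", "operational_transparency")
    return all(bundle.get(p) for p in pillars)

assert bundle_complete(GOVERNANCE_BUNDLE)
```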

Initially, I saw this as Anthropic good, OpenAI bad. On closer inspection, Anthropic framed the problem clearly. OpenAI translated that problem into governance architecture that institutions could operationalise. In environments like Defence, where stakes are existential, that architecture matters.

Let me repeat: you do not outsource judgement to a system. You remain vigilant:

  • You surface risk earlier.
  • You maintain escalation paths.
  • You tie automated outputs back to legal accountability.

That’s not a military issue. That’s an AI governance issue for every serious organisation.

Resources for Military AI (Department of Defence)

  • REUTERS: Anthropic cannot accede to Pentagon’s request in AI safeguards dispute, CEO says: https://www.reuters.com/sustainability/society-equity/anthropic-rejects-pentagons-requests-ai-safeguards-dispute-ceo-says-2026-02-26
  • ASIS (Security Management) on the Anthropic refusal: https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/Anthropic-Refusal
  • ANTHROPIC statement: https://www.anthropic.com/news/statement-department-of-war
  • OPENAI statement: https://openai.com/index/our-agreement-with-the-department-of-war/
  • ORACLE relationship with the DoW (AI, data and infrastructure): https://www.oracle.com/au/defense-intelligence/
  • SALESFORCE AI and military defence: https://www.salesforce.com/news/press-releases/2026/01/26/us-army-department-of-war-missionforce-announcement/
  • REUTERS on layered protections and red lines in defence: https://www.reuters.com/business/media-telecom/openai-details-layered-protections-us-defense-department-pact-2026-02-28
  • WJARR paper on anticipatory models: https://journalwjarr.com/sites/default/files/fulltext_pdf/WJARR-2025-3505.pdf

This post was lightly edited by ChatGPT for comprehension and betterer English.

Post Tags: #Anthropic, #artificial intelligence, #Defence, #openai
