
In the spring of 2025, something unusual happened in the tech industry: an AI company said no to $200 million. Anthropic, the maker of Claude, rejected a Pentagon contract — not because the technology wasn't ready, but because two of the required use cases violated the company's ethical principles. A few weeks later, OpenAI sat down at the same negotiating table. And signed.
This episode is more than a Silicon Valley anecdote. It reveals a fundamental rift in the AI industry: where does the red line run between technological progress and military use? And who draws it?
The facts: The U.S. Department of Defense was looking for an AI partner for a comprehensive data analysis and decision support program. Anthropic was the frontrunner — the company is considered a leader in developing safe AI systems. But during due diligence, it emerged that the contract included two use cases Anthropic could not support: autonomous weapons systems without human control and mass domestic surveillance.
Anthropic CEO Dario Amodei publicly explained the refusal: frontier AI models are not reliable enough for autonomous combat decisions. Hallucinations that are annoying in a chatbot could be lethal in a weapons system. And mass surveillance of citizens in their own country violates fundamental rights — regardless of which government orders it.
The backlash was severe. The Trump administration designated Anthropic a national security concern — an unprecedented step against an American tech company. Meanwhile, OpenAI entered negotiations and closed a deal for classified AI systems. CEO Sam Altman defended the decision with the argument of "responsible engagement": if AI will inevitably be used militarily, it's better to be at the table than standing outside.
Anthropic's argument is both technical and principled. On the technical side: current AI models produce errors. They hallucinate facts, misinterpret context, and their decision-making processes are not fully explainable. In a customer service chatbot, this leads to a wrong product recommendation. In an autonomous weapons system, it leads to dead civilians.
On the principled side: there are applications that are unacceptable regardless of technical maturity. Mass surveillance of your own population is one of them. Not because the technology can't do it — but because a society shouldn't.
OpenAI's counterargument also carries weight. If the world's most powerful AI systems will be used militarily — and they will — then it's better for responsible companies to help shape the terms than to leave the field to authoritarian states or less scrupulous providers.
Sam Altman emphasized that OpenAI had defined clear boundaries: no autonomous weapons, no systems for identifying and killing individuals. The cooperation was limited to logistics, cyber defense, and intelligence analysis. Critics respond: it's precisely this dual-use gray zone that makes the distinction impossible in practice.
The core problem with autonomous weapons systems isn't precision — it's speed. Modern AI systems make decisions in milliseconds. An autonomous system that identifies and engages a target operates faster than any human oversight can intervene. The concept of human-in-the-loop — a human confirming every lethal decision — becomes fiction when the machine acts a thousand times faster than a human can think.
Then there's the accountability vacuum. When a soldier kills a civilian, there's a chain of command, military courts, international law. When an algorithm makes the same mistake — who bears the blame? The developer? The general who deployed the system? The state that procured it? International law has no answer yet.
And then there are the errors. Every AI system has an error rate. Image recognition that achieves 99% accuracy sounds impressive — until you consider that out of 10,000 analyzed individuals, 100 are misclassified. In a surveillance system, that means 100 innocents on a target list. In a weapons system, it means 100 potential civilian casualties.
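To make that scale concrete, here is a minimal sketch in Python, assuming a fixed per-person error rate of 1% (the 99%-accuracy example above; real systems vary widely and errors are rarely this evenly distributed):

```python
# Minimal sketch: how a small per-person error rate scales with population size.
# Assumes a constant 1% misclassification rate, mirroring the 99%-accuracy example.

error_rate = 0.01  # 99% accuracy => 1% of analyzed individuals misclassified

for population in (10_000, 100_000, 1_000_000):
    misclassified = int(population * error_rate)
    print(f"{population:>9,} people analyzed -> ~{misclassified:,} misclassified")
```

At the scale of mass surveillance, "one percent" stops being a rounding error and becomes thousands of people.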
Most AI technology is dual-use — it can be deployed for both civilian and military purposes. This applies even to seemingly harmless applications: the same text analysis that evaluates customer feedback can monitor employees; the same sentiment analysis that measures brand perception can profile protesters.
The technology itself is neutral. The decision of how to use it is not. This is precisely where it gets uncomfortable for AI companies: can you control what your customer does with your product after purchasing it?
The honest answer is: only to a limited extent. And that's exactly why contractual restrictions, technical safeguards, and — above all — clear company values are so important. The line must be drawn before the lucrative contract is on the table.
Every AI company will sooner or later face a variant of this question. Perhaps not with the drama of a Pentagon deal, but in more everyday forms: a client who wants to use text analysis for employee surveillance. A prospective customer from an authoritarian state. A request for sentiment analysis of protesters.
The companies that handle these situations best have one thing in common: they defined their red lines before they were needed. Not as a vague statement of intent in a press release, but as concrete criteria embedded in sales processes, contracts, and technical safeguards.
This sounds obvious. In practice, it's extraordinarily rare.
At deepsight, we develop AI tools for text analysis — for market research, customer feedback, media analysis. Our technology could theoretically be used for surveillance. That's why it's important for us to state clearly: We build tools for understanding text, not for surveilling people.
Our GDPR compliance isn't a regulatory burden — it's an expression of values. Data minimization, purpose limitation, transparency: these aren't just legal obligations, but design principles built into our architecture.
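What "built into the architecture" can look like in practice: the following is a minimal, hypothetical sketch (not our actual codebase; names like ALLOWED_PURPOSES and minimize are illustrative) of how data minimization and purpose limitation can be enforced at the point where text enters an analysis pipeline.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of data minimization and purpose limitation as code.
# Not a real implementation; purposes and patterns are illustrative only.

ALLOWED_PURPOSES = {"market_research", "customer_feedback", "media_analysis"}

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")


@dataclass
class AnalysisRequest:
    purpose: str
    texts: list[str]


def minimize(text: str) -> str:
    """Strip direct identifiers before any analysis (data minimization)."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text


def accept(request: AnalysisRequest) -> list[str]:
    """Reject requests outside the declared purposes (purpose limitation)."""
    if request.purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"Purpose '{request.purpose}' is not supported")
    return [minimize(t) for t in request.texts]


# Example: a permitted request passes, with identifiers removed before analysis.
cleaned = accept(AnalysisRequest(
    purpose="customer_feedback",
    texts=["Great product, reach me at jane@example.com or +49 170 1234567."],
))
print(cleaned)
```

The point of such a sketch is not the specific regexes; it's that the refusal happens in the pipeline itself, before any human has to make a judgment call under commercial pressure.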
We don't accept contracts aimed at monitoring individuals or political groups. We don't sell to military or intelligence agencies. And we document these decisions so they don't remain mere lip service.
The question isn't whether AI will be used militarily — it already is. The question is: who sets the rules? And by what criteria?
Anthropic's decision to turn down $200 million is remarkable — not because it's morally unassailable, but because it shows that alternatives are possible. OpenAI's decision to participate is understandable — but it shifts responsibility to governance structures that don't yet exist.
For the AI industry as a whole, the lesson is clear: Ethical boundaries must be defined before the pressure arrives. Not as a reaction to public outrage, but as the foundation of your business model. Because when the $200 million contract is on the table, it's too late to start thinking about values.
