Can AI Companies Replace Lockheed Martin?
Just two months in and 2026 is already going crazy! We went from AI-generated slop to AI-generated war!
At ReadOn, we don’t just report the markets. We help you understand what truly drives them, so your next decision isn’t just informed, it’s intelligent.
In case you were wondering, Lockheed Martin is the world’s biggest defense contractor. It’s a company that makes things go boom!
Now, back to the story. For the first time ever, an American company was designated a supply chain risk to America itself. That's a label typically reserved for foreign adversaries like Chinese tech companies. President Trump ordered all federal agencies to immediately stop using Anthropic's technology.
We’re talking about Anthropic, the maker of Claude AI. Its crime? Refusing to remove safeguards preventing its AI from being used for mass domestic surveillance and fully autonomous weapons.
Just a few hours later, OpenAI swept in with nearly identical safeguards and got the contract anyway. But by Altman’s own admission, the deal was “definitely rushed” and “the optics don’t look good.”
The announcement came on a Friday night in late February 2026. OpenAI CEO Sam Altman revealed that the company behind ChatGPT had struck a deal with the Pentagon to deploy its AI models in classified military networks.
This isn’t some far-off sci-fi scenario. This is happening right now. The same technology millions use daily to write emails and generate recipe ideas would soon be analyzing battlefield intelligence and supporting military operations.
So what’s really going on here? Can consumer AI companies, built on chatbots and image generators, actually become the next generation of defense giants? And more importantly, should they?
Welcome to the AI War Lab
To understand why the Pentagon is so desperate for commercial AI, you need to look at Ukraine.
The war there has become what defense analysts now call the world's first "AI proving ground." According to U.S. Army War College research, drones now account for 70-80% of battlefield casualties in Ukraine. Not bullets. Not artillery. Drones.
But we're not talking about ordinary drones anymore. Ukraine deployed autonomous drones as early as 2023. These systems hit Russian targets without human oversight. By 2025, Ukrainian officials were saying that "real drone swarm uses" with AI targeting were arriving on the battlefield.
Think about that for a second. Swarms of AI-powered drones, making targeting decisions on their own, already exist and are being used in combat right now.
The technology goes way beyond flying robots. Ukraine’s Delta platform uses AI to fuse data from drones, satellites, and open-source intelligence into a unified, real-time battle map.
Ukrainian developers have trained AI systems on over 2 million hours of drone footage. These systems enable drones to autonomously navigate contested airspace, evade electronic warfare interference, and conduct high-precision targeting. A 2024 report found that AI-enhanced drones achieved a 3-4x higher target engagement rate than manually operated platforms.
And it’s not just Ukraine. Russia is racing to keep pace. In late January 2025, the Russian government reiterated its AI goals of automatic intelligence processing, improved information support for combat operations, and enhanced threat prediction. Russian forces have already begun using AI to make their Shahed drones harder to jam, upgrading them with 4G modems and satellite navigation.
This is the new reality of warfare. Fast, cheap, AI-driven. And it’s exactly why the Pentagon is knocking on Silicon Valley’s door.
The Intelligence Arms Race Gets Complicated
Here’s where things get messy. The AI companies building these systems aren’t traditional defense contractors. They’re consumer tech companies with millions of everyday users. Their business models depend on trust, user data, and a reputation for being “responsible.”
Enter the Anthropic saga.
Anthropic wanted clear restrictions. No mass domestic surveillance of Americans, no fully autonomous weapons. Seems reasonable, right? The Pentagon disagreed. Defense officials insisted AI models must be available for “all lawful purposes.”
The result was swift: every Pentagon contractor was ordered to stop using Anthropic's products immediately.
Specifically, Anthropic wanted "prohibitions on domestic mass surveillance and human responsibility for the use of force." OpenAI's deal appears to include the same safeguards, but there's a crucial difference in how it's structured. OpenAI will deploy via the cloud, maintain its safety stack, and send cleared personnel to ensure model safety. The restrictions reflect existing U.S. law and Pentagon policies.
This raises uncomfortable questions. Even with contractual safeguards and “red lines,” who actually controls how these systems are used once they’re deployed in classified military networks? What happens when geopolitical pressures override corporate policies?
Your Data, Their Weapon?
The elephant in the room is data. These AI companies were built on massive datasets. Much of it is user-generated content, public information, and scraped web data. The same models trained on your emails, photos, and browsing patterns could theoretically be adapted for military intelligence gathering.
What makes this extremely concerning is that the line between civilian and military AI is disappearing. According to research on Ukraine's AI battlefield, the "weaponization of civilian life" is already happening. Civilian tech, civilian data, and civilian infrastructure are all becoming militarized.
Ukraine’s Brave1 defense innovation platform has evaluated over 500 proposals and funded more than 70 defense AI projects, many involving civilian tech partnerships. The country even launched a “Test in Ukraine” initiative in July 2025, inviting global arms manufacturers to push drones, robots, and AI systems into real combat.
The South China Morning Post reported in February 2026 that China's "intelligentised warfare" strategy embeds AI "from top to bottom," while the U.S. under Trump proposed setting military spending at $1.5 trillion for 2027. That would be a roughly 50% increase, driven partly by concerns about China's AI advancements.
This global AI arms race means more data collection, more surveillance capabilities, and more pressure on tech companies to prioritize national security over user privacy. When governments classify AI as strategic technology, consumer data rights become negotiable.
Consider this. Over 300 Google employees and 60 OpenAI employees signed an open letter asking their employers to support Anthropic’s position. They understood what was at stake. Once you hand over AI systems to classified military networks, you lose visibility into how they’re actually being used.
The Takeaway
So can ChatGPT or Claude become the next Lockheed Martin? Technically, yes. The technology is already there. The Pentagon contracts are being signed. The precedent is being set.
But should they? That’s the biggest question.
Traditional defense contractors like Lockheed Martin or Raytheon are purpose-built for military applications. They operate under strict government oversight, with clear chains of accountability. Their entire business model is transparent: they make weapons systems for militaries.
Consumer AI companies operate differently. They’re built on public trust, user data, and a promise of beneficial technology. When those same companies start deploying AI in classified military networks, even with safeguards, that social contract changes fundamentally.
The European Parliament’s recent briefing on defense and AI warned that “AI is rapidly transforming modern warfare” and stressed the need for “ethical guidelines, transparency, and accountability.” Yet the EU’s AI Act explicitly excludes military applications from regulation, leaving a massive governance gap.
Austrian Foreign Minister Alexander Schallenberg warned at a Vienna conference that “this is the Oppenheimer moment of our generation.” AI-driven warfare, he said, risks becoming “an uncontrollable arms race” where “mass killing becomes a mechanized, near-effortless process.”
The irony is stark. We’re debating whether AI chatbots are politically biased or give good recipe suggestions, while those same systems are being adapted for military targeting.
See, once Pandora’s box is open, it doesn’t close. Ukraine has become the world’s AI weapons testing ground. The innovations developed under existential threat in Ukraine will inevitably spread to other conflicts. Other countries. Other applications.
So no, ChatGPT and Claude probably won't become exactly like Lockheed Martin. More likely, they'll become dual-use technology giants whose consumer products double as military intelligence tools. Where your chatbot conversation history exists on the same infrastructure analyzing battlefield intelligence. Where the line between civilian and combatant gets drawn by algorithms.
The question isn’t whether AI companies will work with defense. That ship has sailed. The question is about what safeguards we build now, before the technology outpaces our ability to control it.
Until the next new age arms race, ReadOn!

