More than 600 Google employees sent an open letter to CEO Sundar Pichai on Monday pleading with him to refuse any Pentagon contract that would place the company’s Gemini AI models inside classified military networks. By the time many of them woke up on Tuesday, multiple outlets reported that the ink was already dry.
The letter, signed by workers across Google DeepMind, Cloud, and other divisions, including more than 20 directors, senior directors, and vice presidents, warned that once AI systems enter classified environments, there is essentially no way for the company to monitor what happens to them. “Classified workloads are by definition opaque,” one organizing employee said in a statement. “Right now, there’s no way to ensure that our tools wouldn’t be leveraged to cause terrible harms or erode civil liberties away from public scrutiny.”
The deal confirmed on Tuesday is an amendment to an existing Pentagon contract under the genAI.mil program, which already gave the Defense Department access to Google’s AI for unclassified work. The new version extends Gemini’s reach into classified domains under terms that permit “any lawful government purpose,” according to The Information, which first reported the agreement. Google confirmed the arrangement but stressed that it includes language stating the AI is “not intended for” domestic mass surveillance or autonomous weapons without appropriate human oversight.
That language did little to reassure employees who had spent months watching a near-identical dispute play out down the street.
The Anthropic Shadow
The backdrop here is the Pentagon’s messy split with Anthropic earlier this year. The AI startup insisted on usage restrictions barring its Claude model from mass surveillance and autonomous weapons work. In response, the Defense Department labeled Anthropic a “supply chain risk,” a designation normally reserved for foreign adversaries, and canceled its contract. Anthropic sued.
OpenAI and Elon Musk’s xAI moved quickly to fill the gap, each signing their own classified agreements. Google is now the third major AI lab to step in where Anthropic drew a line. The employees’ letter cited that sequence directly, arguing that Google was walking into a fight it could have avoided.
The Principle That Wasn’t
The anger inside Google is also about something older. In 2018, thousands of employees protested Project Maven, a Pentagon program that used AI to analyze drone footage. Google let that contract lapse and soon after published a set of AI principles pledging not to use the technology for weapons or surveillance. Those principles were rewritten in February 2025, stripping out the weapons and surveillance language entirely. This week’s deal is what that revision was always designed to enable.
Andreas Kirsch, a research scientist at Google DeepMind, posted on X on Tuesday that he was “speechless” and “ashamed.” He told Business Insider he had gone to bed Monday night hopeful the employee letter would force a pause. “This morning I woke up to a worst-case version of the contract being signed by Google in the meantime,” he said.
The company, for its part, described the agreement as a responsible extension of existing work. A spokesperson told Reuters that providing API access to commercial models on Google infrastructure “represents a responsible approach to supporting national security.”
The employees who signed the letter see it differently. They argue that on air-gapped classified networks, “not intended for” is not a safeguard. It is a suggestion. And suggestions do not stop drone strikes.