Google Moves Forward With Pentagon AI Deal Despite Employee Pushback

Google has reportedly signed an agreement allowing the US Department of Defense to use its AI models for classified work, despite an open letter from hundreds of employees urging the company not to provide AI for military uses they say would be dangerous or impossible to supervise.
The agreement, reported earlier Tuesday by The Information, allows the Pentagon to use Google AI tools for “any legitimate government purpose,” including critical military applications. Google joins OpenAI and xAI, which have reached similar classified AI agreements with the Pentagon.
The reported agreement includes language stating that Google’s AI systems are not intended for domestic mass surveillance or for autonomous weapons operating without proper human supervision. But it also says Google doesn’t have the right to control or challenge official government decisions about how the work is used, according to reports. Google will also help adjust safety settings and filters at the government’s request.
A Google spokesperson told CNET in an emailed statement that the company remains committed to the position that AI should not be used for domestic mass surveillance or for autonomous weapons without human supervision, and said that providing API access to commercial models under standard procedures is a “responsible way” to support national security.
The Pentagon declined to comment to CNET.
The deal comes amid internal backlash. In an open letter to CEO Sundar Pichai, more than 600 Google employees asked the company to “refuse to make our AI systems available for classified workloads.” The workers wrote that because they work closely with the technology, they have a responsibility to highlight and prevent its “unethical and dangerous use.”
“We want to see AI benefit humanity, not see it used in cruel or dangerous ways,” the letter said. The workers say their concerns include lethal autonomous weapons and mass surveillance, but extend beyond those examples because classified operations can occur without employees’ knowledge or ability to stop them.
The tension echoes one of Google’s most prominent internal rebellions. In 2018, thousands of workers protested Project Maven, a Pentagon program involving AI analysis of drone imagery. Google later chose not to renew that contract.
The company’s stance on military AI and national security has changed since then.
Last year, Google removed earlier language from its AI principles that said it would not pursue technologies likely to cause overall harm, weapons, certain surveillance technologies, or systems that violate widely accepted principles of human rights and international law.
In a February blog post reviewing Google’s AI principles, Google DeepMind CEO Demis Hassabis and senior vice president James Manyika wrote that “democracy should lead the development of AI” and that companies and governments should work together to create AI that “protects people, promotes global growth and supports national security.”
For Google employees opposed to the deal, the concern is not only that the military could use the AI, but that classified deployment removes even general visibility into how the models are being used.
“I feel very ashamed,” Andreas Kirsch, a Google DeepMind researcher, wrote in a public post on X in response to the reported agreement.
The employees’ letter ends with a direct appeal to Google’s CEO: “Today, we ask you, Sundar, to live up to the values that this company is built on, and reject classified workloads.”