Google quietly removed a pledge not to use AI for weapons or surveillance earlier this week, as first spotted by Bloomberg. The company updated its public AI principles page, deleting a section titled “applications we will not pursue,” which was still present on the site as recently as last week.
When asked by TechCrunch to comment on the change, the company pointed the publication to a new blog post on “responsible AI.” The post says, in part, “we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
Google’s newly updated AI principles note that the company will work to “mitigate unintended or harmful outcomes and avoid unfair bias” and to align itself with “widely accepted principles of international law and human rights.”
Google has faced internal pressure from employees in recent years over its contracts to provide cloud services to the U.S. and Israeli militaries. While the search giant has long maintained that its AI is not used to harm humans, the Pentagon’s chief digital and AI officer, Dr. Radha Plumb, told TechCrunch that AI is giving the Department of Defense a “significant advantage” in identifying, tracking, and assessing threats.
“We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces,” said Plumb.
The “kill chain” refers to the military’s process of identifying, tracking, and eliminating threats. Generative AI is helpful during the planning and strategizing phases of the kill chain, said Plumb.