Ex-employee Alleges Google Breached AI Ethics Policy
A former Google employee has filed a whistleblower complaint with the United States Securities and Exchange Commission (SEC), alleging that the company violated its own AI ethics policies by assisting an Israeli military contractor in analyzing drone footage in 2024. The complaint was first reported by The Washington Post.
Allegations of Breach
The complaint alleges that Google contravened its stated “AI principles,” which assert that the company will not use AI technology for surveillance that violates internationally accepted norms or for applications related to weapons. According to the documents, a customer support request from an email address associated with the Israel Defense Forces (IDF) was allegedly sent to Google’s cloud-computing division.
The request was linked to a customer identified as an employee of CloudEx, an Israeli tech company reportedly under contract with the IDF. According to the complaint, the customer sought help with a bug that allegedly caused Google’s Gemini AI to fail at identifying objects such as drones and soldiers when analyzing aerial footage.
Internal Responses and Concerns
Google’s cloud customer support reportedly provided suggestions and conducted internal tests related to the issue. After several exchanges between the CloudEx employee and Google support staff, the problem was allegedly resolved. Notably, another Google employee was copied on the support request; the whistleblower claims this individual worked on the IDF’s Google Cloud account.
The complaint suggests that the aerial footage in question was connected to Israeli operations in Gaza during the Israel-Hamas War, though the whistleblower did not provide specific evidence to substantiate this claim. The whistleblower further alleged that the customer service interactions contradicted Google’s public policies and violated securities laws by misleading both regulators and investors.
Response from Google
In response to these allegations, a Google spokesperson disputed the claim of a “double standard,” asserting that the customer support interaction did not violate the company’s AI ethics commitments. The spokesperson noted that the request originated from a customer account with minimal spending on AI products, which the company said indicated there was no significant use of its AI technology.
Google’s cloud video intelligence service, mentioned in the complaint, offers object tracking free for the first 1,000 minutes, with additional minutes billed at 15 cents each. The spokesperson emphasized that Google customer support merely answered a general usage question and did not provide further technical assistance.
Background and Policy Updates
SEC complaints like this can be filed by any individual but do not guarantee an investigation. In February 2025, Google updated its AI principles, removing its earlier explicit prohibition on using its technology for surveillance and weaponry and citing the need to support democratically elected governments amid global AI developments.
This situation raises significant questions about corporate accountability and the ethical implications of technology in military contexts, particularly regarding the potential for misuse and the responsibilities of tech companies in upholding their stated principles.