Google Under Fire Over AI Ethics and Alleged Israeli Military Assistance
SAN FRANCISCO (Diya TV) — Google has come under scrutiny following allegations that it breached its own artificial intelligence ethics rules in 2024 by assisting an Israeli military contractor in analyzing drone video footage. The claim, made in a federal whistleblower complaint reviewed by The Washington Post, raises questions about how major tech companies apply their AI policies during armed conflicts.
The Whistleblower Complaint
A former Google employee filed the complaint with the U.S. Securities and Exchange Commission, alleging that Google violated its publicly stated AI principles, which at the time prohibited the use of AI for weapons or surveillance that contravenes international norms.
The complaint asserts that Google’s cloud computing division provided support to a customer associated with the Israel Defense Forces (IDF). The request for assistance reportedly originated from an email address linked to the IDF, and the customer’s name matched that of an employee of CloudEx, an Israeli technology firm that serves as an IDF contractor.
Technical Support Request
According to the whistleblower, the customer contacted Google support about a technical issue encountered while using Google’s Gemini AI system to analyze aerial drone footage. The user reported a bug in which the software failed to detect objects in the video feeds, including drones, soldiers, and other items of interest. Google’s customer support team responded with suggestions and ran internal tests to address the issue.
After several exchanges, the problem was resolved. The complaint alleges that another Google employee joined the email chain, providing support for the IDF’s Google Cloud account. The footage in question was claimed to be related to Israeli military operations in Gaza during the Israel-Hamas war, although the complaint did not provide direct evidence to substantiate this assertion.
Contradictions in AI Ethics Policies
The whistleblower argues that Google’s actions contradict its own AI ethics policies and claims that the company misled regulators and investors. The complaint also raises concerns about uneven application of Google’s AI review process, suggesting that the company enforces its ethics rules strictly in some cases but applies a different standard when Israel is involved.
In a statement, the anonymous former employee emphasized the robustness of Google’s ethics review process but noted a perceived double standard regarding Israeli operations.
Google’s Response
Google has denied the allegations, rejecting the notion of a double standard. A company spokesperson said the support request did not violate the company’s AI policies and that the account in question spent less than a few hundred dollars per month on AI services, an amount the company called too small for meaningful use of AI.
The spokesperson further clarified that Google’s support team provided only general guidance and did not offer technical assistance related to weapons or intelligence operations.
Ethical Concerns and Policy Changes
Critics argue that even limited support can raise ethical issues, since helping to troubleshoot AI used to analyze drone footage could aid military operations. Google has previously maintained that its work with the Israeli government does not involve sensitive or classified military tasks.
The complaint comes as Google has recently revised its public stance on AI use. In February 2025, the company updated its AI policy, removing a prior commitment to avoid applying AI to surveillance and weapons. Google cited the need for flexibility to support democratically elected governments amid rapid global changes in AI development.
SEC complaints do not automatically trigger investigations; anyone can file one, and regulators decide whether to pursue further action.