GDPR and AI: Evaluating Software Vendors with the AI Trust Score
Independent audits reveal that major AI platforms score anywhere from C+ to A+ on privacy governance. The AITS methodology gives enterprises a data-driven framework for assessing vendor compliance.
As enterprises increasingly adopt AI tools, they often lack a standardized method for measuring how these platforms protect user data. Procurement teams typically focus on pricing, uptime, and integrations, but overlook a question that GDPR Article 22 and the CCPA make unavoidable: how does this vendor govern its AI, and can compliance be proven if regulators come calling?
This isn’t just a theoretical concern. With AI embedded throughout enterprise software, from meeting transcription to automated email filtering, the compliance surface expands significantly, and organizations that skip a rigorous evaluation of AI governance expose themselves to avoidable compliance failures.
The AITS Methodology: A Standardized Framework for AI Governance Evaluation
The AITS (AI Trust Score) methodology, developed by TrustThis.org, addresses this evaluation gap. The framework assesses software platforms against 20 distinct criteria in two categories:
- AITS Base: Covers 12 traditional privacy fundamentals including data retention, international transfers, and opt-out mechanisms.
- AITS AI: Evaluates 8 AI governance-specific criteria such as training data transparency, ethical AI principles, and algorithmic contestation rights.
Each criterion is assessed as either a pass or fail based on documented evidence from privacy policies and terms of service. Scores are converted to letter grades using a standardized scale:
- A+: 91% to 100%
- A: 81% to 90%
- B+: 71% to 80%
- B: 61% to 70%
- C+: 51% to 60%
- C, D, E: progressively lower ranges
This method transforms subjective vendor claims into objective, comparable metrics.
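To make the conversion concrete, here is a minimal Python sketch of how pass/fail audit counts might be rolled up into an AITS-style letter grade. The `aits_grade` name is ours, and the cut-offs below C+ are illustrative assumptions, since the source describes those bands only as “progressively lower ranges”:

```python
def aits_grade(passed: int, total: int = 20) -> str:
    """Convert pass/fail audit counts into an AITS-style letter grade."""
    pct = 100 * passed / total
    # A+ through C+ follow the published scale; the C and D floors
    # below are assumptions ("progressively lower ranges").
    bands = [(91, "A+"), (81, "A"), (71, "B+"), (61, "B"),
             (51, "C+"), (41, "C"), (31, "D")]
    for floor, grade in bands:
        if pct >= floor:
            return grade
    return "E"

print(aits_grade(19))  # 95% -> "A+", a vendor meeting 19 of 20 criteria
print(aits_grade(11))  # 55% -> "C+"
```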
What the Data Reveals: Three Platforms, Three Very Different Grades
The February 2026 audit evaluated three major platforms, revealing significant disparities that compliance officers cannot afford to ignore:
- Anthropic Claude: Achieved the highest grade of A+, meeting 19 of 20 criteria. It excelled in AI governance, demonstrating transparency in data usage for model training and providing a clear opt-out mechanism.
- Microsoft Copilot: Earned an A, fully complying with the base privacy criteria and documenting its data-handling practices clearly, but only a B+ on AI governance, where it shares weaknesses with its competitors (see the arithmetic sketched after this list).
- Google Workspace: Scored C+, the lowest grade of the three. A C+ corresponds to passing only slightly more than half of the 20 criteria, and its failures spanned both base privacy practices and AI governance, including a lack of transparency around its ethical AI principles.
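Those sub-grades compose arithmetically. A B+ on the eight AI criteria implies six passes (6/8 = 75%, the only whole-number count that falls in the 71% to 80% band); combined with a clean 12 of 12 on the base criteria, that puts Copilot at 18 of 20, or 90%, the very top of the A band. This is an inference from the published bands, not an official breakdown:

```python
# Reconstructing Copilot's composite score from its sub-grades
# (inferred from the published bands, not an official breakdown).
base_passed, base_total = 12, 12  # fully compliant on base privacy criteria
ai_passed, ai_total = 6, 8        # 6/8 = 75% is the only count in the B+ band

print(100 * (base_passed + ai_passed) / (base_total + ai_total))  # 90.0
print(aits_grade(base_passed + ai_passed))                        # -> "A"
```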
The Universal Failure: Lack of AI Contestation Rights
A critical finding is that none of the evaluated platforms documents a clear mechanism for users to contest automated AI decisions. This gap poses compliance risks under GDPR Article 22, which gives individuals the right not to be subject to solely automated decisions with legal or similarly significant effects, and to obtain human intervention when such decisions are made.
For instance, when an AI system transcribes a meeting or filters emails, users currently lack documented pathways to challenge these automated decisions. This gap creates regulatory exposure for organizations deploying these tools without additional contractual protections.
What Compliance Officers Should Do Now
The AITS framework is not intended to declare winners and losers among vendors but rather to provide enterprises with objective data for informed decision-making. Based on the findings, compliance teams should take the following steps:
- Audit your current vendor stack: Use standardized criteria to evaluate vendor privacy assurances (one way to record such an audit is sketched after this list).
- Require documented AI governance policies: Vendors must clearly articulate their handling of training data and user rights.
- Negotiate contractual clauses: Address the universal contestation gap by requiring explicit provisions for human review of automated decisions.
- Reassess vendor relationships periodically: AI governance is evolving, and platforms can improve or introduce new risks over time.
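For the first two steps, it helps to record each audit as structured data so results stay comparable across vendors and over time. A minimal sketch, assuming hypothetical criterion names (only a subset of the 20 is shown) and the `aits_grade` helper from earlier:

```python
# Hypothetical audit record; the criterion names are illustrative,
# not the official AITS criteria list.
vendor_audit = {
    "vendor": "ExampleVendor",
    "audited": "2026-02",
    "base": {  # traditional privacy fundamentals (12 in AITS Base)
        "data_retention_policy_documented": True,
        "international_transfer_safeguards": True,
        "user_opt_out_mechanism": False,
    },
    "ai": {    # AI governance criteria (8 in AITS AI)
        "training_data_transparency": True,
        "ethical_ai_principles_published": False,
        "contestation_of_automated_decisions": False,
    },
}

passed = sum(vendor_audit["base"].values()) + sum(vendor_audit["ai"].values())
total = len(vendor_audit["base"]) + len(vendor_audit["ai"])
print(f"{passed}/{total} -> {aits_grade(passed, total)}")  # 3/6 -> "C" on the assumed scale
```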
The difference between an A+ and a C+ is more than a score: it is the gap between documented compliance and regulatory exposure that can run to millions in fines.
Conclusion: Transparency Is Not Optional
As regulatory scrutiny intensifies, organizations must proactively evaluate their vendors’ AI governance. The EU AI Act will add further compliance layers, making data-driven evaluation frameworks like AITS all the more necessary. Vendor selection should be guided by documented evidence rather than marketing claims, both to ensure compliance and to maintain the trust of employees and customers alike.