With AI Accountability Stalling, Boards Must Push Tech Giants for Greater Transparency
February 3 – Global investment in artificial intelligence technologies is expected to surpass $500 billion this year, and a handful of tech giants now hold over 60% of the world’s cloud infrastructure and data processing power, the engines powering a new era of economic transformation.
From healthcare systems and financial networks to government services and food supply chains, AI is becoming the backbone of fundamental operations. Its integration will have cascading effects across industries, shaping how societies function and helping to define the success – or failure – of tomorrow’s economy.
Any organization that develops, procures, or deploys these systems holds the power to shape risks and rights downstream. The choices that tech companies such as Amazon, Meta, and Microsoft make today in deploying and governing AI will determine whether this transformation strengthens resilience and trust – or amplifies risk and inequality.
When an AI product is deployed without robust principles such as transparency, accountability, and respect for human rights, the result can be harmful content circulating widely, sparking regulatory backlash. For instance, governments in the UK, Indonesia, and Malaysia, among others, have criticized X for failing to prevent its Grok chatbot from being used to mass-produce sexualized AI-generated images of women and children. X’s transparency on AI lags behind that of many of its peers, creating a policy vacuum that risks further eroding trust in its flagship platform.
The World Benchmarking Alliance (WBA) recently released new data on ethical AI from its assessment of 200 major tech companies, part of its wider analysis of the 2,000 most influential companies across all industries.
One statistic jumped out: among the 200 tech companies assessed, only 38% publish ethical AI principles, yet not a single company disclosed the results of comprehensive human rights impact assessments (HRIAs) for the AI systems it builds, buys, or uses. That gap between promises and practice is worrying at precisely the moment AI’s footprint and risk profile are expanding.
Equally alarming, the findings show that progress on high-level AI transparency, such as public disclosure of AI principles, is slowing. Only nine companies (4.5%) reported their AI principles for the first time in 2025, compared with 19 in 2024.
Most tech giants are far from fulfilling even fundamental, achievable criteria. Of all the tech companies assessed, only around 19% have committed to any regional or international AI framework (38 companies) or incorporated respect for human rights into their AI principles (41 companies). Only slightly more than 10% explain their internal AI governance mechanisms.
There is a silver lining, though. Compared with 2023, far more companies now acknowledge AI as a material risk, signaling a shift away from pure hype towards recognizing AI’s real-world impacts.
For instance, NEC elevated human-rights-respecting AI to its top sustainability priority, LG Electronics published dedicated AI accountability reporting, and Salesforce linked AI to environmental sustainability. The latest Ranking Digital Rights Index (RDR Index) also found major tech companies adopting detailed new policies to guide their algorithmic governance and improving transparency overall.
While there are some positive trends, major industry-shaping firms such as ASML, Oracle, SK Hynix, and TSMC, as well as platforms that millions of people use every day, such as Spotify and Uber, still have no public AI principles. What’s more, some of these firms have been persistently unresponsive to investors’ efforts to engage with them on their AI practices, as WBA revealed last month in a report on its Collective Impact Coalition for Ethical AI.
Publishing high-level AI principles is only the first step towards improving human rights outcomes. Without more robust disclosures and buy-in across industries, even this initial action risks stalling before it becomes the norm. Companies that are further ahead should prioritize strengthening and clearly explaining their impact assessment processes, especially as the use and development of some AI systems will require mandatory assessments in the EU this year. Early movers will set the tone for their peers.
The regulatory clock is also ticking. In Europe, transparency and HRIA-related obligations are entering into force on a staggered timeline under the EU AI Act, with fundamental rights impact assessments for high-risk AI systems required from August 2 this year. In short: what’s voluntary today will be mandatory tomorrow.
We are reaching a critical point for AI transparency just as the technology comes to dominate essential infrastructure. The way forward is not yet another pledge but a shift from principles to proof: rigorous HRIAs, public findings, and measurable improvements in systems and outcomes.
That shift will reduce reputational and regulatory risks and accelerate time-to-value for AI investments by clarifying where guardrails are needed. Boards that expect AI to drive growth must insist on the governance that sustains it. The alternative, letting accountability stall while the technology dominates, is a risk no business leader should accept.