The Struggle for Good AI Governance is Real
Many organizations deploying AI recognize the need for guardrails, but few have figured out how to build a mature governance model.
According to a recent survey from Cisco, three out of four organizations report having a dedicated AI governance process in place, but only 12% describe their efforts as mature. Cisco’s 2026 Data and Privacy Benchmark Study suggests that AI governance processes are still evolving, with privacy concerns driving the need for more guardrails. Notably, 93% of organizations plan to invest further to keep up with the complexity of AI systems and expectations from customers and regulators.
The Challenge of Establishing Governance
AI experts agree that establishing good governance is difficult, and they see the recognition from IT and security professionals that they have work to do as a positive development. As Jen Yokoyama, senior vice president for legal innovation and strategy at Cisco, states, “It’s a good statistic to show the awareness of the complexity that is facing these companies.”
One of the big challenges for organizations deploying AI is that governance has lagged behind adoption. Many IT leaders must make decisions on compliance, ethical issues, and transparency while technology is being rolled out. Yokoyama highlights the push for speed and quick adoption as a significant factor, stating, “They need to do it at speed because people want to see returns on that technology.”
Complications from Quick Deployments
The speed of AI adoption complicates governance efforts. Jean-Matthieu Schertzer, chief AI officer at Eagle Eye Group, observes that while many organizations quickly deploy AI across functions like marketing and operational efficiency, governance maturity often lags. The opaque nature of AI systems makes it difficult to trace decisions, identify bias, and establish accountability when things go wrong.
Effective AI governance relies on structured operating practices such as documenting model limitations, conducting bias and security audits, and establishing review workflows. AI leaders must also meet growing expectations around transparency, consent, and regulatory compliance, which span legal, data, security, marketing, and product teams. Progress often slows when ownership is unclear or initiatives remain confined to siloed pilots.
The Data Governance Connection
Many organizations struggle with AI governance because they lack good data governance. Anisha Vaswani, chief information and customer officer at Extreme Networks, points out that enterprises are still grappling with data governance amid rapidly evolving technology landscapes and investments. “You’re dealing with a lot of complexity in your data, fragmentation of models, and you need to keep abreast of it,” she adds.
Vaswani recommends establishing cross-functional teams to address governance issues and emphasizes the importance of auditability and explainability in AI tools. “Part of governance is asking, ‘What could go wrong, and how are we going to mitigate it?’”
Collaboration Across Disciplines
Creating good practices in AI governance will require collaboration across multiple disciplines within organizations. Cisco’s Yokoyama notes that “IT professionals see things that legal doesn’t, that privacy doesn’t, that the engineers don’t.” Without mechanisms for conversation, especially in larger companies, organizations risk learning after the fact and becoming reactive.
Effective AI governance requires broad, cross-functional participation. Organizations should unite product, engineering, operations, legal, and business leaders to define shared standards and accountability, creating an ongoing operating model embedded into the product lifecycle that evolves as AI capabilities mature.
The Role of Leadership
Leadership is crucial, and top executives must define governance as a core responsibility of deploying AI. Clear ownership, decision rights, and escalation paths should be established across the AI product lifecycle. Organizations should not treat regulation as the sole driver of governance models; instead, governance decisions should be anchored in human impact, ensuring AI systems are designed and deployed with safety, trust, and responsible execution in mind.
IT leaders are encouraged to treat governance like financial oversight rather than red tape. Regular audits for bias and clear documentation of AI outputs can make responsible AI a repeatable practice rather than a one-off exercise.
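A bias audit can start with something as simple as comparing outcome rates across groups. The sketch below applies the widely used “four-fifths rule” heuristic (a group whose selection rate falls below 80% of the highest group’s rate is flagged for review); the data and threshold here are purely illustrative:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose rate is below `threshold` x the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative data: group A approved 8/10, group B approved 5/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5
print(disparate_impact_flags(decisions))  # {'A': False, 'B': True}
```

Running a check like this on a schedule, and recording the result alongside the model’s documentation, is one concrete way to make the audit a repeatable practice rather than an ad-hoc review.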