Achieving Effective AI Governance in a Rapidly Evolving Landscape

The Struggle for Good AI Governance is Real

Many organizations deploying AI recognize the need for guardrails, but few have figured out how to build a mature governance model.

According to a recent survey from Cisco, three out of four organizations report having a dedicated AI governance process in place, but only 12% describe their efforts as mature. Cisco’s 2026 Data and Privacy Benchmark Study suggests that AI governance processes are still evolving, with privacy concerns driving the need for more guardrails. Notably, 93% of organizations plan to invest further to keep up with the complexity of AI systems and expectations from customers and regulators.

The Challenge of Establishing Governance

AI experts agree that establishing good governance is a genuine struggle. Industry leaders see the fact that IT and security professionals recognize they have work to do as a positive development. As Jen Yokoyama, senior vice president for legal innovation and strategy at Cisco, states, “It’s a good statistic to show the awareness of the complexity that is facing these companies.”

One of the big challenges for organizations deploying AI is that governance has lagged behind adoption. Many IT leaders must make decisions on compliance, ethical issues, and transparency while technology is being rolled out. Yokoyama highlights the push for speed and quick adoption as a significant factor, stating, “They need to do it at speed because people want to see returns on that technology.”

Complications from Quick Deployments

The speed of AI adoption complicates governance efforts. Jean-Matthieu Schertzer, chief AI officer at Eagle Eye Group, observes that while many organizations quickly deploy AI across functions like marketing and operational efficiency, governance maturity often lags. The opaque nature of AI systems makes it difficult to trace decisions, identify bias, and establish accountability when things go wrong.

Effective AI governance relies on structured operating practices such as documenting model limitations, conducting bias and security audits, and establishing review workflows. AI leaders must also meet growing expectations around transparency, consent, and regulatory compliance, which span legal, data, security, marketing, and product teams. Progress often slows when ownership is unclear or initiatives remain confined to siloed pilots.

The Data Governance Connection

Many organizations struggle with AI governance because they lack good data governance. Anisha Vaswani, chief information and customer officer at Extreme Networks, points out that enterprises are still grappling with data governance amid a rapidly evolving landscape of technologies and investments. “You’re dealing with a lot of complexity in your data, fragmentation of models, and you need to keep abreast of it,” she adds.

Vaswani recommends establishing cross-functional teams to address governance issues and emphasizes the importance of auditability and explainability in AI tools. “Part of governance is asking, ‘What could go wrong, and how are we going to mitigate it?’”

Collaboration Across Disciplines

Creating good practices in AI governance will require collaboration across multiple disciplines within organizations. Cisco’s Yokoyama notes that “IT professionals see things that legal doesn’t, that privacy doesn’t, that the engineers don’t.” Without mechanisms for conversation, especially in larger companies, organizations risk learning after the fact and becoming reactive.

Effective AI governance requires broad, cross-functional participation. Organizations should unite product, engineering, operations, legal, and business leaders to define shared standards and accountability, creating an ongoing operating model embedded into the product lifecycle that evolves as AI capabilities mature.

The Role of Leadership

Leadership is crucial, and top executives must define governance as a core responsibility of deploying AI. Clear ownership, decision rights, and escalation paths should be established across the AI product lifecycle. Organizations should not treat regulation as the sole driver of governance models; instead, governance decisions should be anchored in human impact, ensuring AI systems are designed and deployed with safety, trust, and responsible execution in mind.

IT leaders are encouraged to treat governance like financial oversight rather than red tape. Regular audits for bias and clear documentation of AI outputs can lead to responsible AI becoming a repeatable practice.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...