CFOs Must Enhance AI Governance for Financial Excellence

Research Suggests CFOs Should Step Up AI Governance

As AI usage expands across finance functions, many CFOs are looking to direct capital toward data automation as they coordinate more closely with technology and risk leaders on auditability and accountability.

This shift is increasingly shaped by how quickly AI-enabled tools are becoming part of everyday work habits across organizations. Newer finance and operations employees are onboarding with a high degree of comfort using AI, raising their expectations about speed, access, and decision support within the finance function.

The Importance of Data Discipline and Governance

From the CFO's perspective, these dynamics put a premium on data discipline and governance as new technologies such as generative AI become embedded in workflows across finance and the company at large.

Recent survey data indicates that finance leaders are funding these efforts with dedicated budgets and IT support as AI use expands across finance’s core processes. Broader research points to uneven enterprise readiness, underscoring the critical role of governance and data architecture in shaping how AI is deployed and assessed.

Investment in Governance Expands Alongside AI Use

According to Workiva, 79% of organizations are prioritizing data automation and governance to address persistent data issues. These initiatives are supported by substantial resources, with 73% of respondents reporting dedicated IT team support and 71% indicating they have secured a dedicated budget.

The availability of this funding is closely tied to how finance leaders assess the risks associated with data quality. Respondents frequently pointed to the familiar problem of delayed operational decisions due to weak data, which can lead to regulatory fines, legal action, and loss of investor or lender credibility.
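For illustration only (this example is not drawn from the Workiva survey), the data automation these budgets fund often begins with simple programmatic quality checks on reporting inputs before they reach any downstream, AI-assisted process. The sketch below uses Python and hypothetical field names (entity, account, amount, as_of) to show how missing or stale figures might be flagged for review.

```python
from datetime import date, timedelta

# Hypothetical ledger records, as a finance team might pull from an ERP export.
records = [
    {"entity": "US-01",   "account": "4000", "amount": 125_000.0, "as_of": date(2024, 3, 31)},
    {"entity": "EU-02",   "account": "4000", "amount": None,      "as_of": date(2024, 3, 31)},
    {"entity": "APAC-03", "account": "4000", "amount": 98_400.0,  "as_of": date(2023, 12, 31)},
]

def check_record(rec, reporting_date, max_staleness_days=45):
    """Return a list of data-quality issues for one record (illustrative rules only)."""
    issues = []
    if rec["amount"] is None:
        issues.append("missing amount")
    if (reporting_date - rec["as_of"]) > timedelta(days=max_staleness_days):
        issues.append(f"stale data (as of {rec['as_of']})")
    return issues

reporting_date = date(2024, 3, 31)
for rec in records:
    problems = check_record(rec, reporting_date)
    if problems:
        # Flag for review before the figures feed any AI-assisted reporting step.
        print(f"{rec['entity']} / {rec['account']}: {', '.join(problems)}")
```

In practice, checks like these would run against an organization's actual systems of record; the underlying pattern of validating inputs before they feed AI-assisted reporting reflects the data discipline respondents describe funding.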

Almost all respondents (91%) believe that AI has improved the timeliness and strategic value of financial decisions. Its presence in reporting continues to grow, with 65% using AI in select components of quarterly or annual disclosures, and nearly half (46%) applying it extensively across the reporting process.

Coordination Across Functions

As usage expands, oversight mechanisms are keeping pace, with 76% of organizations reporting that internal audit teams test AI models. How these initiatives are executed increasingly relies on coordination across functions. Almost all respondents (96%) agree that alignment among the CFO, CIO, and CSO is essential to break down data silos, and the same percentage state that improved access to shared data increases the likelihood of achieving optimal business outcomes.

Enterprise Readiness and Its Impact on Governance Investments

Workiva’s survey also indicates that finance leaders are prioritizing data automation and governance, backed by dedicated budgets, IT support, and formal oversight. This emphasis reflects how closely data quality and control are tied to financial decision-making and reporting processes.

Research from Cisco quantifies why these investments matter. The Cisco AI Readiness Index revealed that only about 13% of organizations qualify as AI “pacesetters,” meaning they possess the infrastructure, data integration, and governance maturity required to scale AI effectively. These organizations are roughly four times more likely to move AI initiatives from pilot to production and about 50% more likely to report measurable business value from AI.

When viewed alongside Workiva’s findings, Cisco’s data highlights the operational impact of governance spending. Investments in data architecture, interoperability, and controls directly address the readiness gaps identified as barriers to AI value. Organizations with fragmented data environments or underinvested systems face greater difficulties in evaluating AI outputs, measuring return on investment, and applying AI consistently across functions.

Workforce Behavior and AI Adoption

As adoption expands, workforce behavior is significantly influencing how quickly AI tools are integrated into daily operations. Recent findings from the tutoring platform Wiingy illustrate the mix of familiarity and dependence many employees bring to these tools. According to the report, 54% of Gen Z respondents use AI multiple times a day and 15% use it a few times a week, while 72% say they could not go a week without using AI, often outside formal enterprise systems and policies.

Data from Workiva indicates that as AI becomes part of a worker’s routine, inconsistent data inputs and unmanaged tools introduce substantial risk. Wiingy’s research also shows that AI is used primarily as a productivity and learning tool: three-quarters of respondents (75%) report feeling a personal connection to AI, and 62% say it helps them express ideas more effectively. Only 3% believe AI shapes their values, suggesting that the shift toward developing and educating teams on AI has barely begun.

Conclusion: The Call to Action for CFOs

All this data serves as a warning to CFOs and other senior leaders. They still have time to clearly define how AI is used within their organizations, but must establish governance frameworks that preserve accountability for judgment-based decisions, support efficient use across finance functions, and balance data quality goals with risk management and employee upskilling.
