Manufacturers Must Bridge AI Adoption and Cybersecurity Gaps

Manufacturers’ AI Adoption Is Outpacing Cybersecurity, Compliance, and Risk Governance

A new report from Kiteworks finds that manufacturers lead in operational AI controls but remain underprepared for adversarial AI attacks, regulatory audits, and third-party AI failures across global supply chains.

Key Takeaways

  • Operational AI controls are strong, but narrowly focused. Most manufacturers maintain human oversight and real-time monitoring of AI systems, reflecting a deep commitment to uptime, safety, and production reliability.
  • Adversarial AI testing is a major blind spot. Only 7% of manufacturers conduct AI red teaming or adversarial testing, leaving production, quality, and supplier-driven AI systems vulnerable to intentional cyberattacks.
  • Compliance readiness lags AI deployment. Limited use of privacy impact assessments and audit-quality evidence could expose manufacturers to regulatory action as AI governance requirements expand globally.
  • Third-party AI risk is emerging as a systemic supply chain threat. AI failures at suppliers, logistics partners, or technology vendors are increasingly likely to disrupt manufacturing operations without clear accountability or governance.

Current State of AI Governance in Manufacturing

According to Kiteworks, manufacturers’ rapid adoption of artificial intelligence is outpacing their ability to govern AI-driven cyber and supply chain risk. The report, Data Security and Compliance Risk: 2026 Forecast Report, notes that while manufacturers lead in operational AI controls, they are underprepared for adversarial AI attacks and regulatory scrutiny.

The findings are based on a global survey of 225 security, IT, compliance, and risk leaders, including 27 from manufacturing organizations. Manufacturers outperform global peers in production-critical AI controls, with 63% maintaining human oversight and 56% monitoring AI data flows.

Emerging Cyber Blind Spots

Despite these strengths, manufacturers remain exposed to intentional cyber threats. Only 7% conduct AI red teaming or adversarial testing, a gap that significantly expands the manufacturing attack surface. As Tim Freestone, chief strategy officer at Kiteworks, notes, “Manufacturing has built AI governance for reliability, not hostility.”
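To make the idea concrete, adversarial testing probes how easily small, deliberate input changes flip an AI system’s decision. The sketch below is purely illustrative — the defect classifier, threshold, and part readings are hypothetical stand-ins, not anything described in the report — but it shows the basic shape of a red-team robustness probe.

```python
# Illustrative sketch of adversarial robustness probing (hypothetical model).
# A toy quality-control "model": flags a part as defective when a sensor
# reading crosses a threshold.

def defect_classifier(reading, threshold=0.8):
    """Hypothetical quality-control model: True means 'defective'."""
    return reading >= threshold

def minimal_flip_perturbation(reading, step=0.01, max_delta=0.5):
    """Search for the smallest perturbation that flips the model's verdict.

    A red team uses this kind of probe to measure how fragile a decision
    boundary is: a tiny flip distance means small sensor tampering could
    silently change outcomes on the production line.
    """
    baseline = defect_classifier(reading)
    delta = step
    while delta <= max_delta:
        for signed in (delta, -delta):
            if defect_classifier(reading + signed) != baseline:
                return signed  # smallest tested change that flips the verdict
        delta += step
    return None  # robust within the tested range

# Probe a reading just below the hypothetical defect threshold.
flip = minimal_flip_perturbation(0.78)
```

In practice the same probing discipline is applied to real models with far richer inputs, but the governance question it answers is identical: how much manipulation does it take to change a production-critical decision?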

Compliance and Audit Readiness Gaps

The report highlights significant compliance gaps, with only 15% of manufacturing organizations conducting privacy impact assessments. Without strong documentation and audit trails, manufacturers may struggle to demonstrate compliance with emerging AI regulations.

Kiteworks warns that while manufacturers may detect AI-related anomalies through monitoring, weak audit trails will limit their ability to investigate root causes and explain outcomes to regulators.
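One common way to make AI decision logs audit-quality is hash chaining, where each record is cryptographically linked to the one before it so after-the-fact edits are detectable. The sketch below assumes a simple in-memory log with hypothetical field names; it is not a Kiteworks feature, just a minimal illustration of tamper-evident record keeping.

```python
# Minimal sketch of a tamper-evident AI decision log via hash chaining.
# Field names and events are hypothetical examples.
import hashlib
import json

def append_entry(log, event):
    """Append an AI decision record, chained to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log):
    """Recompute every hash; any edited or reordered entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"model": "defect-detector-v2", "input_id": "part-1042",
                   "decision": "reject", "ts": "2026-01-15T08:30:00Z"})
append_entry(log, {"model": "defect-detector-v2", "input_id": "part-1043",
                   "decision": "accept", "ts": "2026-01-15T08:30:05Z"})
```

A log like this lets an investigator demonstrate not only what an AI system decided, but that the record of those decisions was not altered afterward — the kind of evidence regulators increasingly expect.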

Third-Party AI Risk as a Systemic Threat

A central concern is the growing gap between internal AI governance and third-party AI risk. AI systems used by suppliers and logistics partners often lack equivalent governance, increasing the likelihood of production disruptions.

Patrick Spencer, SVP of Americas marketing at Kiteworks, states, “When supplier AI systems fail, the impact shows up on the production line, not in a policy document.”

Five AI Risk Predictions for Manufacturers in 2026

Kiteworks outlines five predictions manufacturers should act on immediately:

  • Adversarial AI attacks will exploit testing gaps. With 93% lacking adversarial testing, AI systems will be targeted through model poisoning and data manipulation.
  • Compliance documentation gaps will drive regulatory exposure. Limited privacy impact assessments will increase enforcement and reputational risk.
  • Monitoring will outpace forensic readiness. Manufacturers will detect incidents but lack the data needed to investigate.
  • OT-AI convergence will outgrow IT-centric governance. Traditional IT governance frameworks will fall short as AI integrates deeper into operational technology.
  • Third-party AI failures will disrupt production. Supplier and partner AI risks will remain under-governed, with minimal oversight.

Closing the Gap Between Operational Excellence and AI Resilience

Kiteworks recommends manufacturers extend existing safety and quality disciplines to AI governance by:

  • Implementing adversarial AI testing programs
  • Strengthening compliance documentation and audit trails
  • Building forensic-ready incident response capabilities
  • Developing AI-specific OT governance models
  • Elevating supply chain AI risk to board-level oversight

In conclusion, manufacturers need to adapt their operational DNA to include adversarial AI risk, regulatory proof, and supply chain accountability. Those who adapt will lead, while those who do not may face significant disruptions.
