Proofpoint Boosts AI Security with Acuvity Acquisition

Proofpoint Strengthens Enterprise AI Security Platform with Acuvity Acquisition

Proofpoint has made a strategic move by acquiring Acuvity, a specialist in AI security and governance. This acquisition is aimed at addressing the increasing risks associated with the enterprise adoption of generative AI and autonomous agents.

Enhancing Security Measures

The integration of Acuvity’s capabilities adds AI-native visibility, governance, and runtime protection to Proofpoint’s platform, which is focused on securing what the company refers to as the “agentic workspace”. This collaboration allows organizations to confidently adopt AI tools and agents while ensuring the necessary governance, visibility, and control to manage potential risks.

Understanding the Risks

The rise of generative AI tools brings significant productivity benefits; however, they also introduce new risks such as:

  • Exposure of sensitive data
  • Intellectual property loss
  • Regulatory breaches
  • Prompt-injection attacks
  • Unauthorized “shadow AI” deployments

Ryan Kalember, Chief Strategy Officer at Proofpoint, emphasized the importance of this acquisition in expanding the company’s human- and agent-centric security platform to offer comprehensive visibility, governance, and control across the agentic workspace.

Acuvity’s Innovative Solutions

Acuvity’s platform provides crucial visibility into how AI is utilized across various endpoints, browsers, and emerging AI infrastructures, including:

  • Model integration frameworks
  • Locally deployed AI tools

Additionally, it introduces detection models that help organizations understand the context and intent behind AI interactions, allowing for effective monitoring and governance of user interactions with AI systems and enterprise data.

A Unified Approach to Security

Proofpoint asserts that this acquisition will allow them to provide unified protection that encompasses collaboration security, data security, and AI governance within a single platform. They argue that securing AI requires extending beyond traditional infrastructure protection to consider how AI systems access, process, and act upon sensitive information.

The Need for a New Security Model

Acuvity co-founder and CEO Satyam Sinha pointed out that the shift towards AI-driven workflows necessitates a fundamentally new security model. He stated, “AI is fundamentally reshaping how work gets done, and enterprises are overwhelmed by the pace of AI adoption and the complexity of securing it.” This highlights the pressing need for a security approach that governs how AI thinks, acts, and learns in real time.

Conclusion

As AI becomes increasingly integrated into daily enterprise operations, Proofpoint aims to leverage Acuvity’s technology to enhance its capabilities in helping organizations adopt AI securely while maintaining compliance and protecting sensitive information.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...