Thoropass has released its 2026 State of Audit and Compliance Report, highlighting that AI adoption is now the leading source of compliance risk, with 69% of security leaders stating it is outpacing their controls. The report indicates that while compliance programs have matured, operational inefficiencies persist, particularly in evidence collection during audits.
In 2026, AI-powered RegTech is shifting compliance from reactive checks to predictive systems that flag potential breaches before they occur. The RegTech market is projected to reach USD 82.8 billion by 2032, making compliance a critical competitive advantage for FinTech firms.
On February 13, 2026, Judge Jed S. Rakoff ruled that communications with generative AI platforms are not protected by attorney-client privilege, warning users to treat anything typed into these tools as potentially public. This ruling highlights the risks of using AI for legal analysis, particularly for non-lawyers, and emphasizes the importance of safeguarding privileged information.
A recent study by the Universitat Oberta de Catalunya explores who will set the rules for the future of artificial intelligence, focusing on governance models tied to private digital identities and biometric systems. The research highlights projects like World, co-founded by OpenAI’s Sam Altman, which propose alternative governance frameworks that could undermine democratic legitimacy while promoting narratives of security and inclusion.
Europe’s cautious approach to regulating artificial intelligence may undermine its security and economic growth amid a competitive global landscape. As the U.S. and China advance, Europe risks falling behind unless it balances innovation with necessary protections.
BigID has announced an expansion of its Data Access Governance (DAG) capabilities to cover AI agents, bringing oversight and security to non-human identities operating within enterprise environments. The new features include agent identity discovery, access right-sizing, and real-time activity monitoring, aimed at strengthening data governance and mitigating the insider risks AI agents introduce.
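To make the "access right-sizing" idea concrete: the core of right-sizing is comparing the permissions an agent has been granted against the permissions it actually exercises, then flagging the unused grants for revocation. The following is a minimal illustrative sketch of that comparison; the class, permission labels, and function names are hypothetical and do not reflect BigID's actual API or data model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAccessProfile:
    """Hypothetical record of an AI agent's data-access grants and observed usage."""
    agent_id: str
    granted: set = field(default_factory=set)   # permissions assigned to the agent
    observed: set = field(default_factory=set)  # permissions the agent actually used

def right_size(profile: AgentAccessProfile) -> set:
    """Return grants the agent never exercised: candidates for revocation."""
    return profile.granted - profile.observed

# Example: a support bot granted three permissions but only ever reading the CRM.
profile = AgentAccessProfile(
    agent_id="support-bot-01",
    granted={"crm:read", "crm:write", "hr:read"},
    observed={"crm:read"},
)
print(sorted(right_size(profile)))  # → ['crm:write', 'hr:read']
```

In practice the `observed` set would be populated from the kind of real-time activity monitoring the announcement describes, and revocation would go through a review step rather than being automatic.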
The AI Act introduces a harmonized legal framework for the development and use of AI systems in the EU, focusing on accessibility obligations for information and interfaces. These requirements apply especially to high-risk AI systems and those subject to transparency obligations, ensuring that all users, including those with disabilities, can access critical information.
The White House has introduced a national legislative framework for artificial intelligence, favoring federal regulations over conflicting state laws. This unified approach aims to foster innovation while preparing the U.S. for an AI arms race against China.
Nominated Senator Karen Nyamu is preparing to introduce a bill in the Senate aimed at regulating artificial intelligence (AI) to combat the spread of fake news and protect personal rights. She emphasizes the importance of oversight to prevent the misuse of AI technologies, which can mislead the public and threaten jobs.