AI and Regulation Transform Application Security Strategies

Artificial intelligence has overtaken all other forces shaping application security, according to a major new industry study that shows organizations racing to secure AI-generated code while responding to growing regulatory pressure.

Emergence of AI in Application Security

The 16th edition of the Building Security In Maturity Model (BSIMM), released by Black Duck, analyzed real-world software security practices across 111 organizations worldwide, covering more than 91,000 applications developed by 223,000 developers. This comprehensive study reveals that, for the first time in BSIMM’s history, AI has emerged as the single most influential factor reshaping security priorities.

Organizations are now grappling with a dual challenge: securing AI-powered development tools such as large language model (LLM) coding assistants while defending against increasingly sophisticated AI-enabled attacks.

New Security Risks from AI-Generated Code

The report highlights a growing concern that while AI-generated code may appear polished and production-ready, it can conceal serious security flaws. Consequently, organizations are introducing new controls specifically designed to manage AI-related risk.

BSIMM16 found a 12% increase in organizations using risk-ranking methods to determine where LLM-generated code can safely be deployed, alongside a 10% rise in teams applying custom security rules to automated code review tools to detect vulnerabilities unique to AI-generated code. Additionally, there was a 10% increase in the use of attack intelligence to track emerging AI-related threats.

Rather than relying solely on trust in AI tools, security teams are increasingly embedding automated checks and governance mechanisms into the software development lifecycle to address the limitations of AI-assisted coding.
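The custom review rules the report describes can be imagined as pattern checks layered onto an automated code-review step. The sketch below is a hypothetical illustration of that idea, not any specific vendor's ruleset: the rule names and regex patterns are invented examples targeting flaws commonly reported in generated code (hardcoded secrets, dynamic evaluation, disabled TLS verification).

```python
import re

# Hypothetical rule set: names and patterns are illustrative only,
# chosen to mimic flaws often flagged in AI-generated code.
RULES = {
    "hardcoded-secret": re.compile(
        r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]"
    ),
    "dynamic-eval": re.compile(r"\beval\s*\("),
    "tls-verify-disabled": re.compile(r"verify\s*=\s*False"),
}

def review(source: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) findings for a code snippet."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'password = "hunter2"\nresp = requests.get(url, verify=False)\n'
print(review(snippet))  # flags both lines
```

In practice such checks would run as a gate in the development pipeline, so generated code is screened before review rather than trusted on appearance.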

The Role of Regulation in Driving Security Investments

Alongside AI, government regulation is a powerful driver of change. New mandates, including the EU Cyber Resilience Act and U.S. federal software security requirements, are forcing organizations to strengthen software supply chain visibility and improve their ability to demonstrate compliance.

The study reports a near-30% increase in organizations producing software bills of materials (SBOMs) for deployed software, reflecting growing demands for transparency into software components. Automated verification of infrastructure security increased by more than 50%, while processes for responsible vulnerability disclosure grew by over 40%, indicating a shift toward more structured, auditable security operations.
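For readers unfamiliar with SBOMs, the artifact itself is simply a structured inventory of a product's components. The sketch below builds a minimal document in the CycloneDX JSON style; the component name and version are made up for illustration, and real SBOMs are normally generated by build tooling rather than written by hand.

```python
import json

# Minimal SBOM in the CycloneDX JSON style, listing one third-party
# component. "example-http-client" is a hypothetical dependency.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-http-client",
            "version": "2.4.1",
            "purl": "pkg:pypi/example-http-client@2.4.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

An auditor or downstream customer can diff such an inventory against known-vulnerable component lists, which is what makes SBOMs useful for the compliance demands the study describes.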

These changes suggest that regulatory compliance is no longer treated as a checkbox exercise but as a catalyst for long-term improvements in application security maturity.

Focus on Supply Chain Security

BSIMM16 shows organizations expanding their focus beyond internally developed code to address risk across the wider software supply chain. Increased use of third-party components, open source software, and AI-assisted development has heightened the need for standardization and visibility.

The report observed a more than 40% rise in organizations establishing standardized technology stacks, as well as continued growth in SBOM adoption, signaling that supply chain security is becoming a core element of application security programs rather than a specialized concern.

Adapting Security Training to Modern Development

Traditional security training approaches are also evolving. Lengthy classroom-based courses are increasingly being replaced by just-in-time, role-specific guidance delivered directly within developer workflows.

BSIMM16 recorded a 29% increase in organizations providing security expertise via open collaboration channels, allowing developers to access immediate support when security questions arise. This shift reflects the realities of agile development environments, where short, targeted guidance is often more effective than formal training sessions.

Indications of Maturity in Application Security

Notably, BSIMM16 introduces no changes to the framework structure for the first time since the model was created. While many individual security activities showed significant growth, none shifted sufficiently to warrant reclassification.

This stability signals that application security as a discipline has reached a level of structural maturity, even as AI, regulation, and supply chain complexity continue to reshape how organizations implement security in practice.

As organizations navigate an increasingly AI-driven development landscape, BSIMM16 provides a snapshot of how leading security teams are adapting, offering a benchmark for others seeking to balance innovation, compliance, and risk management in modern software environments.
