Strengthening AI Security: Frameworks for Trustworthy Machine Learning

As artificial intelligence transforms industries and enhances human capabilities, the need for strong AI security frameworks has become paramount.

Recent developments in AI security standards aim to mitigate risks associated with machine learning systems while fostering innovation and building public trust.

Organizations worldwide are now navigating a complex landscape of frameworks designed to ensure AI systems are secure, ethical, and trustworthy.

The Growing Ecosystem of AI Security Standards

The National Institute of Standards and Technology (NIST) has established itself as a leader in this space with its AI Risk Management Framework (AI RMF), released in January 2023.

The framework provides organizations with a systematic approach to identifying, assessing, and mitigating risks throughout an AI system’s lifecycle.

“At its core, the NIST AI RMF is built on four functions: Govern, Map, Measure, and Manage. These functions are not discrete steps but interconnected processes designed to be implemented iteratively throughout an AI system’s lifecycle,” explains Palo Alto Networks in its framework analysis.
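
The RMF deliberately stops short of prescribing tooling, but its four functions map naturally onto an internal risk register. The Python sketch below is purely illustrative, assuming a simple in-house data model; none of the field names, enum values as identifiers, or scoring logic come from NIST.

```python
# Illustrative risk register keyed to the NIST AI RMF functions.
# NIST does not prescribe a data model; all names here are assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"


@dataclass
class RiskEntry:
    description: str       # e.g., "training data may encode demographic bias"
    owner: RmfFunction     # RMF function currently responsible for the risk
    likelihood: float      # 0.0-1.0, estimated by the review board
    impact: float          # 0.0-1.0, estimated by the review board
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> float:
        """Simple likelihood-times-impact prioritization score."""
        return self.likelihood * self.impact


register = [
    RiskEntry("prompt injection via user-supplied documents",
              RmfFunction.MEASURE, likelihood=0.6, impact=0.8,
              mitigations=["input screening", "output validation"]),
    RiskEntry("no documented accountability for model updates",
              RmfFunction.GOVERN, likelihood=0.4, impact=0.7),
]

# Revisit the register repeatedly, highest-scoring risks first,
# mirroring the RMF's iterative, lifecycle-long intent.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.owner.value}] {risk.score:.2f} {risk.description}")
```

The point is less the data structure than the loop around it: risks are re-scored and re-assigned as the system moves through its lifecycle, rather than assessed once at deployment.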

Simultaneously, the International Organization for Standardization (ISO) has developed ISO/IEC 42001:2023, establishing a comprehensive framework for managing artificial intelligence systems within organizations.

The standard emphasizes “the importance of ethical, secure, and transparent AI development and deployment” and provides detailed guidance on AI management, risk assessment, and addressing data protection concerns.

Regulatory Landscape and Compliance Requirements

The European Union has taken a significant step with its Artificial Intelligence Act, which entered into force on August 1, 2024, though most obligations will not apply until August 2, 2026.

The Act establishes cybersecurity requirements for high-risk AI systems, with fines for the most serious violations reaching up to €35 million or 7% of global annual turnover.

“The obligation to comply with these requirements falls on companies that develop AI systems and those that market or implement them,” notes an analysis of the Act.

For organizations looking to demonstrate compliance with these emerging regulations, Microsoft Purview now offers AI compliance assessment templates covering the EU AI Act, NIST AI RMF, and ISO/IEC 42001, helping organizations “assess and strengthen compliance with AI regulations and standards.”

Industry-Led Initiatives for Securing AI Systems

Beyond government and regulatory bodies, industry organizations are developing specialized frameworks.

The Cloud Security Alliance (CSA) will release its AI Controls Matrix (AICM) in June 2025. This matrix is designed to help organizations “securely develop, implement, and use AI technologies.”

The first revision will contain 242 controls across 18 security domains, covering everything from model security to governance and compliance.

The Open Worldwide Application Security Project (OWASP) has created the Top 10 for LLM Applications, addressing critical vulnerabilities in large language models.

This list, developed by nearly 500 experts from AI companies, security firms, cloud providers, and academia, identifies key security risks including prompt injection, insecure output handling, training data poisoning, and model denial of service.
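
Two of those risks, prompt injection and insecure output handling, share a common defensive principle: treat what flows both into and out of a model as untrusted. The Python sketch below is a minimal illustration of that principle, not an OWASP-endorsed implementation; the regular expression and function names are assumptions made for the example.

```python
# Minimal input/output guards for an LLM application.
# Patterns and function names are illustrative assumptions,
# not an official OWASP mitigation.
import html
import re

# Naive screen for obvious prompt-injection phrasing; real deployments
# layer this with model-side and policy-side controls.
SUSPICIOUS = re.compile(
    r"(?i)\b(ignore (all |any )?previous instructions"
    r"|reveal your system prompt)\b"
)


def screen_user_input(text: str) -> str:
    """Reject input with blatant injection phrasing before it reaches the model."""
    if SUSPICIOUS.search(text):
        raise ValueError("possible prompt injection detected")
    return text


def render_model_output(raw: str) -> str:
    """Escape model output before embedding it in HTML, so a manipulated
    response cannot inject markup or script into the calling application."""
    return html.escape(raw)
```

Keyword screening alone will not stop a determined attacker; it is one layer alongside the privilege restrictions and human-in-the-loop review that the OWASP guidance also recommends.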

Implementing these frameworks requires organizations to establish robust governance structures and security controls.

IBM recommends a comprehensive approach to AI governance, including “oversight mechanisms that address risks such as bias, privacy infringement and misuse while fostering innovation and building trust.”

For practical security implementation, the Adversarial Robustness Toolbox (ART) provides tools that “enable developers and researchers to evaluate, defend, and verify Machine Learning models and applications against adversarial threats.”

The toolkit supports all popular machine learning frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, and offers 39 attack and 29 defense modules.
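
A minimal evaluation with ART might look like the following sketch, which trains a scikit-learn classifier and measures how much accuracy a Fast Gradient Method evasion attack removes. The dataset, model choice, and eps value are illustrative only.

```python
# A minimal ART robustness check: dataset, model, and eps are
# illustrative choices, not recommendations.
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the fitted model so ART can compute loss gradients against it.
classifier = SklearnClassifier(model=model,
                               clip_values=(float(X.min()), float(X.max())))

# Fast Gradient Method: one of ART's evasion attacks.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_test)

print(f"clean accuracy:       {model.score(X_test, y_test):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y_test):.3f}")
```

Comparing the two accuracies gives a concrete, repeatable robustness metric, and ART's defense modules, such as adversarial training and input preprocessing, can be evaluated against the same baseline.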

Looking Forward: Evolving Standards for Evolving Technology

As AI technologies continue to advance, security frameworks must evolve accordingly.

The CSA acknowledges this challenge, noting that “keeping pace with the frequent changes in the AI industry is no easy feat” and that its AI Controls Matrix “will definitely have to undergo periodic revisions to stay up-to-date.”

The Cybersecurity and Infrastructure Security Agency (CISA) recently released guidelines aligned with the NIST AI RMF to combat AI-driven cyber threats.

These guidelines follow a “secure by design” philosophy and emphasize the need for organizations to “create a detailed plan for cybersecurity risk management, establish transparency in AI system use, and integrate AI threats, incidents, and failures into information-sharing mechanisms.”

As organizations navigate this complex landscape, one thing is clear: effective AI security requires a multidisciplinary approach involving stakeholders from technology, law, ethics, and business.

As AI systems become more sophisticated and integrated into critical aspects of society, these frameworks will play a crucial role in shaping the future of machine learning, ensuring it remains both innovative and trustworthy.
