Category: Regulatory Compliance

Securing AI in Manufacturing: Mitigating Risks for Innovation

The integration of AI in manufacturing offers significant benefits, such as increased innovation and productivity, but also presents risks related to security and compliance. Organizations must adopt proactive governance strategies to mitigate these risks and ensure that AI technologies work effectively within their operations.

AI’s Rise: Addressing Governance Gaps and Insider Threats

This year’s RSAC Conference highlighted the pervasive influence of artificial intelligence (AI) in cybersecurity discussions, with nearly 90% of organizations adopting generative AI for security purposes. However, the conference also raised concerns about the growing risks associated with AI, including governance gaps and insider threats within organizations.

Ensuring AI Compliance Amidst Data Proliferation

The podcast examines the compliance risks that arise as data proliferates during artificial intelligence (AI) processing, emphasizing the challenges of managing ever-growing datasets. Mathieu Gorge, CEO of VigiTrust, highlights the importance of understanding data flows and maintaining compliance as organizations increasingly adopt AI technologies.

Kickstarting Compliance with the EU AI Act: Four Essential Steps

The European Union’s Artificial Intelligence Act (AI Act) is the world’s first comprehensive regulation on AI, impacting not only European entities but also U.S.-based organizations that develop or use AI technologies. Companies must prepare for compliance by assessing their AI systems against the Act’s risk categories and implementing necessary governance measures.

Blueprint for Effective AI and Social Media Regulation

The Take It Down Act demonstrates that targeted regulation of AI can be achieved without stifling innovation, successfully addressing online harms to children. With bipartisan support and backing from major tech companies, the law criminalizes the publication of nonconsensual intimate images online and requires platforms to remove such content within 48 hours of a valid request.

Unpacking the EU’s AI Act: Challenges and Compliance in Healthcare

During the AI Health Law & Policy Summit, panelists discussed the complexities of the EU’s AI Act and the challenges of global regulatory compliance for AI-enabled medical products. Experts emphasized the importance of proactive engagement with regulatory bodies and the need for companies to adapt their governance frameworks to meet evolving compliance requirements.

EU AI Act: Milestones and Compliance Challenges Ahead

The EU AI Act is setting a precedent as the world’s first comprehensive regulation for artificial intelligence, with phased implementation and complex compliance requirements. Early obligations focus on AI literacy and the prohibition of harmful practices, while the Code of Practice for general-purpose AI models has been delayed.

Transforming Healthcare AI: Ensuring Governance and Compliance

As artificial intelligence (AI) revolutionizes healthcare, organizations must navigate the accompanying risks and ethical dilemmas through robust governance, risk management, and compliance (GRC) frameworks. Newton3 specializes in guiding healthcare leaders to ensure that AI deployments are both effective and accountable, mitigating potential harm to patients and aligning with regulatory standards.

Revolutionizing Audit: ISACA’s New AI Certification

ISACA has launched the Advanced in AI Audit (AAIA) certification, aimed at equipping experienced auditors to navigate the complexities introduced by AI in the audit landscape. This certification focuses on key areas such as AI governance, risk management, and auditing tools, building on ISACA’s established standards in IT audit.