Category: Regulatory Compliance

Securing AI Copilots: Mitigating Risks and Enhancing Compliance

AI copilots, such as Microsoft's, pose significant security, privacy, and compliance risks when left unsecured, potentially leading to data breaches and regulatory violations. Real-world incidents have already highlighted these dangers, underscoring the urgent need for organizations to adopt a multi-layered approach to AI security and governance.

AI Regulation in Financial Services: Current Trends and Future Challenges

Artificial intelligence (AI) is increasingly integrated into financial services, transforming operations from consumer interactions to critical functions such as underwriting and fraud detection. As AI adoption accelerates, the regulatory landscape remains uncertain, with federal and state agencies still working to establish oversight and compliance guidelines.

Rethinking Governance in the Age of AI Agents

AI agents are increasingly integral to enterprise operations, handling tasks such as customer support and regulatory documentation with a degree of autonomy that demands a rethinking of governance, risk, and compliance (GRC) frameworks. As these agents operate in sensitive environments, traditional oversight methods must evolve to enable real-time governance and mitigate emerging risks.

Balancing AI Innovation with Cybersecurity Risks

Financial CISOs are navigating the challenge of adopting AI while defending against AI-powered threats, such as sophisticated phishing and deepfake fraud. To address this duality, a robust strategic framework is essential, including the establishment of dedicated AI governance and prioritization of Explainable AI (XAI).

EU AI Act: Preparing for Major Compliance Changes Ahead

The European Union has initiated a new era of AI regulation with the Artificial Intelligence Act, which went into effect on August 1, 2024. This landmark legislation establishes a comprehensive legal framework for AI, introducing a phased approach to compliance and imposing obligations on developers, providers, and deployers of AI systems.

New York’s Bold Move to Regulate AI Giants’ Safety Protocols

New York is poised to introduce the Responsible AI Safety and Education (RAISE) Act, which would mandate that major AI developers publish safety protocols and conduct risk assessments before releasing advanced AI models. The bill, which has passed the state Senate, aims to minimize risks associated with powerful AI systems while imposing civil penalties for violations.

Challenges of Implementing Regulated AI in Drug Development

The FDA's recent rollout of Elsa, its internal AI tool, aims to ease the burden of regulatory document review, but experts warn that building effective regulated AI is highly complex. Erez Kaminski, CEO of Ketryx, suggests that a neuro-symbolic approach, combining neural networks with rule-based AI, may be essential for managing the intricate demands of regulated environments.

New York’s Groundbreaking AI Safety Legislation

New York has become the first state to pass comprehensive legislation regulating AI safety through the RAISE Act, which targets powerful AI models from major companies. This groundbreaking bill mandates transparency and safety assessments from developers, aiming to balance innovation with necessary safeguards.
