Category: AI

EU AI Act: Milestones and Compliance Challenges Ahead

The EU AI Act sets a precedent as the world’s first comprehensive regulation of artificial intelligence, with phased implementation and complex compliance requirements. Key obligations centre on AI literacy and the prohibition of harmful practices, while the forthcoming Code of Practice for general-purpose AI models has been delayed.

Read More »

AI Governance: Addressing Emerging ESG Risks for Investors

A Canadian trade union has proposed that Thomson Reuters enhance its artificial intelligence governance framework to align with investors’ expectations regarding human rights and privacy. The proposal highlights the potential risks associated with AI technologies, including misuse and data privacy issues, urging shareholders to consider the increasing legal and reputational threats the company may face.

Read More »

Transforming Healthcare AI: Ensuring Governance and Compliance

As artificial intelligence (AI) revolutionizes healthcare, organizations must navigate the accompanying risks and ethical dilemmas through robust governance, risk management, and compliance (GRC) frameworks. Newton3 specializes in guiding healthcare leaders to ensure that AI deployments are both effective and accountable, mitigating potential harm to patients and aligning with regulatory standards.

Read More »

Revolutionizing Audit: ISACA’s New AI Certification

ISACA has launched the Advanced in AI Audit (AAIA) certification, aimed at equipping experienced auditors to navigate the complexities introduced by AI in the audit landscape. This certification focuses on key areas such as AI governance, risk management, and auditing tools, building on ISACA’s established standards in IT audit.

Read More »

AI Governance and Compliance in Higher Education

Artificial intelligence (AI) is transforming higher education by enhancing processes such as admissions, research, and student support, while also raising concerns about data privacy and algorithmic bias. To ensure responsible AI use, institutions must develop a comprehensive AI governance, risk, and compliance (GRC) framework that aligns with legal standards and academic values.

Read More »

AI Compliance Risks: Safeguarding Against Emerging Threats

The rapid growth of artificial intelligence (AI), particularly generative AI, presents both opportunities and significant compliance risks for businesses in evolving legal and regulatory environments. Organizations must carefully assess their use of AI to identify vulnerabilities and ensure adherence to applicable regulatory frameworks.

Read More »

Building Effective AI Literacy Programs for Compliance and Success

The EU AI Act mandates that providers and deployers of AI systems ensure a sufficient level of AI literacy among their staff and others involved in AI operations. This obligation applies to anyone connected to AI within the EU and emphasizes the need for informed decision-making regarding the risks and benefits of AI technologies.

Read More »

Ethics at the Crossroads of AI Innovation

As artificial intelligence (AI) increasingly influences critical decision-making across various sectors, the need for robust ethical governance frameworks becomes essential. Organizations must prioritize ethical considerations and implement effective AI governance to navigate the complexities and potential biases associated with AI technologies.

Read More »

Croatia’s Path to Responsible AI Legislation

EDRi affiliate Politiscope hosted an event in Croatia to discuss the human rights impacts of artificial intelligence (AI) and to influence national policy ahead of the implementation of the EU AI Act. The event highlighted the risks associated with AI, including privacy threats and algorithmic discrimination, while advocating for stronger safeguards for people’s rights.

Read More »