June 2, 2025

Creators Demand Rights in the Age of AI

ABBA’s Björn Ulvaeus and other EU creators are advocating for creators’ rights in the generative AI market, highlighting concerns over potential loss of ownership of their works under the EU’s AI Act. They emphasize the need for legislation that protects creators’ interests while fostering innovation in the tech industry.

Japan’s AI Governance: Embracing Innovation Through Light Regulation

Japan’s AI governance strategy for 2025 adopts a ‘light touch’ regulatory approach, favoring existing sector-specific laws and voluntary risk mitigation by businesses over strict regulations. This shift aims to position Japan as the most AI-friendly country in the world, reflecting evolving global attitudes towards AI regulation amidst a heated international AI race.

Empowering Hong Kong Firms to Prioritize AI Safety

As artificial intelligence (AI) continues to evolve, organizations must prioritize safe practices to mitigate security risks, including threats to personal data privacy. A recent compliance survey found that while many companies already use AI, only a portion have established policies for data protection and governance.

Standardizing AI Risk Management: A Path Forward

As AI reshapes Singapore and the world, organizations must confront the array of risk management challenges this transformative technology brings. A standardized approach grounded in global consensus is essential to guide them in balancing innovation with effective risk management.

Maximizing AI Value While Minimizing Risk

Artificial intelligence (AI) is driving a society-wide transformation, raising essential questions about how to maximize its benefits while minimizing its risks. Key elements for AI success include robust infrastructure, an inclusive ecosystem, and effective governance to ensure that AI serves the broader interests of society.

EU AI Act: Milestones and Compliance Challenges Ahead

The EU AI Act is setting a precedent as the world’s first comprehensive regulation of artificial intelligence, with phased implementation and complex compliance requirements. Key obligations focus on AI literacy and the prohibition of harmful practices, while publication of the Code of Practice for general-purpose AI models has been delayed.

AI Governance: Addressing Emerging ESG Risks for Investors

A Canadian trade union has proposed that Thomson Reuters enhance its artificial intelligence governance framework to align with investors’ expectations regarding human rights and privacy. The proposal highlights the potential risks associated with AI technologies, including misuse and data privacy issues, urging shareholders to consider the increasing legal and reputational threats the company may face.

Transforming Healthcare AI: Ensuring Governance and Compliance

As artificial intelligence (AI) revolutionizes healthcare, organizations must navigate the accompanying risks and ethical dilemmas through robust governance, risk management, and compliance (GRC) frameworks. Newton3 specializes in guiding healthcare leaders to ensure that AI deployments are both effective and accountable, mitigating potential harm to patients and aligning with regulatory standards.
