Category: AI

AI Governance in East Asia: Strategies from South Korea, Japan, and Taiwan

As AI becomes a defining force in global innovation, South Korea, Japan, and Taiwan are each establishing distinct regulatory frameworks to oversee its use, all aiming for innovation-friendly regimes. South Korea’s AI Basic Act emphasizes a risk-based approach, while Japan’s AI Promotion Act focuses on stimulating innovation through a lighter regulatory touch.

Ensuring Ethical Compliance in AI-Driven Insurance

As insurance companies increasingly integrate AI into their processes, they face regulatory scrutiny and ethical challenges that demand transparency and fairness. New regulations aim to minimize the risk of unfair discrimination while ensuring accountability in the use of AI systems.

False Confidence in the EU AI Act: Understanding the Epistemic Gaps

The European Commission’s final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice has sparked debate about its implications for AI regulation, revealing an epistemic gap in how “general-purpose AI” is defined. The EU AI Act’s rigid legal constructs may hinder adaptive governance in a rapidly evolving technological landscape, underscoring the need for anticipatory frameworks that embrace uncertainty and flexibility.

Transforming AI Governance: The EU Act’s Framework Against Super AI Risks

The EU AI Act establishes a risk-based framework that categorizes AI systems by their potential for harm, banning certain uses outright and imposing strict obligations on high-risk systems to strengthen human oversight and cybersecurity. This legislative approach aims to prevent existential threats from super AI while promoting responsible innovation and safeguarding human rights.

EU AI Act: Key Changes and Future Implications

The EU AI Act reached a significant milestone on August 2, 2025, when binding obligations for general-purpose AI models took effect. Providers must now meet specific requirements to place their models on the EU market, including documentation, copyright policies, and risk evaluations, with enforcement set to begin in August 2026.

AI Copyright Dilemma in the EU

The European Union’s implementation of the Artificial Intelligence Act introduces guidelines intended to balance AI growth with copyright compliance, but the new rules create significant challenges for data access. The complexities of copyright law may hinder the competitiveness of EU AI models in a global market increasingly dominated by less restrictive regimes.

EU AI Act: Key Compliance Dates and Implications for Medtech

The EU AI Act has come into effect, imposing compliance requirements on AI systems, especially high-risk ones, with penalties applying from August 2, 2025. Medtech companies must prepare for full implementation by August 2, 2027, and engage proactively with regulators to navigate the new landscape.

Building Secure and Ethical AI in an Evolving Threat Landscape

Sam Peters, Chief Product Officer at ISMS.online, discusses building secure and ethical AI models in a rapidly evolving threat landscape, arguing that compliance must be the foundation of any AI initiative. He outlines the risks AI introduces and stresses that organizations should adopt internationally recognized frameworks such as ISO/IEC 42001 and ISO/IEC 27001 to manage those risks and protect their AI systems.
