Category: Data Protection

EU AI Act: Key Compliance Dates and Implications for Medtech

The EU AI Act has come into effect, imposing compliance requirements for AI systems, especially high-risk ones, with penalties applying from August 2, 2025. Companies must prepare for full implementation by August 2, 2027, and engage proactively with regulators to navigate the new landscape.

Read More »

AI Surveillance: Ensuring Safety Without Sacrificing Privacy

AI-driven surveillance enhances safety through advanced technologies like facial recognition and behavior analysis, but it poses significant risks to privacy, civil liberties, and social equity. As regulatory frameworks evolve unevenly globally, striking a balance between safety and individual rights remains essential for democratic societies.

Read More »

Strengthening Data Protection and AI Governance in Singapore

Singapore is proactively addressing the challenges posed by data use in the age of artificial intelligence, emphasizing the need for robust data protection measures and the importance of adapting laws and social norms. Minister for Communications and Information Josephine Teo highlighted the role of data in the AI development lifecycle, while also stressing the necessity for independent testing to ensure generative AI applications function reliably.

Read More »

Texas Takes the Lead in Consumer-Focused AI Regulation

Texas has passed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which aims to protect consumers while establishing regulations for businesses using artificial intelligence. This pioneering legislation emphasizes human-centeredness and transparency, requiring companies to disclose AI use and prohibiting harmful practices.

Read More »

Data Governance Essentials in the EU AI Act

The EU AI Act establishes a framework to regulate AI, focusing on “high-risk” systems and emphasizing the importance of data governance to prevent bias and discrimination. Article 10 sets strict standards for the datasets used in high-risk AI systems, ensuring they are suitable and reliable for their intended purposes.

Read More »

Privacy Groups Urge Against Delays in AI Regulation

Over 50 organizations, including Access Now and the European Consumer Organisation, warned the EU Technology Commissioner against delays or backtracking on the AI Act, emphasizing that such actions would undermine accountability mechanisms. They highlighted the importance of maintaining legal protections and not allowing the simplification agenda to drive deregulation.

Read More »

Clarifying GDPR Compliance for AI Training

The French CNIL has clarified that training AI models on personal data from public sources can be lawful under GDPR’s legitimate interest basis, provided certain conditions are met. However, this guidance does not resolve other regulatory issues such as copyright and database rights that continue to impact AI training practices.

Read More »

Understanding the Impacts of the EU AI Act on Privacy and Business

The EU AI Act, finalized in late 2023, establishes comprehensive regulations governing the use of artificial intelligence by companies operating in Europe, including those based in the U.S. It aims to ensure that AI systems are developed and used safely and ethically, with an emphasis on transparency, documentation, and human oversight.

Read More »