AI Governance: Balancing Innovation and Risk Management

In an exclusive interview, Dr. Enzo Tolentino discusses the dual nature of artificial intelligence as both a game-changer and a risk amplifier, emphasizing the importance of addressing risks like privacy challenges and bias. He highlights the significance of frameworks such as NIST and ISO 42001 in navigating AI governance and ensuring responsible deployment.

Unchecked AI: The Hidden Dangers of Internal Deployments

The report from Apollo Research warns that unchecked internal deployment of AI systems by major firms like Google and OpenAI could lead to catastrophic risks, including AI systems operating beyond human control. It highlights the absence of effective governance and the potential for these technologies to concentrate unprecedented power in a small number of companies, threatening democratic processes and societal stability.

Empowering Malaysia’s Future Through AI Governance

Artificial intelligence (AI) is transforming industries worldwide, and Malaysia is positioning itself as a regional hub for AI development through initiatives like the National AI Office. However, to harness AI’s potential, organizations must prioritize data and AI governance to address challenges such as data privacy, reliability, and systemic biases.

Universities at the Crossroads of AI Policy

Artificial intelligence has emerged as a significant geopolitical issue, placing universities at the forefront of navigating complex national AI policies. As these institutions adapt to fragmented regulations, they must balance their academic freedom with the demands of national interests while striving to lead in AI integration and ethical governance.

EU Commission’s New Guidelines Define AI Systems

The European Commission has published guidelines that clarify the definition of AI systems under the AI Act, providing examples and specifying which systems fall outside this definition. While the guidelines are non-binding, they serve as a useful resource for companies assessing their compliance obligations under the AI Act.

Biometrics and Regulation: Understanding the EU’s Legal Landscape

Biometric technologies are rapidly expanding beyond security and law enforcement into areas like customer sentiment analysis and employee monitoring, driven by advances in artificial intelligence. The EU’s regulatory framework, including the GDPR and the AI Act, governs the use of these technologies, imposing strict compliance obligations based on their risk classification.

CII Advocates for Strong AI Accountability in Financial Services

The Chartered Insurance Institute (CII) has called for clear accountability frameworks and a skills strategy for the use of artificial intelligence (AI) in financial services. It emphasizes the importance of training professionals to take responsibility for AI outcomes, and the need for transparency in AI systems to ensure consumer protection.

Regulating AI in APAC MedTech: Current Trends and Future Directions

The regulatory landscape for AI-enabled MedTech in the Asia Pacific region is still developing, with existing frameworks largely designed for conventional medical technologies. While countries such as China, Japan, and Australia are making strides toward comprehensive regulation, the U.S. currently relies on traditional medical device regulations for AI products.
