Category: News

South Korea’s AI Regulations: Innovation at Risk?

South Korea has enacted the AI Basic Act, a comprehensive legal framework emphasizing safety, transparency, and ethical standards in AI. While the law aims to build public trust, startups have raised concerns about compliance burdens and possible chilling effects on innovation.

Read More »

Institutional Investors Define AI Governance for Competitive Edge

Institutional investors emphasize the need for internal guidelines on responsible AI use, arguing that standardized regulation is still premature. Panelists at the 2025 Investment Innovation Conference discussed building smart guardrails while balancing innovation with fiduciary duty.

Read More »

European AI Roundtable Explores Transparency in the AI Act

On December 11, 2025, CCIA Europe hosted the European AI Roundtable in Brussels, focusing on transparency obligations under Article 50 of the EU AI Act. Discussions emphasized the need for a flexible, practical Code of Practice that avoids overwhelming users while addressing risks of AI-generated content.

Read More »

AI Safety and Ethics: OpenAI’s Commitment to Responsible Development

At BT Davos 2026, OpenAI's Christopher Lehane discusses the importance of ethical AI development, emphasizing the need for global safety standards and regulations that foster innovation while protecting users. He highlights OpenAI's commitment to children's safety through age-appropriate models, as well as the localization of AI systems to meet diverse cultural needs.

Read More »

Transforming AI Governance for Business Success

NetApp and Domino Data Lab discuss the challenges of AI governance, traceability, and cost management as enterprise AI adoption accelerates. They emphasize the importance of moving from pilot programs to production systems while ensuring transparency and optimizing infrastructure for efficiency.

Read More »