Day: June 18, 2025

Tokio Marine Unveils Global AI Governance Framework

Tokio Marine has announced its Basic Policy on Artificial Intelligence (AI) Governance, effective across its global group companies from April. The policy emphasizes responsible use of AI, ensuring that AI-driven decisions remain subject to human judgment while addressing concerns about misinformation, discrimination, and violations of personal rights.

Establishing a Strong AI Governance Framework in Education

As AI initiatives multiply across districts, the absence of a coherent governance framework risks producing scattered, inconsistent results. Adopting an AI governance manifesto is essential for school districts to ensure that AI serves districtwide priorities and improves educational outcomes.

China’s New AI Safety Institute: A Shift in Governance and Global Engagement

The China AI Safety and Development Association (CnAISDA) has been established to represent China’s interests in international AI discussions, particularly concerning the risks associated with frontier AI technologies. This development reflects China’s increasing recognition of the need for global cooperation on AI safety while maintaining its focus on domestic economic growth and innovation.

AI Standards: Balancing Innovation and Accountability

The article discusses the recent shift in the U.S. government’s approach to artificial intelligence regulation, moving from a focus on safety and multi-stakeholder collaboration to prioritizing national security and innovation. This change raises concerns about the potential neglect of critical issues such as bias and discrimination in AI, as the tech industry gains more influence over policy decisions.

AI’s Impact on Democratic Governance: A New Era for Pakistan

The Senate of Pakistan recently hosted a high-level session on the role of Artificial Intelligence (AI) in democratic governance, bringing together lawmakers, diplomats, and AI experts to explore its transformative potential. The event emphasized the importance of responsible AI governance and the need for lawmakers to enhance their AI literacy to ensure ethical and effective integration into legislative processes.

New York’s RAISE Act: Pioneering AI Safety Standards

The New York State Senate has passed the Responsible AI Safety and Education Act (RAISE Act), which requires large AI developers to publish safety protocols and disclose safety incidents. This legislation aims to promote innovation while ensuring that safeguards are in place to protect the public from risks associated with advanced AI systems.

New Jersey Moves to Ban AI in Mental Health Therapy

New Jersey legislators have advanced a bill that would prohibit artificial intelligence from being used or presented as a licensed mental health professional, citing the risks of AI-delivered therapy. The measure aims to protect consumers and to address the growing reliance on AI chatbots for mental health support amid a shortage of mental health workers.

EU AI Act: Preparing for Major Compliance Changes Ahead

The European Union has initiated a new era of AI regulation with the Artificial Intelligence Act, which went into effect on August 1, 2024. This landmark legislation establishes a comprehensive legal framework for AI, introducing a phased approach to compliance and imposing obligations on developers, providers, and deployers of AI systems.
