Date: March 18, 2026

Global Frameworks for Ethical AI Governance

As the world navigates the governance of artificial intelligence, New America’s Planetary Politics initiative is collaborating with various stakeholders to ensure equitable benefits and mitigate risks associated with AI development. The background paper submitted to the UN High-Level Advisory Body emphasizes the importance of including developing countries in global AI governance and suggests the establishment of a Gavi-like body for AI data and talent.


AI Guardrails Act Establishes Crucial Limits on Military AI Use

Senator Elissa Slotkin has introduced the AI Guardrails Act, which aims to establish clear limitations on the Department of Defense’s use of artificial intelligence, particularly concerning autonomous weapons, domestic surveillance, and nuclear weapon deployment. The legislation emphasizes the necessity of human involvement in decisions related to lethal force and the protection of individual privacy rights.


Colorado’s Innovative Shield for AI in Legal Services

Colorado lawyers are advocating for a new regulation that would protect AI developers from complaints of unauthorized practice of law, allowing them to provide essential legal assistance to the public. The state’s non-prosecution policy aims to foster innovation in legal technology over the next three years while ensuring that developers remain supervised by lawyers.


Swift Action Required for AI Regulatory Simplification in the EU

The European Parliament’s Committees on Civil Liberties and Internal Market have adopted their negotiating mandate for the AI Omnibus, aiming to simplify the AI Act and extend compliance deadlines. CCIA Europe emphasizes the need for a swift agreement to ensure a pragmatic approach that prioritizes innovation over regulatory complexity.


Best Practices for AI Compliance in the Workplace

In this episode of California Employment News, experts discuss the essential steps employers should take when implementing AI in their workplaces. Key topics include creating internal AI policies, safeguarding employee data, and conducting meaningful bias audits to ensure compliance and reduce risk.


Court Ruling Highlights AI Access Risks to User Accounts

A court in California ruled that AI agents accessing user accounts without a platform’s authorization may violate state and federal laws, even when the users themselves granted the agent permission. This decision raises significant questions for both AI developers and platforms about the limits of user consent and the reach of terms of service.


AI Regulation Clash: Schmidt vs. Sweeney on Safety and Accountability

In a heated debate, former Google CEO Eric Schmidt contended that AI systems can exhibit unexpected behaviors that complicate the implementation of safety regulations. In contrast, former FTC CTO Latanya Sweeney expressed skepticism about the tech industry’s willingness to comply with regulations, citing past instances of non-compliance.


AI Legal Liability: The Implications of ChatGPT in Litigation

On March 4, 2026, Nippon Life Insurance Company filed a lawsuit against OpenAI, alleging that the use of ChatGPT by a former employee led to tortious interference with a contract and unauthorized practice of law. The case raises critical questions about AI’s role in legal advice and its potential liabilities.
