Category: AI Accountability

Texas Enacts Groundbreaking AI Governance Law

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. The enacted version departs significantly from the original draft, which proposed strict regulations on “high-risk” AI systems; instead, it restricts certain AI practices and establishes a regulatory sandbox program for the development and testing of AI technologies.

Workday Sets New Standards in Responsible AI Governance

Workday has recently received dual third-party accreditations for its AI Governance Program, highlighting its commitment to responsible and transparent AI. Dr. Kelly Trindle, Chief Responsible AI Officer, emphasized that this recognition affirms Workday’s leadership in the critical area of AI governance.

AI-Driven Compliance: Balancing Automation and Accountability

On AI Appreciation Day, it’s crucial to recognize the transformative role of artificial intelligence in enhancing compliance and verification processes across various sectors in India. As businesses increasingly rely on AI for real-time compliance, transparency and ethical deployment become paramount to building trust in the digital economy.

AI Adoption Outpaces Governance in UK Businesses

A recent study reveals that while 93% of UK organizations are using AI, only 7% have established proper governance frameworks. This lack of oversight poses significant risks, as many companies are advancing in AI adoption without understanding the associated challenges and necessary controls.

AI Governance: The Key to Successful Enterprise Implementation

Artificial intelligence is at a critical juncture, with many enterprise AI initiatives failing to reach production and exposing organizations to significant risks. Effective AI governance is essential to prevent these issues, ensuring that AI systems are treated as vital assets requiring ongoing oversight and adaptation.

AI-Driven Cybersecurity: Bridging the Accountability Gap

As organizations increasingly adopt AI to drive innovation, they face a dual challenge: while AI enhances cybersecurity measures, it simultaneously facilitates more sophisticated cyberattacks. The lack of accountability and awareness among employees remains a significant vulnerability, highlighting the need for comprehensive training and clear governance structures in the face of evolving threats.

New York’s RAISE Act: Pioneering AI Safety Standards

The New York State Senate has passed the Responsible AI Safety and Education Act (RAISE Act), which requires major generative AI companies to publish safety reports and notify consumers of security incidents. This legislation aims to promote innovation while ensuring that safety measures are in place to protect the public from potential risks associated with advanced AI technologies.

AI Hiring Regulations: What HR Must Know to Stay Compliant

Artificial intelligence (AI) is reshaping the hiring landscape, but it also raises concerns about discrimination and compliance with regulations. Recent developments in California and a lawsuit against Workday highlight the need for HR teams to scrutinize their AI tools to prevent legal risks.

The Imperative of Responsible AI in Today’s World

Responsible AI refers to the practice of designing and deploying AI systems that are fair, transparent, and accountable, ensuring they benefit society while minimizing harm. As AI becomes increasingly integrated into our lives, it is essential to address the risks of bias, discrimination, and lack of accountability to build trust in these technologies.