Category: AI Governance

Bridging the Gap in Responsible AI Implementation

Responsible AI is becoming a critical business necessity, especially as companies in the Asia-Pacific region face rising risks from emerging AI technologies. While nearly half of APAC companies view responsible AI as a catalyst for growth, only 1 percent are adequately prepared to manage these risks.

Read More »

Leading AI Governance: The Legal Imperative for Safe Innovation

In a recent interview, Brooke Johnson, Chief Legal Counsel at Ivanti, emphasizes the critical role of legal teams in AI governance, advocating for cross-functional collaboration to ensure safe and ethical AI use. She warns that unmanaged AI poses significant risks related to data privacy and bias, making proactive governance essential for organizations.

Read More »

AI Regulations: Balancing Innovation and Safety

The recent passage of the One Big Beautiful Bill Act by the House of Representatives includes a provision that would prevent states from regulating artificial intelligence for ten years. This has raised concerns among state lawmakers, who fear it could hinder their ability to protect citizens from potential issues related to AI.

Read More »

Avoiding AI Governance Pitfalls

As AI-infused tools become increasingly prevalent in enterprises, the importance of effective AI governance has grown. However, many businesses are falling short in their governance efforts, often treating it as “AI governance theater” rather than implementing systemic and innovative strategies.

Read More »

Trump Administration Shifts Focus to AI Standards and Innovation

The Trump administration has rebranded the AI Safety Institute as the Center for AI Standards and Innovation, signaling a shift toward rapid technology development. Commerce Secretary Howard Lutnick emphasized that the center will continue to evaluate AI capabilities and vulnerabilities while promoting U.S. innovation.

Read More »

Unlocking GenAI: The Governance Imperative

Many organizations are struggling to move generative AI projects from pilot to production because of underlying data issues. To scale GenAI successfully, companies must prioritize data governance and ensure that their data is reliable, complete, and compliant with regulations.

Read More »

Building Trust in AI Through Effective Guardrails

Guardrails are essential in AI system architecture, especially as AI systems gain more autonomy. They help ensure responsible usage by managing risks, moderating content, and maintaining human oversight throughout the AI’s decision-making process.

Read More »

Ensuring Responsible AI Use in Insurance: A Broker’s Guide

The article examines the rapid adoption of artificial intelligence in the insurance industry, highlighting risks such as bias and lack of explainability. It emphasizes the role brokers play in ensuring responsible AI usage by advocating for transparency, monitoring fairness, and educating clients on AI-driven processes.

Read More »

Revolutionizing AI Governance: Embracing Contingency and Evolutionary Models

Tejasvi Addagada emphasizes the urgent need for a fundamental reset in how we govern data and AI, arguing that traditional management models are outdated and inadequate for the complexities of modern technology. He introduces the Contingency and Evolutionary Governance models as adaptive frameworks that align with the evolving nature of organizations and their use of AI.

Read More »