Category: AI Ethics

Public Trust in AI Hits New Low as Election Approaches

A recent study reveals that Australians’ trust in artificial intelligence has reached a record low, with concerns about its misuse driving calls for stronger government regulation. The newly released AI Safety Scorecard compares political parties on their support for proposed policies aimed at ensuring safer AI practices.

UN Alliance Launches HUMAN-AI-T Initiative to Shape Ethical AI Development

The United Nations Alliance of Civilizations has concluded its meeting in Geneva, launching the HUMAN-AI-T initiative aimed at integrating ethical considerations into artificial intelligence development. This initiative will function as a secure digital platform to preserve humanity’s cultural and ethical legacy, utilizing post-quantum cryptographic technologies.

Bridging Divides in AI Safety Dialogue

Despite numerous AI governance events, a comprehensive framework for AI safety has yet to be established, highlighting the need for focused dialogue among stakeholders. A dual-track approach that combines broad discussions with specialized dialogue groups could foster consensus and address context-specific risks effectively.

AI Regulation: Building Trust in an Evolving Landscape

As AI adoption accelerates globally, governments are rapidly developing ethical and legal frameworks to mitigate the risks associated with AI technologies. The EU’s AI Act and comparable regulatory measures in countries such as the US, India, and China signal that sound AI governance is becoming essential for businesses seeking to maintain a competitive edge.

AI Adoption and Trust: Bridging the Governance Gap

A recent KPMG study reveals that while 70% of U.S. workers are eager to leverage AI’s benefits, 75% remain concerned about potential negative outcomes, leading to low trust in AI. Nearly half of employees are using AI tools without proper authorization, highlighting significant gaps in governance and raising ethical concerns.

AI in the Workplace: Balancing Benefits and Risks

A recent global study reveals that while 58% of employees use AI tools regularly at work, nearly half admit to using them inappropriately, for example by uploading sensitive information or failing to verify AI-generated content. This underscores the urgent need for organizations to establish clear policies and training on the responsible use of AI to mitigate these risks.

Balancing AI Management: IT vs. HR Responsibilities

As AI-powered agents become integrated into business operations, organizations face the challenge of determining whether IT or HR should manage these systems. Both departments have unique responsibilities to ensure AI functions effectively while addressing its impact on workplace culture and employee dynamics.

North Carolina Appoints First AI Governance Leader

The N.C. Department of Information Technology (NCDIT) has appointed I-Sah Hsieh as its first artificial intelligence governance and policy executive to promote responsible AI use in the state. Hsieh brings over 25 years of expertise in AI governance and ethics, aiming to enhance efficiency while ensuring digital safety for residents, businesses, and visitors.
