Category: News

Addressing the Gaps in EU AI Regulation for Multi-Agent Incidents

The upcoming guidelines for Article 73 of the EU AI Act mandate reporting of serious AI incidents in high-risk settings, but they focus too narrowly on single-agent failures. This narrow scope creates accountability gaps and leaves unaddressed the systemic harms that can arise from interactions between multiple AI systems, which must be covered to protect collective interests.

Bridging the AI Readiness Gap in Healthcare and Insurance

Healthcare and insurance organizations face significant challenges scaling AI initiatives beyond pilots due to data fragmentation, workflow integration issues, compliance demands, and cultural readiness gaps. Addressing these challenges is essential for successful enterprise-wide AI adoption.

AI Governance Solutions for the Future

Rock Consultancy, founded by solicitor Elaine Morrissey, helps organizations in Ireland navigate AI governance and data privacy challenges. With expertise in GDPR and emerging AI regulations, the consultancy supports businesses in adapting to the evolving artificial intelligence landscape.

JPMorgan’s Bold Shift: AI Takes Over Shareholder Voting

JPMorgan Chase’s decision to replace proxy advisory firms with its proprietary AI platform, Proxy IQ, marks a significant shift in corporate governance and shareholder democracy. This move raises critical questions about accountability, bias, and transparency in AI-driven voting systems.

AI Health Chatbots: Balancing Innovation and Data Privacy in the UK and EU

Ali Vaziri discusses the challenges and considerations of AI health chatbots in the UK and EU, emphasizing the importance of privacy and security when handling sensitive health data. He highlights that while these tools can enhance medical support, they must comply with stringent regulations and manage the risks associated with data sharing.

Leading with Accountability in the Age of AI

Responsible AI leadership involves ethical accountability and principled decision-making that considers the broader impact on stakeholders. As AI transforms business, leaders must embrace their complex responsibilities rather than avoiding them or blaming technology.

Advancing Responsible AI for a Safer Future

The Center for Responsible AI and Governance (CRAIG) is the first NSF-funded Industry-University Cooperative Research Center dedicated to creating safe, accurate, impartial, and accountable AI. With over 35 researchers across four institutions, CRAIG aims to enhance AI trustworthiness, promote U.S. competitiveness, and uphold societal values.
