Category: Artificial Intelligence Governance

Industry Concerns Mount Over EU’s Draft AI Code

The draft Code of Practice on General-Purpose Artificial Intelligence (GPAI) aims to assist AI companies in complying with the EU’s AI Act, focusing on transparency, copyright, and risk assessment. However, the tech industry has raised significant concerns about the draft’s burdensome requirements and its potential impact on innovation.

Read More »

Balancing Innovation and Safety in America’s AI Future

The United States’ approach to artificial intelligence (AI) governance under President Trump faces challenges and uncertainties, underscoring the need to balance rapid innovation with safety and public trust. U.S. AI policies are diverging from those of the European Union and China, and the U.S. should integrate meaningful safeguards to maintain its leadership in AI development.

Read More »

Wall Street Warns of New AI Hazards

Wall Street firms, including Goldman Sachs and JPMorgan Chase, are alerting investors to new risks associated with the growing use of artificial intelligence, such as software hallucinations and potential criminal misuse. These concerns come as financial institutions increasingly incorporate AI into their operations, raising issues around data quality, employee morale, and regulatory compliance.

Read More »

Responsible AI Practices in Software Engineering

Artificial intelligence has the potential to revolutionize how people live and work, but the risk of misuse makes responsible AI practices essential. Ensuring safety, transparency, and fairness in AI systems is crucial for software engineers seeking to mitigate harms and unintended consequences.

Read More »

AI and Copyright: Protecting Creators in a Digital Age

The rapid development of artificial intelligence technologies presents both opportunities and risks for the creative industries, particularly concerning copyright infringement and job security. As generative AI systems evolve, they threaten to undermine intellectual property rights and diminish employment prospects for artists, raising urgent calls for responsible AI practices.

Read More »

Building Trustworthy AI: From Talk to Action

Artificial intelligence (AI) is increasingly prevalent across sectors, but its misuse can produce biased and unfair decisions that pose significant business risks. Companies must move from merely discussing AI ethics to actively implementing responsible AI practices that ensure fairness, transparency, and accountability.

Read More »

Ireland’s New AI Regulatory Framework: Key Competent Authorities Designated

On March 4, 2025, the Irish government approved a recommendation to implement a distributed regulatory model for enforcing the EU Artificial Intelligence Act. This marks a significant step in establishing a framework for AI governance in Ireland, designating eight public bodies as national competent authorities responsible for oversight in their respective sectors.

Read More »

Empowering Your Workforce with AI Literacy Under the EU AI Act

The European Union’s AI Act mandates that organizations ensure their workforce is sufficiently AI-literate, requiring tailored training programs for technical teams, non-technical staff, and leadership. This initiative not only aims for compliance but also presents an opportunity to foster a strong security culture and enhance organizational resilience in an AI-driven landscape.

Read More »

Virginia’s Landmark Legislation on High-Risk AI Systems

On February 20, 2025, the Virginia General Assembly passed HB 2094, the Virginia High-Risk Artificial Intelligence Developer and Deployer Act, which awaits the signature of Governor Youngkin. If enacted, Virginia will become the second state to legislate against algorithmic discrimination, targeting high-risk AI systems that significantly impact consequential decision-making.

Read More »