April 2, 2025

Strengthening Responsible AI in Global Networking

Infosys has collaborated with Linux Foundation Networking to advance Responsible AI principles and promote the adoption of domain-specific AI across global networks. The partnership includes contributions of Infosys’ Responsible AI Toolkit to new open-source projects aimed at enhancing ethical AI practices in the networking industry.

AI Regulation: Balancing Innovation and Oversight

Ahead of the release of the draft enforcement decree for Korea's Artificial Intelligence (AI) Framework Act, experts stress the need to set effective standards through dialogue with industry in order to promote AI innovation. Key concerns include regulatory uncertainty and the compliance burden companies may face from unclear definitions and obligations around high-impact AI.

AI Deregulation: A Risky Gamble for Financial Markets

The article examines the risks of AI deregulation in the U.S. under President Trump's administration, warning that it could leave financial institutions exposed to unchecked algorithms. It argues for robust regulatory frameworks that balance innovation with economic stability, particularly in light of past financial crises.

AI Cybersecurity: Essential Requirements for High-Risk Systems

The EU's Artificial Intelligence Act (AI Act) is the first comprehensive legal framework for regulating AI, and it requires high-risk AI systems to maintain a high level of cybersecurity to protect against malicious attacks. Cybersecurity matters not only for high-risk systems but for all AI systems that interact with users or process data, as it affects trust, reputation, and compliance.

Essential AI Training for Compliance with the EU AI Act

The EU is mandating that companies developing or using artificial intelligence ensure their employees are adequately trained in AI skills, with penalties for non-compliance. IVAM is offering a four-hour compact online training course to help businesses meet these new legal requirements.

Achieving National Tech Sovereignty through AI

Countries are increasingly focused on developing sovereign AI systems to maintain control over their technology and data, particularly for critical infrastructure and national security. This strategic initiative aims to mitigate dependence on foreign tech platforms while ensuring that AI systems align with national values and cultural norms.

EU’s AI Code of Practice Threatens Copyright Protections, Say Creators

A coalition of European authors and rightsholders has condemned the third draft of the EU's General-Purpose AI Code of Practice, arguing that it undermines copyright law and fails to protect their rights. They contend that the draft lacks adequate measures requiring GPAI providers to comply with copyright rules, rendering creators' rights almost meaningless.

The Future of AI Regulation in the EU: Key Developments and Challenges

The AI Act represents a significant legislative effort by the European Union to regulate artificial intelligence systems, aiming to balance innovation with ethical and legal considerations. With key areas of debate including the definition of high-risk AI systems and the need for transparency, the Act is set to shape the future landscape of AI regulation in Europe.

Regulating Facial Recognition: Balancing Innovation and Human Rights

Facial recognition technologies (FRTs) pose significant ethical and legal challenges, particularly in law enforcement, where they have misidentified individuals, leading to wrongful arrests and human rights violations. Regulating these technologies is therefore crucial to ensure they respect fundamental rights while still allowing AI-driven innovation.
