Category: AI Ethics

AI Regulations in HR: What Employers Must Know

This article examines the implications of the EU Artificial Intelligence Act (AI Act) for employers in the HR sector, emphasizing compliance with the Act's prohibitions on certain AI practices and the importance of AI literacy among staff. It highlights that employers must take action to ensure that AI systems used in the workplace do not violate fundamental rights or safety standards.

Read More »

AI Regulations: Balancing Safety and Free Expression

As the US and EU develop AI frameworks, commentators caution against fear-driven policies that could undermine democratic values, such as outright bans on political deepfakes. Research indicates that the narrative surrounding AI's influence on elections has been overstated, with evidence showing limited impact on voting behavior.

Read More »

Revolutionizing Mental Healthcare with AI Solutions

Conversational AI has the potential to significantly improve access to mental healthcare by reducing administrative burdens and enhancing patient engagement. However, it is crucial to implement these technologies responsibly, prioritizing safety, transparency, and cultural sensitivity to ensure they serve vulnerable populations effectively.

Read More »

Strengthening Responsible AI in Global Networking

Infosys has collaborated with Linux Foundation Networking to advance Responsible AI principles and promote the adoption of domain-specific AI across global networks. The partnership includes contributions of Infosys’ Responsible AI Toolkit to new open-source projects aimed at enhancing ethical AI practices in the networking industry.

Read More »

Regulating Facial Recognition: Balancing Innovation and Human Rights

Facial Recognition Technologies (FRTs) present significant ethical and legal challenges, particularly in law enforcement, where they have been shown to misidentify individuals, leading to wrongful arrests and violations of human rights. As such, it is crucial to regulate these technologies to ensure they respect fundamental rights while balancing AI-driven innovation.

Read More »

Protecting Human Rights in the EU AI Act: A Call for Stronger Safeguards

The authors express serious concerns that the draft Code of Practice for the EU AI Act fails to adequately protect human rights by allowing the mitigation of many risks to be treated as optional. They argue that this approach undermines the Act's intent to set a world-leading standard for AI regulation and prioritizes corporate interests over human rights.

Read More »

Evaluating the EU AI Act: Necessity vs. Feasibility

The EU AI Act is seen as a necessary step toward responsible AI development in the European Union, but its implementation raises significant concerns about enforceability and resource allocation. Critics argue that, despite the Act's broad scope, it may inadequately protect fundamental digital rights, and that many of its requirements are difficult to interpret.

Read More »

Unlocking Scalable and Responsible AI Inference

AI gateways are essential for scalable and responsible AI inference, providing crucial enhancements such as semantic caching and content guardrails. These advancements address performance optimization and governance concerns, enabling organizations to deploy AI effectively across varied environments.

Read More »

Essential AI Governance for Ethical Innovation

AI governance is essential to ensure that artificial intelligence systems are developed and used ethically, transparently, and in compliance with privacy laws. It involves frameworks and policies that mitigate risks related to data privacy, bias, and discrimination in AI applications.

Read More »

Benchmarks for Responsible AI: Ensuring Ethical Performance

The rapid advancement of Large Language Models (LLMs) has reshaped the landscape of artificial intelligence, bringing both exciting possibilities and significant responsibilities. To ensure these models are reliable and ethical, comprehensive benchmarks and precise evaluation metrics are essential.
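One of the simplest evaluation metrics used in such benchmarks is exact-match accuracy: the fraction of model answers that match the gold references after light normalization. A minimal sketch, with hypothetical example data:

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    # Fraction of predictions matching their reference exactly,
    # after trimming whitespace and lowercasing.
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align")
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(predictions, references)
    )
    return hits / len(references) if references else 0.0

# Hypothetical benchmark items, for illustration only.
preds = ["Paris", "4", "blue whale"]
golds = ["paris", "4", "Blue Whale "]
print(exact_match_accuracy(preds, golds))  # prints 1.0
```

Real benchmark suites combine many such metrics (accuracy, calibration, bias and toxicity probes) across task categories; exact match is shown here only as the smallest representative building block.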

Read More »