Strengthening Responsible AI in Global Networking

Infosys and Linux Foundation Networking Collaborate to Strengthen Responsible AI for Global Networks

In a significant move to strengthen the ethical adoption of enterprise AI, Infosys has announced a collaboration with Linux Foundation Networking (LFN). The partnership aims to use open source networking projects to advance Responsible AI principles across global networks.

Contribution of Responsible AI Toolkit

Infosys has contributed its Responsible AI Toolkit and an AI application development framework to two new open source networking projects: Salus and Essedum. These projects are designed to accelerate the integration of AI technologies while maintaining ethical standards.

Salus uses Infosys’ Responsible AI Toolkit to provide technical guardrails that detect and mitigate AI risks, including bias, privacy breaches, and harmful content. It also improves model transparency, helping ensure AI systems are not only effective but also trustworthy.
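To make the idea of technical guardrails concrete, the sketch below shows a minimal output-screening step of the kind such a toolkit automates. It is an illustration only: the function name, patterns, and deny-list are assumptions for this example, not the Salus or Responsible AI Toolkit API.

```python
import re

# Hypothetical guardrail sketch: screen a model response for obvious PII and
# blocked terms before it is served. Names and rules here are invented for
# illustration and do not reflect the actual Salus / Infosys toolkit API.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

BLOCKED_TERMS = {"slur_example", "threat_example"}  # placeholder deny-list


def apply_guardrails(model_output: str) -> dict:
    """Flag PII and blocked terms in a model response; redact if anything is found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(model_output):
            findings.append(f"possible {label} detected")
    lowered = model_output.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            findings.append(f"blocked term: {term}")

    allowed = not findings
    return {
        "allowed": allowed,
        "findings": findings,
        # Withhold the raw text so downstream callers never see flagged content.
        "safe_output": model_output if allowed else "[response withheld by guardrail]",
    }


if __name__ == "__main__":
    print(apply_guardrails("Contact me at jane.doe@example.com for details."))
```

In practice, a guardrail layer of this kind would sit between the model and the consuming application, combining many such checks (bias probes, PII detectors, toxicity classifiers) and logging its decisions to support the transparency goals described above.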

Essedum, meanwhile, builds on Infosys’ existing AI networking solutions and seed code to facilitate the integration of AI data, models, and applications within the networking industry. This development is expected to streamline AI implementation across a range of networking scenarios, as sketched below.
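As a rough illustration of the kind of AI-in-networking scenario such an integration framework is meant to streamline, the toy example below flags anomalous link-latency samples. The data, threshold, and function name are invented for this sketch and do not reflect Essedum’s interfaces.

```python
from statistics import mean, stdev

# Toy anomaly check over link-latency telemetry. This is a generic example of
# applying a simple model to networking data, not Essedum code.


def flag_latency_anomalies(samples_ms: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples_ms), stdev(samples_ms)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples_ms) if abs(x - mu) / sigma > threshold]


if __name__ == "__main__":
    latency = [12.1, 11.8, 12.4, 12.0, 95.6, 12.2, 11.9, 12.3]
    print(flag_latency_anomalies(latency))  # -> [4], the 95.6 ms spike
```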

Shared Vision for Responsible AI

Arpit Joshipura, General Manager at LFN, expressed gratitude for Infosys’ contributions, stating, “Our efforts to further domain-specific AI are coming to fruition with the addition of these new projects. Creating combined, open, and unified frameworks will only accelerate AI-driven innovation.” This statement emphasizes the importance of collaboration in fostering technological advancement.

He further noted that by introducing accessible solutions for Responsible AI and integrating data sharing and domain-specific AI tools, the industry is being equipped to build smarter and more efficient networks.

Commitment to Ethical Innovation

Infosys’ Chief Technology Officer, Mohammed Rafee Tarafdar, highlighted the company’s commitment to advancing innovation that addresses complex challenges while upholding principles of transparency, fairness, and trust. He indicated that this collaboration is a testament to their shared vision of embedding Responsible AI principles into actionable solutions.

Drawing on its AI capabilities, particularly the Infosys Topaz offerings, the company is actively supporting the initiative and helping organizations harness domain-specific AI responsibly across global networks.

Conclusion

The collaboration between Infosys and Linux Foundation Networking marks a pivotal step in promoting ethical AI practices within the networking domain. By focusing on responsible innovation, both organizations aim to create a future where AI technologies contribute positively to global networks while adhering to ethical standards.
