Atlassian’s Commitment to Responsible AI: Progress and Insights

From Pledges to Practice: Atlassian’s Commitments to the EU AI Pact and Responsible AI Governance

In September 2024, the European Commission launched the EU AI Pact in Brussels, Belgium, an initiative aimed at promoting responsible AI practices among companies. The Pact invited organizations to voluntarily pledge to uphold key principles in preparation for the forthcoming EU AI Act.

Atlassian proudly joined over 100 industry leaders as an initial signatory of the Pact, reinforcing its dedication to responsible AI and to addressing customer compliance needs. This participation is a key part of Atlassian’s ongoing effort to deliver AI solutions that customers can trust and deploy with confidence.

Progress Report on Pledges

A year later, Atlassian returned to Brussels to share a comprehensive report titled “From Pledges to Practice: Atlassian’s Commitment to the EU AI Pact”. The report outlines the company’s five specific pledges under the Pact and highlights key actions taken, milestones achieved, and lessons learned during implementation.

Key Areas of Commitment

The five areas of focus emphasized in Atlassian’s pledges reflect its Responsible Technology Principles and are essential for aligning with the EU AI Act:

  1. Organizational Strategy for AI Governance: Atlassian aims to promote the internal use of AI through a structured governance strategy.
  2. Addressing High-Risk AI Use Cases: The company is committed to identifying and managing high-risk AI scenarios within its products and operations.
  3. Team Education on AI: Educating employees about AI and its impacts is a priority for fostering a knowledgeable workforce.
  4. Transparency for Deployers: Atlassian strives to provide clarity for users deploying its AI technologies.
  5. Designing Recognizable AI Systems: Ensuring that end-users can identify AI interactions in their experiences is crucial for user trust.

By engaging with the EU AI Pact, Atlassian undertook a reflective examination of its AI governance programs, assessing both successes and areas for improvement. The new report aims to provide valuable insights for customers and stakeholders regarding Atlassian’s initiatives in responsible AI governance.

Accountability as a Collective Effort

Atlassian’s Responsible Technology Principles emphasize that accountability is a collaborative endeavor. Alongside the report, the company has updated its No BS Guide to Responsible AI Governance and the Responsible Technology Review Template. These resources are the result of insights gained from numerous internal reviews of AI products and use cases, offering a detailed look at how responsible AI practices can be effectively implemented.

Join the Journey

Organizations interested in enhancing their AI governance are encouraged to adopt the practices outlined by Atlassian when deploying new technologies. The company invites stakeholders to stay informed about its compliance journey with the EU AI Act by visiting its compliance resource center. As Atlassian continues to support customers globally in adopting AI, it looks forward to sharing more about its responsible AI initiatives and readiness for compliance.

Access the report
