AI Governance and Innovation: Norway’s Strategic Approach

Paving the Way for Safe and Innovative Use of AI in Norway

The Government of Norway is actively preparing for the implementation and enforcement of new regulations on artificial intelligence (AI). This initiative includes the establishment of KI-Norge (literally “AI Norway”), a national arena for fostering innovative and responsible use of AI. The development comes in response to the EU’s Regulation on Artificial Intelligence, known as the AI Act, which sets the framework for businesses and the public sector to use AI technology in a way that is both innovative and ethically responsible.

Norway’s Minister of Digitalisation and Public Governance, Karianne Tung, announced these developments during an event at DNB’s AI lab. She said that a draft Act would be circulated for public comment before the summer, with the aim of having the Norwegian Act enter into force by late summer 2026.

“The Government is now making sure that Norway can exploit the opportunities afforded by the development and use of artificial intelligence, and we are on the same starting line as the rest of the EU. At the same time, we want to ensure public confidence in connection with the use of this technology,” said Minister Tung. She emphasized the importance of establishing a robust national governance structure for the enforcement of AI rules.

The Benefits and Risks of AI

AI technology is already widely used across various sectors, providing tools to address significant challenges in areas such as health, industry, and education. However, the potential for misuse of this technology, particularly in times of global uncertainty, cannot be overlooked.

Minister of Trade and Industry Cecilie Myrseth added, “The EU’s AI Act will make it easier for Norwegian companies to compete, as we will be following the same rules as the rest of Europe. It will also ensure both innovation and accountability, which are essential for building expertise and trust among customers and businesses alike.”

Establishing AI Norway

The establishment of AI Norway represents a significant step in enhancing Norway’s national efforts in AI. This new expert environment will function under the Norwegian Digitalisation Agency (Digdir) and will serve multiple roles, including acting as a driving force and advisory service, as well as a link between key AI players in the public sector, trade and industry, and academia.

Minister Tung elaborated that a key component of this initiative will be the AI Sandbox, where Norwegian businesses can experiment, develop, and train AI systems in a secure environment. The objective is to enhance competitiveness and provide greater opportunities for Norwegian AI systems, particularly benefiting start-ups and small to medium-sized enterprises.

Norsk Akkreditering and Its Role

In line with the EU regulation, Norsk Akkreditering (NA) has been designated as Norway’s national accreditation body under the AI Act. This designation will ensure that Norway adheres to the standards set out in the European AI Act, building on systems already used in public administration.

Supervisory Responsibilities of the Norwegian Communications Authority

The Norwegian Communications Authority (Nkom) has been designated as the national coordinating supervisory authority for AI. Nkom will ensure that the EU’s new AI rules are applied uniformly in Norway and will collaborate with the responsible sector authorities to monitor that AI systems on the market are safe, secure, and used responsibly.

“The Norwegian Communications Authority (Nkom) has been given an important role in overseeing compliance with the EU rules on artificial intelligence in Norway. It is essential that society can trust that AI is developed in line with our shared European values and rules,” said Minister Tung.

Key Points of the EU’s AI Act

  • The EU’s AI Act is the world’s first comprehensive regulation in the field of artificial intelligence, ensuring harmonization across Member States and providing clarity on what is required for safe and ethically sound AI development and use.
  • The Act promotes safe, ethical AI that protects health, safety, fundamental rights, democracy, the rule of law, and environmental sustainability, thereby enhancing public trust in the technology.
  • Different requirements are set for AI systems based on the level of risk they pose, as illustrated in the sketch after this list.
  • AI systems that present unacceptable risks have been banned in the EU since February 2, 2025, as they violate fundamental values and human rights.
  • High-risk AI systems are subject to strict requirements due to their potential adverse impacts on fundamental human rights, while limited-risk systems must comply with transparency obligations.
  • Most AI systems are considered minimal risk and are not subject to specific obligations, allowing for significant innovation across multiple sectors.
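As a purely illustrative sketch of the Act’s risk-based structure, the tiers above can be thought of as a mapping from risk level to obligations. The tier names and obligation summaries below are simplified paraphrases for illustration only, not legal definitions, and classifying any real system requires the Act’s own criteria.

from enum import Enum

# Simplified, hypothetical illustration of the EU AI Act's risk tiers.
# Obligations are paraphrased summaries, not the legal text.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, banned in the EU since 2 Feb 2025
    HIGH = "high"                  # strict requirements before and after market placement
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["Prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "Risk management system",
        "Data governance and technical documentation",
        "Human oversight and conformity assessment",
    ],
    RiskTier.LIMITED: ["Transparency: users must be informed they are interacting with AI"],
    RiskTier.MINIMAL: ["No specific obligations (voluntary codes of conduct encouraged)"],
}

def summarise(tier: RiskTier) -> None:
    """Print the simplified obligations attached to a given risk tier."""
    print(f"Risk tier: {tier.value}")
    for item in OBLIGATIONS[tier]:
        print(f"  - {item}")

if __name__ == "__main__":
    summarise(RiskTier.HIGH)

Running the sketch with RiskTier.HIGH simply lists the paraphrased high-risk obligations; the point is only to show how the Act scales requirements with risk, with most systems falling into the minimal tier.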

The introduction of the EU’s AI Act is expected to shape the future of AI in Norway and the broader European landscape, paving the way for safer and more responsible AI technologies.
