Businesses Must Lead in Responsible AI Development

Businesses Urged to Take the Lead on Developing AI Regulation

In New Zealand, businesses are being encouraged to adopt generative artificial intelligence (AI) responsibly in order to overcome the widespread skepticism surrounding the technology. A recent webinar organized by The Law Association highlighted how the risks of unregulated AI could stifle innovation.

Current Sentiments Towards AI

Presenter Hannah King referenced a KPMG report indicating that New Zealanders are more distrustful of AI than respondents in any other country surveyed. The findings reveal that:

  • Only 44% of New Zealanders believe the benefits of AI outweigh the risks.
  • Just 23% feel that existing regulations are sufficient to ensure safe AI use.
  • A significant 81% believe that regulation is necessary.

“We have the lowest rate globally of acceptance, excitement, and optimism about AI,” stated King, emphasizing the need for a comprehensive regulatory approach.

The Importance of Responsible AI

King argued that the widespread adoption of responsible AI is vital to realizing the technology's potential. The World Economic Forum defines responsible AI as:

“Building and managing AI systems to maximize benefits while minimizing risks to people, society, and the environment.”

Regulatory Challenges and Global Perspectives

King pointed out that various countries are taking fragmented approaches to AI regulation, which creates challenges for companies operating internationally. Issues such as compliance, public trust, and overlapping regulations are emerging as significant hurdles.

Many nations are shifting towards a risk-based approach to AI regulation, focusing on protecting core values like privacy, non-discrimination, and security. However, governments have historically struggled to keep pace with rapidly evolving technology.

International Regulatory Examples

Globally, different jurisdictions are reacting to the fast-paced development of AI:

  • Australia is regulating AI using existing laws and is establishing an AI Safety Institute to identify future risks.
  • The United States is largely focused on innovation and deregulation, with state-level efforts centering on privacy and copyright protection.
  • The European Union (EU) has enacted an AI Act that categorizes risks into four levels, from unacceptable to minimal, and has extraterritorial reach that could affect New Zealand companies.

For multinational companies, EU legislation represents a global high-water mark for AI regulation, prompting a need for consistent governance.

The Situation in New Zealand

Currently, New Zealand lacks standalone AI legislation, with the government opting for a “light touch” approach that combines existing laws with regulatory guidance and industry self-regulation. However, the rapidly changing landscape may prompt future legislative changes.

“There was a cabinet paper in 2024 noting that regulatory intervention should be considered to unlock innovation or address acute risks,” King noted.

Call to Action for Businesses

Businesses in New Zealand are urged to proactively develop AI policies that emphasize a responsible approach to the technology. Fostering trust and encouraging innovation will be crucial for the future of AI in the country.

“The less we focus on responsible AI, the more suspicion and concern will grow, ultimately hindering our progress in AI adoption,” King warned.

In conclusion, it is imperative for New Zealand businesses to lead the charge in developing thoughtful AI regulations that not only protect citizens but also enable innovation to thrive in a responsible manner.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...