Public Trust in AI Hits New Low as Election Approaches

Record-Low Public Trust in Artificial Intelligence: A Call for Action

A recent study reveals that public trust in Artificial Intelligence (AI) has plummeted to unprecedented lows in Australia. This decline is fueled by widespread fears over the potential misuse of AI technologies, particularly as the country approaches a federal election.

Survey Findings

The University of Melbourne and KPMG conducted a global survey that confirmed Australians’ confidence in AI systems has reached a record low:

  • Only one-third of Australians express trust in AI systems.
  • Half of the population has either personally experienced harm from AI or witnessed it happening to others.
  • Nearly 60% fear that elections are being manipulated through AI-generated content or bots.
  • More than 75% advocate for stronger regulations, with less than one-third believing that existing safeguards are sufficient.
  • 90% of respondents support specific legislation to combat AI-driven misinformation.

The AI Safety Scorecard

In response to these concerns, Australians for AI Safety released the 2025 Federal Election AI Safety Scorecard. This scorecard evaluates the positions of major political parties on two key policies endorsed by experts:

  • The establishment of an Australian AI Safety Institute, which would serve as an independent body to test emerging AI models, research associated risks, and advise the government.
  • The introduction of an Australian AI Act that mandates clear liabilities and guardrails for developers and deployers of high-risk and general-purpose AI.

The scorecard indicates that only the Australian Greens, Animal Justice Party, Indigenous-Aboriginal Party of Australia, and Trumpet of Patriots fully support both policies. Additionally, Senator David Pocock and other independents have endorsed these initiatives. In contrast, the Libertarian Party opposes the policies, labeling them as “government schemes”.

Political Responses and Challenges

The Coalition’s reaction to the scorecard emphasized perceived inaction by the government, stating, “We need to be alive to the risks associated with this technology… The Albanese Labor Government has completely failed to take decisive action.” However, the Coalition did not clarify its stance on the proposed policies.

Critics argue that this situation exemplifies “policrastination”: one party accuses another of inaction on AI while failing to propose viable solutions of its own. Australians are increasingly frustrated with politicians who delay critical decisions on AI governance.

Urgent Calls for Regulation

Experts underscore the importance of proper regulation to safeguard the benefits of advanced AI technologies. As noted by an AI governance researcher, it is crucial for the government to establish robust AI policies to protect public interests. The recent findings align with previous surveys indicating that Australians desire stronger safeguards.

One concerned parent and AI researcher expressed alarm over the deceptive capabilities of advanced AI technologies, stating, “It’s clear government isn’t taking this seriously.” This sentiment echoes the broader public demand for immediate action from political leaders.

The Path Forward

Australians for AI Safety argues that effective regulation historically enabled the aviation sector to thrive, and that similar frameworks are needed for AI innovation to earn public trust. Comparable regulatory bodies already operate in Japan, Korea, the United Kingdom, and Canada, and the EU AI Act offers a further benchmark. Australia has committed to creating an AI Safety Institute but has yet to follow through.

As the nation heads toward the election, it is imperative for voters to consider the positions of political parties on AI governance and to support leaders who prioritize the establishment of strong regulatory frameworks.
