Harnessing General-Purpose AI: Balancing Innovation with Risk and Responsibility

Imagine a world transforming at an unprecedented pace, reshaped by artificial intelligence capable of far more than just processing data. We are witnessing the emergence of systems that write code, generate stunningly realistic images, and even reason through complex scientific problems. This rapidly evolving landscape presents both remarkable opportunities and potentially serious hazards. Understanding the capabilities, risks, and necessary safeguards surrounding these general-purpose AI systems is now of paramount importance, driving urgent conversations about how we can harness their power while mitigating potential harms to individuals, organizations, and society as a whole.

What are the current capabilities of general-purpose AI and potential future advancements

General-purpose AI has seen rapid advancements in recent years, moving from barely producing coherent paragraphs to writing computer programs, generating photorealistic images, and engaging in extended conversations. Recent models demonstrate improved scientific reasoning and programming abilities.

AI Agents

Many companies are investing in AI agents — general-purpose AI systems that can autonomously act, plan, and delegate tasks with minimal human oversight. These sophisticated agents could complete longer projects than current systems, potentially unlocking both benefits and risks.

Future Capabilities

The pace of advancement in the coming months and years is uncertain, ranging from slow to extremely rapid. Progress depends on deploying more data and computing power for training, and whether scaling can overcome current limitations. While scaling remains physically feasible for several years, major advancements may require research breakthroughs or novel scaling approaches.

Key Considerations for Policymakers:

  • Pace of Advancement: How rapidly will general-purpose AI capabilities advance, and how can that progress be reliably measured?
  • Risk Thresholds: What are sensible risk thresholds to trigger mitigation measures?
  • Information Access: How can policymakers best gain access to information about general-purpose AI relevant to public safety?
  • Risk Assessment: How can researchers, companies, and governments reliably assess the risks of general-purpose AI development and deployment?
  • Internal Operations: How do general-purpose AI models work internally?
  • Reliable Design: How can general-purpose AI be designed to behave reliably?

What types of risks are associated with the development and deployment of general-purpose AI

General-purpose AI (GPAI) presents a spectrum of risks, categorized here for clarity: malicious use, malfunctions, and systemic effects. Some harms are already apparent, while others are emerging as GPAI capabilities advance.

Malicious Use Risks

Bad actors could leverage GPAI to inflict harm on individuals, organizations, or society as a whole:

  • Fake Content: GPAI facilitates the generation of highly realistic fake content for non-consensual pornography, financial fraud, blackmail, and reputational damage.
  • Manipulation: GPAI enables persuasive content at scale, which can sway public opinion and influence political outcomes.
  • Cyber Offense: GPAI systems are showing capabilities in automating parts of cyberattacks, lowering the barrier for malicious actors.
  • Biological/Chemical Attacks: Some GPAI demonstrate abilities to aid in the creation of biological or chemical weapons. One major AI company recently increased their assessment of this type of biological risk from “low” to “medium”.

Risks from Malfunctions

Even without malicious intent, GPAI systems can cause harm due to:

  • Reliability Issues: Current GPAI can be unreliable, generating falsehoods in critical domains like medical or legal advice.
  • Bias: GPAI can amplify social and political biases, leading to discrimination and unequal outcomes in areas such as resource allocation.
  • Loss of Control (Hypothetical): While not currently plausible, some foresee scenarios where GPAI systems operate outside human control, warranting further attention.

Systemic Risks

Beyond individual model risks, widespread GPAI deployment introduces broader societal concerns:

  • Labour Market Risks: GPAI could automate a wide range of tasks, potentially leading to job losses that may or may not be offset by new job creation.
  • Global R&D Divide: GPAI development is concentrated in a few countries, raising concerns about global inequality and dependence.
  • Market Concentration: A small number of companies dominate the GPAI market, increasing the potential for cascading failures due to bugs or vulnerabilities.
  • Environmental Risks: GPAI is rapidly increasing energy, water and raw material use in compute infrastructure.
  • Privacy Risks: GPAI can cause both unintentional and deliberate violations of user privacy.
  • Copyright Infringements: GPAI both learns from and creates expressive media, challenging existing systems on data consent, compensation, and control. Legal uncertainty is causing AI companies to become more opaque, hindering third-party safety research.

The release of AI models to the general public as “open-weight models” (where the inner “weights” of the model are publicly available for download) adds another layer of complexity. This may increase or decrease various identified risks depending on the circumstances.

What techniques exist for identifying, assessing, and managing the risks associated with general-purpose AI

Risk management in general-purpose AI is still in its infancy, but promising techniques are emerging to address unique challenges inherent in this technology. Think of it as building safety systems for something we only partially understand ourselves.

Risk Identification and Assessment

The current gold standard remains “spot checks” — essentially testing AI behavior in specific scenarios. But let’s be clear, these are limited. It’s hard to predict a comprehensive range of use cases for general-purpose AI, or to replicate real-world conditions in a lab. Risk assessment needs expertise, resources, and access to information about AI systems, which AI companies are hesitant to share.

Mitigation Techniques

Several approaches are being explored, but caveats apply:

  • Adversarial Training: Exposing models to scenarios designed to make them fail so they can get better at it. Imagine teaching an AI to spot scams, but not being able to predict the emergence of new threats. Recent findings suggest that even with adversarial training, it is still generally easy to circumvent these safeguards.
  • Monitoring and Intervention: Tools exist to detect AI-generated content and track system performance. Layering technical measures with human oversight can improve safety but also introduces costs and delays.
  • Privacy measures: These range from removing sensitive training data to employing privacy-enhancing tech. However, adapting data privacies in general seems to be more challenging than mitigating safety concerns..

Economic & Political Considerations

External elements such as competitive pressure and the pace of advancement adds another layer of complexity to the equation. A tradeoff must be made between companies seeking to implement these risk mitigating techniques while remaining competitive. Decision-makers can’t be sure if there is going to be a widespread shift in policy that either aids or hinders safety efforts.

How can policymakers best understand and respond to the inherent uncertainties concerning general-purpose AI

Policymakers grappling with the rise of general-purpose AI (GPAI) face what experts are calling an “evidence dilemma.” The challenge is how to regulate a technology when its rapid advancement outpaces the available scientific evidence regarding its true potential and risks. Given the unpredictable nature of GPAI development, acting too early with preemptive measures could prove unnecessary or even counterproductive. On the other hand, waiting for definitive proof of risks might leave society vulnerable to sudden, severe threats.

Bridging the Information Gap

Currently, a significant informational asymmetry exists. AI companies possess considerably more insight into their systems’ internal workings and potential risks than governments or independent researchers. This imbalance hampers effective risk management across the board.

Addressing Competitive Pressures

Policymakers must also consider the impact of competitive pressures on both AI companies and governments. Intense competition can disincentivize comprehensive risk management within companies, while governments might deprioritize safety policies if they perceive a conflict with maintaining a competitive edge in the global AI landscape.

Key Actions for Policymakers:

  • Early Warning Systems: Support the development and deployment of early warning systems that can identify emerging risks associated with GPAI.
  • Risk Management Frameworks: Encourage the adoption of risk management frameworks that trigger specific mitigation measures based on new evidence of risks.
  • Transparency Measures: Explore mechanisms to increase transparency around GPAI development and deployment, while acknowledging legitimate commercial and safety concerns.
  • Safety Evidence: Consider requiring developers to provide evidence of safety before releasing new models, promoting a proactive approach to risk management.

Areas for Further Research:

Policymakers should encourage research into the following critical questions:

  • Pace of Advancement: How rapidly will GPAI capabilities advance, and how can progress be reliably measured?
  • Risk Thresholds: What are sensible risk thresholds to trigger mitigation measures?
  • Information Access: How can policymakers best gain access to information about GPAI relevant to public safety?
  • Risk Assessment: How can researchers, companies, and governments reliably assess the risks of GPAI development and deployment?
  • Model Internals: How do GPAI models work internally?
  • Reliable Design: How can GPAI be designed to behave reliably?

Ultimately, responding to the uncertainties surrounding GPAI requires a delicate balance. Policymakers must foster innovation while simultaneously safeguarding against potential harms, navigating a complex landscape with limited information and rapidly evolving technology.

What factors beyond technical aspects influence the progress and application of general-purpose AI?

As a tech journalist specializing in AI governance, I’m often asked about the factors influencing the progression and adoption of general-purpose AI beyond just technical capabilities. This is a crucial area for legal-tech professionals and policy analysts to understand, as these factors dramatically shape the landscape of risk and regulation.

Non-Technical Influencers of Progress

While improvements in compute, data availability, and algorithmic design are central to AI advancement, non-technical factors exert considerable influence:

  • Government Regulations:The approaches that governments take to regulating AI will likely have an impact on the speed of development and adoption of general-purpose AI.
  • Economic Factors: The pace of advancement in general-purpose AI creates an ‘evidence dilemma’ for decision-makers.
    Rapid capability advancement makes it possible for some risks to emerge in leaps.
  • Societal Dynamics: Societal factors make risk
    management in the field of general-purpose AI difficult.

The Evidence Dilemma for Policymakers

The potentially rapid and unexpected advancements in general-purpose AI present a unique governance challenge. Policymakers face the “evidence dilemma.” They must weigh potential benefits and risks without a large body of scientific evidence due to the rapid pace of technological improvements. This leads to a crucial balancing act:

  • Preemptive Measures: Acting early on limited evidence might be ineffective or turn out to be ultimately unnecessary.
  • Delayed Action: Waiting for definitive proof of risk can leave society vulnerable to rapidly emerging threats, making effective mitigation impossible.

To address this, some solutions companies and governments are working on are:

  • Early warning systems: Monitor potential risks by tracking specific measures when new evidence of risks emerge.
  • Risk management frameworks: Require developers to provide evidence of safety before releasing a new model.

Information Asymmetry Challenges

A key challenge is the information gap: AI companies often possess substantially more knowledge about their systems than governments or independent researchers. This lack of transparency complicates effective risk management.

  • Limited Data Sharing: Companies often restrict access to detailed model information due to commercial and safety concerns.
  • Hindered Research: Opacity inhibits third-party AI safety research.

Competitive Pressures

In addition to the regulatory and information challenges, AI companies and governments are frequently subject to competitive pressures that affect how prioritized AI risk management is:

  • Deprioritization of Risk: Competitive pressure may incentivize companies to invest less time or resources into risk management than they otherwise would.
  • Conflicts in Policy: Governments may invest less in policies to support risk management in cases where they perceive trade-offs between international competition and risk reduction.

The journey to harness general-purpose AI is fraught with promise and peril. Its ability to generate content, automate tasks, and even aid in scientific discovery is rapidly evolving, demanding careful consideration. The potential for malicious use, system malfunctions, and broader societal disruptions is real, spanning from disinformation campaigns to job displacement. While nascent risk management techniques offer some mitigation, they must grapple with the complexities of a constantly shifting technological landscape. Policymakers face a crucial balancing act: fostering innovation alongside responsible development. Navigating this complex terrain requires proactive measures, robust risk assessment frameworks, and a commitment to transparency, ensuring that the pursuit of AI’s transformative potential doesn’t come at an unacceptable cost to safety and societal well-being.

More Insights

CII Advocates for Strong AI Accountability in Financial Services

The Chartered Insurance Institute (CII) has urged for clear accountability frameworks and a skills strategy for the use of artificial intelligence (AI) in financial services. They emphasize the...

Regulating AI in APAC MedTech: Current Trends and Future Directions

The regulatory landscape for AI-enabled MedTech in the Asia Pacific region is still developing, with existing frameworks primarily governing other technologies. While countries like China, Japan, and...

New York’s AI Legislation: Key Changes Employers Must Know

In early 2025, New York proposed the NY AI Act and the AI Consumer Protection Act to regulate the use of artificial intelligence, particularly addressing algorithmic discrimination in employment...

Managing AI Risks: Effective Frameworks for Safe Implementation

This article discusses the importance of AI risk management frameworks to mitigate potential risks associated with artificial intelligence systems. It highlights various types of risks, including...

Essential Insights on the EU Artificial Intelligence Act for Tech Companies

The European Union has introduced the Artificial Intelligence Act (AI Act), which aims to manage the risks and opportunities associated with AI technologies across Europe. This landmark regulation...

South Korea’s Landmark AI Basic Act: A New Era of Regulation

South Korea has established itself as a leader in AI regulation in Asia with the introduction of the AI Basic Act, which creates a comprehensive legal framework for artificial intelligence. This...

EU AI Act and DORA: Mastering Compliance in Financial Services

The EU AI Act and DORA are reshaping how financial entities manage AI risk by introducing new layers of compliance that demand transparency, accountability, and quantifiable risk assessments...

AI Governance: Bridging the Transatlantic Divide

Artificial intelligence (AI) is rapidly reshaping economies, societies, and global governance, presenting both significant opportunities and risks. This chapter examines the divergent approaches of...

EU’s Ambitious Plan to Boost AI Development

The EU Commission is launching a new strategy to reduce barriers for the deployment of artificial intelligence (AI) across Europe, aiming to enhance the region's competitiveness on a global scale. The...