Enhancing Ocean Protection with a Trustworthy AI Framework

International Study Presents a Practical Guide on How AI Can Enhance Ocean Protection

A European team led by AZTI—Marine and Food Research (Spain) has developed a framework built on three pillars to ensure that AI in the marine domain is reliable, ethical, and scientifically robust. While AI adoption is accelerating worldwide, global AI governance in the marine domain remains fragmented, with differing regulatory approaches across regions.

The full work is detailed in the scientific article “Towards Trustworthy Artificial Intelligence for Marine Research, Fisheries and Environmental Management,” published in Fish and Fisheries.

Understanding AI’s Challenges in Marine Contexts

“We are seeing a massive increase in the use of AI algorithms that process vast streams of marine data—from cameras and sonar to satellite observations—but they often fail to meet expectations,” explained an AI expert from AZTI. “The key question is: how much trust can we place in the AI algorithms? Given that AI is already a reality for the fishing and marine research sector, it will only be useful if it is trustworthy. Our work establishes how to ensure trustworthiness by combining science, ethics, and industry engagement.”

AI offers enormous possibilities but also risks. For example, an onboard camera system used for automated catch monitoring can confuse two similar species if it has not been trained with expert-verified labels and images captured under diverse lighting conditions. A model predicting fish abundance may fail if built on incomplete or biased data, providing a misleading picture of the real state of a population. Such issues illustrate the need for robust criteria for quality, transparency, and validation, especially in a field where decisions affect ecosystems, fishing communities, and public policy.
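The lighting-bias problem above can be made concrete with a minimal sketch. The species names, conditions, and predictions below are hypothetical; the point is that accuracy reported only in aggregate can hide a model that performs well in daylight but routinely confuses similar species in low light, so evaluation should be stratified by condition.

```python
from collections import defaultdict

def stratified_accuracy(records):
    """Compute overall accuracy and accuracy per imaging condition.

    records: list of (condition, true_label, predicted_label) tuples.
    Returns (overall_accuracy, {condition: accuracy}).
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for condition, truth, pred in records:
        totals[condition] += 1
        if truth == pred:
            hits[condition] += 1
    overall = sum(hits.values()) / sum(totals.values())
    per_condition = {c: hits[c] / totals[c] for c in totals}
    return overall, per_condition

# Hypothetical catch-camera predictions: the model separates the two
# species in daylight but confuses haddock with whiting in low light.
records = [
    ("daylight", "haddock", "haddock"),
    ("daylight", "whiting", "whiting"),
    ("daylight", "haddock", "haddock"),
    ("low_light", "haddock", "whiting"),
    ("low_light", "whiting", "whiting"),
    ("low_light", "haddock", "whiting"),
]

overall, per_condition = stratified_accuracy(records)
print(overall)        # aggregate accuracy looks passable (4 of 6)
print(per_condition)  # but low-light accuracy collapses relative to daylight
```

An aggregate score of roughly two thirds would look acceptable in a report, while the stratified view reveals that the system is unreliable in exactly the conditions under which much fishing takes place.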

Three Pillars for AI That Builds Trust

The framework proposed by the research team is structured around three main pillars:

  1. Socioeconomic and Legal Viability

    The development and use of AI must be accessible to the entire marine sector, including small-scale fisheries, and aligned with European regulations, including the EU's new AI Act. The study emphasizes that the most effective tools are those designed with the direct participation of stakeholders, which increases social acceptance, incorporates local knowledge, and reduces resistance.

  2. Ethical Governance of Data

    For AI to function effectively, it needs diverse, clean, traceable, and responsibly managed datasets. The authors recommend applying FAIR, CARE, and TRUST principles to marine data, ensuring that information—images, sensor signals, or monitoring records—is interoperable, respectful of the communities generating it, and preserved for long-term use. Good data governance, they argue, is the foundation for transparency, reproducibility, and accountability.

    “When AI is used to guide decisions that affect marine ecosystems and livelihoods, accessibility, transparency, and validation are essential,” stated a researcher. “Our framework provides practical guidance to ensure that AI strengthens scientific evidence and trust across the marine sector.”

  3. Technical Robustness and Scientific Validation

    AI must demonstrate its reliability under real-world ocean conditions—not just in controlled environments. The study recommends validating models with independent data, applying statistical tests, and comparing outcomes with on-site measurements. For instance, automated catch analyses can be checked against manual port sampling to identify discrepancies. Such cross-validation ensures that algorithms reflect reality and deliver genuinely useful management tools.
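One way to operationalize the cross-check described in the third pillar is a paired comparison between automated catch estimates and manual port-sampling weights, flagging hauls whose discrepancy exceeds a tolerance. This is a minimal sketch with hypothetical haul identifiers and weights; a real validation would also apply a formal statistical test (such as a paired t-test) and far larger samples.

```python
def flag_discrepancies(automated, manual, rel_tol=0.10):
    """Compare paired catch estimates (kg) from an automated system
    against manual port sampling.

    automated, manual: dicts mapping haul id -> estimated weight (kg).
    Returns (mean_bias_kg, list of haul ids whose absolute discrepancy
    exceeds rel_tol times the manual reference weight).
    """
    common = sorted(set(automated) & set(manual))
    diffs = [automated[h] - manual[h] for h in common]
    mean_bias = sum(diffs) / len(common)
    flagged = [
        h for h in common
        if abs(automated[h] - manual[h]) > rel_tol * manual[h]
    ]
    return mean_bias, flagged

# Hypothetical paired measurements for four hauls (kg).
automated = {"haul_1": 102.0, "haul_2": 95.0, "haul_3": 130.0, "haul_4": 60.0}
manual    = {"haul_1": 100.0, "haul_2": 98.0, "haul_3": 110.0, "haul_4": 61.0}

bias, flagged = flag_discrepancies(automated, manual)
print(round(bias, 2))  # mean bias in kg across common hauls
print(flagged)         # hauls exceeding the 10% tolerance
```

A persistent positive mean bias would suggest the automated system systematically overestimates catch weight, while individually flagged hauls point to conditions (species mix, lighting, deck clutter) worth investigating before the tool is trusted for management.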

Benefits for Research, Fishing, and Society

The framework’s implications extend to the scientific community, administrations, the fishing sector, and the public. For marine research, it provides coherent criteria for developing and benchmarking AI models, improving comparability, and accelerating insights into ecosystem health and climate impacts.

For fisheries and environmental management, it strengthens the reliability of decision-support systems—from quota allocation and marine spatial planning to monitoring illegal fishing. Properly validated models and well-governed data can help optimize routes, reduce emissions, enhance traceability, and improve sustainability at sea.

For society, trustworthy AI ensures that ocean digitalization proceeds responsibly. It supports a sustainable blue economy, balancing technological innovation with social and ecological well-being. As AI becomes increasingly integrated into environmental governance, the authors stress that regulation and ethics must evolve alongside technology.

“Regulating AI will be one of the defining governance challenges of our lifetime,” stated a fisheries biologist. “In the ocean, where data and decisions shape both ecosystems and societies, AI must serve as a bridge between human judgment and machine precision. Only by aligning ethical governance, scientific validation, and social inclusion can we ensure that AI reinforces—not replaces—our capacity to make informed decisions about the sea.”
