Supreme Court Unveils AI Governance Framework for Judiciary

The Supreme Court (SC) has approved a governance framework regulating the use of artificial intelligence (AI) in the judiciary, setting guidelines aimed at modernizing court operations while preserving human judgment in decision-making.

Framework Overview

In a resolution dated February 18, 2026, the SC adopted the “Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary.” This framework lays out rules anchored on fairness, accountability, and transparency, emphasizing the ethical and responsible use of human-centered augmented intelligence tools in the judiciary.

The framework aims to reinforce the public’s faith and confidence in the independence and impartiality of the judicial system.

Drafting and Consultation

The policy was drafted by a working group led by Senior Associate Justice Marvic Leonen, with Associate Justices Ramon Paul L. Hernando and Rodil V. Zalameda serving as vice chairpersons. It was developed in consultation with members of the judiciary, legal experts, and the academe, and aligns with international standards, including frameworks from Asean and guidelines from Unesco.

Core Principles

At the core of the framework is the concept of “human-centered augmented intelligence,” emphasizing that AI should assist — not replace — human reasoning. The SC stated:

“The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice.”

AI Tool Applications

The SC noted that AI tools may support tasks such as:

  • Legal research
  • Document summarization
  • Transcription
  • Translation
  • Data processing

However, the outputs of these tools cannot be the sole basis for judicial decisions, as judges and court officials remain accountable for all rulings.

Implementation and Oversight

The use of AI tools will require prior authorization from the SC and will be rolled out in phases, beginning with pilot testing. Mandatory disclosure rules will apply, requiring users to identify the AI tool used, its purpose, and the extent of human oversight.

The framework also imposes safeguards on privacy and data protection, prohibiting the processing of confidential or privileged information without express authority. Risk assessments must be conducted before deploying any AI system, including checks against threats such as data poisoning.

Permanent Committee Establishment

To oversee implementation, the SC will establish a permanent committee tasked with guiding the development and ethical use of AI in the judiciary. This body will include representatives from the legal, technical, and academic sectors.

Addressing Algorithmic Bias

The policy further requires measures to prevent algorithmic bias and discrimination, encouraging the use of AI systems that are environmentally sustainable.

Strategic Plan Alignment

The SC stated that the framework supports its Strategic Plan for Judicial Innovations 2022–2027, which aims to build a more transparent, accountable, and technology-driven judiciary.
