SC Adopts AI Governance Framework for Courts
The Supreme Court (SC) has approved a governance framework regulating the use of artificial intelligence (AI) in the judiciary, setting guidelines aimed at modernizing court operations while preserving human judgment in decision-making.
Framework Overview
In a resolution dated February 18, 2026, the SC adopted the “Governance Framework on the Use of Human-Centered Augmented Intelligence in the Judiciary.” This framework lays out rules anchored on fairness, accountability, and transparency, emphasizing the ethical and responsible use of human-centered augmented intelligence tools in the judiciary.
The framework aims to reinforce the public’s faith and confidence in the independence and impartiality of the judicial system.
Drafting and Consultation
The policy was drafted by a working group led by Senior Associate Justice Marvic Leonen, with Associate Justices Ramon Paul L. Hernando and Rodil V. Zalameda serving as vice chairpersons. It was developed in consultation with members of the judiciary, legal experts, and the academe, and aligns with international standards, including frameworks from Asean and guidelines from Unesco.
Core Principles
At the core of the framework is the concept of “human-centered augmented intelligence,” emphasizing that AI should assist — not replace — human reasoning. The SC stated:
“The use of human-centered augmented intelligence should be centered on human values, such as the promotion of the rule of law and fundamental freedoms, dignity and autonomy, privacy and data protection, fairness, nondiscrimination, and social justice.”
AI Tool Applications
The SC noted that AI tools may support tasks such as:
- Legal research
- Document summarization
- Transcription
- Translation
- Data processing
However, the outputs of these tools cannot serve as the sole basis for judicial decisions; judges and court officials remain accountable for all rulings.
Implementation and Oversight
The use of AI tools will require prior authorization from the SC and will be rolled out in phases, beginning with pilot testing. Mandatory disclosure rules will apply, requiring users to identify the AI tool used, its purpose, and the extent of human oversight.
The framework also imposes safeguards on privacy and data protection, prohibiting the processing of confidential or privileged information without express authority. Risk assessments must be conducted before deploying any AI system, including checks against threats such as data poisoning.
Permanent Committee Establishment
To oversee implementation, the SC will establish a permanent committee tasked with guiding the development and ethical use of AI in the judiciary. This body will include representatives from the legal, technical, and academic sectors.
Addressing Algorithmic Bias
The policy further requires measures to prevent algorithmic bias and discrimination, encouraging the use of AI systems that are environmentally sustainable.
Strategic Plan Alignment
The SC stated that the framework supports its Strategic Plan for Judicial Innovations 2022–2027, which aims to build a more transparent, accountable, and technology-driven judiciary.