As AI systems exert increasing influence over our lives, a critical demand has emerged: the ability to understand how these systems arrive at their conclusions. Legal professionals, compliance officers, and policy analysts are now grappling with complex questions surrounding AI’s inner workings and the factors driving its judgements. Exploring the landscape of AI explainability provides essential insights into building trustworthy and ethically sound technologies, ensuring that algorithms serve humanity’s best interests.
As a tech journalist specializing in AI governance, I’ve reviewed documentation on AI Explainability and identified questions that legal-tech professionals, compliance officers, and policy analysts should be asking.
Understanding AI Explainability and Transparency
What’s the difference between AI Explainability and AI Transparency, and why should we focus on Explainability in practice?
- Explainability is the degree to which people can understand an AI system’s rationale and processes for ensuring sustainability, safety, fairness, and accountability.
- Transparency, while related, can refer to both the interpretability of an AI system (“opening the black box”) *and* the demonstration that AI design & development processes are sustainable, safe, fair, and driven by responsibly managed data.
- Actionable Insight: While Transparency is crucial, Explainability offers practical guidance on operationalizing transparency, making it directly relevant to governance efforts.
Regulatory Alignment
What is the UK’s Algorithmic Transparency Recording Standard (ATRS) and how can it help adhere to regulations?
- The ATRS helps public sector bodies publish information about algorithmic tools used in decision-making processes affecting the public.
Ethical Considerations
Are there trade-offs between security and explainability, and how do we balance them, especially when working with children’s digital data?
- Security vs. Explainability: High-stakes contexts may incentivize obscuring AI workings to prevent exploitation, potentially raising bias, fairness, and accountability concerns. Balancing these aspects is essential for building responsible AI systems.
- Child-Centred AI: When explainability involves children, it’s essential to consider their specific needs and capabilities, like training implementers, engaging with children throughout the project lifecycle, and adhering to the UK ICO’s Age Appropriate Design Code.
Process-Based and Outcome-Based Explanations
What are process-based and outcome-based explanations and how should we approach each one?
- Outcome-based explanations cover the “what” and “why” behind model outputs. They should be accessible and easy to understand, and they include explaining to affected stakeholders if, how, and why an AI-assisted human judgement was reached.
- Process-based explanations demonstrate that good governance and best practices have been followed throughout an AI system’s design and use. This involves demonstrating that considerations of sustainability, safety, fairness, and responsible data management were operative throughout the project lifecycle.
Maxims For Ethical AI
What key maxims should guide our approach to AI explainability?
- Be Transparent: Make AI use obvious and explain decisions meaningfully to individuals, in line with Article 5(1) of the UK GDPR.
- Be Accountable: Ensure oversight, be answerable to internal and external bodies, and take responsibility for compliance.
- Consider Context: There’s no one-size-fits-all approach; this applies to model and explanation choices, governance structure, and stakeholders.
- Reflect on Impacts: Understand potential harm or wellbeing impairments from algorithmic decisions.
Building Explainable AI Systems
What high-level considerations should guide the development of appropriately explainable AI systems?
- Context, Potential Impact, and Domain-Specific Needs: Understand the type of application, domain-specific expectations, and existing technologies.
- Draw on Standard Interpretable Techniques: Match techniques to domain risks, data resources, and task appropriateness.
- Considerations in Using ‘Black Box’ AI Systems: Thoroughly weigh potential impacts, consider supplemental interpretability tools, and formulate an action plan to optimize explainability.
- Interpretability and Human Understanding: Account for the capacities and limitations of human cognition, emphasizing simplicity and accessibility.
Types Of Explanation
What different types of explanations must an organization provide so that decisions are SSAFE-D (Sustainable, Safe, Accountable, Fair, Explainable, and supported by good Data stewardship)?
- Rationale: Clarifying the ‘Why’
- Considerations for Child-Centred AI: Explanation of model choice, inner workings, and statistical results should be delivered in an age-appropriate manner.
- Responsibility: Providing details on ‘who’ is accountable at each step of the AI model’s design and deployment.
- Considerations for Child-Centred AI: If a child is interacting with an AI system (e.g. a toy, chatbot, or online system), they should have the “right for explanation at an age-appropriate level and inclusive manner”.
- Data: Highlighting ‘what’ types of data are held about them, what other data sources were used in a particular AI decision, and what data were used to train and test the AI model.
- Considerations for Child-Centred AI: Children’s data agency must be promoted and kept at the forefront, including through transparent reporting.
- Fairness: Explaining the measures taken to ensure unbiased and equitable AI decisions.
- Considerations for Child-Centred AI: Be explicit about the formal definition(s) of fairness used, while actively supporting marginalised children so that they benefit from, and are not disadvantaged by, the system.
- Safety: Providing the steps, measures, and reasoning for maximising the robustness, performance, reliability, and security of AI-assisted decisions.
- Considerations for Child-Centred AI: There should be a mechanism for continuous monitoring and assessment of safety throughout the entire lifecycle of the AI model.
- Impact: Focusing on how the system may affect people or broader society, and whether it is of benefit.
- Considerations for Child-Centred AI: It is critical that possible impacts on security, mental health and wellbeing, and future pathways are factored in.
Explainability Assurance Management
How can we put all of this into practice when building explainable AI systems?
- A set of tasks can help you design, develop or procure, and deploy appropriately transparent and explainable AI systems, and provide clarification about their results. These tasks are outlined below.
Top Tasks For Explainability Assurance Management For AI
In what manner can my enterprise ensure that well-established AI models can be adequately explained?
- Task 1: Select Priority Explanations (the domain and the impact on individuals will be key to prioritization).
- Considerations for Child-Centred AI: Where children’s data, or personal data relating to children, will be included in an AI system, additional care is mandated. Project planning must include heightened transparency to explain children’s participation.
- Task 2: Ensure data is collected and pre-processed in an explanation-aware manner (so that the reasons behind decisions can later be explained).
- Considerations for Child-Centred AI: Ensure that all regulatory guidelines surrounding the handling, use, and consent of children’s data are followed, in line with UNICEF policy guidance and the ICO.
- Task 3: Build a system capable of extracting the relevant information needed.
- Considerations for Child-Centred AI: Ensure the model you wish to use is justified, or that there is a method of using an ethical model that can still deliver the intended outcome without added safety concerns.
- Task 4: Ensure the extracted reasoning is translated into results or a summary that is fit for use.
- Considerations for Child-Centred AI: Explanations of decisions should be given in simple terms to maintain proper understanding.
- Task 5: Prepare implementers prior to deploying the system.
- Considerations for Child-Centred AI: Engage with any individuals who will be responsible for handling children’s data to ensure alignment and that staff understand its sensitivity.
- Task 6: Build and present explanations, considering all aspects of the model and the data presented.
- Considerations for Child-Centred AI: A short summary should be written to properly convey and support all facets of the explanation and the model used.
What is an EAM (Explainability Assurance Management)?
The EAM templates help you accomplish the six tasks above when implemented, and should be accompanied by a checklist.
- Review checklist: ensure you can demonstrate transparency from end to end, that sector-specific considerations and impacts have been taken into account, and that potential impact is considered when deciding the depth of explanation.
Data In Models
When considering data points and inputs to a model, the project team should:
- Establish safety objectives, model in a way that leads to explainable results, and implement assessments of stakeholder impact.
What is AI Explainability?
AI explainability is the degree to which AI systems and governance practices enable a person to understand the *rationale* behind a system’s behavior. It also encompasses demonstrating the processes behind its design, development, and deployment ensuring sustainability, safety, fairness, and accountability across contexts.
Core Aspects of AI Explainability
Explainability entails *communicability*, which necessitates clear and accessible explanations. The depth and content of explanations depend on the sociocultural context in which they’re delivered and the audience receiving them. AI Explainability addresses:
- Outcomes of algorithmic models (for automated decisions or as inputs to human decision-making).
- Processes by which a model and its encompassing system/interface are designed, developed, deployed, and deprovisioned.
AI Explainability vs. AI Transparency
Although related, explainability focuses on the *practice* of providing explanations about AI-supported outcomes and the processes behind them. AI Transparency refers to both:
- Interpretability of a system (understanding how and why a model performed in a specific context).
- Demonstrating that the processes and decision-making behind AI systems are sustainable, safe, and fair.
Ultimately, developing an explanation *requires* a degree of transparency.
Practical Implementation
Implementing AI explainability means focusing directly on practices of *outcome-based* and *process-based* explanation.
Key Considerations for Implementers
- **Context is Critical:** The depth, breadth, and content of explanations must vary based on sociocultural context and audience.
- **Security vs. Explainability Trade-offs:** Security measures taken to protect algorithms and data can conflict with explainability, creating risks around bias and potentially leading to unintended consequences.
- **Child-Centric Considerations:** Additional concerns include long-term effects on holistic development as well as data exfiltration.
What are the main types of explanation?
As AI continues to permeate critical decision-making processes, it’s no longer enough to simply have a model that performs well. Stakeholders and regulators demand transparency, driving the need to make AI systems explainable. This section delves into the primary types of explanations crucial for building trust and ensuring compliance.
The type of explanation required will vary depending on the context, the socio-cultural context, and the audience to whom they are offered. While there is no “one-size-fits-all” approach to explaining AI/ML-assisted decisions, these six common explanation types are designed to help your AI project team build concise and clear explanations. Each is related to a SSAFE-D (Sustainability, Safety, Accountability, Fairness, Explainability, and Data Stewardship) principle:
- Rationale Explanation: Addresses the “why” behind an AI decision.
- Responsibility Explanation: Clarifies who is accountable throughout the AI model’s lifecycle, providing a point of contact for human review.
- Data Explanation: Details the data used, its sources, and how it was managed to reach a decision.
- Fairness Explanation: Outlines the steps taken to ensure unbiased and equitable AI decisions.
- Safety Explanation: Describes the measures in place to maximize the performance, reliability, security, and robustness of AI outcomes.
- Impact Explanation: Explains what was considered about the potential effects of an AI decision-support system on an individual and society.
These explanations can further be divided into two broad types:
Process-Based Explanations
These explanations demonstrate the good governance processes and best practices that were followed throughout the AI system’s design and deployment. They show how sustainability, safety, fairness, and responsible data management were considered end-to-end in the project lifecycle.
For example, if trying to explain fairness and safety, the components of your explanation will involve demonstrating that you have taken adequate measures across the system’s production and deployment to ensure its outcomes are fair and safe.
Outcome-Based Explanations
These explanations focus on the reasoning behind model outputs, delineating contextual and relational factors. They should be communicated in plain language, accessible to impacted stakeholders.
It’s important to also take explainable AI for Children into consideration. When considering children’s rights as they relate to AI systems, it involves ensuring children understand how AI systems impact them as well as utilising age-appropriate language.
Remember, providing both process-based and outcome-based explanations is crucial for fostering trust, demonstrating accountability, and ultimately ensuring the responsible deployment of AI systems.
What considerations should be addressed when building appropriately explainable AI systems?
As AI systems become more integrated into critical decision-making processes, especially in sectors like legal-tech, compliance, and governance, understanding and explaining their rationale is paramount. Key to this is ensuring AI projects are sustainable, fair, safe, accountable, and maintain data quality and integrity. This entails emphasizing communicability, delivering clear and accessible explanations tailored to the sociocultural context and the audience.
Let’s break down the key considerations:
Transparency and Accountability
Transparency of outcomes and processes is fundamental. Documentation detailing how an AI system was designed, developed, and deployed helps justify actions and decisions throughout the project lifecycle. This directly ties into Article 5(1) of the UK GDPR, which mandates that personal data be “processed lawfully, fairly, and in a transparent manner.” Project teams need to satisfy all aspects of this principle.
- Disclose AI Use: Proactively inform individuals, in advance, about the use of AI in decisions concerning them. Be open about why, when, and how AI is being used.
- Meaningfully Explain Decisions: Provide a coherent and truthful explanation, presented appropriately and delivered at the right time.
Accountability involves ensuring suitable oversight and being answerable to internal and external stakeholders, including regulators and affected individuals. This includes taking responsibility for compliance with data protection principles and demonstrating that compliance through appropriate technical and organizational measures; data protection by design and default.
- Assign Responsibility: Identify and assign responsibility within the organization for managing and overseeing the ‘explainability’ requirements of AI systems, including a human point-of-contact for clarifications or contesting decisions.
- Justify and Evidence: Actively consider and document justified choices about designing and deploying appropriately explainable AI models. Document these considerations and demonstrate they are present in the model’s design and deployment. Show evidence of explanations provided to individuals.
Context and Impact
There is no one-size-fits-all approach. Contextual considerations involve paying attention to various interrelated elements that can affect explaining AI-assisted decisions and managing the overall process. This should be a continuous assessment throughout the project lifecycle.
- Choose Appropriate Models and Explanations: Based on the setting, potential impact, and what an individual needs to know about a decision, select an appropriately explainable AI model and prioritize relevant explanation types.
- Tailor Governance and Explanation: Ensure robust governance practices, tailored to the organization and the specific circumstances and needs of each stakeholder.
- Identify the Audience: Consider the audience and tailor explanations to their level of expertise and understanding. What level of explanation is fit for purpose, whether for end users, implementers, auditors, or individuals impacted by the decision? Consider vulnerabilities and reasonable adjustments for those requiring explanations.
Reflecting on the impacts of AI systems helps demonstrate that algorithmic techniques will not harm or impair individual wellbeing. This includes evaluating the ethical purposes and objectives of the AI project at the initial stages and revisiting and reflecting on those impacts throughout development to mitigate potential harms.
Practical Implementation & Key Considerations
When seeking higher degrees of explainability for models and improved interpretability of outputs, consider the following:
- Domain-Specific Needs: Assess the context, potential impact, and domain-specific needs when determining interpretability requirements. This includes considering the type of application, domain-specific expectations, norms, and rules, and existing technologies. How will the solution impact industry standards and other government advice?
- Standard Interpretable Techniques: Utilize standard interpretable techniques whenever possible, balancing domain-specific risks and needs with available data resources, domain knowledge, and appropriate AI/ML techniques (a minimal sketch follows this list).
- Black Box AI Systems: When considering ‘black box’ AI systems, thoroughly weigh the potential impacts and risks, explore options for supplemental interpretability tools, and formulate an action plan to optimize explainability. Create detailed reporting to assist with model decision-making.
- Human Understanding: Keep in mind interpretability must be in terms of the capacities and limitations of human cognition, prioritizing simplicity and informational parsimony for accessible AI.
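To make the “standard interpretable techniques” point concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset with invented feature names, of an inherently interpretable model whose learned coefficients can be read directly as a global explanation. It illustrates the idea rather than prescribing an implementation.

```python
# A minimal sketch (not from the source guidance) of the "standard interpretable
# techniques" idea: an inherently interpretable model whose parameters can be read
# directly. Assumes scikit-learn; the dataset and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "years_at_address", "existing_debt"]  # hypothetical
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Standardising the features keeps the coefficients roughly comparable in size.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# For a linear model, the coefficients *are* the global explanation:
# the sign gives the direction of influence, the magnitude its relative strength.
coefficients = model.named_steps["logisticregression"].coef_[0]
for name, weight in sorted(zip(feature_names, coefficients), key=lambda p: -abs(p[1])):
    print(f"{name:>18}: {weight:+.3f}")
```

For a model like this, outcome-based explanations can be built directly from the coefficients, without post-hoc tooling.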
Types of Explanations
Context determines what information is required, useful, or accessible to explain decisions involving AI and, therefore, what types of explanations are the most appropriate. There are several explanation types that were designed to help your AI project team build concise and clear explanations:
- Rationale Explanation: Helps people understand the reasons that led to a decision outcome.
- Responsibility Explanation: Helps people understand who is involved in the development and management of the AI model, and who to contact for a human review of a decision.
- Data Explanation: Helps people understand what data about them, and what other sources of data, were used in a particular AI decision, as well as the data used to train and test the AI model.
- Fairness Explanation: Helps people understand the steps taken to ensure AI decisions are generally unbiased and equitable, and whether or not they have been treated equitably themselves.
- Safety Explanation: Helps people understand the measures that are in place and the steps taken to maximize the performance, reliability, security, and robustness of the AI outcomes, as well as what is the justification for the chosen type of AI system.
- Impact Explanation: Helps people understand the considerations taken about the effects that the AI decision-support system may have on an individual and society.
What is interpretability in the context of AI/ML systems?
In the rapidly evolving world of AI and machine learning (ML), interpretability has emerged as a critical concern for regulators, compliance officers, and anyone deploying these systems. Simply put, interpretability is the degree to which a human can understand how and why an AI/ML model made a particular prediction or decision in a specific context. It’s about more than just opening the “black box”; it’s about making the model’s rationale accessible and comprehensible to human users.
The Core of Interpretability
Interpretability goes beyond abstract understanding; it centers on a human’s ability to grasp the interworking and underlying logic of an AI system. Ideally, stakeholders should be able to dissect the reasons behind a model’s outputs or behaviors, pinpointing how various input features, interactions, and parameters contributed to a specific outcome. This requires translating complex mathematical components into plain, everyday language that decision recipients can understand.
Regulatory Concerns and Practical Implications
Regulators are increasingly emphasizing interpretability as a cornerstone of responsible AI development and deployment. The need for transparency creates tension in high-stakes contexts like national security, where explaining an AI system may expose vulnerabilities. However, lack of interpretability raises concerns about:
- Bias and Fairness: Without understanding how a model works, it’s difficult to detect and mitigate discriminatory biases embedded in the data or algorithms.
- Accountability: If an AI system makes an error or produces an unfair outcome, it’s crucial to trace the decision-making process and identify the responsible parties.
- Unintended Consequences: The inability to interpret a model’s behavior can lead to missed risks and unexpected negative impacts, especially on vulnerable populations.
For AI systems impacting children, the stakes are especially high. Regulations like the UK’s Age Appropriate Design Code emphasize child-friendly explanations and transparent data practices. UNICEF’s Policy Guidance on AI for Children adds that systems should be developed considering the most vulnerable children, regardless of their understanding.
Practical Tools for Building Interpretable Systems
While using less complex models like linear regression may enhance interpretability, sometimes “black box” models like Neural Networks or Random Forests offer more powerful performance. The solution is to then incorporate ‘post-hoc’ interpretability techniques — methods applied after a model is built to explain it externally. Here are two main techniques that can help with such models:
- Local Explanations: Techniques such as LIME (Local Interpretable Model-agnostic Explanations) provide per-instance explanations, i.e. why the model made a particular decision for a particular input.
- Global Explanations: PDPs (Partial Dependence Plots) and ALE (Accumulated Local Effects) plots offer insight into “average” feature effects, explaining and evaluating a model at a high level, globally.
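As an illustration of the two routes above, the following sketch assumes scikit-learn and the third-party `lime` package, with an invented dataset and feature names; it is not drawn from the source guidance.

```python
# Illustrative sketch only: assumes scikit-learn and the third-party `lime`
# package are installed, and uses an invented dataset and feature names.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(42)
feature_names = ["age", "tenure", "usage"]  # hypothetical features
X = rng.normal(size=(1000, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Local explanation: why did the model score *this* instance the way it did?
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "accept"],
    mode="classification",
)
local_exp = explainer.explain_instance(X[0], black_box.predict_proba, num_features=3)
print(local_exp.as_list())  # per-feature contributions for this one decision

# Global explanation: the average effect of one feature across the dataset.
pd_result = partial_dependence(black_box, X, features=[1])  # feature "tenure"
print(pd_result["average"])  # how the predicted outcome shifts, on average
```

Local outputs like these feed outcome-based explanations for an individual decision, while the partial-dependence curve supports the global, process-level account of how the model behaves.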
Building explainable AI/ML systems isn’t easy, but it’s critical. Teams need to make justified, transparent choices about model design and deployment, and be in a position to clearly explain how AI influenced specific decisions.
What are the key aspects of being transparent in AI development?
As AI adoption accelerates, transparency is no longer optional but a fundamental requirement. Transparency in AI, according to industry standards, encompasses two critical facets. First, it involves the interpretability of the AI system—the ability to understand how and why a model behaves as it does, effectively ‘opening the black box’. Second, transparency mandates the demonstration that the AI system’s design, development, and deployment processes are sustainable, safe, fair, and underpinned by responsibly managed data. This means clear documentation and justification at every stage of the AI lifecycle.
Core Insights
Being transparent in AI development hinges on several key aspects:
- Disclosing AI Use: Be upfront about using AI in decision-making processes before making decisions. Clearly state when and why AI is being used.
- Meaningfully Explaining Decisions: Provide stakeholders with truthful, coherent, and appropriately presented explanations at the right time.
- Transparency Recording: Leverage frameworks like the UK’s Algorithmic Transparency Recording Standard (ATRS) to openly publish information about algorithmic tools used in public sector decision-making. The ATRS offers a structured way to communicate about algorithmic tools and their impact.
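As a rough illustration of what “a structured way to communicate about algorithmic tools” can look like in practice, here is a hypothetical, simplified record sketched in Python; the field names are invented for this example and are not the official ATRS schema.

```python
# Hypothetical, simplified illustration of the *kind* of structured record a
# transparency standard such as the ATRS encourages. The field names below are
# invented for this sketch and are NOT the official ATRS schema.
algorithmic_tool_record = {
    "tool_name": "Example benefits triage assistant",   # hypothetical tool
    "organisation": "Example public body",
    "purpose": "Prioritise casework; a human makes the final decision.",
    "decision_role": "decision support (human-in-the-loop)",
    "data_sources": ["application forms", "internal case history"],
    "explanation_types_provided": ["rationale", "responsibility", "data",
                                   "fairness", "safety", "impact"],
    "human_contact_point": "casework-review@example.gov.uk",   # hypothetical
    "impact_assessments": ["DPIA completed", "equality impact assessment"],
}
```

Keeping such a record alongside process-based documentation makes it easier to assemble whatever a transparency standard ultimately requires and to answer stakeholder questions.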
Regulatory Concerns
AI transparency isn’t just a best practice; it’s a compliance imperative. Article 5(1) of the UK GDPR demands that personal data processing be lawful, fair, and transparent. This legal mandate is shaping how organizations approach AI development and deployment. The UK’s Information Commissioner’s Office (ICO) has also developed guidance on explaining decisions made with AI that underscores the need for clear and accessible explanations.
However, conflicts can emerge, especially in areas such as national security, where security interests might clash with the need for transparency. Additionally, project teams have to address potential AI Safety risks, how they manage information generated about those risks, and to what extent explanations of the model and AI project processes are made available.
Practical Implications
For organizations to effectively implement AI transparency, some actionable steps must be followed:
- Process-Based Explanations: Demonstrate good governance practices throughout the AI system’s design and use. Document how sustainability, safety, fairness, and responsible data management are integrated into the project lifecycle.
- Outcome-Based Explanations: Offer clear, accessible explanations of model outputs in plain language. Justify how AI-assisted judgments are reached, especially in human-in-the-loop systems.
- Address Data Equity Concerns: Transparency requires a firm commitment to data equity to ensure that a diverse range of data are included; transparent reporting should also demonstrate that this goal was met. This requires addressing how datasets are built, managed, and used, with a continual focus on mitigating potential biases.
Special Considerations: Child-Centred AI
Transparency is not ‘one size fits all’, and it requires special care for vulnerable groups. Numerous child-centric guidance documents, such as the UNICEF Policy Guidance on AI for Children and the UK ICO Age Appropriate Design Code, address transparency. This involves ensuring children understand how AI systems impact them, with explanations delivered in an age-appropriate manner. In practice, that means informing children when they are interacting with an AI system rather than a human; providing clear privacy information; delivering ‘bite-sized’ explanations when personal data is used for training; posting clear policies, community standards, and terms of use; and using child-friendly depictions of information tailored to specific ages.
What are the key aspects of being accountable in AI development?
Accountability is crucial to sustainable, fair, and safe AI development. It’s about justifying AI processes and outcomes and being answerable to internal stakeholders, regulators, and affected individuals.
Core Concepts: From Transparency to Explainability
Accountability necessitates transparency, but they aren’t interchangeable. Transparency involves interpretability (“opening the black box”) and demonstrating that design/development processes are sustainable, safe, and fair.
Explainability, which is practice-centered, focuses on operationalizing transparency in both AI-supported outcomes and development processes.
Regulatory Concerns and Legal Frameworks
The UK’s General Data Protection Regulation (GDPR) frames accountability as a core principle, demanding responsibility for compliance with data protection principles. This encompasses implementing appropriate technical and organizational measures.
The UK’s Information Commissioner’s Office (ICO) and initiatives like the Algorithmic Transparency Recording Standard (ATRS) reflect the growing emphasis on accountable AI practices.
Practical Implications and Actionable Steps
Being accountable means several key actions for tech and legal teams:
- Assign Responsibility: Designate individuals within the organization who manage explainability requirements, ensuring a clear point-of-contact for inquiries or challenges to decisions.
- Justify and Evidence: Actively make and document justifiable choices related to explainable design and deployment. Evidence those choices throughout the project’s lifecycle, demonstrating meaningful explanations to individuals.
- Transparency: Project teams should be honest about how and why they are using personal data.
- Meaningfully explain decisions: Provide the stakeholders with a coherent explanation which is:
- Truthful and meaningful;
- Written or presented appropriately; and
- Delivered at the right time.
Organizations must demonstrate commitment to explaining decisions made with AI, focusing on processes and actions throughout the AI/ML model lifecycle (design, procurement/outsourcing, and deployment).
Child-Centric AI: Doubling Down on Responsibility
When AI systems impact children, accountability is paramount. Adhering to the UK ICO’s Age Appropriate Design Code is fundamental. UNICEF guidance requires ensuring AI systems protect and empower child users, regardless of their understanding of the system. Organizations must also account for children’s rights, including expert oversight and independent bodies with a focus on those rights.
Ultimately, accountability in AI is a continuous journey, requiring ongoing reflection, impact-assessment, and a commitment to building trustworthy systems.
What aspects of context should be considered when explaining AI-assisted decisions?
When explaining AI-assisted decisions, “consider context” is paramount. It’s not a one-off thing, but an ongoing consideration from concept to deployment, and even when presenting the explanation itself.
Key Aspects of Considering Context:
- Model and Explanation Selection: Choose models and explanations tailored to the specific scenario. This means assessing the setting, the potential impact of the decision, and what the individual needs to know about it. This assessment helps you to:
- Choose an AI model that is appropriately explainable.
- Prioritize the delivery of relevant explanation types.
- Governance and Explanation Tailoring: AI explainability governance should be:
- Robust and reflective of best practices.
- Tailored to your organization and the stakeholder’s circumstances and needs.
- Audience Identification: Recognize that the audience influences the kind of meaningful and useful explanations. Considerations should be given to:
- End-users and Implementers
- Auditors
- Decision-Impacted Individuals
- Their level of expertise about the decision.
- The range of people subject to decisions (to account for knowledge variation).
- Whether individuals require reasonable adjustments in how they receive explanations.
- Accommodate the explanation needs of the most vulnerable.
To account for the unique vulnerabilities of children, AI systems should be adapted to national and local contexts from design to deployment to eliminate algorithmic bias that results from contextual blindness. You should also consider the active participation of children across all stages of the project lifecycle to understand the context of the system’s intended use. When considering potential impacts, give focus to “actively supporting the most marginalised children”.
How can the impacts of AI/ML systems be reflected upon?
As AI/ML systems are increasingly entrusted with human decision-making, it’s crucial to reflect on their impacts. Individuals can’t directly hold these systems accountable, so organizations must demonstrate that algorithmic techniques don’t harm well-being.
This reflection should start at the project’s initial stages by addressing ethical purposes and objectives. However, the reflection shouldn’t stop there. You should revisit and reflect on these impacts throughout the development and implementation stages. Document any new impacts identified, along with implemented mitigation measures.
Key Aspects of Reflecting on Impacts
Ensure Individual Wellbeing: Build and implement AI/ML systems that:
- Foster physical, emotional, and mental integrity.
- Ensure free and informed decisions.
- Safeguard autonomy and expression.
- Support abilities to flourish and pursue interests.
- Preserve private life independent of technology.
- Secure capacities to contribute to social groups.
Ensure Societal Wellbeing: Build systems that:
- Safeguard human connection and social cohesion.
- Prioritize diversity, participation, and inclusion.
- Encourage all voices to be heard.
- Treat all individuals equally and protect social equity.
- Use AI to protect fair and equal treatment under the law.
- Utilize innovation to empower and advance well-being.
- Anticipate wider global and generational impacts.
Considerations for Child-Centric AI
Reflecting on impacts connects directly to ensuring fairness, non-discrimination, and data privacy for children. This means:
- Actively supporting marginalized children to ensure benefits from AI systems.
- Ensuring datasets include a diversity of children’s data.
- Implementing responsible data approaches to handle children’s data with care and sensitivity.
- Adhering to the Age Appropriate Design Code, ensuring children’s data isn’t used in ways that negatively affect their well-being or contravene established standards.
How can an appropriate AI/ML system be built to extract relevant information for a range of explanation types?
Building an AI/ML system capable of providing relevant information for various explanation types requires careful consideration of several factors, including model selection, data handling, and governance processes.
Key Considerations for Explainability
- Context, Impact, and Domain: Assess the specific context, potential impact, and domain-specific needs when establishing the interpretability requirements of the project.
- Standard Techniques: Draw upon standard interpretable techniques when possible, balancing domain-specific risks, available data, and appropriate AI/ML techniques.
- “Black Box” Models: If using a “black box” AI system, thoroughly weigh the potential impacts and associated risks, consider options for supplemental interpretability tools and formulate an action plan to optimize explainability.
- Human Understanding: Pay attention to both the capacities and limitations of human cognition when considering interpretability needs.
Several tasks can facilitate the design and deployment of transparent and explainable AI systems, aiding in the clarification of results for stakeholders:
Tasks for Explainability Assurance Management
- Task 1: Select Priority Explanations: Identify the most relevant explanation types (Rationale, Responsibility, Data, Fairness, Safety, Impact) based on the domain, use case, and potential impact on individuals.
- Task 2: Collect and Pre-process Data: Gather and prepare data in an explanation-aware manner, considering data sources, quality, and potential biases. This aids in constructing the various explanations.
- Task 3: System Design for Information Extraction: Build the system to extract relevant information for a range of explanation types and to favour interpretable models. Model selection and training depend on explanation needs, including the choice between more explainable models and ‘black box’ systems.
- Task 4: Translate Rationale: Translate the system’s rationale and incorporate it into your decision-making process. Implementers of the AI system’s outputs will need to recognize what is relevant to explaining the outcome to an impacted user.
- Task 5: Prepare Implementers: Ensure they use the AI/ML model responsibly and fairly. Their training should cover the basics of machine learning, its limitations, and how to manage cognitive biases.
- Task 6: Build and Present Explanations: Consider how decisions should be communicated and how others, depending on the context, may expect you to explain your decisions as a user of automated, AI-assisted technology. Be open to providing additional explanations and details on the risks of certain actions or scenarios.
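One lightweight way to operationalise these six tasks is to track them as an explicit checklist with named owners and linked evidence. The sketch below is a hypothetical Python illustration; the structure and field names are invented, not prescribed by the source guidance.

```python
# Hypothetical sketch of how a project team might track the six tasks above as
# a checklist with named owners and linked evidence. The structure and field
# names are invented for illustration, not prescribed by the source guidance.
from dataclasses import dataclass, field

@dataclass
class ExplainabilityTask:
    name: str
    owner: str = "unassigned"                           # accountable person
    evidence: list[str] = field(default_factory=list)   # links to documentation
    done: bool = False

eam_checklist = [
    ExplainabilityTask("Select priority explanations"),
    ExplainabilityTask("Collect and pre-process data in an explanation-aware way"),
    ExplainabilityTask("Design the system to extract explanation-relevant information"),
    ExplainabilityTask("Translate the model's rationale into plain language"),
    ExplainabilityTask("Prepare and train implementers"),
    ExplainabilityTask("Build and present explanations to affected stakeholders"),
]

outstanding = [task.name for task in eam_checklist if not task.done]
print(f"{len(outstanding)} of {len(eam_checklist)} tasks still need owners and evidence.")
```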
Addressing Regulatory Concerns & Ethical Considerations
When developing AI systems – and particularly those dealing with sensitive data or high-impact decisions – compliance with the UK GDPR and other regulations is paramount. Here’s how to integrate explainability into a compliance framework:
- Transparency: Make the use of AI/ML in decision-making obvious and explain decisions meaningfully.
- Accountability: Ensure appropriate oversight, and be answerable to both internal and external bodies for any AI/ML-assisted decisions.
- Context: There’s no “one size fits all” solution; tailoring explanations to the use case and the audience is crucial.
- Impacts: Actively engage human oversight in decision-making processes to avoid potentially harmful effects on end users.
Transparency Recording Standards
Organizations can use resources such as the Algorithmic Transparency Recording Standard (ATRS) which is a framework that captures information about algorithmic tools and AI systems. This helps public sector bodies openly publish information about the services they use for decision-making processes.
Trade-offs of Security and Explainability
Be wary of the trade-offs between security and explainability. While transparency may create vulnerabilities, a lack of it raises concerns about bias, fairness, and accountability. Balancing these is essential.
How can the rationale of an AI system’s results be translated into easily understandable reasons?
Translating complex AI rationale into understandable reasons is a crucial challenge, demanding careful consideration of context, audience, and potential impacts. Here’s how tech journalists recommend approaching this translation for AI governance and compliance:
Understanding the Rationale Explanation
The core aim is to elucidate the ‘why’ behind an AI decision in an accessible manner. This involves:
- Demonstrating how the system behaved to reach the decision.
- Illustrating how different components transformed inputs into outputs, highlighting significant features, interactions, and parameters.
- Conveying the underlying logic in easily understandable terms to the intended audience.
- Contextualizing the system’s results to the affected individual’s real-life situation.
Process-based explanations clarify the design and deployment workflow, focusing on interpretability and explainability, including data collection, model selection, explanation extraction, and delivery. Outcome-based explanations then translate the system’s workings, including input/output variables and rules, into everyday language to clarify the role of factors and statistical results in reasoning about the problem.
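As a small, hypothetical illustration of that translation step, the sketch below turns feature-level contributions (as a post-hoc explainer might report them) into a plain-language, outcome-based sentence; all names, weights, and wording are invented.

```python
# A minimal sketch (not from the source guidance) of turning feature-level
# contributions -- e.g. the output of a post-hoc explainer -- into a plain-
# language, outcome-based explanation. All names and weights are invented.
def plain_language_rationale(decision: str, contributions: dict[str, float],
                             top_n: int = 2) -> str:
    """Render the strongest drivers of a decision as an everyday sentence."""
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:top_n]
    phrases = [
        f"your {name.replace('_', ' ')} "
        f"{'supported' if weight > 0 else 'counted against'} this outcome"
        for name, weight in ranked
    ]
    return f"The decision was '{decision}' mainly because " + " and ".join(phrases) + "."

# Example: contributions as a hypothetical explainer might report them.
print(plain_language_rationale(
    "application referred for human review",
    {"income_stability": -0.42, "existing_debt": 0.31, "years_at_address": 0.05},
))
```

In practice the wording would be tailored to the audience and reviewed by the human point of contact, in line with the maxims below.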
Key Maxims for Translation
Several key principles guide the translation process:
- Be Transparent: Disclose AI use proactively and explain decisions meaningfully.
- Be Accountable: Assign responsibility for explainability and justify design choices. Ensure a human point-of-contact exists for clarifications.
- Consider Context: Tailor governance and model explanation based on the audience expertise, vulnerabilities, and requirements for reasonable adjustments.
- Reflect on Impacts: Address ethical purposes and objectives of the AI project in initial and ongoing assessments.
Navigating Child-Centred AI
When children are affected, additional considerations are paramount:
- Technical explanations must be delivered using age-appropriate language.
- Involve children in the design stages to familiarize them with the models and their decisions.
- Ensure complete transparency about children’s data usage throughout the AI system.
- Establish organizational roles and responsibilities for accountability, protecting and empowering child users.
Practical Strategies for Model Reporting
Model reporting plays a pivotal role in translating results:
- Recognize the legitimate determinants of the outcome: implementers must identify the key factors behind the outcome being described.
- Check whether the correlations produced by the model make sense for the use case under consideration.
- Prepare implementers by teaching the basics of machine learning and the limitations of automated systems.
Implementing Effective Communication
Communication of results requires careful planning:
- Create a short summary of AI-assisted decisions, supported by graphics, videos, or interactive resources.
- Ensure accessibility and clear communication to limit unexpected outcomes.
- Include references to relevant policies throughout the process.
Explainability Assurance Management (EAM) Template
The EAM template consists of specific tasks designed to facilitate the entire explanation process. It includes prioritized explanations, collected pre-processed data, and an identified system to extract relevant information for a range of explanation types.
Risk Management and Challenges
Potential risks and rewards should be taken into account, but doing so does not guarantee complete and fair transparency for every use case; even the best-laid plans cannot be perfect or foolproof.
How do implementers of AI systems need to be prepared for deployment?
Deploying AI systems responsibly requires careful preparation, especially when explainability and accountability are paramount. Implementers need a comprehensive understanding of the system’s capabilities and limitations to ensure its ethical and effective application.
Key Preparations for AI System Deployment
Implementers must receive appropriate training that encompasses:
- Machine Learning Fundamentals: A foundational understanding of how machine learning algorithms function.
- Limitations of AI: Recognition of the constraints and potential pitfalls of AI and automated decision-support technologies.
- Risk-Benefit Analysis: Awareness of the benefits and risks associated with deploying AI systems for decision support.
- Cognitive Bias Management: Techniques to mitigate cognitive biases, such as automation bias (over-reliance on AI outputs) and automation-distrust bias (under-reliance on AI outputs).
Explainability Assurance Management
Successful deployment also necessitates thorough explainability assurance management, encompassing these key tasks:
- Prioritizing Explanations: Determining the most critical types of explanations (Rationale, Responsibility, etc.) based on the domain, use case, and potential impact on individuals.
- Data Collection and Preprocessing: Ensuring data quality, representativeness, and addressing potential biases during data collection and preprocessing. Crucially includes proper data labelling.
- System Design for Information Extraction: Building the AI system to extract relevant information for various explanation types, acknowledging the costs and benefits of using newer but possibly less explainable AI models.
- Translating Model Rationale: Converting the technical rationale of the system’s results into understandable terms and justifying the incorporation of statistical inferences.
- Building and Presenting User-Friendly Explanations: Developing explanations that facilitate collaboration between implementers and affected individuals (for example, between care workers and family members).
Considerations When Children’s Data is Involved
When children’s data or well-being is at stake, additional considerations are crucial:
- Implementers should be trained in child-centered design, an awareness that will help them implement safeguards that take into account the special requirements and rights of children.
- Understanding data protection regulations, such as the General Data Protection Regulation (GDPR) and the UK ICO Age Appropriate Design Code.
- Implementers should also undergo background checks (e.g., with the Disclosure and Barring Service (DBS), the UK’s background clearance service) and receive training in working effectively with children.
Preparing for deployment means more than technical setup; it means cultivating an ecosystem of responsibility, fairness, and transparency.
How should explanations be built and presented?
As AI systems become more prevalent in decision-making, the need for clear and accessible explanations is paramount. But how do we actually build and deliver these explanations effectively?
Outcome-based vs. Process-based Explanations
The first step is to distinguish between outcome-based and process-based explanations:
- Outcome-based explanations focus on the components and reasoning behind model outputs. These explanations aim to make clear why a certain decision was reached. They should be accessible, using plain language.
- Process-based explanations demonstrate that you have robust governance processes and followed industry best practices during the AI system’s design, development, and deployment. This involves showing that sustainability, safety, fairness, and responsible data management were considered throughout the project lifecycle.
Both types are crucial for building trust and ensuring accountability.
Key Maxims of AI Explainability
Four maxims can improve your approach to AI explainability:
- Transparency: Be upfront about the use of AI/ML in decision-making, including how, when, and why it’s being used. Meaningfully explain decisions truthfully, appropriately, and in good time.
- Accountability: Designate individuals or teams responsible for overseeing the “explainability” requirements of AI systems. Have a point of contact to clarify or contest a decision and actively make choices about how to design and deploy AI/ML models to be appropriately explainable.
- Context: Recognize that there’s no one-size-fits-all approach. Context covers several interrelated elements that affect how AI/ML-assisted decisions, and the overall process, are explained.
- Reflect on Impacts: Identify and reduce potentially harmful effects of decision-making. Ensure the system’s purposes are ethical so that wellbeing is not impaired, and consider societal wellbeing to safeguard human connection.
Types of Explanations for SSAFE-D Principles
To help build concise and clear explanations around the SSAFE-D (Sustainability, Safety, Accountability, Fairness, Explainability, and Data Stewardship) principles, consider six types of explanations:
- Rationale Explanation: The “why” behind a decision.
- Responsibility Explanation: “Who” to contact for human review; roles, functions, and accountability across the AI model’s lifecycle.
- Data Explanation: “What” data is held and other details of the data used: collection, third-party access, pre-processing, and generalisability.
- Fairness Explanation: How bias was mitigated and what steps were taken to ensure equitable treatment.
- Safety Explanation: How performance and reliability are maximised, and how the chosen type of AI system compares against alternatives.
- Impact Explanation: The effects that a decision-support system may have on an individual or on society, reassuring the public that it is beneficial.
Practical Steps for Building Explanations
A template for Explainability Assurance Management for AI projects focuses on:
- Project Planning for the AI lifecycle
- Data Extraction and Pre-Processing
- Model Selection and Training for a range of explanation types
- Model Reporting in easily understandable terms
- User Training to prepare implementers to deploy the AI system
High-Level Considerations
There are four considerations for teams looking to achieve explainability for wide and diverse audiences:
- Context, Potential Impact, Domain-Specific Needs: What type of application and technology are you using?
- Standard Interpretable Techniques: Match domain-specific risks and needs with appropriate AI/ML techniques.
- Black Box AI Systems: Thoroughly weigh potential impacts and risks, and consider supplemental interpretability tools and action plans to improve explainability.
- Interpretability and Human Understanding: Focus on the capacities and limitations of human cognition in order to deliver an interpretable AI system.
By focusing on these considerations, organizations can build AI systems that are both effective and understandable, promoting trust and accountability.
Ultimately, the pursuit of explainable AI is not merely a technical challenge, but a fundamentally human one. By diligently addressing transparency, accountability, contextual awareness, and potential impacts, especially for vulnerable populations like children, we can move towards a future where AI systems are not only powerful tools, but also trusted partners in shaping a more equitable and understandable world. The strategies outlined here provide a roadmap for making AI’s inner workings more accessible, ensuring that its decisions are not shrouded in mystery, but instead illuminated by clarity and purpose.