What is Transparency? – Ethics of AI
Transparency can be defined in multiple ways and is closely related to several neighboring concepts that are often used as synonyms, including explainability (AI research in this area is referred to as XAI), interpretability, understandability, and the term black box.
At its core, transparency refers to a property of an application: how much it is possible, at least in principle, to understand about a system’s inner workings. It also encompasses the provision of explanations of algorithmic models and decisions that are comprehensible to the user. In this sense, transparency concerns public perception and understanding of how AI operates, and can be seen as a broader socio-technical and normative ideal of openness.
There are numerous open questions about what constitutes transparency or explainability, including what level is sufficient for different stakeholders. The precise meaning of “transparency” can vary depending on the situation, leading to discussions about whether there are multiple kinds or types of transparency.
Transparency as a Property of a System
When transparency is considered as a property of a system, it addresses how a model works or functions internally. This concept is further divided into:
- Simulatability: a person’s ability to grasp the functioning of the model as a whole, that is, to mentally simulate it.
- Decomposability: the ability to understand the model’s individual components, such as its inputs, parameters, and calculations.
- Algorithmic Transparency: visibility of the algorithm by which the model learns and produces its outputs.
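As a rough illustration of these properties, consider a model small enough to write out by hand. The toy loan-scoring function below (all weights and variable names are invented for illustration) is simulatable because a person can recompute the whole decision on paper, and decomposable because each coefficient can be inspected on its own:

```python
# A deliberately tiny, fully transparent linear model.
# The weights below are invented purely for illustration.
WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
BIAS = 0.1

def approve_loan(applicant: dict) -> bool:
    """A human can recompute this score by hand (simulatability),
    and each weight can be examined separately (decomposability)."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score > 0.5

applicant = {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
# score = 0.1 + 0.4*2.0 - 0.7*1.0 + 0.2*3.0 = 0.8
print(approve_loan(applicant))  # True
```

A deep neural network fails all three properties at scale: no one can mentally simulate millions of coefficients, and the individual parameters carry no human-readable meaning.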
What Makes a System a “Black Box”?
A system is often referred to as a “black box” due to several factors:
- Complexity: In contemporary AI systems, the operation of a neural network is encoded in thousands, or even millions, of numerical coefficients. The interactions between these values make it practically impossible to understand how the network operates, even if all parameters are known.
- Difficulty of Developing Explainable Solutions: Even if AI models support some level of explainability, additional development is often required to build this into the system. Creating a user experience that provides careful yet easily understandable explanations is challenging.
- Risk Concerns: Many AI algorithms can be manipulated if an attacker crafts an input that causes the system to malfunction. Some systems are therefore intentionally kept opaque to make such exploitation harder.
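To make the manipulation risk concrete, here is a toy sketch, not an attack on any real system: for a simple linear classifier, nudging each input slightly in the direction that most increases the score can flip the decision, in the spirit of gradient-sign attacks such as FGSM. All weights and numbers are invented for illustration.

```python
# Toy illustration of adversarial manipulation of a linear classifier.
# Weights and inputs are invented for illustration.
weights = [0.9, -0.4, 0.3]
bias = -0.2

def classify(x):
    score = bias + sum(w * xi for w, xi in zip(weights, x))
    return score > 0.0

x = [0.1, 0.5, 0.2]  # original input: score = -0.25, classified False

# Adversarial nudge: shift each feature by a small epsilon in the
# direction that raises the score (the sign of its weight).
eps = 0.3
x_adv = [xi + eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x))      # False
print(classify(x_adv))  # True: a small, structured perturbation flips the decision
```

If an attacker knows (or can probe) how the model weighs its inputs, this kind of structured perturbation is cheap to compute, which is one motive for keeping model internals hidden.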
Given that many effective deep learning models are inherently black boxes, researchers often focus instead on finding a sufficient level of transparency: is it enough for a system to disclose how its decisions are made and to give users actionable insight, without exposing its full inner workings?
Transparency as Comprehensibility
The comprehensibility, or understandability, of an algorithm requires clear explanations of how an AI model reached its decisions, so that those affected by the model can understand the rationale behind them. However, translating algorithmically derived concepts into human-understandable terms presents significant challenges.
There have been discussions among legislators about whether public authorities should publish the algorithms used in automated decision-making as source code. However, most individuals lack the expertise to read source code, which raises questions about how effective such a transparency measure would be.
How to Make Models More Transparent?
The challenge of providing transparency in machine learning models is an ongoing area of research. Here are five main approaches that can be employed:
- Use Simpler Models: While this approach may enhance explainability, it often sacrifices accuracy.
- Combine Simpler and More Sophisticated Models: In this hybrid approach, the sophisticated model performs the complex computation while a simpler model provides the transparency, for example by approximating and explaining the sophisticated model’s behavior.
- Modify Inputs to Track Dependencies: By systematically varying the inputs and observing the outputs, one can reveal which inputs influence the model’s results and by how much.
- Design Models for User Understanding: Employing cognitively and psychologically efficient visualization methods can help users grasp model states and inputs better.
- Follow the Latest Research: Continuous research into explainable AI and its socio-cognitive dimensions is crucial for developing new techniques and enhancing transparency.
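The input-modification approach above can be sketched as a simple perturbation analysis: treat the model as a black box, vary one input at a time, and measure how much the output changes. The "model" and all values below are invented for illustration.

```python
# Minimal perturbation-based sensitivity analysis: vary one input at a
# time and record how strongly the output responds.
# The black-box function below is a stand-in for an opaque trained model
# that we can only query.
def black_box(x):
    return 3.0 * x[0] + 0.5 * x[1] ** 2 - 0.01 * x[2]

def sensitivities(model, x, delta=1e-4):
    """Return the per-input rate of output change around the point x."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        scores.append(abs(model(perturbed) - base) / delta)
    return scores

x = [1.0, 2.0, 3.0]
print(sensitivities(black_box, x))
# The first input dominates; the third barely matters.
```

Real explainability tools such as permutation importance or LIME build on the same idea, with more care taken over interactions between inputs and the range over which they are perturbed.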
Ultimately, increasing algorithmic literacy among users is essential for improving transparency. Educational efforts focused on enhancing understanding of contemporary technologies will directly impact users’ ability to comprehend AI systems, making the “black boxes” less opaque.