Managing AI for Sustainable Impact

How Companies Can Manage AI Use Through Materiality, Measurement & Reporting

As AI use grows, governing its environmental implications increasingly depends on embedding AI in materiality assessments, measurement practices, and reporting systems.

Treat AI Use as a Material Sustainability Driver

Bringing AI explicitly into financial materiality and impact assessments allows companies to see where AI changes the scale or severity of existing issues or introduces new risks or opportunities.

Map, Measure, and Baseline AI Demand

To make AI governable, companies should inventory where and how often AI is used, establish utilization metrics, and baseline demand over time. This enables organizations to identify growth, redundancy, and hotspots in AI usage.
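
As a rough illustration of what such an inventory and baseline could look like, the Python sketch below aggregates per-application usage records into monthly totals and flags month-over-month growth. The application names, request counts, and growth threshold are hypothetical placeholders, not figures from this article.

    # Minimal sketch of an AI usage inventory and utilization baseline.
    # Application names, request counts, and the growth threshold are
    # illustrative placeholders only.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class UsageRecord:
        application: str   # e.g. "drafting-assistant" (hypothetical)
        month: str         # "YYYY-MM"
        requests: int      # model calls recorded in that month

    def baseline(records: list[UsageRecord]) -> dict[str, dict[str, int]]:
        """Aggregate request counts per application per month."""
        totals: dict[str, dict[str, int]] = defaultdict(dict)
        for r in records:
            totals[r.application][r.month] = totals[r.application].get(r.month, 0) + r.requests
        return dict(totals)

    def flag_growth(totals: dict[str, dict[str, int]], threshold: float = 1.5) -> list[str]:
        """Flag applications whose latest month exceeds the prior month by `threshold` times."""
        flagged = []
        for app, by_month in totals.items():
            months = sorted(by_month)
            if len(months) >= 2 and by_month[months[-1]] > threshold * by_month[months[-2]]:
                flagged.append(app)
        return flagged

    usage = baseline([
        UsageRecord("drafting-assistant", "2024-05", 12_000),
        UsageRecord("drafting-assistant", "2024-06", 21_000),
        UsageRecord("code-review-bot", "2024-05", 3_000),
        UsageRecord("code-review-bot", "2024-06", 3_200),
    ])
    print(flag_growth(usage))   # ['drafting-assistant']

In practice, the records would come from API gateways, provider billing exports, or internal logging rather than hand-entered values.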

Control AI Impact Through Policy and Oversight

Setting rules for appropriate AI use and establishing triggers for extra review before scaling AI are crucial. This management approach applies whether AI is developed in-house or provided by vendors.

Integration into Sustainability Systems

AI is not only changing how companies operate but also demanding changes in how sustainability systems are designed and governed. Much attention has focused on AI's environmental footprint: its energy use, water consumption, and supply chain pressures. It is equally important, however, to examine how AI is applied within organizations.

Understanding where AI is applied, how often it is used, and whether those applications are necessary is critical. This understanding sets a clear path for AI deployment, with systems in place to address any environmental and social impacts before they become problems.

Value Creation Through Responsible AI Use

Organizational leaders need to look beyond AI's footprint: mapping its use, defining control and review processes, building systems for ongoing quantification, and reporting transparently. The ultimate goal is to manage AI's impact from the inside out, ensuring that the benefits outweigh the risks while keeping sustainability a priority.

Bringing AI into Materiality and Impact Assessments

Financial materiality and impact assessments provide a structured process for governing AI by identifying and prioritizing significant impacts. Many sustainability topics influenced by AI use—such as energy demand, emissions, water use, and workforce effects—are already assessed in existing materiality exercises. However, an explicit examination of how AI alters the drivers of those impacts is often missing.

Materiality guidance from the International Sustainability Standards Board (ISSB), which issues the IFRS Sustainability Disclosure Standards, centers on financial materiality: whether a topic could influence the decisions of investors or other users of financial reports. How AI is used within companies clearly shapes the risks and opportunities they face and can affect their financial position.

Assessing AI’s Materiality

Determining AI's materiality hinges on understanding its scale and concentration: where it is used and how deeply it is embedded in critical workflows. Mapping AI use across applications can help identify where it meaningfully alters environmental, social, or financial exposure.

Governance Through Policy

Once a basis for AI's materiality is established, the next step is control through policy, supported by measurement of demand proportional to its significance. As access to AI expands, it can become the default tool for routine tasks, increasing demand through duplication without sufficient oversight. Policies can set expectations for appropriate use and define the conditions under which a task's value should be assessed.
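
One way such policy conditions could be made operational is a simple pre-scaling review check. The sketch below is purely illustrative: the trigger conditions and thresholds (request volume, estimated energy, personal data handling, assessment of a non-AI alternative) are assumptions made for the example, not criteria stated in this article.

    # Hypothetical pre-scaling review check: returns the reasons a proposed
    # AI use case should receive extra review before being scaled up.
    # All trigger conditions and thresholds are illustrative assumptions.
    def review_triggers(use_case: dict) -> list[str]:
        reasons = []
        if use_case.get("monthly_requests", 0) > 100_000:
            reasons.append("high expected request volume")
        if use_case.get("estimated_kwh_per_month", 0.0) > 5_000:
            reasons.append("significant estimated energy demand")
        if use_case.get("handles_personal_data", False):
            reasons.append("processes personal data")
        if not use_case.get("non_ai_alternative_assessed", False):
            reasons.append("no assessment of a non-AI alternative")
        return reasons

    proposal = {
        "name": "meeting-summarizer",          # hypothetical use case
        "monthly_requests": 250_000,
        "estimated_kwh_per_month": 800.0,
        "handles_personal_data": True,
        "non_ai_alternative_assessed": False,
    }
    for reason in review_triggers(proposal):
        print("review required:", reason)

The point of such a design is that the triggers are explicit and auditable, so the decision to scale an AI use case leaves a record of why extra review was or was not required.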

Quantifying AI Impact

Quantification makes AI use visible over time and tracks its impact. For most organizations, measuring AI impact starts with a consistent view of utilization and how it evolves. That foundation supports attribution of energy use and emissions and establishes a baseline against which growth and overall impact can be tracked.
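
As a rough sketch of such attribution, utilization figures can be translated into indicative energy and emissions estimates. The per-request energy intensity and grid emission factor below are placeholder assumptions; real values would come from provider disclosures, metered infrastructure data, or published studies.

    # Rough attribution sketch: convert monthly request counts into
    # estimated energy and emissions. Both factors below are placeholder
    # assumptions, not measured values.
    KWH_PER_1K_REQUESTS = 0.3       # assumed energy intensity per 1,000 requests
    GRID_KG_CO2E_PER_KWH = 0.4      # assumed grid emission factor

    def estimate_impact(monthly_requests: dict[str, int]) -> dict[str, dict[str, float]]:
        """Return estimated kWh and kgCO2e per month from request counts."""
        result = {}
        for month, requests in monthly_requests.items():
            kwh = requests / 1_000 * KWH_PER_1K_REQUESTS
            result[month] = {"kwh": kwh, "kg_co2e": kwh * GRID_KG_CO2E_PER_KWH}
        return result

    print(estimate_impact({"2024-05": 15_000, "2024-06": 24_200}))

Even coarse factors like these make trends visible; the estimates can be refined as provider disclosure and internal metering improve.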

Managing AI’s Impact

For organizations that own or operate their AI infrastructure, management responsibility lies within established operational controls, including decarbonization of electricity supply and hardware lifecycle management. Governance should also cover model training and retraining, especially in areas with concentrated energy and water demand.

For AI capabilities accessed through third-party providers, these impact areas must be addressed through policy and supplier engagement practices that link disclosure with procurement decision-making.

Conclusion

AI's sustainability effects depend on infrastructure efficiency, energy sources, and how its use is governed within organizations. Effective management includes assessing material impacts, setting policies that govern demand, measuring results, and reporting transparently. Treating AI use as a managed sustainability issue helps mitigate risks and keeps the environmental and social effects of AI aligned with value creation.
