Institutional Investors Define AI Governance for Competitive Edge

2025 Investment Innovation Conference: Institutional Investors Creating Internal Guidelines for Effective AI Use

Institutional investors play a crucial role in establishing guidelines and monitoring the use of artificial intelligence tools, yet industry experts say the landscape is not yet ready for standardized, industry-wide guidelines.

The Need for Individual Governance Frameworks

During a panel session at the Canadian Investment Review’s 2025 Investment Innovation Conference, Jacky Chen, managing director of completion portfolio strategies at the OPSEU Pension Trust, emphasized that it is too early for regulators to implement unified AI guidelines on an industry or global scale. His organization operates under a stringent AI governance framework that outlines its principles, risk appetite, and procedures for approving riskier AI applications.

Chen stated, “As an organization, you have to have your own principles; you have to be deploying it responsibly.” He concluded that the current environment does not support the establishment of standardized regulations.

Smart Guardrails and Fiduciary Duty

Jennifer Hartfield, senior vice-president of corporate data and operations at the British Columbia Investment Management Corp. (BCI), highlighted the importance of developing smart guardrails for AI use. She indicated that linking AI deployment to the fiduciary duty of pension funds could be a future mandate, but BCI is already implementing this principle.

However, Hartfield cautioned that overly broad mandates might inhibit the innovation and competitive advantages that AI can offer. She reflected on how her first encounter with AI in a pension context was a data extraction project in 2019, which initially seemed to lack a justified return on investment. Recently, BCI executed a successful project using large language models to analyze around 7,500 documents.

Machine Learning Integration

Chen noted that OPTrust introduced machine learning solutions about six years ago. Over time, AI has become increasingly integrated into processes such as data analysis and portfolio construction. “It just becomes part of the process,” he stated, reflecting on the evolution of AI within investment strategies.

OPTrust is currently pursuing a three-year roadmap to explore the incorporation of generative AI tools, including ChatGPT. Chen mentioned promising use cases for capturing market sentiment, with a focus on creating straightforward methods for quantitative trading.

Resisting Innovation?

Russ Goyenko, an associate professor of finance at McGill University, described the resistance to AI innovation as “exaggerated.” He cautioned that regulatory measures could stifle the progress of this transformative technology. Goyenko has contributed to the establishment of an AI investment research lab, developing efficient models to assess machine learning capabilities in investments.

Chen’s team at OPTrust employs systematic strategies as part of its residual risk budget, aimed at maximizing total fund returns. This approach has also led to the development of in-house machine learning capabilities.

AI Tools in Daily Operations

At BCI, approximately 75 per cent of staff use large language model tools weekly for individual productivity, with about 25 per cent classified as power users. The organization is integrating AI solutions into existing tools to enhance data visualizations and automate the processing of third-party files.

Chen has broadened his application of AI tools through his involvement in a financial innovation lab at the University of Toronto, which focuses on incorporating machine learning into investment processes. His initial AI applications at OPTrust centred on risk management and the integration of alternative data.

As the landscape of AI evolves, institutional investors must navigate the challenges and opportunities that arise, ensuring that they deploy AI tools responsibly and effectively while maintaining their competitive edge.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...