Geopolitics, Cyber, & AI: Key Findings from the DTCC Systemic Risk Survey

As the financial sector approaches 2026, geopolitical risks and trade tensions have been identified as the most significant threats to global finance, according to a recent global survey conducted by post-trade market infrastructure group DTCC (the Depository Trust & Clearing Corporation).

Geopolitical Risks on the Rise

The survey, conducted annually since 2013, found that 78% of respondents ranked geopolitical risks and trade tensions among their top five concerns. This marks the fourth consecutive year these issues have topped the list, underscoring the persistent challenge that geopolitical flashpoints and trade disputes pose to markets.

Cyber Risk: A Growing Concern

Coming in second place, cyber risk was cited by 63% of respondents as a top concern. Participants highlighted the ongoing threat of cyberattacks targeting financial institutions and market infrastructure, which could lead to operational disruptions and contagion.

US Economic Outlook

Concerns about the US economy were also prevalent, with 41% of respondents identifying a potential economic slowdown as a significant risk. This placed it third overall, followed closely by worries about market volatility and uncertainties surrounding US monetary and fiscal policy, both cited by 38% of respondents.

AI and Fintech Risks

The survey also indicated increasing unease over the financial sector’s reliance on artificial intelligence and fintech solutions. Elsewhere in the rankings, issues related to excessive public and corporate debt and inflation were named as top-five risks by 34% of participants, while 33% ranked fintech as a major concern, a result DTCC linked to the expanding deployment of AI tools in financial services.

Cybersecurity and Data Protection

Respondents identified cybersecurity and data protection vulnerabilities as the primary risks associated with AI, with 41% pointing to this as their main concern. The finding reflects fears that AI adoption could widen exposure to cyber threats or jeopardize sensitive data if not managed carefully.

AI-Generated Misinformation

Another prevalent concern was AI-generated misinformation, including the possibility of false outputs or “hallucinations.” This was noted by 38% of respondents, indicating worries that inaccurate or fabricated information could mislead decision-making processes or affect client relations.

Governance and Oversight Challenges

Concerns about insufficient governance around AI were also highlighted, with 37% of participants flagging inadequate controls and oversight. Additionally, 34% warned against overreliance on AI solutions in critical processes.

Quantum Computing and Cybersecurity

A new question in this year’s survey addressed the potential impact of quantum computing on cybersecurity. The findings revealed that only 29% of firms are actively preparing for quantum-related risks, while a further 25% acknowledged the risk but have no current plans to address it, pointing to a significant gap between awareness and action.

The Need for Coordination

DTCC emphasized the necessity for closer collaboration across the financial sector to address emerging risks. Participants expressed concern about concentration risk in technology supply chains, noting reliance on a limited number of major technology providers. Such dependencies could result in widespread systemic effects in the event of outages or incidents.

In conclusion, the survey underscores a common theme of uncertainty running through the responses, whether economic, geopolitical, or tied to emerging technologies such as AI. Enhancing resilience and mitigating systemic risks will require ongoing industry dialogue and collaboration.
