AI Regulation in Forensics: Challenges and Obligations

Forensics under the Artificial Intelligence Act

The present decade has seen the widespread deployment of AI systems across nearly all areas of human activity. The prospect of mass commercialization of AI systems has greatly contributed to an increase in funding for AI research. Alongside these opportunities, however, AI technology carries significant risks. Many AI systems are so complex that their decision-making processes are difficult to explain; such systems are often referred to as "black boxes".

Artificial Intelligence in Forensics

One of the first examples of a forensic AI system was an artificial neural network developed between 1993 and 1995 for scoring polygraph signals. The development of forensic AI applications has evolved alongside general progress in AI research. Recent literature suggests a growing interest in the use of AI in forensics, for example:

  • Determination of the sex or age of individuals from evidence such as teeth, saliva, bones, or shoe prints (a minimal illustrative sketch follows this list).
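Studies of this kind typically frame the task as supervised classification on measured features. The following is a minimal sketch of that idea only; it is not taken from any cited work. The dataset, features, and model choice are hypothetical stand-ins, and a real forensic application would require a validated reference sample and rigorous evaluation.

    # Minimal sketch of sex/age classification from measured features
    # (e.g. tooth or bone dimensions). All data here is synthetic and
    # the feature set is hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(seed=0)

    # Synthetic stand-in for a reference sample: three continuous measurements per case.
    n_cases = 300
    X = rng.normal(loc=[10.5, 7.2, 11.1], scale=0.6, size=(n_cases, 3))
    y = rng.integers(0, 2, size=n_cases)  # illustrative binary label (0/1)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

Because the labels above are random, the reported accuracy will hover around chance; the point is only the shape of the workflow, not a forensic result.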

Artificial Intelligence Act

The general goal of the Artificial Intelligence Act is to lay down a uniform legal framework for the use of AI systems in the Union, ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems while also supporting innovation.

Forensics as a High-Risk Area

Annex III of the Act identifies two areas of high-risk applications related to forensics: law enforcement and administration of justice. The scope of the law enforcement area includes AI systems intended for use by law enforcement authorities or on their behalf. Specific use cases for law enforcement include:

  • AI systems designed to assist in criminal investigations.

Obligations of Forensic Experts as Deployers

General requirements for high-risk AI systems include:

  • Implementation and maintenance of a risk management system throughout the entire lifecycle of the system.
  • Preparation of technical documentation demonstrating compliance with the Act’s requirements (a sketch of such a record follows this list).
  • Provision of necessary information to relevant authorities for compliance assessment.
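To make the documentation obligation more concrete, the following is a minimal sketch of how a forensic laboratory might keep a structured record of the items above for a deployed system. The field names and example values are hypothetical and do not reproduce the Act's own wording; a real record would follow the detailed content prescribed by the Act and its annexes.

    # Illustrative record of documentation items for a high-risk AI system.
    # Field names and values are hypothetical, not taken from the Act.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class HighRiskSystemRecord:
        system_name: str
        intended_purpose: str            # what the system is meant to do
        provider: str                    # who placed the system on the market
        deployer: str                    # e.g. the forensic laboratory using it
        last_risk_review: date           # last review of the risk management system
        training_data_summary: str       # data sources and known limitations
        known_limitations: list[str] = field(default_factory=list)

    record = HighRiskSystemRecord(
        system_name="ExampleForensicClassifier",
        intended_purpose="Age estimation from dental radiographs",
        provider="Example Vendor Ltd.",
        deployer="Example Forensic Institute",
        last_risk_review=date(2025, 1, 15),
        training_data_summary="Reference sample of adult individuals from published studies",
        known_limitations=["Not validated for juvenile cases"],
    )
    print(record.intended_purpose)

Keeping such a record alongside the system makes it straightforward to provide the required information to a competent authority on request, as the third item above demands.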

Penalties for Non-Compliance

Non-compliance may result in the administrative fines laid down in the Act, along with any additional penalties and enforcement measures that Member States may impose; Member States also designate the authorities responsible for enforcing the fines.

Conclusions

The introduction of the AI Act represents the most significant step toward AI regulation to date and a comprehensive response to the risks associated with AI, particularly in forensics. The Act imposes obligations on operators of AI systems based on the risks inherent to those systems, and given the growing impact of AI technology in this field, forensics warrants classification as a high-risk area.

Declarations

Not applicable.

Authorship Contribution Statement

Not applicable.

© 2025 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
