Using AI to Improve Safety: Managing the Legal Risks Alongside the Benefits
Artificial intelligence (AI) is becoming a powerful tool in workplace safety programs—but its use also raises complex legal and governance questions. This article examines how employers can integrate AI into safety decision-making while preserving human judgment, meeting regulatory obligations, and managing enforcement and litigation risk.
The Role of AI in Enhancing Safety
Both general industry and construction companies are increasingly adopting AI to enhance safety performance. From analyzing near-miss reports and job hazards to generating site-specific safety briefings and forecasting elevated risk conditions, AI tools promise to help companies identify hazards earlier and allocate resources more effectively.
However, AI introduces new and often underappreciated legal risks. Safety sits at the intersection of regulatory compliance, tort liability, workforce management, and technology governance. The use of AI in safety can complicate traditional risk profiles if a company fails to manage its use carefully.
Clarifying Roles and Responsibilities
Under the federal Occupational Safety and Health Act of 1970 (OSH Act), employers must furnish a workplace free from recognized hazards that are causing or likely to cause death or serious physical harm. That duty cannot be delegated to technology vendors or displaced by automated tools. AI systems used in safety programs are best understood as decision-support tools, not decision-makers.
Risk emerges when organizations treat AI outputs as directives rather than data to analyze. For instance, some vision-based safety systems generate numerous alerts for personal protective equipment (PPE) non-compliance. If supervisors rely solely on these alerts without exercising judgment, critical risks may be obscured by volume.
Key Legal Risks in AI-Enabled Safety Programs
Companies owe a duty of care to workers, subcontractors, and the public. Failure to identify or address foreseeable risks can expose a company to liability, including tort claims. AI tools can identify, predict, and report hazards that might otherwise go unnoticed, enabling immediate interventions to prevent accidents.
However, with increased visibility into workplace hazards comes increased liability. If a predictive safety tool identifies a specific work zone as high-risk for falls, and the company does not implement additional controls, it may be held liable for knowingly tolerating a recognized hazard.
Regulatory and Compliance Challenges
In addition to common law duties, employers face statutory safety obligations under the OSH Act. Those obligations are non-delegable and remain with the employer regardless of the tools used. While AI can assist companies in achieving better safety outcomes, its outputs must be reviewed and validated by trained personnel.
Automated outputs often lag behind legal and regulatory developments, leading to potential compliance risks. AI-generated safety documents that are overly generic or misaligned with jurisdiction-specific standards may increase enforcement risk.
Litigation and Discovery Risks
The integration of AI into workplace safety practices expands the universe of potentially discoverable materials in litigation. Plaintiffs may seek access to system-generated data showing identified risks and company responses. The presence of AI outputs can shift the narrative in litigation, portraying a failure to act as negligence or conscious disregard for safety.
Workforce, Privacy, and Labor Risks
AI safety applications often rely on workforce data, which raises significant employment, privacy, and labor concerns. For example, AI tools that analyze video feeds to detect unsafe behaviors can blur the line between safety monitoring and workforce surveillance. Employers must comply with applicable state privacy laws and, where required, obtain employee consent.
Conclusion
AI offers powerful tools to enhance workplace safety, but its use reshapes the legal landscape in ways that require deliberate governance and oversight. Companies that deploy AI thoughtfully, preserving human judgment and documenting decision-making, can improve safety outcomes without increasing legal exposure. The essential balance is ensuring that innovation strengthens safety programs rather than undermining their defensibility.