Using AI to Improve Safety: Managing the Legal Risks Alongside the Benefits
Artificial intelligence (AI) is becoming a powerful tool in workplace safety programs—but its use also raises complex legal and governance questions. This article examines how employers can integrate AI into safety decision-making while preserving human judgment, meeting regulatory obligations, and managing enforcement and litigation risks.
The Growing Adoption of AI in Safety Programs
Both general industry and construction companies are increasingly adopting AI to enhance safety performance. From analyzing near-miss reports and job hazards to generating site-specific safety briefings and forecasting elevated risk conditions, AI tools promise to help companies identify hazards earlier and allocate resources more effectively.
Legal Risks Associated with AI in Safety
AI introduces new and often underappreciated legal risks. Because safety sits at the intersection of regulatory compliance, tort liability, workforce management, and technology governance, the use of AI in safety can complicate traditional risk profiles if a company fails to manage its use carefully.
The question is no longer whether AI can support safety efforts, but rather how to deploy it responsibly and defensibly.
AI Is Not a Safety Manager
A foundational legal principle, set forth in the federal Occupational Safety and Health Act of 1970 (OSH Act), is that employers bear responsibility for protecting employees from recognized hazards. That duty cannot be delegated to technology vendors or displaced by automated tools. The AI systems increasingly used in safety programs are best understood as decision-support tools, not decision-makers.
Risk emerges when organizations treat AI outputs as directives rather than data to analyze. For example, some vision-based safety systems generate hundreds of alerts for PPE non-compliance or proximity to hazards during a single shift. If supervisors rely solely on the system’s alerts without exercising judgment about severity or context, critical risks may be obscured by volume.
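To make the point concrete, the following is a minimal, hypothetical sketch (in Python) of a human-in-the-loop triage layer sitting between a vision system's raw alerts and supervisor action. The alert fields, hazard categories, severity weights, and threshold are all illustrative assumptions, not any vendor's actual product or API:

```python
# Hypothetical sketch: triaging vision-system safety alerts for human review.
# All field names, hazard categories, and severity weights are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Alert:
    zone: str          # work area where the alert was raised
    hazard: str        # e.g., "missing_hard_hat", "proximity_to_edge"
    confidence: float  # model confidence, 0.0-1.0
    raised_at: datetime

# Assumed severity weights; a real program would set these with safety staff.
SEVERITY = {"proximity_to_edge": 3, "missing_harness": 3,
            "missing_hard_hat": 2, "missing_vest": 1}

def triage(alerts: list[Alert], review_threshold: int = 3) -> dict:
    """Group raw alerts and route high-severity ones to a human supervisor.

    Nothing here stops work automatically: the output is a queue for a
    person to evaluate, consistent with treating AI as decision support.
    """
    queue, log = [], []
    for a in alerts:
        score = SEVERITY.get(a.hazard, 1) * (2 if a.confidence >= 0.8 else 1)
        record = {"alert": a, "score": score}
        log.append(record)        # every alert is retained, not discarded
        if score >= review_threshold:
            queue.append(record)  # only these demand immediate human review
    return {"review_queue": sorted(queue, key=lambda r: -r["score"]),
            "full_log": log}
```

The design choice worth noting is that the system only prioritizes: a supervisor still decides what, if anything, to do about each item, and the full alert log is preserved rather than filtered away.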
Clarifying Roles in Policies and Procedures
From a legal and risk management perspective, companies deploying AI in safety programs should ensure:
- Human supervisors retain ultimate authority to stop work, modify procedures, or escalate concerns.
- AI outputs are reviewed, validated, and contextualized, rather than automatically accepted or ignored.
- Safety decisions are documented as human decisions, informed by AI outputs but not controlled by them (a minimal illustration of such a record follows this list).
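As a hypothetical sketch of the documentation point only, the record below captures an AI output alongside the human reviewer, decision, and rationale. The field names and storage approach are assumptions; the substance is simply that the file shows a person, not the tool, made the call:

```python
# Hypothetical sketch: recording safety decisions as human decisions that
# were informed by an AI output. Field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    ai_output: str   # what the tool reported (e.g., a risk score or alert)
    reviewer: str    # the human supervisor who made the call
    decision: str    # e.g., "stopped work", "added fall protection"
    rationale: str   # why the reviewer agreed or disagreed with the AI
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_log_line(self) -> str:
        """Serialize for an append-only log; in practice this would go to a
        durable store, since records like these may be discoverable."""
        return json.dumps(asdict(self))

# Usage: the record shows the human judgment applied to the AI output.
rec = DecisionRecord(
    ai_output="Zone B flagged high-risk for falls (score 0.91)",
    reviewer="J. Alvarez, site supervisor",
    decision="Suspended work in Zone B pending guardrail installation",
    rationale="Confirmed missing guardrails on walkthrough; flag validated")
print(rec.to_log_line())
```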
Key Legal Risks in AI-Enabled Safety Programs
Duty of Care and Foreseeability Risk
Companies owe a duty of care to workers, subcontractors, and site visitors, which requires them to exercise reasonable care to prevent foreseeable harm. AI can identify, predict, and report hazards that might otherwise go unnoticed, enabling earlier intervention. But that expanded visibility cuts both ways: once a hazard has been identified, failing to act on it can increase liability.
For example, if a predictive safety tool identifies a specific work zone as high-risk for falls but the company fails to implement controls, it may be argued that the company knowingly tolerated a recognized hazard.
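Purely as an illustration of that fact pattern (not legal advice or any real product's behavior), a program could enforce a rule of "no flagged hazard without a documented response" with a check like the following; the flag format and response statuses are assumptions:

```python
# Hypothetical check: every AI-flagged high-risk condition must have a
# documented response before it can be closed out. Statuses are assumptions.

DOCUMENTED_RESPONSES = {"controls_implemented", "work_suspended",
                        "risk_reassessed"}

def unresolved_flags(flags: list[dict]) -> list[dict]:
    """Return AI risk flags that have no documented human response.

    A non-empty result is exactly the scenario described above: the tool
    identified a hazard and the record shows no action taken on it.
    """
    return [f for f in flags
            if f.get("risk_level") == "high"
            and f.get("response") not in DOCUMENTED_RESPONSES]

# Example: one flag was acted on, one was not.
flags = [
    {"zone": "B", "risk_level": "high", "response": "controls_implemented"},
    {"zone": "D", "risk_level": "high", "response": None},
]
print(unresolved_flags(flags))  # -> the Zone D flag, still awaiting action
```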
Regulatory and Compliance Risk
The OSH Act requires employers to ensure employee safety, and those obligations remain with the employer regardless of the tools used. While AI can assist in achieving better safety outcomes, it does not provide a regulatory safe harbor. AI-generated materials must be reviewed for compliance with jurisdiction-specific standards to avoid increased enforcement risk.
Litigation and Discovery Risk
The integration of AI expands the universe of potentially discoverable materials in litigation. Employers may face heightened scrutiny regarding how they used AI to identify and respond to risks. For instance, plaintiffs may request historical risk scores and internal reports summarizing elevated risk conditions preceding an incident.
Workforce, Privacy, and Labor Risk
AI applications often rely on workforce data, which raises significant employment, privacy, labor, and data-governance concerns. Questions may arise about how data was collected and whether it was used consistently and lawfully. Compliance with state privacy laws is imperative, particularly when AI tools analyze sensitive information.
Practical Takeaways for Companies
AI can materially improve workplace safety, but it does not shift or dilute an employer’s legal responsibility for hazard recognition and control. Increased hazard visibility can expand foreseeability, leading to regulatory and civil exposure if identified risks are not assessed and addressed.
Governance, documentation, and consistency in how AI tools are used matter as much as the technology itself. Early legal involvement in system design and vendor selection can significantly reduce downstream enforcement, litigation, and insurance risks.
Conclusion
AI offers powerful tools to enhance workplace safety, but its use reshapes the legal landscape in ways that require deliberate governance and oversight. Companies that deploy AI thoughtfully—preserving human judgment, documenting decision-making, and anticipating post-incident scrutiny—can improve safety outcomes without increasing legal exposure.