AI Compliance Controls & Audit Trail

Controls that adapt to your AI system’s obligation level. Compliance progression you can measure. Evidence and audit history that build themselves as you work.

Controls that show exactly what applies to your AI system

Not every AI system faces the same obligations. AI Sigil delivers controls where the guidance, regulatory sources, and assessment questions adapt to your AI system’s risk classification and your role. A high-risk provider sees detailed conformity requirements. A limited-risk deployer sees transparency obligations. Open a control and the content matches your situation, not a generic worst-case template.
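As a rough sketch of the idea, not AI Sigil's actual schema, the adaptive view can be pictured as a lookup keyed by risk classification and viewer role; all names and obligation lists below are hypothetical.

```python
# Hypothetical mapping from (risk classification, role) to the guidance a viewer
# sees when opening a control - illustrative only, not AI Sigil's data model.
CONTROL_VIEWS = {
    ("high", "provider"):    ["Conformity assessment", "Technical documentation", "Post-market monitoring"],
    ("high", "deployer"):    ["Human oversight measures", "Input data checks"],
    ("limited", "deployer"): ["Transparency obligations"],
    ("minimal", "provider"): ["Voluntary codes of conduct"],
}

def control_view(risk_tier: str, role: str) -> list[str]:
    """Return only the obligations relevant to this system and this viewer."""
    return CONTROL_VIEWS.get((risk_tier, role), [])

print(control_view("high", "provider"))     # detailed conformity requirements
print(control_view("limited", "deployer"))  # transparency obligations only
```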

Schematic diagram showing AI compliance controls filtered to display only those relevant to a specific AI system's risk profile and context
Schematic diagram showing AI compliance measurement and progress tracking beyond task completion, focusing on actual compliance standing

Measure where you stand, not just what you've started

Every control has a computed completion status. Aggregate across controls for an AI system and you get a compliance progression percentage. Aggregate across AI systems for a framework and you get portfolio-level visibility. Filter by status: not started, in progress, done, rejected. Report to leadership with a number, not a narrative.
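A minimal sketch of that aggregation, assuming each control carries one of the four statuses; the control identifiers are made up.

```python
from collections import Counter

def progression(statuses):
    """Percentage of controls marked done - one number to report, not a narrative."""
    statuses = list(statuses)
    return 100 * sum(s == "done" for s in statuses) / len(statuses) if statuses else 0.0

# Controls for one AI system, keyed by hypothetical control ids.
system = {"C-01": "done", "C-02": "in_progress", "C-03": "done", "C-04": "not_started"}

print(f"{progression(system.values()):.0f}% complete")  # 50% complete
print(Counter(system.values()))                          # breakdown for status filters
```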

Structured evidence. Automatic audit trail.

Attach documents to the controls they support. Each piece of evidence traces through the chain: evidence to control, control to requirement, requirement to regulation. Every answer, status change, and form submission is timestamped and recorded automatically. When an auditor asks for proof, you navigate to it. When they ask what changed, the history is already there.
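The automatic history can be pictured as an append-only event log. This is an illustrative sketch with hypothetical field names, not the platform's implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    entity_id: str    # a control, answer, or form submission
    action: str       # e.g. "answer_changed", "status_changed", "evidence_attached"
    actor: str
    recorded_at: datetime

class AuditTrail:
    """Events are appended automatically on every change and never edited."""
    def __init__(self):
        self._events: list[AuditEvent] = []

    def record(self, entity_id: str, action: str, actor: str) -> None:
        self._events.append(AuditEvent(entity_id, action, actor, datetime.now(timezone.utc)))

    def history(self, entity_id: str) -> list[AuditEvent]:
        return [e for e in self._events if e.entity_id == entity_id]

trail = AuditTrail()
trail.record("control:automated-logging", "status_changed", "alice")
trail.record("control:automated-logging", "evidence_attached", "bob")
print(trail.history("control:automated-logging"))
```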

Schematic diagram showing structured evidence collection and automatic audit trail generation in an AI compliance platform

Start with a 14-day free trial.

FAQs

What types of controls does AI Sigil include?

AI Sigil includes two types: foundational controls that apply organization-wide regardless of specific AI systems (e.g., AI literacy programs, governance roles, incident response), and system controls that scope to individual AI systems based on their risk classification (e.g., bias testing, automated logging, conformity assessment).
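To make the distinction concrete, here is a hypothetical sketch of the two scopes (not AI Sigil's schema).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Control:
    name: str
    scope: str                           # "foundational" or "system"
    ai_system_id: Optional[str] = None   # set only for system-scoped controls

controls = [
    Control("AI literacy programme", "foundational"),
    Control("Governance roles and incident response", "foundational"),
    Control("Bias testing", "system", ai_system_id="cv-screening-model"),
    Control("Automated logging", "system", ai_system_id="cv-screening-model"),
]

org_wide   = [c.name for c in controls if c.scope == "foundational"]
per_system = [c.name for c in controls if c.ai_system_id == "cv-screening-model"]
print(org_wide, per_system)
```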

How do controls connect to the regulations behind them?

Every control traces back through a chain: control to requirement to framework instance to parent framework. The requirement identifies the regulatory theme (e.g., “Risk Management”), and the control implements the specific obligation. This traceability is built in, not configured by the user.
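A sketch of what walking that chain could look like; the labels are illustrative, not taken from the product.

```python
# Hypothetical chain for one control; every level is linked by the platform.
chain = {
    "control":            "Maintain a documented risk management process",
    "requirement":        "Risk Management",
    "framework_instance": "EU AI Act rollout for cv-screening-model",
    "parent_framework":   "EU AI Act",
}

def trace(chain: dict) -> str:
    """Walk a control back up to the framework it implements."""
    order = ("control", "requirement", "framework_instance", "parent_framework")
    return " -> ".join(chain[level] for level in order)

print(trace(chain))
```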

How does control content adapt to an AI system’s risk classification?

Each control contains content blocks (guidance text, regulatory sources, assessment questions) tagged with risk tiers. When your AI system is classified, only blocks matching its tiers are visible. A minimal-risk system sees screening-level content. A high-risk system sees the full depth of guidance and assessment questions.
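Sketching the tagging idea with made-up blocks (not the real content model):

```python
# Hypothetical content blocks, each tagged with the risk tiers it applies to.
blocks = [
    {"kind": "guidance", "text": "Screening checklist",                 "tiers": {"minimal", "limited", "high"}},
    {"kind": "source",   "text": "Transparency provisions",             "tiers": {"limited", "high"}},
    {"kind": "question", "text": "Describe your conformity assessment", "tiers": {"high"}},
]

def visible(blocks, system_tier: str):
    """Only blocks tagged with the system's classification are shown."""
    return [b["text"] for b in blocks if system_tier in b["tiers"]]

print(visible(blocks, "minimal"))  # screening-level content only
print(visible(blocks, "high"))     # full depth of guidance and questions
```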

What counts as evidence, and how is it attached?

Evidence is any document, screenshot, test result, or file that demonstrates a control has been implemented. It’s uploaded as an attachment and linked to the specific control it supports. The platform maintains the link so you can always trace which evidence supports which obligation.
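The evidence-to-control link can be pictured as a simple record; the names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceLink:
    file_name: str     # document, screenshot, test result, ...
    control_id: str    # the obligation this file demonstrates
    uploaded_by: str

links = [
    EvidenceLink("bias_test_report.pdf", "control:bias-testing", "alice"),
    EvidenceLink("logging_config.png",   "control:automated-logging", "bob"),
]

def evidence_for(control_id: str) -> list[str]:
    """Trace which evidence supports which obligation."""
    return [l.file_name for l in links if l.control_id == control_id]

print(evidence_for("control:bias-testing"))
```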

Can compliance records be changed after the fact?

Form submissions are stored as immutable snapshots: once submitted, they cannot be modified. Answer history records every change with the previous value, new value, timestamp, and who made the change. The application layer enforces append-only behavior for all compliance records.
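One way to picture the append-only behavior, as a hedged sketch rather than the actual enforcement code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from types import MappingProxyType

def submit_form(answers: dict):
    """Freeze a submission as a read-only snapshot; later edits create new records."""
    return MappingProxyType(dict(answers))   # any mutation attempt raises TypeError

@dataclass(frozen=True)
class AnswerChange:
    question_id: str
    previous_value: str
    new_value: str
    changed_by: str
    changed_at: datetime

history: list = []   # append-only: changes are recorded, never rewritten

def record_change(question_id, previous, new, user):
    history.append(AnswerChange(question_id, previous, new, user,
                                datetime.now(timezone.utc)))

snapshot = submit_form({"q1": "in_progress"})
record_change("q1", "in_progress", "done", "alice")
print(len(history), snapshot["q1"])
```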

Can multiple team members work on the same control?

Yes. Answers are stored per entity and per question, not per user session. Multiple team members can answer different questions on the same control. The audit trail records who answered each question and when.
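Illustratively, that means answers are keyed by entity and question rather than by session; the structure below is an assumption, not the real storage layer.

```python
# Answers keyed by (entity, question), so different people can fill in
# different questions on the same control. Keys and values are hypothetical.
answers: dict = {}

def save_answer(entity_id: str, question_id: str, value: str, user: str) -> None:
    answers[(entity_id, question_id)] = {"value": value, "answered_by": user}

save_answer("control:automated-logging", "q1", "done", "alice")
save_answer("control:automated-logging", "q2", "in_progress", "bob")

print(answers[("control:automated-logging", "q1")]["answered_by"])  # alice
print(answers[("control:automated-logging", "q2")]["answered_by"])  # bob
```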

How is compliance progression calculated?

Each control’s completion is based on its form-status answers (not started, in progress, done, rejected). The platform aggregates these into a percentage per AI system and per framework. The calculation reflects actual assessment status, not just whether forms were opened.

Can we define our own custom controls?

The current control library is maintained by AI Sigil and derived from regulatory analysis. Custom control creation is on the roadmap. Evidence and assessment features work with any control in the library.

Can one AI system be assessed against multiple frameworks?

Yes. Each framework activation tracks its own set of controls. The portfolio view aggregates completion across all active frameworks for an AI system, giving you both per-framework and cross-framework compliance visibility.
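A sketch of the two views, with invented framework names and statuses:

```python
# Each framework activation keeps its own control statuses for one AI system.
activations = {
    "Framework A": {"C-01": "done", "C-02": "done", "C-03": "in_progress"},
    "Framework B": {"C-10": "done", "C-11": "not_started"},
}

def pct_done(statuses) -> float:
    statuses = list(statuses)
    return 100 * sum(s == "done" for s in statuses) / len(statuses)

per_framework = {fw: pct_done(ctrls.values()) for fw, ctrls in activations.items()}
portfolio = pct_done(s for ctrls in activations.values() for s in ctrls.values())

print(per_framework)        # per-framework completion
print(f"{portfolio:.0f}%")  # cross-framework view for the AI system
```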

What happens to our data if we deactivate a framework?

All assessment answers, evidence, and form submissions are preserved. Deactivating a framework only removes the rollout link. If you reactivate the framework, the controls reconnect to the existing data with all answers and evidence intact.
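Conceptually, the rollout link and the compliance data are separate records, which is why deactivation is non-destructive. The sketch below is an assumption about that shape, not the real schema.

```python
# Compliance data lives on its own; a rollout link merely connects a framework
# to an AI system. All identifiers are illustrative.
compliance_data = {
    ("cv-screening-model", "control:bias-testing"): {"answers": ["done"], "evidence": ["report.pdf"]},
}
rollout_links = {("cv-screening-model", "Framework A")}

def deactivate(system_id: str, framework: str) -> None:
    rollout_links.discard((system_id, framework))   # only the link is removed

def reactivate(system_id: str, framework: str) -> None:
    rollout_links.add((system_id, framework))       # controls reconnect to existing data

deactivate("cv-screening-model", "Framework A")
assert compliance_data                              # answers and evidence remain intact
reactivate("cv-screening-model", "Framework A")
```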