The 1970 Law That Solves AI’s Legitimacy Crisis
In accounting, precision of language is paramount. The profession rests on a long tradition of order and verification, upheld by federal agencies, state boards, and a rigorous licensure system. Standard-setters and regulators such as the FASB, the IASB, and the SEC are charged with defining terms and their meanings. Yet even within this structured environment, the same term can carry different meanings across financial statements, a reminder of how difficult language consistency is to achieve.
Artificial General Intelligence (AGI) refers to a hypothetical system capable of performing the full range of tasks that require human intelligence. Current tools, however, treat language as statistical symbols, without the grounded context that fields such as finance, medicine, and law demand. Claude Shannon's foundational theory makes the limitation explicit: language can be encoded and transmitted as signals, but the encoding itself carries none of the grounded meaning that institutional accountability requires.
AI systems rooted in Shannon’s mathematics have reached a structural limit. They can optimize patterns, but they do not supply the semantic frameworks institutions rely on for decision-making. Their deployment is largely unregulated: tools enter workplaces through informal channels and produce significant effects with no defined accountability. In the absence of governing standards, decisions affecting health, welfare, and freedoms rest on statistical pattern matching across incompatible contexts.
The Historical Parallels
The late 1960s witnessed the rise of the credit reporting industry, which blended qualitative and quantitative data to assess creditworthiness. This system, however, operated with a lack of transparency and accountability, leading to public frustration. In 1970, the Fair Credit Reporting Act (FCRA) was enacted to address these concerns, establishing key definitions and accuracy standards for credit reporting agencies. The act mandated that agencies maintain consistency in their procedures and allowed consumers to access and challenge their reports.
The FCRA’s significance lies not in forcing credit bureaus to disclose proprietary algorithms, but in enforcing accountability for their outputs. A 2017 case illustrates the point: a loan servicer faced a $2.5 million judgment for failing to correct errors, underscoring that responsibility attaches to governance rather than to the secrecy of the method.
Lessons for AI Governance
The FCRA reshaped risk management in credit reporting: agencies learned that reporting unverifiable information carries liability. The same logic applies to AI systems, which today operate without defined processes or an accountable authority. The proposed solutions include:
- Define Institutional DNA: Clearly outline key terms, data elements, and establish executives responsible for their approval.
- Measure Semantic and Epistemic Coherence: Ensure consistent language use and alignment of AI outputs with established rules.
- Establish Auditability: Every decision must be traceable to its inputs, sources, and the rules applied (see the sketch after this list).
- Use AI to Improve Governance: Employ user interactions as feedback to strengthen institutional knowledge.
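To make the "institutional DNA" and auditability items concrete, here is a minimal Python sketch. The glossary entries, policy names, and `AuditRecord` fields are illustrative assumptions, not an established standard: the idea is simply that an AI-assisted decision is recorded with its inputs, sources, and the rules applied, and that the terms it uses are checked against an executive-approved glossary before the decision is acted on.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical institutional glossary: each approved term maps to its
# definition and the executive accountable for approving it.
GLOSSARY = {
    "delinquency": {"definition": "Payment overdue by 30 days or more",
                    "approved_by": "Chief Risk Officer"},
    "net_revenue": {"definition": "Gross revenue less returns and allowances",
                    "approved_by": "Chief Financial Officer"},
}

@dataclass
class AuditRecord:
    """One AI-assisted decision, traceable to its inputs, sources, and rules."""
    decision: str
    inputs: dict                 # raw data elements the system saw
    sources: list                # systems of record the inputs came from
    rules_applied: list          # institutional rules or policies invoked
    terms_used: list             # glossary terms referenced in the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def coherence_issues(self):
        """Flag any term that is not drawn from the approved glossary."""
        return [t for t in self.terms_used if t not in GLOSSARY]

# Example: a loan-servicing decision recorded so a reviewer (or regulator)
# can later reconstruct why the system concluded what it did.
record = AuditRecord(
    decision="Flag account 1042 as delinquent",
    inputs={"days_past_due": 42, "balance": 1250.00},
    sources=["servicing_ledger_v3"],
    rules_applied=["collections policy §4.2, delinquency threshold"],
    terms_used=["delinquency"],
)

issues = record.coherence_issues()
if issues:
    print("Unapproved terms, escalate before acting:", issues)
else:
    print("Decision recorded:", record.decision, "at", record.timestamp)
```

The specific fields matter less than the discipline they impose: the trace and the terminology check exist before the decision takes effect, mirroring the FCRA's requirement of reasonable procedures rather than disclosure of the underlying method.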
The stakes are high. The FCRA emerged from public dissatisfaction with companies treating individuals as mere data points. Today, AI impacts critical areas such as health and finance, demanding robust governance to prevent erosion of public trust. The lessons of the past are clear: establishing an epistemic layer can enhance institutional legitimacy and accountability.
The choice is straightforward: develop the necessary governance architecture now or risk significant public backlash and loss of trust in AI systems.