Your Questions Answered: Where We Are on AI Regulation, and Where We Go From Here
Whether you encounter it in your daily life or never think about it at all, artificial intelligence (AI) affects us all. From applying for a loan to sitting at the doctor’s office, AI systems are often used behind the scenes to make real-world decisions — impacting us in ways that aren’t disclosed upfront.
Despite the growing reach of AI and the diversity of tools and systems it encompasses, regulations governing how it is developed and deployed, as well as how impacted people are informed, remain worryingly sparse. Left unregulated, these systems can infringe on your ability to control your data or reinforce discrimination in hiring and employment practices. As the civil rights implications become more serious, strengthening protections is no longer optional.
Need for More Regulation
AI is often used to make decisions about our lives without transparent disclosure. For example, when you apply for a loan or submit a job application, banks or employers might use AI to analyze your materials before a real person ever does. At the doctor’s office, your provider may use an AI scribe to take notes on your conversation. Government agencies are using AI and other automated systems to make crucial decisions about who gets benefits and what those benefits are. AI should be held to strict standards when dealing with people’s lives.
Harms to Civil Liberties
Without careful oversight, AI systems used for decision-making have been shown to perpetuate existing systemic inequalities. For instance, AI tools used to screen job applications or assess prospective employees can unfairly discriminate against people of color, people with disabilities, neurodiverse individuals, and people from low-income backgrounds. The use of AI in areas like hiring, housing, and policing means that you can be denied a job or an apartment, or even wrongfully arrested when facial recognition systems misidentify suspects in criminal investigations.
These outcomes are neither accidental nor inevitable. The civil rights implications of these systems depend on the context in which they are used. While some systems may be used in benign ways, biased AI creates serious risks of discrimination against real people in life-altering situations. The people, companies, and institutions developing and deploying AI systems are responsible for the biases they enable, and stricter policies and regulation can hold them accountable for their impact.
Addressing Real-World Challenges
Policymakers and advocates must address the real-world challenges emerging from the use of AI. Proposals introduced across the country range from bills regulating the use of AI in specific areas, such as education and elections, to broader measures reinforcing the civil rights protections that already apply when AI is used in high-stakes decisions.
Key recommendations for addressing these challenges through AI policy analysis include:
- Create standardized formats for legislative texts across jurisdictions to facilitate computational analysis of the data (a minimal sketch of such a format follows this list).
- Incorporate a multilingual perspective when analyzing AI legislation introduced in regions under U.S. jurisdiction.
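The first recommendation is essentially a data-standardization task. The sketch below shows one way a bill could be captured in a common, machine-readable record; the schema, field names, and values are illustrative assumptions for this article, not an existing standard.

```python
from __future__ import annotations

from dataclasses import dataclass, field, asdict
import json


@dataclass
class LegislativeRecord:
    """Hypothetical standardized record for one AI-related bill."""
    jurisdiction: str                      # e.g., a state code such as "CA"
    bill_id: str                           # identifier assigned by the legislature
    title: str
    language: str                          # ISO 639-1 code of the bill text
    policy_areas: list[str] = field(default_factory=list)  # e.g., ["hiring", "elections"]
    full_text: str = ""


# Serializing records to JSON lets bills from different jurisdictions
# be collected and compared with the same tooling.
record = LegislativeRecord(
    jurisdiction="CA",
    bill_id="AB-0000",                     # placeholder identifier, not a real bill
    title="Example AI accountability bill",
    language="en",
    policy_areas=["hiring", "housing"],
)
print(json.dumps(asdict(record), indent=2))
```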
Your Digital Rights
Whether decisions are made by a human or AI, longstanding federal anti-discrimination laws prohibit discrimination in hiring and employment based on race, ethnicity, sex, sexual orientation, gender identity, disability, and other protected characteristics. In addition to federal protections, a growing number of states have passed laws regulating how employers and third-party vendors collect, use, and share your personal data during hiring. These laws give you greater control over your information and more transparency about whether automated systems are evaluating you.
For more information on digital discrimination and your digital rights when searching or applying for jobs, consult the resources that civil rights and digital rights organizations make available online.