Is There Such a Thing as Ethical AI?
The food we eat, the clothes we wear, and the media we consume are increasingly laden with ethical considerations. We are in an era where consumerism is rife with moral concerns—where even the sofa we sit on can be an ethical quandary.
The Ethical Dilemmas of AI
Concerns about the ethics of using AI range from its environmental impact to issues surrounding copyright. Notably, the chatbot Grok, developed by Elon Musk’s xAI and widely used on X (formerly Twitter), has been implicated in generating sexualized and violent imagery, especially targeting women. Such behavior raises fundamental questions about the ethical implications of AI usage.
AI systems are designed to be obliging; they have no inherent moral code, so they will comply with requests regardless of their ethical implications. Where an AI system refuses to produce harmful content, it does so only because its creators have explicitly built in those guardrails.
Impact on Human Behavior
This leads to deeper questions about the societal impact of AI: What damage is AI doing to us as humans? If it is indeed causing harm, are we being unethical in our usage of it?
Recent research highlights a gender gap in AI usage, with women adopting AI technologies less than men, a difference of up to 18 percentage points in the study. The researchers attributed the gap largely to perceived risks, with women expressing greater concern about the social and moral consequences of the technology.
Ethical Concerns in AI Applications
Various ethical concerns arise with AI applications. For instance, using chatbots for work can be perceived as unfair or akin to cheating. Other issues include:
- Potentially sensitive and personal data collection
- Facilitating unethical or harmful behavior
- Reinforcing bias and systemic unfairness
Campaigner Laura Bates has long warned that unchecked AI can exacerbate misogyny and inequality, highlighting issues from biased hiring algorithms to the creation of deepfake content. In her testimony to the Women and Equalities Committee, she emphasized that ethical AI must be designed with an awareness of these risks.
Training and Copyright Issues
The ethical problems associated with AI begin during the training process. AI models, like those powering ChatGPT and Gemini, are trained on vast amounts of text sourced from the internet. Many companies scrape data from various platforms without considering copyright or whether the original authors consented to such use.
Legal disputes have arisen over these practices, but court rulings do not provide clear ethical guidelines. For example, a US judge determined that Anthropic’s use of books without permission fell under fair use, even as the court reprimanded the company for relying on pirated copies of those works.
Transparency and Ethical Frameworks
Transparency is crucial in AI development. Companies like Anthropic use a “constitution” based on the Universal Declaration of Human Rights to guide their models. However, principles-based approaches can backfire, leading to AI that is perceived as judgmental or condescending.
Other companies, such as Mistral, emphasize open-sourcing their work as a route to ethical standards. Notably, at the AI Action Summit in Paris, the UK and US governments declined to sign a pledge on inclusive and ethical AI that around 60 other countries endorsed.
The Path Forward
In light of consumer backlash, the answer may lie in ordinary consumer judgment. AI users may soon weigh ethics and aesthetics when choosing which tools to adopt, much as they would when choosing a sofa.