As AI Usage Increases, Ethical Implementation Remains Crucial
Artificial intelligence (AI) has become an everyday tool for many, but a technology that can mirror human expression with a few keystrokes holds the potential to be either enormously beneficial or deeply harmful, depending on how it is regulated and used.
The Evolving Role of AI
With AI rapidly evolving and reshaping the digital landscape, its adoption in academia, business, and media requires scrutiny to ensure humans don’t become dependent on it for thought. The swift normalization of AI for email management, data analysis, chatbot systems, and more has prompted experts and professionals to put frameworks, committees, and educational practices in place to encourage the technology’s intentional and responsible adoption.
Dr. Talitha Washington, executive director of Howard University’s Center for Applied Data Science and Analytics (CADSA), emphasizes the importance of maintaining critical thinking skills, stating, “We don’t want to have a whole cadre of AI zombies, where we’re just repeating without thinking.”
Global Initiatives for Responsible AI
To actively work toward a future of safe and inclusive digital spaces, the United Nations established its Independent International Scientific Panel on AI in late 2024 — the first global body of its kind. This panel will develop a scientific understanding of how these technologies are reconstructing the world and how to ensure they benefit humanity by fostering peace, security, human rights, and sustainable development.
During the panel’s first meeting on March 3, U.N. Secretary General António Guterres said its work will help strengthen global coordination and innovation: “The world urgently needs a shared, global understanding of artificial intelligence, grounded not in ideology, but in science; not in fake news, but in knowledge.”
Ensuring Fairness and Transparency
Responsible AI use, grounded in an understanding of its impact on people’s lives, is crucial for guaranteeing an equitable transition into a more digital age. A significant concern for fair technology systems is bias in generative AI, and acknowledging and reducing that bias is essential.
Large language models (LLMs) are trained on vast datasets that they use to process language and generate text. Generative pre-trained transformers (GPTs), which power generative AI systems like modern chatbots, are among the largest and most capable LLMs. However, these models can absorb covert racial biases from their training data, producing discriminatory outputs. A 2024 study published in the journal Nature found that such models rated speakers of African American Vernacular English (AAVE) as less employable.
Carey Digsby, an artist and business consultant, emphasizes the need for fairness and transparency in AI: “We have to make sure that AI is being fair, and we also have to make sure there’s some sort of transparency … [and] trust involved.”
Education as a Tool for Mitigation
While there isn’t a surefire way to eliminate biases in AI, gaining a better understanding of how preconceived notions can affect algorithms’ operations is a crucial step in reducing their presence and effects. Education about AI systems can help users navigate this technology without causing harm.
“The more you know about it, the better you can utilize it, and it can be … a tool for yourself,” Digsby notes. “Many people fear AI instead of looking at it as an advantage.”
Practical Applications in Education
At Howard University, the free version of Microsoft’s Copilot, an AI productivity assistant, is embedded in the university’s Microsoft Office products and tools. The AI Advisory Council also hosts workshops to support the system’s use, demonstrating applications ranging from interpreting text to helping with scheduling.
The university is set to launch its Fundamentals of AI certificate, aimed at all undergraduate, graduate, and professional students. Faculty approved three courses for the program: Introduction to AI Tools and Techniques, Ethical and Responsible AI, and AI in the Disciplines.
Future Perspectives on AI
Despite the growing acceptance of AI, a Pew Research Center survey indicated that 47% of Americans harbor little to no trust in the technology. Washington advocates for a future of AI focused on fixing faulty hardware, mitigating negative environmental implications, and emphasizing original human thought.
“At the end of the day, critical thinking will remain important,” Washington concludes. “Having a creative mind and being able to think outside the box will be important, and … original thought will remain paramount.”