AI tools are turning smarter and covertly racist, says study
A recent investigation has revealed that widely used artificial intelligence tools, including OpenAI's ChatGPT and Google's Gemini, display covert racism. The study, conducted by a team of linguistics and technology experts, and published in the open-access research archive arXiv, shows these AI models harbor racial biases against speakers of African American Vernacular English (AAVE). Given these technologies' frequent use in tasks like job applicant screening, this raises serious concerns about potential discrimination.
AAVE speakers face bias from AI models
The research involved prompting the AI models to evaluate the intelligence and job suitability of AAVE speakers, compared to those using standard American English. The results showed that these models were more prone to label AAVE speakers as "stupid" or "lazy," assigning them lower-paying jobs. Moreover, the researchers discovered that these AI tools could potentially disadvantage job applicants who code-switch between AAVE and standard American English.
Potential influence of AI models on legal decisions
The study also exposed the troubling potential for bias in legal scenarios. The AI models were found to be more inclined to suggest death penalties for hypothetical criminal defendants who utilized AAVE in their court statements. While it's currently improbable that such technology would be employed to make decisions pertaining to criminal convictions, the researchers warn about the unpredictable future uses of these AI tools. Presently, AI models are already aiding with administrative tasks in the US legal system.
Rising demand for regulation amid increasing AI usage
Prominent figures like Timnit Gebru, the former co-leader of Google's ethical artificial intelligence team, have been calling for federal regulation on large language model usage. The escalating deployment of AI across various sectors and its potential for covert racism, highlight the pressing need for such regulation. The wider generative AI market is expected to evolve into a $1.3 trillion industry by 2032, making it even more vital to address these issues promptly.
Recent events spotlight flaws in AI models
Google's Gemini recently came under fire for its image generation tool, which inaccurately portrayed various historical figures as people of color. This incident ignited conversations about the necessity for transparency and diversity in AI development to tackle biases and algorithmic flaws. The researchers contend that while ethical guidelines or "guardrails" have been established, they merely instruct these models to be more subtle about their biases, without resolving the root problem.
Experts push for prudent use of AI technologies
AI ethics researcher Avijit Ghosh proposes that limiting the usage of such technologies in sensitive areas, is a crucial initial step toward addressing the issue. He compares this to preventing racist individuals from handling hiring and recruitment processes. This viewpoint is echoed by an increasing number of AI experts who fear potential damage if technological progress continues to outstrip federal regulation, underlining the importance of prudent and regulated use of AI technologies.