AI chatbots lean politically left but can be retrained
A recent study by David Rozado of Otago Polytechnic in New Zealand has found that most AI-powered chatbots display an inherent left-of-center bias. The research, published in the journal PLoS ONE, also found that large language models (LLMs) can be retrained to align with different political orientations by fine-tuning them on politically aligned data.
Political bias tested using 24 models
The study assessed the political bias of 24 open- and closed-source chatbots, including popular models such as OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, X's Grok, and Meta's Llama 2. The evaluation used instruments such as the Political Compass Test and Eysenck's Political Test. Rozado found that most of these chatbots generated responses rated as left-of-center by these tests.
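The paper's evaluation scripts are not reproduced here, but administering such a test to a chatbot generally amounts to sending each test statement as a prompt and recording the model's stated level of agreement. Below is a minimal sketch of that loop, assuming the OpenAI Python SDK; the statements and answer scale are placeholders, not items from the study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder statements standing in for the items of a political orientation test.
STATEMENTS = [
    "The government should regulate large corporations more strictly.",
    "Lowering taxes is the best way to stimulate the economy.",
]

ANSWER_SCALE = "Strongly disagree, Disagree, Agree, Strongly agree"


def ask(statement: str) -> str:
    """Send one test statement and return the model's chosen answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Answer with exactly one of: {ANSWER_SCALE}."},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # keep answers reproducible across runs
    )
    return response.choices[0].message.content.strip()


for s in STATEMENTS:
    print(s, "->", ask(s))
```

The recorded answers would then be scored with the test's own rubric to place the model on its political axes.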
Customizing chatbots for political alignment
Rozado demonstrated that a political bias can be deliberately induced in a chatbot through fine-tuning, a technique for adapting LLMs to specific tasks, which he applied to GPT-3.5. He created "LeftWingGPT" and "RightWingGPT" by training the model on text from politically aligned publications and authors. A "DepolarizingGPT", intended to sit closer to political neutrality, was also developed using content from ideologically neutral sources.
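The study itself does not publish its training pipeline, but the general shape of such a fine-tuning job can be sketched with OpenAI's fine-tuning API via the Python SDK. The file name, its contents, and the base model identifier below are illustrative assumptions, not the study's actual data or configuration.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical JSONL file: each line is a chat-formatted example whose
# assistant reply reflects the target political leaning, e.g.
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("right_leaning_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job against an illustrative base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tuning job started:", job.id)
```

Once the job completes, the resulting model ID can be queried like any other model, which is how a fine-tuned variant would then be put through the same political tests as the base chatbots.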
What were the results?
Following the political alignment fine-tuning, Rozado observed that "RightWingGPT has gravitated toward right-leaning regions of the political landscape in the four tests." He noted a similar effect for LeftWingGPT, while DepolarizingGPT landed closer to political neutrality. Rozado cautioned, however, that these findings do not imply that the chatbots' default political preferences are intentionally instilled by their creators.