Is AI humanity's biggest threat? New study answers
A recent study conducted by the University of Bath and the Technical University of Darmstadt in Germany has concluded that large language models (LLMs) like ChatGPT do not pose an existential threat to humanity. The research was presented at the 2024 meeting of the Association for Computational Linguistics (ACL), a leading international conference on natural language processing. The findings suggest that while LLMs can follow instructions and demonstrate proficiency in language use, they cannot independently learn or acquire new skills without explicit guidance.
LLMs are controllable and safe
The study found AI models to be fundamentally controllable, predictable, and safe. Despite being trained on ever-larger datasets, these models are not expected to develop complex reasoning skills. As they evolve, they may produce more sophisticated language and become better at following explicit prompts, but they will not gain advanced reasoning abilities. This conclusion challenges widespread concerns that AI will eventually surpass human intelligence and endanger humanity's existence.
Research dispels fears about AI's emergent abilities
The research team, led by Professor Iryna Gurevych, conducted experiments to test LLMs' emergent abilities, that is, their capacity to perform tasks they have not previously encountered. For instance, LLMs can answer questions about social situations without ever being explicitly trained or programmed to do so. This ability was previously attributed to models 'knowing' about social situations, but the researchers showed it to be a result of in-context learning (ICL), a well-documented feature of LLMs in which the model completes a task by following examples supplied in the prompt itself.
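The distinction matters in practice: with ICL, the apparent 'skill' comes from demonstrations placed in the prompt at inference time, not from knowledge the model acquired on its own. The sketch below illustrates the idea in Python; the `query_llm` call and the social-judgment task are hypothetical placeholders for illustration, not the researchers' actual test setup.

```python
# A minimal sketch of in-context learning (ICL): the model is never
# fine-tuned on this task; it infers the pattern from examples placed
# directly in the prompt. `query_llm` is a hypothetical stand-in for
# any chat-completion client, not a specific library's API.

def build_icl_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt from (statement, label) demonstration pairs."""
    lines = []
    for text, label in examples:
        lines.append(f"Statement: {text}\nSocially appropriate? {label}")
    # The final, unlabeled statement is what the model must judge.
    lines.append(f"Statement: {query}\nSocially appropriate?")
    return "\n\n".join(lines)

# Illustrative demonstrations of a social-judgment task.
examples = [
    ("Thanking a host before leaving a dinner party.", "Yes"),
    ("Answering a phone call loudly during a funeral.", "No"),
]

prompt = build_icl_prompt(
    examples, "Interrupting a colleague mid-presentation to check sports scores."
)
print(prompt)
# response = query_llm(prompt)  # hypothetical call to an LLM API
```

If the demonstrations are removed, the model must rely on explicit instructions instead, which is consistent with the study's point: the behavior is driven by the prompt rather than by independently acquired abilities.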
AI models lack complex reasoning abilities
Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, stated that concerns about LLMs acquiring hazardous abilities such as reasoning and planning are unfounded. The research team's tests clearly demonstrated the absence of emergent complex reasoning abilities in LLMs. He suggested that while it's crucial to address potential misuse of AI, it would be premature to enact regulations based on perceived existential threats.