Scottish researchers propose an alternative term for AI's 'hallucinations'
A group of researchers from the University of Glasgow in Scotland has proposed a new term to describe the tendency of chatbots to generate nonsensical responses. In a paper published in the journal Ethics and Information Technology, the researchers argue that the term "hallucination" is not an accurate description of this phenomenon. They suggest that "bull******g" would be a more fitting term, as it better reflects the chatbots' lack of intention to convey accurate information.
Why does this story matter?
The researchers' proposal is significant in the context of artificial intelligence (AI) and natural language processing (NLP), where large language models (LLMs) are becoming increasingly sophisticated. These models, such as OpenAI's GPT-3, are designed to generate human-like text based on the input they receive. However, they have been known to produce inaccurate or nonsensical responses, leading to concerns about their reliability in real-world applications.
Researchers argue for new definition
The researchers, Michael Townsen Hicks, James Humphries, and Joe Slater, argue that the term "hallucination" is metaphorical and does not accurately capture the chatbots' behavior. They emphasize that these machines are not attempting to communicate something they believe or perceive, and that their inaccuracy is not due to misperception or hallucination.
Term defined in the context of AI
The researchers draw on philosopher Harry Frankfurt's work to define "bull******g" in the context of AI. They summarize Frankfurt's definition as "any utterance produced where a speaker has indifference toward the truth of the utterance." This definition is further divided into two types: hard bull***t, which is produced with an intention to mislead, and soft bull***t, which is produced without any such intention. The researchers suggest that chatbots fall into the category of soft bull******s, or "bull***t machines."
Implications for real-world applications
The researchers warn that as people increasingly rely on chatbots for various tasks, the machines' tendency to generate nonsensical responses could become more problematic. They point to instances where chatbots have been used to write legal briefs that included inaccurate legal precedents. The team cautions that decisions made by investors, policymakers, and the public about how to interact with these machines are often based on a metaphorical understanding of their abilities, which could lead to confusion.