Meta's AI, trained on 48mn scientific papers, taken down in 2 days
What's the story
Meta AI, in collaboration with Papers with Code, launched Galactica, an open-source large language model for science, on November 15.
The stated objective was to tackle the overload of scientific information; Meta claimed Galactica could "store, combine and reason about scientific knowledge."
Sadly, a lot went wrong with the online tool, and it was taken down in just two days.
Context
Why does this story matter?
Galactica could have been a great advantage to researchers, as it claimed to organize scientific literature. Its training corpus was described as "high-quality and highly curated," but in reality, the model's output fell short.
Instead of "benefitting the scientific community," the tool produced faulty results.
The inaccuracy of Meta's AI model raises concerns over the effectiveness of AI-powered platforms, especially in the scientific domain.
Details
Galactica was trained on over 48 million scientific papers
As per the official website, Galactica was a "powerful large language model (LLM) trained on over 48 million papers, textbooks, reference material, compounds, proteins, and other sources of scientific knowledge."
"You can use it to explore the literature, ask scientific questions, write scientific code, and much more."
The site also said that Galactica outperformed GPT-3, one of the most popular LLMs, on technical knowledge probes, scoring 68.2% versus GPT-3's 49.0%.
Twitter Post
Here's a demonstration of how Galactica worked
🪐 Introducing Galactica. A large language model for science.
Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.
Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW
— Papers with Code (@paperswithcode) November 15, 2022
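For researchers who want to try the released weights themselves, here is a minimal sketch of loading a Galactica checkpoint with the Hugging Face transformers library. It assumes the publicly released "facebook/galactica-125m" checkpoint (the smallest of the released sizes) and a standard causal-LM generation call; it is an illustrative example, not an official Meta demo.

```python
# Minimal sketch: load a released Galactica checkpoint via Hugging Face transformers.
# Assumes the "facebook/galactica-125m" weights are available on the Hugging Face Hub
# (larger sizes were also released); this is not an official Meta example.
from transformers import AutoTokenizer, OPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")

# Prompt the model to continue a scientific statement.
prompt = "The Transformer architecture [START_REF]"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a short continuation; as the demo's own disclaimer warned,
# "Outputs may be unreliable" and should be verified before use.
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```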
Problem
What went wrong with the tool?
The platform failed at kindergarten-level mathematics, giving wrong answers to simple problems.
When Galactica was asked to summarise work by Julian Togelius, an associate professor at NYU, the tool got the name wrong and also failed to provide a summary of his work.
According to Gary Marcus, a psychology professor at NYU, 85% of the results presented by Galactica about him were incorrect.
Twitter Post
A University of Washington professor called it a "bullsh#t generator"
I figured out what bothers me so much about Facebook's Galactica.
It's that it pretends to be a portal to knowledge. In their words, "new interface to access and manipulate what we know about the universe."
Actually it's just a random bullshit generator. https://t.co/uLwvLepgST
— Carl T. Bergstrom (@CT_Bergstrom) November 16, 2022
Details
The Wikipedia article on 'Hanlon's razor' says something else
Bergstrom's Twitter post shows how Galactica failed to generate an accurate Wiki article on 'Hanlon's razor.'
"Hanlon's razor is an adage or rule of thumb that states never attribute to malice that which is adequately explained by stupidity," reads the official Wikipedia article on 'Hanlon's razor.'
"It is a philosophical razor that suggests a way of eliminating unlikely explanations for human behavior."
Information
Galactica displayed a warning message along with search results
The official website carried a warning message in bold stating, "Never follow advice from a language model without verification." Galactica also displayed the disclaimer "Outputs may be unreliable. Language Models are prone to hallucinate text" alongside every result.
Twitter Post
Papers with Code put out a statement on pausing the Galactica demo
Thank you everyone for trying the Galactica model demo. We appreciate the feedback we have received so far from the community, and have paused the demo for now. Our models are available for researchers who want to learn more about the work and reproduce results in the paper.
— Papers with Code (@paperswithcode) November 17, 2022