OpenAI grapples with understanding how its own AI technologies work
OpenAI, a prominent AI development firm, is struggling to comprehend how its own technology functions. At the AI for Good Global Summit in Switzerland, CEO Sam Altman admitted that the company has not yet solved the problem of interpretability, meaning OpenAI is still trying to understand how its artificial intelligence models generate their outputs. Despite this challenge, Altman maintained that its AIs are "generally considered safe and robust."
AI interpretability: A widespread challenge in the industry
The problem of understanding how AI models function is not exclusive to OpenAI; it is a common issue across the emerging AI industry. Tracing an output back to its original training material has proven extremely difficult. Despite its name and origin story, Altman's company has kept the data it uses to train its AIs closely guarded. This challenge underscores ongoing debates about AI safety and the potential risks of artificial general intelligence going rogue.
UK government report highlights limited understanding of AI
A recent scientific report commissioned by the UK government and compiled by a panel of 75 experts concluded that AI developers "understand little about how their systems operate" and that scientific knowledge in this area is "very limited." The report suggested that model explanation and interpretability techniques could improve understanding of how general-purpose AI systems operate.
AI companies strive to decode artificial neurons
Several companies in the AI industry are attempting to solve the interpretability problem by mapping the artificial neurons inside their models. OpenAI's competitor Anthropic has begun examining one of its latest large language models (LLMs), Claude Sonnet, as a first step toward understanding its inner workings. "Anthropic has made a significant investment in interpretability research since the company's founding because we believe that understanding models deeply will help us make them safer," the firm stated in a blog post.
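To give a sense of what "mapping artificial neurons" means in the simplest terms, the sketch below records which units in a toy neural network activate for a given input. This is only an illustration of the basic idea of reading out neuron activations; it is not OpenAI's or Anthropic's actual tooling, and interpretability work on real LLMs relies on far more involved techniques applied to models with billions of parameters.

```python
# Illustrative sketch only: inspect "artificial neuron" activations in a tiny
# network using a PyTorch forward hook. Not the method used by any company
# named in this article.
import torch
import torch.nn as nn

# A small stand-in model; real LLMs are vastly larger.
model = nn.Sequential(
    nn.Linear(8, 16),  # input layer -> 16 hidden units
    nn.ReLU(),
    nn.Linear(16, 4),  # hidden units -> output
)

captured = {}

def save_activations(module, inputs, output):
    # Store the post-ReLU activations so we can see which units fired.
    captured["hidden"] = output.detach()

# Attach the hook to the hidden ReLU layer and run one forward pass.
hook = model[1].register_forward_hook(save_activations)
x = torch.randn(1, 8)  # a single random input
model(x)
hook.remove()

acts = captured["hidden"].squeeze(0)
active = (acts > 0).nonzero(as_tuple=True)[0].tolist()
print(f"Hidden units that fired for this input: {active}")
print(f"Their activation values: {acts[active]}")
```

Interpretability research asks the much harder follow-up question: what human-understandable concept, if any, does each of those units (or combinations of them) represent across millions of inputs.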