Meta won't release high-risk AI models—but what are they?
What's the story
Meta, the tech giant formerly known as Facebook, has detailed a new policy that shows it's willing to stop developing artificial intelligence (AI) systems deemed too risky.
The company's CEO Mark Zuckerberg had previously pledged to make artificial general intelligence (AGI) widely accessible.
However, the new policy document, titled the 'Frontier AI Framework,' outlines scenarios in which Meta might withhold a highly capable in-house AI system because of the risks involved.
Risk categories
Understanding 'high risk' and 'critical risk' AI systems
The Frontier AI Framework divides AI systems into two risk levels: "high risk" and "critical risk."
Both categories cover models capable of aiding cybersecurity, chemical, and biological attacks. The difference is that a "critical-risk" system could produce a catastrophic outcome that cannot be mitigated in its proposed deployment context.
A "high-risk" system, meanwhile, could make such an attack easier, but not as reliably as a critical-risk one.
Risk assessment
Meta's approach to assessing AI system risk
Meta's risk classification for AI systems isn't determined by a single empirical test.
Instead, the company draws on input from internal and external researchers, which is then reviewed by senior-level decision-makers.
The reason behind this, as the document states, is that Meta doesn't believe "the science of evaluation is sufficiently robust as to provide definitive quantitative metrics" for determining a system's riskiness.
Mitigation measures
Meta's risk mitigation strategies for AI systems
If a system is flagged as high-risk, Meta will limit access to it internally and withhold its release until risk-reducing measures are in place.
For critical-risk systems, the company will enforce unspecified security protections to prevent the system from being stolen, and will suspend development until it can be made safer.
These strategies are part of Meta's Frontier AI Framework, which is designed to evolve with the changing AI landscape.
Open strategy
Meta's open AI strategy: A double-edged sword
Meta's strategy of releasing its AI technology openly has been both a boon and a bane. The company's Llama suite of AI models has been downloaded hundreds of millions of times.
However, Llama has also reportedly been used by China, a US adversary, to build a defense chatbot.
With the Frontier AI Framework, Meta hopes to find a balance between the benefits and risks of advanced AI development and deployment.