OpenAI proposes democratic decision-making, central agency to regulate advanced AI
What's the story
Generative AI is one of the hottest topics right now. Its abilities have stunned the world, but they have also raised questions about the potential harm the technology could cause.
Lawmakers worldwide are engaged in discussions about regulating AI. Now, ChatGPT-creator OpenAI has expressed its views on how to regulate advanced AI in a blog post.
Context
Why does this story matter?
The success of ChatGPT and its ilk has resulted in the widespread usage of AI technologies worldwide. The focus is now on how AI companies plan to keep AI from misbehaving.
OpenAI CEO Sam Altman recently appeared before a US Senate subcommittee to address lawmakers' concerns about AI. During the hearing, he agreed with the need to regulate AI.
Superintelligence
Superintelligence will be much more powerful than current AI systems
The AI of the present is scary. But today's AI will be nothing compared to tomorrow's "superintelligence," an entity that could surpass human intelligence, says OpenAI.
It will be even more capable than AGI (artificial general intelligence). In the blog post, Altman, OpenAI president Greg Brockman, and OpenAI's chief scientist Ilya Sutskever talk about governing superintelligence, a technology they think will be the most powerful ever.
Concerted effort
OpenAI suggests a concerted effort between companies and governments
According to OpenAI, a concerted effort between leading AI companies and governments worldwide is essential to regulate AI as it exists today, and the same will hold true for superintelligence.
Coordination between major stakeholders will help maintain safety and ensure the smooth integration of these systems into society, the post says.
How can this be put into practice?
Implementation
Set up an agency like IAEA: OpenAI
According to OpenAI, a coordinated effort can be achieved either through a project set up by major governments across the world or through an umbrella organization.
During the Senate hearing, Altman spoke about setting up an organization to regulate AI. In the blog post, the company proposes creating an agency like the International Atomic Energy Agency (IAEA) to govern superintelligence efforts.
The agency
The agency will focus on existential risk posed by AI
"Something like an IAEA for advanced AI is worth considering," says Altman. The agency will have the power to inspect AI systems beyond a certain threshold of capability, "require audits, test for compliance with safety standards, and place restrictions on degrees of deployment."
According to OpenAI, such an agency's mandate should be to focus on "existential risk" and not trivial matters.
Not required
Today's AI systems can be dealt with like other technologies
While OpenAI is all for regulating advanced AI, it believes companies and open-source projects should be allowed to develop AI systems "below a significant capability threshold" without burdensome regulation, such as the proposed licensing of AI systems or audits.
The company thinks the risk posed by today's AI systems is no greater than that of other "internet technologies."
Public oversight
Public should decide the 'bounds and defaults' of AI systems
In the blog post, OpenAI says "public oversight" is essential in regulating advanced AI. "Governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight," the company said.
According to OpenAI, the public should "democratically decide" on the "bounds and defaults" of AI systems. The firm, however, does not explain how to achieve this.
Like Wikipedia
Brockman proposed a Wikipedia-like system to regulate AI
Brockman expressed similar views during the 'AI Forward' event hosted by Goldman Sachs Group and SV Angel. He proposed a Wikipedia-like system where people can democratically decide what AI should be.
In this system, people with diverse views would have a chance to register their opinions. Brockman did not elaborate further on this democratic system.