How US plans to mitigate AI safety concerns
The field of AI is witnessing rapid growth, but its swift advancement is also sparking concerns about the risks associated with the technology. Now, the chief executives of Google, Microsoft, OpenAI, and Anthropic, four US firms actively developing new AI tools, have been invited to meet Vice President Kamala Harris and top officials on May 4 to discuss key issues related to AI.
Why does this story matter?
The rise of AI has been swift. The technology has left people amazed with its ability to write code, help with assignments, churn out essays, and answer nearly any query, among other capabilities. However, several experts have raised concerns over the potential harm it can cause, and schools and universities have already started banning AI chatbots such as ChatGPT.
The concerns include privacy violations and the spread of misinformation
Among the most pressing concerns regarding AI are privacy violations, the spread of misinformation, copyright infringement, and a rise in scams. Tech companies are therefore expected to ensure their products and services are safe before making them available to the public. The upcoming meeting will focus on this responsibility, emphasizing the development of technologies "with safeguards that mitigate risks and potential harms."
President Biden earlier said that responsibility lies with tech companies
In April, US President Joe Biden said it remains to be seen whether AI is dangerous, but emphasized that tech companies bear the responsibility of making sure their products are safe. "Social media has already demonstrated the harm that powerful technologies can do without the right safeguards," he had said.
The government has also been seeking public opinion
As concerns grow over the impact of AI on national security and education, the US administration has been seeking public opinion on accountability measures for AI. Recently, deputy officials from the White House Domestic Policy Council and the White House Office of Science and Technology Policy published a blog post on how AI can pose a serious risk to workers.
The US is also taking steps to regulate AI
The US is also taking measures to set up a framework to regulate AI, under the supervision of Senate Majority Leader Chuck Schumer. The framework focuses on four points: who trained the algorithm and who it is meant for; disclosure of the data source; setting strong ethical boundaries; and details on how the AI arrives at its responses.
Geoffrey Hinton, the 'Godfather of AI,' has also expressed concerns
Geoffrey Hinton, the 'Godfather of AI,' who recently quit Google, has also voiced his thoughts on the harm AI can cause. Hinton is concerned that AI will flood the internet with misinformation, making it hard for the average person to identify what is true. He believes the AI race will not come to an end without some form of global regulation.