White House pushes AI companies to commit to safeguards
The US administration has been working on regulating artificial intelligence (AI) for a while, with a consistent focus on responsible AI. Now, at the request of the White House, leading AI companies, including Microsoft, OpenAI, and Google, are set to publicly commit to safeguards for the booming technology. The pledge is expected to take place today.
Why does this story matter?
The US has been leading the AI race from the outset, thanks largely to the tech companies based in the country. While the deployment and evolution of AI have generated a lot of excitement, concerns surrounding the technology have also grown. The commitment from AI companies might help alleviate some of the fears people have about AI.
AI companies to pledge responsible development, deployment of AI
The Joe Biden administration had previously warned AI companies to ensure the technology doesn't cause any harm. This time, the firms are expected to commit to several principles for developing and deploying AI. During a meeting with AI companies in May, Vice President Kamala Harris and other officials told CEOs they have a legal responsibility to ensure their AI products are safe.
The commitments are voluntary
The pledge will be voluntary, per Bloomberg, a sign of the government's limited ability to control potential misuse of the technology. However, the Biden administration sees voluntary commitments from AI firms as imperative to regulating AI, especially at a time when Congress has failed to reach a consensus on AI governance even after multiple discussions.
The pledge will expire when Congress passes legislation
The pledge AI companies are set to take today is expected to echo the one made during the May meeting. According to a draft White House document, the voluntary commitment will expire once Congress passes legislation to regulate AI. The administration wants leading AI firms to make commitments covering generative AI, AI models, and more capable future models.
White House's draft document suggests 8 commitments
The White House is set to seek eight commitments from AI companies. They include having independent experts test models for bad behavior, encouraging third parties to find security vulnerabilities, watermarking audio and visual content, reporting risks to society, sharing trust and safety information with the government, investing in cybersecurity, focusing on research into societal risks, and using frontier models to tackle society's problems.