White House announces AI guidelines for federal government: Key points
The White House has taken a significant step toward regulating artificial intelligence (AI) within the federal government. This move comes five months after President Joe Biden signed an executive order addressing the rapid advancement in AI technology. The newly introduced policy aims to control the government's use of AI and includes measures to reduce the risk of algorithmic bias.
Vice President emphasizes responsible AI use
Vice President Kamala Harris underscored the importance of responsible AI use in her statement. She expressed her belief that leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to ensure that AI is adopted and advanced in a way "that protects the public from potential harm while ensuring everyone can enjoy its benefits."
New policy outlines key AI requirements
The new policy, announced through the Office of Management and Budget (OMB), outlines three key requirements for agencies. First, they must ensure their AI tools "do not endanger the rights and safety of the American people." Agencies have until December 1 to implement "concrete safeguards" to prevent their AI systems from negatively affecting Americans' safety or rights. If an agency cannot establish these safeguards, it must stop using the AI product unless discontinuing it would severely impact critical operations.
AI systems impacting safety and rights
The policy identifies an AI system as impacting safety if it significantly influences certain activities and decisions in real-world conditions. These include maintaining election integrity, controlling critical safety functions of infrastructure such as water systems, emergency services, and electrical grids, operating autonomous vehicles, and controlling the physical movements of robots in various settings. Agencies must also discontinue AI systems that infringe on Americans' rights unless they have appropriate safeguards or can justify their use.
Policy addresses generative AI and transparency
The policy also addresses generative AI, requiring agencies to assess potential benefits and establish adequate safeguards and oversight mechanisms so that generative AI can be used without posing undue risk. The second requirement mandates transparency about the AI systems being used. As part of this transparency effort, agencies must publish government-owned AI code, models, and data, provided doing so does not harm the public or government operations.
Requirement for internal oversight of AI use
The final requirement is for federal agencies to establish internal oversight of their AI use. Each agency must appoint a chief AI officer to oversee all of its use of AI, and many agencies will also need to have AI governance boards in place by May 27. Harris said these measures are meant to ensure AI is used responsibly and that senior leaders oversee AI adoption and use across the government.