UK's AI Safety Institute sets up shop in San Francisco
The United Kingdom is extending its artificial intelligence (AI) safety initiatives by launching a second branch of the AI Safety Institute in San Francisco. The move comes just ahead of the AI safety summit kicking off in Seoul, which the UK is co-hosting. The AI Safety Institute, established in November 2023, is tasked with assessing and mitigating the risks posed by AI platforms. San Francisco is a leading hub for AI development, home to firms such as OpenAI, Anthropic, Google, and Meta.
San Francisco expansion to foster closer collaboration
Speaking to TechCrunch, Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, explained the decision to open an office in San Francisco. "By having people on the ground in San Francisco, it will give them access to the headquarters of many of these AI companies," she said. Donelan added that while many AI firms have bases in the UK, a San Francisco presence would tap into an additional talent pool and foster stronger ties with the US.
AI Safety Institute's significant strides
Despite being a relatively new organization with only 32 employees, the AI Safety Institute has already made considerable progress. Earlier this month, it released Inspect, its first set of tools for testing the safety of foundation AI models. Donelan described the launch as a "phase one" effort and acknowledged that company engagement with evaluations is currently inconsistent and voluntary. She also said that one goal of the Seoul summit is to present Inspect to regulators in the hope that they will adopt it as well.
Understanding AI risks before legislation
Looking ahead, Donelan expects more AI legislation from the UK, but insists on understanding the full extent of AI risks before legislating. "We do not believe in legislating before we properly have a grip and full understanding," she said. She also pointed to significant research gaps that still need to be filled globally. Ian Hogarth, chair of the AI Safety Institute, echoed this sentiment, noting that the institute has taken a global approach to AI safety since its inception.