Meta dissolves Responsible AI team; members to support cross-team efforts
What's the story
Meta has reportedly disbanded its Responsible AI (RAI) team, which was focused on ensuring the safety of its artificial intelligence (AI) projects.
The Information reported that most RAI members will now be part of Meta's generative AI product team, while others will contribute to the company's AI infrastructure.
Jon Carvill, a spokesperson for Meta, reportedly confirmed that the company will "continue to prioritize and invest in safe and responsible AI development" despite the team's dissolution.
Responsibility
RAI established to identify issues with AI training approaches
Meta formed RAI to catch problems in its AI training methodologies, including whether the company's models are trained on a sufficiently diverse range of data.
The initiative was also meant to head off problems such as moderation failures on Meta's platforms.
The company's social platforms have already run into trouble caused by automated systems.
Examples include a Facebook mistranslation that led to a wrongful arrest, WhatsApp AI stickers that produced biased images for certain prompts, and Instagram algorithms that inadvertently helped users find inappropriate material.
Details
Members will still contribute to responsible AI development, usage
Even though Meta has dissolved its Responsible AI team, its members will continue to take part in responsible AI development and usage within the company.
Carvill mentioned that these individuals would "continue to support relevant cross-Meta efforts on responsible AI development and use."
This shift comes as Meta sharpens its focus on generative artificial intelligence, building products that produce human-like text and images.
Reason
Move comes as governments race to establish AI-development regulations
Meta's move, like a similar one by Microsoft earlier this year, comes as governments around the world race to establish regulatory frameworks for AI development.
Notably, the United States (US) administration has reached safety agreements with AI companies, and President Joe Biden has since directed government agencies to draft rules to ensure the safety of AI.
Meanwhile, the European Union has published its AI principles but is still struggling to pass its AI Act.
Scenario
Generative AI team formed in February
Meta's Generative AI team was established in February, as tech companies worldwide heavily invested in machine learning development to remain competitive in the AI landscape.
Major tech firms like Meta have been racing to catch up since the generative AI boom began.
The reorganization of the RAI team also fits Meta's "year of efficiency," as CEO Mark Zuckerberg described it on a February earnings call, a push that has involved layoffs, team consolidations, and reassignments across the company.
Insights
Industry focus on AI safety standards
Regulators and officials are increasingly scrutinizing the potential risks associated with the emerging AI technology.
Meanwhile, top players in the field have made AI safety a priority: Anthropic, Google, Microsoft, and OpenAI formed an industry group in July dedicated specifically to setting safety standards as AI advances.
Despite the dissolution of Meta's Responsible AI team, the company remains dedicated to investing in safe and responsible AI development, as highlighted by its spokesperson.