Anthropic opens its AI tech to minors with safety provisions
AI start-up Anthropic has updated its policies, now allowing teens and preteens to use third-party applications powered by its AI models. This change is contingent on these apps incorporating specific safety features and informing users about the underlying Anthropic technology. However, the policy revision does not necessarily apply to Anthropic's own applications. In a support article, the company has detailed various safety measures for developers building AI-powered apps for minors.
Measures for minors using AI tech outlined
Anthropic outlined several safety measures, including age verification systems, content moderation and filtering, and educational resources on "safe and responsible" AI use. The company also said it may provide "technical measures" to customize the AI product experience for minors. This includes a mandatory "child-safety system prompt" that developers targeting minors would be required to implement.
Compliance with child safety and data privacy regulations
Developers using Anthropic's AI models must comply with relevant child safety and data privacy laws, such as the Children's Online Privacy Protection Act (COPPA), the US federal law that safeguards the online privacy of children under 13. Developers are also required to "clearly state" on public-facing sites and in documentation that they are in compliance. Anthropic plans to periodically audit apps for compliance, with potential consequences for developers who repeatedly violate these requirements.
AI tools beneficial for younger users, says Anthropic
Anthropic believes that AI tools can offer significant benefits to younger users, particularly in areas like test preparation and tutoring support. The company stated, "With this in mind, our updated policy allows organizations to incorporate our API into their products for minors." The policy change comes as more children and teenagers are turning to generative AI tools for assistance with schoolwork and personal issues.
Other AI vendors exploring child-focused use cases
Other generative AI vendors, including Google and OpenAI, are also investigating more child-focused use cases. This year, OpenAI created a team to assess child safety and announced a collaboration with Common Sense Media to develop kid-friendly AI guidelines. Google has made its chatbot Bard, now rebranded as Gemini, available to teens in English in select regions.
Generative AI usage among minors: Benefits and concerns
A poll from the Center for Democracy and Technology revealed that 29% of kids have used generative AI tools like OpenAI's ChatGPT to cope with anxiety or mental health issues, 22% for issues with friends, and 16% for family conflicts. However, a survey by the UK Safer Internet Centre found that over half of kids (53%) reported seeing people their age use generative AI negatively, such as creating believable false information or images intended to upset someone, including pornographic deepfakes.
Calls for guidelines on children's use of AI
There is increasing demand for guidelines on children's use of generative AI. Last year, the UN Educational, Scientific and Cultural Organization (UNESCO) urged governments to regulate the use of generative AI in education, including implementing age limits for users and checks on data protection and user privacy. UNESCO's Director-General Audrey Azoulay stated, "Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice."