Advocacy groups raise concerns as Meta lowers WhatsApp age limit
Meta, the parent company of the popular messaging app WhatsApp, has reduced the minimum age for users in the UK and EU from 16 to 13. The decision was announced last month and implemented earlier this week. It has sparked controversy among advocacy groups such as Smartphone Free Childhood, which argue that it contradicts public demands for stronger child protection measures from tech giants.
Reactions to the age reduction
Smartphone Free Childhood has voiced strong opposition to Meta's decision, fearing that the lower age limit sends a message that WhatsApp is safe for children. The group argues this contradicts what teachers, parents, and experts have been saying about child safety on such platforms, and it accuses tech giants of prioritizing shareholder profits over child protection.
WhatsApp defends decision amid backlash
In response to the criticism, WhatsApp defended its decision, stating that the new age limit aligns with restrictions in most countries and assuring critics that it has safeguards in place to protect younger users. Meanwhile, Mark Bunting, Ofcom's director of online safety strategy, warned that once the regulator has the authority to do so, it could impose penalties on social media firms that fail to follow its guidelines. Ofcom oversees the broadcasting, telecommunications, and postal sectors in the UK.
Ofcom plans to enforce online safety measures
In a BBC Radio 4 interview, Bunting discussed plans to establish codes of practice for enforcing online safety. He said that once Ofcom's powers come into force in 2025, the regulator will hold social media companies accountable for the effectiveness of their child protection measures. If companies cannot demonstrate that they are taking effective steps toward child safety, Ofcom will have the power to direct the necessary changes.
Meta introduces new safety features on Instagram
In a related development, Meta has introduced a series of safety features aimed at protecting users from intimate image abuse. The company announced it would begin testing a feature called Nudity Protection in Instagram's direct messages. Enabled by default for users under 18, the feature will automatically blur images identified as containing nudity and offer options to block the sender and report the conversation.