Apple's iMessage now lets kids report explicit content
Apple has added a new feature to its iMessage platform in Australia to improve child safety. The update lets kids who receive explicit images or videos report them directly to the iPhone maker, which can then potentially alert the police about these inappropriate messages. The feature arrived as part of Thursday's beta releases of Apple's operating systems for Australian users.
An extension of existing safety measures
The new feature expands on existing safety measures that have been enabled by default for Apple users under 13 since iOS 17, and are now available to everyone. These measures let an iPhone automatically detect explicit content in images or videos that kids receive or attempt to send through iMessage, AirDrop, FaceTime, and Photos. The detection happens entirely on-device to preserve user privacy.
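The article does not describe Apple's detection pipeline in detail, but since iOS 17 Apple has exposed the same kind of on-device nudity detection to developers through its SensitiveContentAnalysis framework. As a rough, non-authoritative sketch of what such an on-device check looks like, the snippet below analyzes an image locally; it assumes the app holds the required entitlement and that the user has a relevant detection setting enabled.

```swift
import Foundation
import SensitiveContentAnalysis

// Minimal sketch: check a received image for nudity entirely on-device.
// Requires iOS 17+ / macOS 14+ and the
// com.apple.developer.sensitivecontentanalysis.client entitlement.
func shouldIntervene(forImageAt url: URL) async -> Bool {
    let analyzer = SCSensitivityAnalyzer()

    // Analysis is gated by the user's settings: if neither Communication
    // Safety nor Sensitive Content Warning is enabled, the policy is .disabled.
    guard analyzer.analysisPolicy != .disabled else { return false }

    do {
        // The check runs locally; the image never leaves the device.
        let analysis = try await analyzer.analyzeImage(at: url)
        return analysis.isSensitive
    } catch {
        // On failure, fall back to showing the content without a warning.
        return false
    }
}
```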
How does the new feature work?
When explicit content is detected, young users are shown two intervention screens before they can proceed, along with resources and a way to contact a parent or guardian. With the new update, this warning also offers an option to report the explicit content directly to Apple. The device then compiles a report containing the explicit images or videos, along with the messages sent immediately before and after them.
Apple's response to reported explicit content
The report generated by the device also contains contact information from both accounts involved in the exchange, and users can add further details about the incident in a form. Apple will then review the report and may take action against an account, such as disabling its ability to send messages over iMessage, and may also report the incident to law enforcement authorities.
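Apple has not published a schema for these reports. Purely to illustrate the contents described above, a hypothetical payload might look like the following sketch (every type and field name here is invented, not Apple API):

```swift
import Foundation

// Hypothetical sketch of the report contents described above. Apple has not
// published an actual schema; every name here is invented for illustration.
struct ExplicitContentReport: Codable {
    let flaggedMedia: [Data]           // the explicit images or videos
    let surroundingMessages: [String]  // messages sent just before and after
    let senderAccount: String          // contact info for both accounts
    let recipientAccount: String
    let userProvidedDetails: String?   // optional extra info from the form
    let createdAt: Date
}
```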
Future plans for the new feature
Apple has said it plans to introduce the feature first in Australia with the latest beta update, and to roll the safety measure out globally later. The announcement comes just ahead of new Australian regulations, due to come into force by year-end, that will require tech firms to monitor child abuse and terror content on cloud and messaging services.