Pornhub to verify uploaders' IDs to moderate illegal content
In December, a New York Times piece accused Pornhub of publishing and monetizing illegal content. The website has since issued a press release explaining its content moderation measures. Further, the adult website has enlisted identity-verification firm Yoti to verify the identities of content uploaders. Pornhub attracts over 3.5 billion visitors a month and is listed as the eleventh most-visited website worldwide.
The website deleted nearly 80 percent of its hosted videos
Like YouTube, Pornhub allows user-generated content. The NYT report, however, accused the website of monetizing child rape, revenge pornography, and spy footage. Since the NYT article, the website has banned uploads from non-partnered uploaders. It then deleted uploads by unverified users, which amounted to nearly 80 percent of the content it hosted.
Third-party service Yoti to verify identity of uploaders seeking verification
All properties owned by Pornhub's parent company MindGeek now accept uploads only from its studio partners and verified users. New users seeking verification through Yoti must provide a current photo and a government-approved ID. Yoti says it can see people's submitted information for up to seven days while its facial recognition software establishes a match between the government ID and the submitted photo.
Yoti claims submitted data is secure, even from itself
Yoti was launched in 2014 and is based in London. It describes its approach as ethical and privacy-driven. After the seven-day window closes, Yoti says users' account information is encrypted and secured, even from itself. Ars Technica reports that Pornhub will not be able to see any data users submit to Yoti for verification.
Pornhub's trusted flagger program immediately disables videos
Besides the aforementioned policy changes, Pornhub has suspended downloads by unverified users. Human moderation teams are being expanded to manually review all uploads. The website now has a "trusted flagger program" that allows international non-profit groups such as NCMEC to flag videos they believe contain illegal content. Videos flagged under this program are immediately disabled, rather than remaining visible until reviewed.