How AI is helping Facebook tackle terrorist threats online
Facebook has revealed how it has moved from human intervention to AI to detect terrorism-related posts. Founder Mark Zuckerberg first detailed his AI-based plan in February, and since then the move to a fully-automated system has been significant: AI now detects 99% of the terrorism-related material Facebook removes. However, the company acknowledges it still has much to do. Here's how Facebook detects and removes such content.
How Facebook detects questionable imagery and helps others too
Facebook relies heavily on automated photo- and video-matching: imagery previously used online by terrorist groups is automatically detected when posted again. For this, the platform shares 'hashes', unique codes generated from images, with other firms, so that a flagged photo or video can be caught on any platform. This lets organizations check for controversial material without storing the imagery itself, saving considerable storage.
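The hash-sharing idea above can be sketched in a few lines. This is a simplified illustration, not Facebook's actual system: real deployments use perceptual hashes (PhotoDNA-style fingerprints robust to resizing and re-encoding) rather than the exact-match cryptographic hash used here, and the shared database name is hypothetical.

```python
import hashlib

# Hypothetical database of hashes shared between partner firms.
# Only the codes are stored, never the imagery itself.
SHARED_HASH_DB: set[str] = set()

def image_hash(data: bytes) -> str:
    """Generate a unique code for an image's bytes.

    A real system would use a perceptual hash so that re-encoded
    or slightly altered copies still match; SHA-256 only catches
    byte-for-byte identical files.
    """
    return hashlib.sha256(data).hexdigest()

def flag_image(data: bytes) -> None:
    """Record a known terrorist image's hash in the shared database."""
    SHARED_HASH_DB.add(image_hash(data))

def is_flagged(data: bytes) -> bool:
    """Check a newly uploaded image against the shared hashes."""
    return image_hash(data) in SHARED_HASH_DB
```

Because only the digest is exchanged, a partner platform can block a known image at upload time without ever receiving or retaining the original file.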
Facebook can also look out for certain words or phrases
Facebook can also check for terrorism-related text. For this, it uses text-based machine learning, which trains software to detect certain words or phrases. The platform says that once material has been flagged, 83% of subsequently uploaded copies are removed within an hour. In some cases, material is removed before it ever goes live.
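A crude stand-in for such a text classifier might look like the sketch below. This is an assumption-laden illustration, not Facebook's method: the phrase list and threshold are invented, and a production system would use a trained machine-learning model rather than literal phrase matching.

```python
# Hypothetical phrases a trained model might have learned to weight heavily.
FLAGGED_PHRASES = ["join the caliphate", "martyrdom operation"]

def flag_score(text: str) -> float:
    """Score a post from 0.0 to 1.0 by the fraction of flagged
    phrases it contains (a stand-in for a learned classifier's
    confidence score)."""
    lowered = text.lower()
    hits = sum(1 for phrase in FLAGGED_PHRASES if phrase in lowered)
    return hits / len(FLAGGED_PHRASES)

def should_review(text: str, threshold: float = 0.5) -> bool:
    """Route a post to removal/human review if its score crosses
    the (hypothetical) threshold."""
    return flag_score(text) >= threshold
```

A score-plus-threshold design like this lets a platform tune the trade-off between catching harmful posts and wrongly flagging benign ones.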
The challenges for such a system are many
Facebook says it has so far focused on Al Qaeda and IS, but still has to work on identifying other groups. A common system might not work across all groups due to "language and stylistic differences in their propaganda", employees say. Moreover, some content, like hate speech, is harder to detect with automation. However, the company claims its processes have become more systematized than before.
UK, Germany and others clamp down on social media sites
Meanwhile, countries are waking up to the increasing danger. UK PM Theresa May and other European leaders have said they want terrorism-related material removed within two hours. Germany has passed a law allowing the government to fine social media firms if they fail to remove "manifestly unlawful" posts within 24 hours. Countries are likely to put more pressure on companies to tackle such content.