Following many recent terror attacks, a common question is whether tech companies are doing enough to fight terrorism online. To answer it, Facebook published a blog post today explaining how the company fights terrorism on its platform. Facebook says terrorism has no place on its platform and that it removes such content as soon as it becomes aware of it, but it admits it faces an enormous challenge in monitoring the posts of over 2 billion people in more than 80 languages.

So how does Facebook fight terrorism on a platform used by so many people? The latest answer to that question is artificial intelligence. At the moment, many of the accounts spreading terrorist content are identified by Facebook itself, but the company has now started using AI to identify and remove terrorist content. Many of these efforts are still in their early stages, so they are currently focused on content related to ISIS, Al Qaeda, and their affiliates. Over time, as these solutions mature, Facebook hopes to expand them to combat content posted by other terrorist organizations.

But what are these techniques? For starters, there's image matching, which checks whether an image someone is uploading matches known terrorist photos or videos. Cross-platform collaboration across all of Facebook's apps, such as WhatsApp and Instagram, is also being utilized. Facebook has additionally begun experimenting with AI that translates and interprets text in different languages to determine whether it advocates terrorism. The company is also building algorithms to detect fake accounts used by terrorists and to remove terrorist clusters such as pages, groups, posts, and profiles.
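Facebook hasn't published the details of its image-matching system, but tools like this typically rely on perceptual hashing: an upload is reduced to a compact fingerprint and compared against a database of fingerprints of known terrorist imagery. Here's a minimal sketch of that idea, assuming a simple "average hash" scheme; the `KNOWN_HASHES` set and distance threshold below are hypothetical illustrations, not Facebook's actual values.

```python
# Minimal sketch of hash-based image matching, assuming a simple
# "average hash": resize to 8x8 grayscale, threshold each pixel at
# the mean, then compare fingerprints by Hamming distance.
# Facebook's production system is proprietary; names and values here
# are illustrative only.
from PIL import Image


def average_hash(path: str) -> int:
    """Compute a 64-bit perceptual hash for the image at `path`."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of hashes of known terrorist imagery.
KNOWN_HASHES = {0x8F3B1C2A99D0E4F1, 0x00FF00FF00FF00FF}


def matches_known_content(path: str, max_distance: int = 5) -> bool:
    """Flag an upload if its hash is near any known-bad hash."""
    h = average_hash(path)
    return any(hamming(h, known) <= max_distance for known in KNOWN_HASHES)
```

The appeal of this approach is that slightly altered copies of an image (re-encoded, resized, lightly cropped) still land within a few bits of the original hash, so a single takedown can catch many re-uploads.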

Yet, despite these advanced AI technologies, the greatest asset Facebook has in its fight against terrorism online is people. This is because, as Facebook explains, "AI can't catch everything. Figuring out what supports terrorism and what does not isn't always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story."

Thus, the main weapon in Facebook's fight to identify and stop terrorist content is the reports submitted by the people who use it. Facebook says its community operations team works 24 hours a day, in multiple languages, to verify these reports. The company also has a dedicated team of over 150 counter-terrorism specialists, as well as a global team on standby, ready to respond within minutes to requests from law enforcement organizations.

Similarly, Facebook has supported and joined initiatives by governments and NGOs to combat the spread of terrorist content online. The company is also working in partnership with Microsoft, Twitter, and YouTube to create a shared industry database that identifies content produced by or in support of terrorist organizations.
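The mechanics of that shared database haven't been published in detail, but the basic idea is that each partner contributes fingerprints of content it has already identified and removed, and the others can check new uploads against the pooled set. Here's a minimal sketch under that assumption; the `SharedHashDB` class and its API are purely illustrative, not any company's real interface.

```python
class SharedHashDB:
    """Toy model of an industry-shared database of content fingerprints."""

    def __init__(self) -> None:
        # hash -> name of the company that first contributed it
        self._hashes: dict[int, str] = {}

    def contribute(self, content_hash: int, company: str) -> None:
        """A partner adds the hash of content it has identified and removed."""
        self._hashes.setdefault(content_hash, company)

    def lookup(self, content_hash: int) -> str | None:
        """Return the original contributor if the hash is known, else None."""
        return self._hashes.get(content_hash)


# Example: one partner contributes a hash; another checks an upload against it.
db = SharedHashDB()
db.contribute(0x8F3B1C2A99D0E4F1, "Facebook")
print(db.lookup(0x8F3B1C2A99D0E4F1))  # -> "Facebook"
print(db.lookup(0x1234567890ABCDEF))  # -> None
```

The design point is that companies share fingerprints rather than the content itself, so flagged material can be recognized across platforms without redistributing it.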

So is Facebook doing enough to combat the spread of terrorist content online? Time and time again, we've seen exactly how ineffective its primary tool, the user reporting function, can be. A Facebook page can spread terrorist propaganda, advocate racism, or post unwarranted pictures of schoolgirls, yet unless a large number of people submit reports, Facebook is unlikely to take any action.

However, monitoring a platform used every day by nearly 2 billion people in more than 80 languages is no easy task. It's clear that Facebook is trying to ensure that its platform is a safe one, and once its AI tools mature, the task will likely become much easier. But until then, Facebook will be fighting a long, tiring war to keep its platform safe.
