Facebook explained how it is using artificial intelligence software to spot images, videos, and text related to terrorism, as well as clusters of fake accounts that may have been set up by terrorists.
The move — detailed in a blog post by the Californian tech giant on Thursday — comes after several world leaders put pressure on the company to stop the spread of extremist content on its platform.
“We want to find terrorist content immediately, before people in our community have seen it,” Facebook wrote in the first post on its new “Hard Questions” blog.
“Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook.”
Facebook said it is using custom-built image matching and language understanding systems to detect inappropriate content on its social media platform, which had 1.94 billion monthly active users as of March 31.
The image matching systems can detect when a user is trying to upload content that has been marked as inappropriate in the past. If the media matches previously removed content, then the image matching system will automatically block the upload.
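Facebook has not published the details of its matching system, but the idea can be sketched with a toy hash-based matcher: hash each piece of removed media, then check new uploads against the hash set. (Production systems use perceptual hashes that survive re-encoding and cropping; exact cryptographic hashing is used here only to keep the sketch simple.)

```python
import hashlib


class ImageMatcher:
    """Toy matcher: blocks uploads that are byte-identical to removed media.

    A real system would use a perceptual hash robust to resizing and
    re-compression; SHA-256 is a stand-in for illustration only.
    """

    def __init__(self):
        self._blocked_hashes = set()

    def mark_removed(self, media: bytes) -> None:
        """Record the fingerprint of content that moderators removed."""
        self._blocked_hashes.add(hashlib.sha256(media).hexdigest())

    def should_block(self, media: bytes) -> bool:
        """Return True if an upload matches previously removed content."""
        return hashlib.sha256(media).hexdigest() in self._blocked_hashes
```

An upload whose bytes match a removed item is rejected before it ever appears; anything else passes through to the normal pipeline.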
In terms of language understanding, Facebook said it has “recently started to experiment with using AI to understand text that might be advocating for terrorism”. The social media behemoth said it is currently analysing text that it has already removed so that it can develop “signals” to help it prevent similar text being uploaded in the future.
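The blog post does not say what form these "signals" take. One minimal reading, sketched below with stdlib Python only, is to extract the most frequent terms from already-removed posts and score new text by its overlap with them; Facebook's actual models are certainly far more sophisticated.

```python
from collections import Counter


def learn_signals(removed_texts, top_k=5):
    """Derive the most frequent terms across already-removed posts."""
    counts = Counter()
    for text in removed_texts:
        counts.update(text.lower().split())
    return {word for word, _ in counts.most_common(top_k)}


def flag_score(text, signals):
    """Fraction of a new post's words that match the learned signals."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in signals)
    return hits / len(words)
```

A high score would route the post to human review rather than trigger automatic removal, matching the company's caveat below that algorithms still struggle with context.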
Facebook also said that it is using AI “fan out” algorithms to identify and remove “terrorist clusters”. These algorithms look at the relationships between Facebook users and Pages/Groups that have been flagged as being potentially dangerous.
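Facebook does not describe the algorithm, but "fan out" from flagged nodes reads like a bounded breadth-first search over the social graph. The sketch below assumes a simple adjacency-dict representation; the node names and hop limit are illustrative.

```python
from collections import deque


def fan_out(graph, flagged_seeds, max_hops=2):
    """Breadth-first search outward from flagged accounts or Pages.

    Returns the cluster of nodes within `max_hops` of any seed, which a
    review team could then examine. `graph` maps each node to an
    iterable of connected nodes (friends, members, admins).
    """
    cluster = set(flagged_seeds)
    frontier = deque((node, 0) for node in flagged_seeds)
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not expand past the hop limit
        for neighbour in graph.get(node, ()):
            if neighbour not in cluster:
                cluster.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return cluster
```

The hop limit keeps the cluster tight: accounts directly tied to a flagged Page are surfaced, while the wider graph is left alone.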
In terms of detecting fake accounts set up by terrorists, Facebook said it is fighting a constant battle but that it is getting faster. “We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly,” the company claimed.
Facebook said it is also working on systems that will allow the company to take action against terrorists across all of the platforms it owns and operates. That includes Messenger and WhatsApp, which have over a billion users, and Instagram, which is approaching a billion users.
Yann LeCun, Facebook’s head of AI research, tweeted a link to the blog post, saying “As a global communication platform, Facebook sometimes faces difficult questions whose answers are not obvious….”
AI can’t tackle the terrorists on its own
But AI can’t do it all. Facebook has employed 4,500 moderators to help spot terrorist activity and it expects the team to grow to 7,500 people by the end of the year. That would be a significant proportion of Facebook’s overall workforce — the company had 18,770 employees as of March 31. However, only around 150 of the current team focus specifically on counterterrorism.
“Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people when it comes to understanding this kind of context,” the company wrote. “A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story.”
The Guardian revealed on Friday that more than 1,000 of Facebook’s content moderators were put at risk after a bug inadvertently exposed their personal details to suspected terrorist users.
The bug reportedly caused the personal profiles of content moderators to automatically appear as notifications in the activity log of Facebook groups whose administrators were removed from the platform for breaching the terms of service.
Governments are clamping down on the tech companies
Facebook and its platforms have come under increasing levels of scrutiny in the wake of recent terror attacks. European leaders such as UK Prime Minister Theresa May and French President Emmanuel Macron have taken a particularly hard stance, launching a joint campaign earlier this week. The duo are looking at creating new laws that could see tech firms punished if they fail to remove certain types of content.
“We cannot allow this ideology the safe space it needs to breed,” May said on Sunday morning during a speech on Downing Street.
“Yet that is precisely what the internet — and the big companies that provide internet-based services — provide.”
She continued: “We need to work with allied, democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorist planning. And we need to do everything we can at home to reduce the risks of extremism online.”
Digital campaigners are concerned that governments will end up stifling free speech and freedom of expression as they look to crack down on large tech platforms.
Jim Killock, executive director of Open Rights Group, said in a statement on the organisation’s website: “Theresa May could push these vile networks into even darker corners of the web, where they will be even harder to observe.”
Killock added: “But we should not be distracted: the Internet and companies like Facebook are not a cause of this hatred and violence, but tools that can be abused. While governments and companies should take sensible measures to stop abuse, attempts to control the Internet is not the simple solution that Theresa May is claiming.”
Germany is also trying to push through legislation that would see tech companies fined up to €50 million (£44 million) if they fail to remove extremist content or fake news.