Facebook automatically generating pages for Islamic State and Al Qaeda

Facebook is automatically generating pages for terrorist groups, including Islamic State and Al Qaeda, a whistleblower claims.

In the face of criticism that Facebook is not doing enough to combat extremist messaging, the company likes to say that its automated systems remove the vast majority of prohibited content glorifying the Islamic State group and Al Qaeda before it’s reported.

A whistleblower complaint shows that Facebook has inadvertently provided the two extremist groups with a networking and recruitment tool by creating dozens of pages in their names.

According to the complaint, a simple search on Facebook uncovered hundreds of profiles praising terrorist groups, written in Arabic and English.

One of the automatically generated pages is called ‘I love Islamic State’ and features an image of the Facebook thumbs-up with the terrorist group’s logo in the center.

The page shows clear support for terrorism, in violation of Facebook’s policies, yet it was created by Facebook itself and is still active.

The National Whistleblowers Center in Washington carried out a five-month study of the pages of 3,000 members who liked or connected to organizations proscribed as terrorist by the US government. The center said it has filed a complaint with the US Securities and Exchange Commission on behalf of a source who preferred to remain anonymous.

Facebook and other social media platforms have been under fire for not doing enough to curb messages of hate and violence, while at the same time being criticized for failing to offer equal time for all viewpoints, no matter how unpleasant.

In March, Facebook announced bans on praise or support for white nationalism and white separatism on the social network and on Instagram.

Facebook said it had been removing terror-linked content “at a far higher success rate than even two years ago” since investing in better technology. In a statement late last year, the company announced that it is using machine learning to weed out terrorist content.

Several countries have recently taken steps to regulate online content. In early April, Australia passed legislation setting out fines and punishments for social media sites that host hate content, and the U.K. has proposed making social media executives personally responsible for harmful content shared on their platforms.

After making heavy investments, the company says, it is detecting and removing terrorism content at a far higher success rate than even two years ago. It does not claim to find everything, and it says it remains vigilant in its efforts against terrorist groups around the world.

The complaint noted that Facebook CEO Mark Zuckerberg boasted in 2018 of his firm’s use of artificial intelligence to purge terror content related to Al Qaeda and ISIS.

Facebook said that to keep hate groups off its platform, it applies similar tools to those it uses against foreign terror groups.
