Islamophobia: Facebook failing to check hate speech, fake news in India

Oct 25, 2021: According to documents obtained by the Associated Press, Facebook in India has been unable to curb hate speech, misinformation and provocative posts, especially anti-Muslim content, even as its own employees have cast doubt on the company’s motivations.

From research as recent as March this year to a company memo dating back to 2019, Facebook’s internal documents on India show the company’s struggle to eliminate abusive content on its platforms while operating in the world’s largest democracy and its largest growth market.

India has a history of sectarian and religious tensions that boil over on social media and fuel violence. The so-called Facebook Papers, leaked by whistleblower Frances Haugen, show that the company has been aware of these problems for years, raising questions over whether it has done enough to address them.

Many critics and digital experts say it has failed to do so, especially in cases involving members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party (BJP). Around the world, Facebook has become increasingly important in politics, and India is no different.

Modi has been credited with leveraging the platform to his party’s advantage during elections, and a Wall Street Journal report last year cast doubt on whether Facebook was selectively enforcing its hate speech policies to avoid blowback from the BJP.

The leaked documents include a trove of internal company reports on hate speech and misinformation in India, much of it intensified, in some cases, by the platform’s own “recommended” feature and algorithms.

But they also include staffers’ concerns over the company’s mishandling of these issues, and the discontent they expressed about the viral “malcontent” on the platform.

According to the documents, Facebook saw India as one of the most “at-risk countries” in the world and identified both Hindi and Bengali as priorities for “automation on violating hostile speech”. Yet the company did not have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.

In February 2019, before a general election in India and with fears of misinformation running high, a Facebook employee wanted to understand what a new user in the country saw on their news feed if all they did was follow the pages and groups recommended by the platform.

The employee created a test user account and kept it active for three weeks, during which an extraordinary event shook India – a suicide attack in Indian-administered Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.

In the note, titled “An Indian Test User’s Descent into a Sea of Polarising, Nationalistic Messages”, the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed, which “has become a near constant barrage of polarising nationalist content, misinformation, and violence and gore”.

Seemingly benign groups recommended by Facebook quickly morphed into something else altogether, with hate speech, unverified rumours and viral content running rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.

One showed a man holding the bloodied head of another man draped in a Pakistani flag, with an Indian flag in place of his own head. The platform’s “popular on Facebook” feature surfaced a slew of unverified material about India’s retaliatory strikes in Pakistan after the bombing, including a video game clip that one of Facebook’s fact-checking partners debunked.

It sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

“Following this test user’s News Feed, I have seen more images of dead people in the past three weeks than I have seen in my entire life total,” the researcher wrote.

“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

The memo exposed how the platform’s own algorithms and default settings played a part in spurring such harmful content.

The employee noted that there were clear “blind spots,” particularly in “local language content”. They said they hoped these findings would start conversations on how to avoid such “integrity harms”, especially for those who “differ significantly” from the typical US user.

In January 2019, a month before the test user experiment, another internal assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags were not clear enough to users, stressing that the company needed to do more to stem hate speech and fake news.

Alongside misinformation, the leaked documents reveal another problem plaguing Facebook in India: anti-Muslim propaganda, especially by hardline Hindu supremacist groups.

In April last year, misinformation targeting Muslims again went viral on the platform as the hashtag “Corona Jihad” flooded news feeds, blaming Muslims for spreading COVID-19. The hashtag was popular on Facebook for several days but was later removed by the company.

Some video clips and posts purportedly showed Muslims spitting on authorities and hospital staff. They were quickly proven to be fake, but by then India’s communal fault lines, still stressed by deadly riots a month earlier, were again split wide open.

The misinformation triggered a wave of violence, business boycotts and hate speech towards Muslims. Thousands from the community were confined to institutional quarantine for weeks across the country. Some were even sent to jail, only to be later exonerated by the courts.

The documents reveal that Facebook’s leadership dithered over designating a Hindu legislator from Modi’s BJP as a “dangerous individual” – a classification that would have banned him from the platform – after a series of anti-Muslim posts from his account. The delay prompted concern among some employees, one of whom wrote that Facebook was only designating non-Hindu extremist organisations as “dangerous”.

In one document titled “Lotus Mahal”, the company noted that members with links to the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls to oust Muslim populations from India” to “Love Jihad”, an unproven conspiracy theory pushed by hardline Hindu groups accusing Muslim men of using interfaith marriage to coerce Hindu women into converting.

The research found that much of this content was “never flagged or actioned” because Facebook lacked “classifiers” and “moderators” in Hindi and Bengali.

The company said its nomination process involves a review of each case by relevant teams across the company; however, it did not disclose whether the Hindu nationalist group had been designated as “dangerous”.
