Social media companies have denied claims that they "shadow-banned" users for posting about the Palestinians amid ongoing violence in Gaza. The firms argued that the accusation that Big Tech "deliberately and systemically" suppresses people's voices is unfounded.
Since the start of the conflict between Israel and Hamas in October 2023, tech giants have been accused of restricting certain content or accounts within their online communities.
The companies have come under fire for over-reliance on automated content removal tools when moderating or translating information related to Palestine, according to a study by Human Rights Watch (HRW).
Snapchat, Instagram, and Facebook have become major hubs for people seeking news and information on the conflict as an online information campaign unfolds between pro-Israeli and pro-Palestinian narratives.
'Shadow Ban' Concerns
Several anonymous Instagram users told CNBC that their posts and stories about the conflict in Gaza, including social commentary from Palestinian and pro-Palestinian voices, received fewer likes than their non-war-related posts.
Some users said their posts were not immediately visible to followers or were skipped in story sequences. Instagram also allegedly removed certain posts for not adhering to its "community guidelines."
Concern that Instagram was filtering some information grew further when Meta introduced a fact-checking feature in December 2023.
Between October and November 2023, HRW's research documented more than 1,000 "takedowns" of content from Instagram and Facebook in more than 60 countries.
False Implication
According to a Meta representative who spoke with CNBC, the HRW report downplays the challenges of enforcing platform policies at a global scale amid a highly politicized and intense conflict. "Our policies are designed to give everyone a voice while at the same time keeping our platforms safe ... The implication that we deliberately and systemically suppress a particular voice is false."
The spokesperson added that, given the increased volume of reports on its platforms, Meta is aware that content that does not breach its policies may be accidentally removed.
Meanwhile, Hussein Freijeh, Snapchat's Vice President for the Middle East and North Africa (MENA), said last week that the algorithms needed to regulate content are in place, and that the platform also employs human moderators to oversee content moderation and further ensure the online community's safety.