Institutionalizing the “Shadow Ban”: Controlling the Gaza War Narrative


2023-10-30


After the October 7 operation and the beginning of the war on Gaza, social media websites were flooded with news and comments related to what was happening. These sites played a key role in conveying information and deconstructing the official narratives, as they often have in recent years, especially in this region amidst the successive upheavals it has witnessed. This role became pivotal in the last two weeks because of the way that major traditional media in the Global North – which are completely biased toward the Israeli narrative – have approached the subject, sometimes even reporting fake news and atrocity propaganda from the Zionist entity without maintaining any objective or critical distance. These media outlets have ignored what is happening on the other side of the wall of shame and recycled prejudices aimed at dehumanizing Gazans and Palestinians. They have thereby helped stoke hatred toward the Gaza Strip’s inhabitants and, as a result, reinforced the willful blindness to the massacres being perpetrated against Gazans and the inexorable push toward committing genocide against them.

 

In response, Gazans, activists sympathetic to their cause, and the alternative media had no choice but to turn to social media in order to report the actual events in Gaza and to critique and deconstruct the official narratives.

 

However, some users of these websites quickly noticed that the number of people viewing or engaging with their posts began to decline suddenly and significantly. Some even published screenshots from their account dashboards showing that views and engagements fell by hundreds from one post to the next (sometimes from 1,000 views to just one view, even hours after posting). These users realized that they had been subjected to “shadow bans”, a mechanism that social media platforms employ to limit certain users’ public visibility and block their posts from reaching others without informing them of the measure (though users sometimes notice it on their own). Notably, in recent days this kind of ban has exclusively affected accounts critical of the Zionist entity or condemning the atrocity of the genocidal plan being carried out in Gaza. Consequently, in an attempt to circumvent the algorithm’s automatic detection and shadow banning of “problematic users”, some began deliberately inserting personal posts (e.g. a picture of their cat or dog) between one “problematic” post and another. Others put spaces between letters (e.g. “P a l e s t i n e” instead of “Palestine”) or deliberately misspelled words such as “Gaza” and “Israel”. Most gravely, others stopped engaging with the topic altogether to avoid being banned, excluded from the virtual space, and thereby deprived of all its benefits (be they social or material).

 

This is the danger of the shadow banning phenomenon in the illusory world of “semi-public” virtual spaces. I am purposely using the term “semi-public” because these spaces are ultimately owned by private companies that follow their own rules and operating mechanisms, which are not necessarily consistent with democratic and human rights principles. The phenomenon is especially problematic in the context of social, activist, and racial struggles against the existing order.

 

Defining “Shadow Bans” and Their Application Mechanism

 

“Shadow banning” is a recent phenomenon in the virtual world whose half-acknowledged goal is to moderate the content on social media platforms and prevent “problematic” posts from reaching their users, in light of the volume of information circulating on these sites and the ease with which they can be accessed by anyone anywhere in the world. While some accounts are deleted and their users permanently banned when they make posts that violate laws in force (e.g. pornographic posts involving minors), “shadow banning” has emerged as a softer measure used against posts that are “problematic” but do not necessarily violate any laws. The definition of this phenomenon is neither clear nor precise, and this is the crux of the problem. The platforms concerned initially did not acknowledge clearly and publicly that the phenomenon exists, and they do not clearly define the concept of a “problematic post” or even inform users that they have been subjected to such a ban (users only discover it by chance). Early on, it was evident that although the phenomenon had affected all kinds of users (irrespective of their political views, social affiliations, and so on), it especially targeted marginalized social groups, activists, and regime opponents (or at least they, in particular, felt its effects). This point was raised by Black Lives Matter activists in the United States following the killing of George Floyd. Thus, it became clear that shadow banning could also be used as a tool to repress marginalized social groups, activist movements, and opponents of the political regime, which is exactly what we are witnessing today during the war on Gaza.

 

Institutionalizing “Shadow Bans” by Codifying the Virtual World

 

Because of social media platforms’ escalating role as news media around the world, the inability to control the information published on them, and the phenomenon of arbitrary shadow banning without regard for transparency or due process, some countries sought solutions for codifying what content can be published in the virtual world and how it is moderated. The first legal text to address the matter clearly was the Digital Services Act (Regulation (EU) 2022/2065, hereinafter the “DSA”) adopted by the European Parliament and the Council of the European Union in October 2022 under the auspices of the current President of the European Commission, Ursula von der Leyen. Von der Leyen had based her 2019 candidacy for the Commission’s presidency on her proposal of this law, and she herself has recently filled screens in a show of unconditional support for Israel and its devastating war on Gaza. As for the immediate factor that propelled Europe to adopt this law in 2022, it was said to be the need to prevent the spread of fake news concerning Russia’s invasion of Ukraine that same year.

 

Unlike similar laws that absolve online platforms of any responsibility for content posted by users (e.g. Section 230 of the United States’ Communications Decency Act), the DSA charges online platforms with some responsibility toward their users and, therefore, for the posts that reach them (especially those that violate laws), while also obligating them to adopt more transparent moderation policies. Firstly, the DSA explicitly mentions shadow banning in its recitals and then obligates platforms to publish its application mechanisms in their “terms and conditions”, to apply the mechanism objectively and in a manner proportionate to the content, to respect fundamental rights, and to inform the users concerned and allow them to challenge the decision before the platform itself or beyond it (i.e. to follow due process).[1] Most importantly, the DSA holds platforms partially responsible for illegal content posted on them. In particular, the law obligates them to take measures against such content (e.g. removing it, blocking it, or providing certain information to the authorities concerned) as soon as they learn or are informed of it. While the DSA defines “illegal content” as content that does not conform to the laws of the European Union or one of its member states, its recitals mention several types of content considered illegal, including pornographic content involving minors, private images shared without consent, and content involving hate speech, discrimination, and terrorism.

 

While the DSA’s goals may seem well-balanced for ensuring the safety of virtual public space, today we are witnessing its first practical applications in the context of Israel’s war on Gaza. On this basis specifically, the European Commission sent – on 12 October 2023, i.e. just five days after the war began – a binding request to X (formerly known as Twitter) to obtain information about how the platform is addressing hate speech, disinformation, and terrorism-related content concerning Israel’s war on Gaza (or on “Hamas”, according to the request) as part of its investigation into the matter. The request threatened punitive measures against the platform should it not comply. This measure, along with the effects of the DSA’s application, cannot be analyzed in isolation from the sociopolitical context in which it is being applied for the first time since its issuance, namely Israel’s war on Gaza. There are serious concerns that the new regulation and its spirit will be distorted to serve goals other than those for which it was enacted. Instead of being a tool to protect the safety of virtual public space (its stated goal), it could become a tool that the authorities use to control narratives, impose those that suit them, and repress their opponents under various pretexts and flexible concepts, thereby turning – in the present case – anybody who stands in solidarity with Gazans into a defender of terrorism and anyone who criticizes Israel’s crimes into an antisemite. Is the DSA failing its first test, given that Elon Musk, who owns X, subsequently threatened to remove the platform from Europe and thereby bar users on the continent from accessing it? Is the DSA transforming from a tool for curbing shadow banning into a tool for institutionalizing it? In that case, we would face a twofold problem: on one hand, private platforms’ control over content and their use of shadow bans whenever they please and without due process; on the other, political regimes’ institutionalization of shadow banning and their ability to force it on private platforms whenever doing so suits them, especially to repress their opponents.

 

Freedom of Expression Faces the Shadow Banning Test

 

Facing this reality, we must appeal to the rights and principles that still bear universal value in this heterogeneous world of ours in order to curb deviations from them emanating from both political regimes (e.g. Europe and the possibility of the DSA’s spirit being distorted) and the private platforms themselves. In this regard, we should recall the provisions of the International Covenant on Civil and Political Rights (ICCPR), especially Article 19. The ICCPR is binding on all states party to it (including the European countries and the United States) and their political regimes, as well as directly on private sector institutions (the private platforms) because of the direct horizontal effect of these provisions protecting fundamental rights and freedoms vis-à-vis third parties (i.e. the Drittwirkung doctrine). Paragraph 2 of Article 19 stipulates that, “Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice”. Freedom of expression is a precondition for anchoring the principles of transparency and accountability, which in turn are key to strengthening and protecting human rights. The obligation to respect freedom of expression also requires states to ensure that people are protected from any acts by private persons or entities that would impair the enjoyment of this freedom.[2]

 

On the other hand, Paragraph 3 of the same article stipulates that the exercise of the right to freedom of expression entails special duties and responsibilities. For this reason, the paragraph permits two limitative areas of restrictions on the right, which relate to respecting the rights or reputations of others and protecting national security, public order, public health, or public morals. However, according to the UN Human Rights Committee, when a state party imposes restrictions on the exercise of freedom of expression, these restrictions must not jeopardize the essence of the right itself. The committee also notes that the relation between right and restriction and between norm and exception must not be reversed.[3] Paragraph 3 lays down specific conditions and only permits restrictions that meet them: the restrictions must be “provided by law”, this legal norm must be formulated with sufficient precision to enable individuals to regulate their conduct accordingly, and the restrictions must only be imposed for one of the grounds set out in the aforementioned provisions of Paragraph 3 and must satisfy the strict tests of necessity and proportionality. The committee also states that “Paragraph 3 may never be invoked as a justification for the muzzling of any advocacy of multi-party democracy, democratic tenets and human rights”.[4] The restrictions imposed must also not be excessive. Moreover, in General Comment no. 27, the committee states that:

 

“Restrictive measures must conform to the principle of proportionality; they must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve the desired result; and they must be proportionate to the interest to be protected… The principle of proportionality has to be respected not only in the law that frames the restrictions, but also by the administrative and judicial authorities in applying the law”.[5]

 

In our present case, amidst Israel’s war on Gaza, the argument that private platforms and political regimes (Europe) are making to justify shadow banning certain users is the need to combat terrorism. They are thereby deliberately conflating Hamas with anyone who supports the Palestinian cause, sympathizes with Gazans, or criticizes Israel, as if the connection had become self-evident. In this regard, the UN Human Rights Committee, commenting on Article 19 of the ICCPR, notes that, “Such offences as ‘encouragement of terrorism’ and ‘extremist activity’, as well as offences of ‘praising’, ‘glorifying’, or ‘justifying’ terrorism, should be clearly defined to ensure that they do not lead to unnecessary or disproportionate interference with freedom of expression. Excessive restrictions on access to information must also be avoided”.[6] Hence, we must think about the issue from the perspective of how these terms are being used today in the sociopolitical context of Israel’s war on Gaza and the various means of resisting it. The word “terrorism” and its derivatives must not turn into a flexible concept used to suppress any view opposing Israel or the policies of political regimes in this regard, in a manner reminiscent of historical precedents (e.g. the use of the word “terrorism” itself to repress Nelson Mandela and his supporters in the days of South African apartheid, or the use of the specter of “communism” to suppress activist movements in the United States in the days of McCarthyism).

 

Beyond the above, we should also recall what truly does require the establishment of controls: war propaganda and the promotion of hatred toward a particular social group (Gazans), a matter that is being totally ignored today. Article 20 of the ICCPR obligates states parties to prohibit any propaganda for war or advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence. What we are currently witnessing – especially from the Israeli entity and even from the opposite shore of the Mediterranean – constitutes, in several regards, a continuous and gross violation of these provisions.

 

This article is an edited translation from Arabic.

 

[1] Paddy Leerssen, “An End to Shadow Banning? Transparency Rights in the Digital Services Act Between Content Moderation and Curation”, Computer Law & Security Review, vol. 48, April 2023.

[2] Human Rights Committee, General Comment no. 34, 2011.

[3] Ibid.

[4] Ibid.

[5] Human Rights Committee, General Comment no. 27, 1999.

[6] Human Rights Committee, General Comment no. 34, 2011.
