07 October 2024
Assumptions About Censorship in the Digital Domain Are Not Always What They Seem
SYNOPSIS
Despite their public reputation as libertarian bastions of free speech, large private online communication platforms do not necessarily uphold the principles that underpin the freedom of information, particularly where the public interest is concerned.
COMMENTARY
While voices against online content moderation have been chiefly confined to a minority of supposed “free speech absolutists”, the recent arrest of Telegram chief Pavel Durov raised a few more public eyebrows. Unease at Durov’s detention by French prosecutors arose not only from high-profile anti-censorship proponents but also from within communities reliant on Telegram for vital, unfiltered information. A similar situation is seen in Russia, where the messaging app is widely used both by the government and its rivals.
Denying any political motivations, French authorities have emphasised Telegram’s lack of appropriate moderation and its resultant complicity in cybercrime, including child sexual exploitation, drug-related offences, and other illegal content circulated on the app under cover of encryption. Durov’s arrest evokes familiar clichés, such as “state suppression of speech” and “infringement of the private sphere by public entities”, and revives debates about the exercise of regulatory oversight by governments.
However, this framing insinuates that censorship is the sole preserve of states and governments. Inclined as they are to focus on the act of restricting online content, many impassioned defenders of free speech often overlook one of the largest potential curtailers of private (and public) discourse: the private sector itself.
Who wields the power to censor?
In the digital realm, content moderation and censorship are sometimes distinguished by intent; at other times, the two are simply conflated. The crux of the matter, however, may instead be one of relative reach and influence.
The notion of censorship imposed by a higher authority is often associated with powerful state organs that retain a “monopoly on control” (echoing Weberian concepts of the state’s monopoly on violence). Newer theories of structural power in political economy, however, emphasise the immense – and still growing – role of large corporations and, consequently, direct contestation between private enterprises and states across multiple domains. This includes the communications sector, where the sway held by large media companies over information flows and the dissemination of narratives can easily rival (or even surpass) that of states. The outsized influence these corporations hold arguably gives them the power to censor.
The evolution of large online platforms from the late 20th century to the present day mirrors the transformation of their influence. On the one hand, these platforms have long adopted common legal measures (such as terms of service and privacy policies), which initially maintained oversight over how users interacted with content while upholding “open” internet access principles. On the other hand, platform owners can now increasingly exploit what users post, including by suppressing material they deem undesirable.
Who wants the power to censor?
Durov’s arrest sets a precedent: it establishes criminal accountability for online platform owners over how their platforms are used (and abused). To date, few (if any) owners of large online communication or social media platforms have been held criminally liable for the user-generated activity and content that their platforms host.
On the surface, Durov’s anti-establishment ethos, his public commitment to user privacy and encryption, and the app’s popularity among opposition movements in authoritarian states imply that Telegram’s owners neither engage in censorship nor intend to do so. Indeed, Telegram’s laissez-faire content moderation philosophy has even earned it the title of a “go-to app for troublemakers”.
Yet, responses to Durov’s arrest from his most ardent supporters have been revealing, sometimes even contradictory. X owner Elon Musk, who was quick to reiterate his description of content moderation as “propaganda” for censorship, has himself proven a willing (and prolific) remover of content on his own platform. Decisions to obscure content there are often shaped by Musk’s personal views, and some evidence even suggests that the platform deliberately limits users’ access to politically opposed sources.
Conservative American political commentator Tucker Carlson also labelled Durov “a living warning to any platform owner who refuses to censor the truth at the behest of governments”. Speaking with Carlson last April, Durov had emphasised his reluctance to comply with government directives to restrict access to certain forms of content, stating that he would not consider requests deemed to impinge upon Telegram’s values of free speech and privacy.
However, Durov’s position can also be read as a subjective, discriminating judgement on what content is appropriate, grounded in a platform owner’s personal beliefs. Moreover, even assuming a hands-off approach to content regulation by default, Telegram, contrary to its pro-privacy stance, does not enable end-to-end encryption by default on most user conversations. This strongly suggests that it can keep a much closer eye on private communications than it publicly discloses.
Where else might we identify “corporate censorship”?
The above examples illustrate large online platforms’ readiness to scrutinise users’ speech and expression according to their owners’ interests. However, personal objectives can also be strongly intertwined with commercial priorities. Examples include Google’s internal content moderation policies regarding the Israel-Hamas conflict, which are linked to its contract to provide Israel with cloud computing services, and moderation on X prompted by the potential loss of lucrative advertising revenue.
Having said this, corporate censorship does not necessarily reflect commercial or private interests in competition with state or public agendas. In China, where private sector companies have significant state ties, the state may use influential private firms as tools to achieve its political aims. This has recently fuelled speculation about politically motivated censorship on Chinese private gaming platforms, such as instructions for players to avoid discussing specific political issues and COVID-19-related content in connection with the newly released game Black Myth: Wukong.
Final Takeaways
Despite regulatory efforts by authorities to rein in the outsized influence of large private corporations and their online platforms, some members of the public – particularly those who favour certain apps and platforms for personal or political reasons – may nonetheless remain suspicious about state intentions.
To improve transparency and increase public trust, governments can augment ongoing regulatory efforts with initiatives to broaden the public’s involvement in the regulatory process. For example, the European Union’s Digital Services Act includes mechanisms for granting public interest researchers access to internal platform data, for appointing trusted independent flaggers to detect illegal content, and for enabling whistleblowers with insider knowledge of platforms to notify authorities of potentially unlawful activity.
Also, existing laws that ostensibly protect the interests of large online platforms may be reinterpreted in ways that curb their outsized influence. A current example is Section 230 of the Communications Decency Act in the United States, which typically exempts large online platforms from liability for user-generated content and activity. Because the law also exempts platforms from liability for taking down objectionable content, some lawmakers are considering whether Section 230 could empower users to remove or customise content on online platforms at their own discretion rather than that of powerful corporations.
Most importantly, the multifaceted and complicated nature of corporate censorship underlines not only the seriousness of the problem but also the need to avoid well-worn assumptions and clichés about online content moderation.
About the Authors
Sean Tan and Tan E-Reng are, respectively, Senior Analyst and Research Analyst in the Centre of Excellence for National Security (CENS) at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.