Social Media Platforms and Authoritarian Censorship in Asia
Chinese censorship on social media platforms has gone beyond the Great Firewall to target English-language accounts outside China, whether through global Chinese-owned services or through state-sanctioned economic pressure against foreign companies. To better understand this trend, Yen-Zhi Peng interviewed Dean Jackson from the International Forum for Democratic Studies at the National Endowment for Democracy. He offers insights into how authoritarian regimes in China and other Asian countries exert influence over social media globally and discusses the implications for the future of popular platforms.
Despite being blocked in China, both Facebook and YouTube have censored and removed transnational Chinese-language comments critical of the Chinese Communist Party (CCP). Are these instances isolated accidents, as the companies have claimed, or indicative of a larger pattern of censorship on China's behalf?
It’s hard to say from the outside whether individual instances like the ones you mention indicate a purposeful effort to comply with CCP censorship requests. Content moderation is a tough job in the best of circumstances. Most platforms rely on outside contractors to enforce rules that can sometimes be quite complex. Most of these contractors are concentrated in a few Western countries, meaning outcomes are often worse for languages other than English.
We do know at least two things that should deeply concern us. First, there are enormous financial incentives for platforms to subordinate free expression to Beijing's demands, and platforms have in the past developed prototypes of tools for proactive censorship in mainland China. Tech companies are spending time and money, at a minimum, to plan for future scenarios in which they would be willing to "play ball" with Beijing's censors.
The second concern is that authoritarian governments absolutely do try to find ways to enforce censorship on platforms, with or without the witting help of the platforms themselves. For example, state employees and “patriotic” trolls supporting authoritarian regimes sometimes report content from government critics en masse, triggering automatic takedowns, which are difficult to reverse without a direct line to corporate officials. And as you rightly note, platforms do at times knowingly comply with censorship requests they deem to be legal under local law.
When you take all these things together, the big picture is that content moderation is prone to mistakes and accidents, but platforms also make purposeful choices that affect the global environment for free expression.
Some global social media platforms tailor their policies to local users and laws. For example, in 2017, Facebook deleted posts that Thai and Vietnamese authorities regarded as "illegal" or "toxic" in order to avoid being shut down in those countries. How do social media companies decide whether to comply with governments that seek to restrict speech?
Platforms have an incentive to avoid challenging local authorities or offending cultural sensitivities, even when that leads to suppression of political speech. If a country or user is high profile enough, international focus on them might provide some protection against censorship. But most users cannot rely on this.
The individual making the call on content takedowns matters immensely. Not every country has locally based content moderators; instead, they are clustered in a handful of countries. What is considered offensive in one part of the world might be quite permissible in another. Moderators may not be familiar with the local political context, or they may not recognize what a local observer would clearly identify as hate speech—and that is assuming there are enough moderators with the language skills to review a given post in the first place.
In the last few years, platforms have gone from being scrutinized for taking down too much content at the behest of governments to now being criticized for not doing enough to restrict the spread of misinformation and disinformation. This is not a double standard but a reflection of the fact that these enormously lucrative and influential businesses have found their way to the center of the global information ecosystem. That comes with tremendous responsibility.
Do you think the censoring culture in Asia has a global effect on content moderation decisions by social media companies? If so, what are some ways that authoritarian actors seek to influence and censor corporate content?
Companies know these issues are sensitive, and they also know advocates and policymakers are watching their actions. Without pressure from watchdogs concerned about freedom of expression, who is to say they would not make more significant concessions to censorial governments?
We don’t have to jump far outside the major social media companies to find examples of how authoritarian censorship is affecting global speech through commerce. Another tech giant, Apple, which produces and sells devices in China, hides the Taiwan flag emoji from users in Hong Kong and Macau. Outside the tech sector, Delta Air Lines and Marriott both made similar concessions after Beijing threatened to bar them from its market. Blizzard, a hugely popular video game developer partially owned by a firm based in China, banned an esports streamer and took away his prize money after he issued a statement in support of Hong Kong protesters on an official broadcast. Producers in Hollywood who want access to the Chinese box office—which restricts the number of foreign films released each year—weigh Beijing’s preferences carefully as well.
Nearly one out of every six people lives in China. It is the second-largest economy in the world, and a large portion of its private sector answers ultimately to the CCP. Through sheer size, the country has a great deal of leverage over private companies abroad. In some ways, it might be stranger if Beijing’s censorship did not have a global effect.
Is this trend of social media platforms segregating online activities according to national borders or languages likely to continue?
Unfortunately, it may. Another possibility is that international platforms will be replaced by domestic alternatives in markets with the capacity to deploy substitutes. This happened in India after New Delhi banned TikTok, and in some ways the deliberations over TikTok’s operations in the United States mirror this trend. Leaders at Chinese state media outlets have called for countries around the world to take similar actions against U.S. platforms. This will not necessarily happen, but people are thinking about the precedent set by government actions.
In the past five years, the internet has gone from being seen primarily as an enabler of human freedom to a source of myriad national security threats. This shift in perspective came wickedly fast and could have all kinds of dramatic consequences.
It’s interesting to contrast TikTok’s division from Douyin with WeChat, which has maintained a unified platform both inside and outside China. Observers have questions about CCP censorship on TikTok, but we know that large-scale censorship exists on WeChat because the platform straddles the Great Firewall: its users in mainland China operate the same application as its users elsewhere. This is not the case for TikTok, and that changes the rules and incentives governing the platform.
Most platforms today are like WeChat—they straddle national borders. Tomorrow’s platforms may look more like TikTok and Douyin.
What would you suggest social media companies do to combat the influence of authoritarian censorship in Asia?
Whenever possible, it is incredibly important for social media companies to engage with independent activists and civil-society organizations working to protect free expression, combat censorship, and respond to international authoritarian influence. These relationships are the best way for companies to get feedback about their policies and better understand the threats facing democratic actors—and democracies—around the world.
But these relationships cannot just be window dressing. Civil society needs to be respected as an important partner. Relationships between civil-society organizations and platforms can sour if one party feels it is being used as a prop for public relations purposes. The desire to do better must be genuine, and platforms need to be willing to take feedback seriously and make real sacrifices to safeguard democratic freedom.
In the long term, such engagement will be worth it. Most social media companies emerged and grew in environments marked by political freedom and the rule of law. If these rules-based systems buckle under the weight of authoritarian demands, their independence and, ultimately, their bottom lines will be the worse for it.
Dean Jackson is a Program Officer at the International Forum for Democratic Studies of the National Endowment for Democracy. His research focuses on disinformation, media, and technology.
This interview was conducted by Yen-Zhi Peng while an intern with the Political and Security Affairs group at NBR.