10 Questions About Trump, Big Tech, and Free Speech
Twitter permanently banned Trump. Facebook suspended his account for at least two weeks. Apple and Google pulled the Parler app from their app stores. Amazon booted Parler off AWS. Stripe stopped processing payments for the Trump campaign’s website.
These decisions, among others, have sparked a renewed debate over the power that Big Tech companies have in society, and whether we need to revisit Section 230, net neutrality, or the Fairness Doctrine. Currently, the public discussion is dominated by loud voices making extreme, and often incorrect, claims. In my opinion, these voices are only grappling with the surface-level issues related to tech platforms and speech, which I address in the first seven questions. The final three questions are much harder to answer and require thinking on the margin about what our society values and what tradeoffs we are willing to make. If we focus our time and attention on these latter questions, we can hope to make real progress over time.
7 Easy Questions
1. Is Big Tech more powerful than the government?
Austen Allred, the founder and CEO of Lambda School, tweeted, “Twitter, Facebook, Apple and Google, especially when acting in concert, are much more powerful than the government.” This claim doesn’t hold up to any level of scrutiny. The government has the power to tax you, imprison you, and kill you; the tech companies can delete your free account. Some conservatives have even argued the government should “nationalize Facebook and Twitter to preserve free speech,” the mere possibility of which should tell you who’s more powerful.
Journalist Michael Tracey said that Big Tech is “more powerful than most if not all nation states,” which seems absurd considering nine nation states have nuclear weapons. He also claimed that you “cannot create an ‘alternative’ … at this point,” a claim directly contradicted by TikTok going from zero to nearly a billion users in just a few years.
2. Has President Trump been silenced by Twitter and Facebook?
Trump has been permanently banned from Twitter and suspended from Facebook for at least two weeks. Obviously, his ability to speak directly to his audiences on those platforms has been greatly diminished. But that doesn’t mean he has been silenced or censored. A recent Reuters article asked “How will Trump get his message out without social media?” In short: The same way that every president did prior to 2008. What communications and media networks existed back then? Newspapers, magazines, broadcast TV, cable TV, radio, podcasts, email, text messages, and the open web.
Twitter is not real life. As economist Adam Ozimek said, “Only 22% of adults use Twitter. In contrast almost every house has a TV. The idea that there is some monopoly over access to the public here is really not compelling. Maybe you spend too much time on Twitter if you think that.” Furthermore, only about 10% of Americans are daily active users of Twitter, which means that if you check Twitter at least once a day, you’re more “online” than 90% of Americans. Active Twitter users likely overrate its importance in the average person’s life relative to newspapers, talk radio, broadcast TV, and cable TV.
It’s also important to remember that Trump’s words haven’t been banned from the platform, only his personal accounts. If the president gives a public speech, or if the White House issues a press release, thousands of journalists will still cover and broadcast his words, in tweets and Facebook posts of their own. For example, on Wednesday, the White House released a statement from Trump urging “NO violence, NO lawbreaking, and NO vandalism of any kind.” The statement was immediately shared on Twitter by reporters and sent out via text message to the Trump Campaign’s subscribers.
Twitter and Facebook suspending Trump’s account is significant, there is no denying that. But the president of the United States can still communicate with the public.
3. Is deplatforming extremists a civil rights issue?
Some conservatives have tried to argue that if liberals think a baker should be required to bake a cake for a gay wedding, then Amazon should be required to provide cloud hosting for Parler and Twitter shouldn’t be allowed to ban Trump. However, these cases are not similar. The case of the baker and the gay wedding was controversial because it involved the collision of two protected characteristics: religious beliefs (of the baker) and sexual orientation (the gay couple).
In the cases of Parler and Trump, they were not deplatformed for belonging to a protected class or because of an immutable characteristic — they were deplatformed for inciting violence and insurrection. Repeated antisocial behavior is a perfectly legitimate basis for a platform to remove a user (or for a company to cease doing business with a counterparty). The question is not “should Amazon be allowed to discriminate against conservatives” but actually “should Amazon be required to do business with groups hell-bent on breaking the law.”
No one believes that every user should be allowed on every platform. Not even Parler allows users to post whatever they want:
[Parler’s] community guidelines warn users to avoid spam, blackmail, bribery, plagiarism, support for terrorist organizations, spreading false rumors, suggesting people should die, describing “sexual organs or activity,” showing “female nipples,” and using language or visuals “that are offensive and offer no literary, artistic, political, or scientific value.” Parler also advises users against “any other speech federally illegal in USA,” which the platform incorrectly claims includes doxing and “content glorifying violence against animals.”
That’s why appeals to slippery slope-type arguments are so unpersuasive in this debate. Every platform draws the line somewhere, and that line might move over time as public opinion shifts and as new information arises about what is and isn’t working under the prevailing content moderation policy. We don’t need to protest Facebook’s decision to ban Paul Joseph Watson, Laura Loomer, Alex Jones, and Milo Yiannopoulos by comparing it to what African Americans experienced in the Deep South during Jim Crow, as Will Chamberlain did in this 2019 article for Human Events. These are not civil rights issues — these are questions about what kinds of behavior particular platforms are willing to allow in their communities.
4. Would repealing Section 230 prevent Big Tech from deplatforming users they disagree with?
There continues to be lots of misinformation regarding Section 230 of the Communications Decency Act. Many Republican elected officials and conservative activists argue that recent events show why we need to repeal Section 230, which provides platforms and other interactive computer services immunity for the content users post. This argument relies on an intentional misrepresentation of the statute and the relevant case law. Here is the key part of Section 230 — “the 26 words that created the internet”, as Jeff Kosseff put it:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Prior to Section 230, if a platform tried to moderate content (say by taking down hate speech or incitements to violence), then the platform owner became liable for all the content that remained on the platform. This created perverse incentives. Platform owners basically faced two choices: (1) Engage in zero moderation to retain immunity — and watch the platform get overrun by Nazis or (2) Engage in maximum moderation to avoid getting sued for libel or other harmful content. Section 230 fixed this incentive problem by granting immunity to platform providers for users’ speech, thus enabling the platforms to engage in reasonable levels of content moderation.
Repealing Section 230 would do nothing to alleviate concerns about bias or censorship. As Senator Ron Wyden, one of the authors of Section 230, said, “I remind my colleagues that it is the First Amendment, not Section 230, that protects hate speech, and misinformation and lies, on- and offline. Pretending that repealing one law will solve our country’s problems is a fantasy.” All repealing Section 230 would do is force platforms back into the “all or nothing” choice on moderation. And because advertisers will not advertise on a platform filled with Nazis and pornography, it wouldn’t really be a choice at all. It’s likely the platforms would become much more aggressive in how they moderate content (if they continue to allow users to post at all). In other words, without Section 230, Trump would have been banned from Twitter years ago.
5. Is Twitter consistently enforcing its terms of service?
Whenever Twitter deplatforms a prominent right-wing figure, conservatives and others concerned with censorship accuse the platform of being biased because it leaves up similarly violent or misleading content from authoritarian rulers in Iran and China. FCC Chairman Ajit Pai called out a few tweets from Ayatollah Khamenei, the Supreme Leader of Iran, last May.
More recently, a tweet from the Chinese Embassy in the US tried to paint the ongoing Uyghur genocide in a positive light by saying it was furthering women’s empowerment: “Study shows that in the process of eradicating extremism, the minds of Uygur women in Xinjiang were emancipated and gender equality and reproductive health were promoted, making them no longer baby-making machines. They are more confident and independent.” Twitter initially refused to take down the tweet after Ars Technica reporter Tim Lee reached out to ask why it didn’t violate Twitter’s policies. Only after many others publicly shamed Twitter for its decision did the company finally relent and remove the tweet.
Those who say Twitter has enforced its policies inconsistently are right. But that doesn’t mean Twitter should leave Trump and other extremists alone. Arguing “worse people have gotten away with it” is like saying we shouldn’t arrest a murderer because some serial killers are still roaming free. Twitter should also ban dictators from using its platform and more quickly remove content that promotes or condones violence against anyone or any group of people.
6. Does Europe repress speech less than the US now?
There is also renewed debate about whether there should be one unified internet, or whether a splinternet is a better approach, with each nation governing its own internet.
While a further splintering of the internet seems almost inevitable at this point, it would be strange if Europe split off from the American internet over concerns about repression of speech in the US, as Bruno Maçães speculated. The EU has many current (or proposed) laws that repress speech far more than anything in the US, including:
- Privacy laws (Art. 17 GDPR)
- Hate speech laws (Framework Decision 2008/913/JHA)
- Copyright laws (Directive 2019/790)
- Data localization requirements (Schrems II)
- Proposed anti-terrorism laws (“Preventing the dissemination of terrorist content online”)
- Proposed platform gatekeeper regulation (Digital Services Act and Digital Markets Act)
That’s to say nothing of how the US compares to authoritarian countries such as Russia or China. As Garry Kasparov said, censorship in the USSR is “when the state attacks a company for offending an official … not the other way around.” Or as Jameel Jaffer put it, “forcing publishers to publish the government's speech is what happens in China.”
7. Can private companies violate your First Amendment rights?
Any debate over a high-profile user getting banned from a social media platform quickly devolves into the two sides talking past each other. Those critical of the decision to ban a user say that it’s a violation of that person’s free speech or First Amendment rights. The other side immediately latches on to the First Amendment part of that claim, pointing out (correctly) that the First Amendment restrains the government from infringing on the ability to speak, not private companies or individuals. Since it’s so short, let’s just look directly at the text to make sure we’re all on the same page:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Clearly, the First Amendment was not meant to abridge the rights of private entities and citizens. But “free speech” is a much broader concept than what’s written in the Bill of Rights. That’s one of the harder questions.
3 Hard Questions
8. How much should private companies restrict free speech and free expression?
In everyday use, “freedom of speech” means the ability to express one’s views or opinions without fear of retaliation (beyond verbal criticism). In other words, it means that people can speak their mind without fear of a disproportionate response. That doesn’t mean every restraint on speech imposed by a private actor is bad! As a society we make tradeoffs all the time between competing values depending on the context. It just means that “free speech” as a concept is not limited to the First Amendment.
Some conservatives and libertarians think that pointing out that private companies have First Amendment rights too ends the conversation, when in reality it’s only the beginning. We must admit that these tech platforms are powerful and that the decisions they make affect billions of people worldwide. It is legitimate to raise concerns about who gets to be on and off a platform (even while recognizing the companies themselves are under cross-pressures, with conservatives arguing for a more hands-off approach and liberals arguing for more aggressive moderation).
To start answering the tougher questions, we first need to move past the false dichotomy of the individual and the state. As Noah Smith wrote in his own post about Big Tech and free speech, “Between the government and individual citizens lie a variety of mezzanine authorities who have real power, and whose actions can lead to a real loss of liberty.” Noah continued by citing one of his previous pieces:
An ideal libertarian society would leave the vast majority of people feeling profoundly constrained in many ways. This is because the freedom of the individual can be curtailed not only by the government, but by a large variety of intermediate powers like work bosses, neighborhood associations, self-organized ethnic movements, organized religions, tough violent men, or social conventions…whom I call "local bullies."…
In a perfect libertarian world, it is therefore possible for rich people to buy all the beaches and charge admission fees to whomever they want (or simply ban anyone they choose). In a libertarian world, a self-organized cartel of white people can, under certain conditions, get together and effectively prohibit black people from being able to go out to dinner in their own city. In a libertarian world, a corporate boss can use the threat of unemployment to force you into accepting unsafe working conditions. In other words, the local bullies are free to revoke the freedoms of individuals, using methods more subtle than overt violent coercion.
Such a world wouldn't feel incredibly free to the people in it.
That’s why merely citing the First Amendment rights of private companies in these cases can leave people feeling hollow. And it’s why we need to thoroughly examine the market power in each layer of the tech stack to decide which layers should have both the responsibility and the ability to moderate content.
The answer here is that there is no clear answer: The decision to ban or not ban accounts or types of speech is inherently political, and it’s wrapped up in the profit-maximization desires of the relevant companies. There is no clear rule you can write that will cover every case, and there will be backlash no matter what decision these companies make. The existence of public debate is what constrains platforms. On one side, groups concerned with freedom of expression will limit the platforms’ willingness to moderate. On the other, those concerned with the negative externalities of certain speech will push platforms to be more heavy-handed with their moderation. It is through these debates that societies can determine the appropriate level of moderation.
9. Which layer of the tech stack should have the responsibility for moderating content?
Here’s a framework for thinking about these issues: How much capital investment and time does it take to build or find an alternative vendor, especially given government regulation? And how close is each service to the end users of social media platforms? You can think of the tech stack in roughly three layers:
- The top layer is the social media apps and websites themselves (e.g., Facebook, Twitter, Parler, etc.);
- The middle layer is intermediaries or aggregators of apps and websites (e.g., app stores, browsers, search engines, etc.);
- The bottom layer is infrastructure providers (e.g., cloud providers, content delivery networks, the Domain Name System, internet service providers, utilities, payments, etc.).
On the top layer, it is relatively easy for a company to create its own app or website. Scaling these platforms to take advantage of network effects can be difficult, but it’s by no means impossible (see TikTok, Discord, Telegram, Signal, Snapchat, etc.).
In the middle layer, Google and Apple have a virtual duopoly (99% market share) in the smartphone operating system market, which makes their decisions regarding the default app stores on Android and iOS devices very important. But while securing distribution in the two major app stores can be hugely beneficial, it’s not necessary for adoption. Users can navigate directly to a website in a browser, and Progressive Web Apps are bringing more and more functionality to web apps that was previously limited to native apps. Companies can also have their users sideload another app store on Android devices, as Epic Games did for Fortnite. Hypothetically, if Chrome were to block users from accessing websites like Parler at the browser level, that would be worrisome, as Chrome controls 63% of the browser market (though users can always download alternative browsers such as Firefox or Brave).
On the bottom layer, one troubling story is what an internet service provider did in rural Idaho: YourT1Wifi.com, an ISP based in Priest River, Idaho, decided to block access to Twitter and Facebook after some of its customers complained about the platforms banning President Trump. That’s why large ISPs have committed themselves to no-blocking net neutrality principles, and why we need net neutrality legislation that prohibits blocking without reclassifying broadband under Title II at the FCC. It’s also why it’s good news that Elon Musk’s Starlink, a satellite broadband service, is already in public beta.
The bottom layer includes services that would be harder for social media platforms to replicate on their own: utilities (e.g., electricity, natural gas, water, sewage, telephone), internet service providers (ISPs), content delivery networks (CDNs), the Domain Name System (DNS), credit card companies (Visa and Mastercard), cloud providers (e.g., AWS, Azure, Google Cloud), and other payment systems (e.g., Stripe, PayPal, etc.). It would be very hard for a business to lay its own internet fiber, build its own electrical grid, or create an alternative to the Domain Name System. Utilities are especially powerful because they have a lot of local market power (often they’re a de facto monopoly in a community). By contrast, payment processors and cloud providers operate in highly competitive global markets, giving companies alternative options if one service provider bans them. Generally speaking, we should be more wary of imposing liability on this layer of the tech stack for what users post on social media. Instead, policy should hew toward neutrality (with exceptions for illegal activity).
10. When should we require neutrality?
Following the framework detailed above, Apple and Google banning Parler from their app stores is a bigger deal than Facebook and Twitter banning Trump from their platforms. And what occurred in the infrastructure layer (i.e., AWS banning Parler and Stripe banning the Trump Campaign) is a bigger deal than what the app stores did. That means we should closely examine the AWS and Stripe cases to make sure these are indeed competitive markets.
First, AWS does not have a monopoly on cloud services (it has a 32% market share). Gab, a free speech social media platform with zero censorship and lots of Nazis, and PornHub, a website that needs no explanation, both operate without relying on the Google and Apple app stores or on AWS for cloud services. Parler put itself in this situation by relying on a risk-averse mainstream cloud provider when there were numerous other options for hosting (including self-hosting). (The latest news is that Parler is now switching over to Epik, the provider that hosts Gab.) The same is true for Stripe, which has only an 18% market share. While the payment processor is part of the infrastructure layer, there are dozens of other competitors in the market available to the Trump Campaign. If these companies in the infrastructure layer were monopolies, policymakers should step in to enforce a neutrality standard.
David Sacks, an entrepreneur and venture capitalist, expressed a common sentiment among those displeased with the recent bans by Big Tech: If individuals or apps get banned at every layer of the tech stack — from consumer-facing apps down to infrastructure services — then there is no recourse for those who have been deplatformed. But that’s not actually true. If a user gets banned from Facebook or Twitter, there are numerous alt-tech social media platforms they can join. And after Parler was banned from the Google Play Store and the Apple App Store, it could still be accessed directly from a browser on the open web (or downloaded from a sideloaded app store on Android devices).
As David Ulevitch, a venture capitalist at Andreessen Horowitz, pointed out, even AWS doesn’t “hold the keys to the internet.” There are dozens of other cloud providers, and many companies still self-host using their own on-premises servers (traditional on-prem spending exceeded cloud spending until just last year). While it might be preferable for infrastructure companies to remain neutral (and they might welcome a law taking the decision off their hands), in competitive segments of the infrastructure layer we shouldn’t be too worried about companies exercising their right to not do business with reckless social media platforms.
Somewhat overlooked in this whole debate is that it’s not just Big Tech that’s turned on Trump and his supporters. Virtually all of corporate America has decided enough is enough. The Wall Street Journal is collecting an ongoing list of corporations that have paused PAC donations to politicians. It’s up to more than 50 corporations and includes every household name you can think of, from AT&T to Boeing to Walmart. The most common targets of corporate ire are Trump and the Republican members of Congress that objected to the certification of the Electoral College. The House of Representatives just voted to impeach the president for a second time. Maybe this whole debate is missing the forest for the trees — maybe it’s about way more than Big Tech?
It’s also worth caveating that much of the foregoing analysis will look very different depending on whether law enforcement and national security agencies have been in direct contact with the tech companies regarding imminent threats of violence. If that’s the case, then I think many of the tech platforms’ decisions look different (save for the decisions to ban Trump). In that context, they wouldn’t be exercising their own discretion over what speech should or should not be allowed on their platforms so much as responding to an implicit or explicit government order. Given the lack of publicly available information right now, we can’t know for sure what the government did or did not tell the tech companies.
At the end of the day, these are complicated issues. Here are the bottom-line takeaways:
- Social media apps and websites can survive without depending on Big Tech (many alt-tech sites already do).
- Trump may be banned from Facebook and Twitter, but he’s still the president of the United States and he has not been silenced.
- Banning right-wing extremists or those who incite violence is not a slippery slope toward an Orwellian dystopia, and it’s certainly not a civil rights issue.
- No, Big Tech is not more powerful than the government; the government can tax you, imprison you, and kill you.
- A private company can’t violate your First Amendment rights, but it can restrict your ability to speak freely.
- Repealing Section 230 would not solve any of these issues; nationalizing the companies in question would cause even more problems.
- Twitter should ban both Trump and the Supreme Leader of Iran from its platform (and CCP propaganda).
- This is not about just Big Tech — most large corporations no longer want to be associated with Trump, Parler, or the Republicans who objected to the certification of the Electoral College.
- Despite all this, the US still values free speech more than any other country in the world.