Abstract
Section 230, the twenty-six words that created the internet, has become the twenty-six words that are breaking the First Amendment. Section 230's blanket liability shield for social media platforms is harming our children and our democracy. In this essay, I narrate the story of what Section 230 enabled: the rise of the virtual public square, a circumstance that the framers could never have imagined. I then explore its consequences for human well-being and freedom of expression. I conclude that Section 230 must be repealed to unleash First Amendment jurisprudence to confront the threats to the republic in the age of generative AI, as well as to usher in the next round of internet innovation in service of constitutional democratic sustainability. Bipartisan legislation has been introduced to sunset Section 230 as of December 31, 2025. Those who believe in the enduring promise of American constitutional democracy should support its passage.
On January 6, 2021, with the encouragement of President Donald Trump, a motley crew of “Stop the Steal” zealots stormed the U.S. Capitol, destroying lives and property. In response, Twitter, YouTube, and Facebook took the unprecedented step of deplatforming a freely elected U.S. president. Twitter permanently suspended Trump's account, Google's YouTube shut him down indefinitely, and Facebook closed his account but referred its decision to Facebook's newly assembled Global Oversight Board for review. Yet roughly two years later, Twitter's new owner Elon Musk reinstated Trump's account in November 2022, and in January 2023 Facebook announced the lifting of Trump's ban, without any public explanation. There was no public outcry.
At the time of Trump's social media silencing, there had been considerable public debate over whether such dramatic action had been warranted. For liberal elites, it had happened far too late. For red-state America, the very idea of censoring a freely elected president was unacceptable. Both sides of this discussion had a point, but both were asking the wrong questions, and in doing so lost the plotline of the real story. Things had gone so far that every available choice was a bad one. Rather than asking whether Big Tech should have silenced Donald Trump after January 6, we should instead be asking: how and why did we reach the point at which that Hobson's choice had to be made in the first place?1 The short answer is that while the world's attention was focused elsewhere, Big Tech became the gatekeeper of our virtual public sphere, supplanting media institutions and national social norms, the latter of which no longer exist.
Whereas governance and civic engagement used to emerge from deliberation framed by the marketplace of ideas under the protection of the First Amendment (unless the speech incited violence), online harassment and cancel culture today, fueled by social media and framed by recommender algorithms, undermine reason-based public deliberation. For many younger people, freedom of speech has become the rallying cry of white people in red states. The Republican Party's attack on what it calls “wokeness” and its repeated calls for defending free speech feed that perception.
While those on the extreme left and extreme right argue about their respective trampled free speech rights, they both overlook that the First Amendment protects citizens from government encroachment on freedom of speech and assembly; the First Amendment is mute on corporate suppression of free expression. If we want each and every voter to have an equal voice in public deliberation, the Constitution alone will no longer get us there.
Writing in 1968, J. C. R. Licklider, whose vision at the Advanced Research Projects Agency laid the groundwork for the ARPANET, the forerunner of today's internet, foresaw this potential negative impact of technological change on society. “For the society, the impact will be good or bad,” he predicted, “depending mainly on the question: Will ‘to be on line’ [sic] be a privilege or a right?”2 Licklider envisioned the networked world of Web 2.0, but he could not foresee that it would develop without direct government involvement.3 Nor did he foresee the ad-driven business model and its ramifications for the free marketplace of ideas.
Unfortunately, the laissez-faire approach to social media that Congress has pursued to date has allowed bad consequences to grow deep roots. Slowly, to be online with an unfettered voice is becoming a privilege rather than a right. The Trump administration repealed net neutrality, so the rich can have faster service than those who cannot afford speed. The ad-driven business model has rendered the right to privacy a luxury good: those of lesser means give up their personal data and uninterrupted programming in exchange for free streaming services with ads and constant surveillance, while the privileged pay subscription fees to imbibe their entertainment without unwanted interruptions. Since cyberspace has become our public square, this is a deeply disturbing development, both for justice secured through democracy and for democratic sustainability. This is to say nothing of equal protection before the law, an assumption on which our Constitution depends, at least theoretically. At the time of this writing, there are several cases on the Supreme Court's docket regarding freedom of speech on social media, which means that either the Supreme Court or Congress could take steps to restore public equality before the law in cyberspace or further entrench private power. To understand the challenges presently before the Court, we must first get a better idea of how we arrived at this particular juncture.
What does the 1912 sinking of the Titanic have to do with government regulation of radio? Some contended that congestion on the then-unregulated radio spectrum had muffled cries for help from the sinking Titanic, heightening the human losses. Government responded with the Radio Act of 1912, which marked the birth of the American telecommunications regulation regime. For the first time, Washington assigned frequency bands and licensed radio transmitters. The Federal Radio Commission was established in 1927 to oversee the entire industry until it was replaced in 1934 by the Federal Communications Commission (FCC). The Communications Act of 1934 established the FCC's regulatory authority over telephone services and instructed the FCC to regulate the communications industry in a manner that served the public interest, benefiting society as a whole rather than particular private interests. In 1949, the Fairness Doctrine was introduced to ensure viewpoint diversity on controversial issues as a feature of upholding the common good.
One company, AT&T, grew to have monopolistic control over U.S. telecommunications. In 1974, the Justice Department filed an antitrust lawsuit that would eventually lead to the 1984 breakup of the Bell System, forcing AT&T to divest the Bell Operating Companies, which were reorganized into regional companies that continued to provide local telephone service in AT&T's place. As the internet became commonplace in homes nationwide, the Clinton administration enacted the 1996 Telecommunications Act to encourage innovation, competition, and equality of access in a transformed information ecosystem. While often forgotten, the Act also ensured universal service: especially in a rapidly changing communications environment fueled by technological advances, all Americans were to have access to quality and affordable telecommunications services.
Under the laissez-faire Telecommunications Act of 1996, Vice President Al Gore's “information superhighway” (that is, the internet) blossomed.4 Within the Telecommunications Act, Title V, the Communications Decency Act, included what is now codified as Section 230 of Title 47 of the U.S. Code, which guaranteed a wild west for online content by providing immunity to online platforms for user-generated content. This meant that Facebook, Google+, and Twitter were intermediaries rather than publishers, and thereby not liable for content on their platforms.
Earlier media companies carefully controlled the content they carried and took responsibility for it. They simply put it out there, on the newsstands and on the airwaves, where anyone could pick it up, and everyone got the same thing. This is the communications model that the Constitution, libel and privacy law, and liberal political theory all presuppose.
Global social media turned this model on its head. Platforms do not generate their own content and take no legal responsibility for the content users choose to post. But they are intensively involved in determining who sees what, and they manipulate this distribution with an eye only to their bottom lines. They then use the personal data they harvest to profit still further, heedless of the political and social consequences of their business model. In doing so, they are massively disruptive in ways that our existing legal system and our contemporary political ideas are ill-equipped to challenge. As this short historical review illuminates, today's tech companies are not like earlier media companies. Above all, they are entities the framers never contemplated, making political interventions the framers never could have imagined.
While attention was focused elsewhere, large technology companies thus became the key overseers of what was now a virtual national public square. Madison's axiom in Federalist 10, that over such a vast territory, with both federalism and the separation of powers in place, factions would naturally cancel each other out, was effectively overturned. Public gatekeepers were replaced with private ones. The national public square was privatized.
Part of the reason this state of affairs has been allowed to continue is the staggering relative wealth that tech titans have accumulated through accelerating innovation, which further magnifies their outsized political power. The richest human on the 2023 Forbes 400 list was Elon Musk, owner of SpaceX, Tesla, and X (formerly Twitter). The second richest, Jeff Bezos, founded Amazon and owns The Washington Post. Third is Larry Ellison, chairman and cofounder of the software company Oracle. Fifth and seventh, respectively, Larry Page and Sergey Brin are controlling shareholders of Alphabet Inc., which includes Google and YouTube. Sixth and ninth, Bill Gates cofounded and Steve Ballmer served as longtime CEO of Microsoft, whose massive investment in OpenAI spawned ChatGPT, which took the world by storm in a staggeringly short period of time.5 Eighth is Mark Zuckerberg, cofounder of Facebook and majority shareholder of Meta Platforms. All of these men are super-empowered, but as leaders of tech companies, they have more power at their disposal than just cash: their systems power government.
Likewise, the expertise that resides there delivers disproportionate power to Silicon Valley. This is best understood from three vantage points. First, the Big Five (Alphabet, Amazon, Apple, Facebook, Microsoft) control the platforms and infrastructure of daily business and government. Second, the power of Big Tech is concentrated, whereas government power is diffuse, due both to federalism and to the degree to which government functions have themselves been privatized. Finally, even with the Biden administration's CHIPS Act, Big Tech both commands the computing power and leads in funding research and development, which means that Silicon Valley, relatively speaking, has the talent and government does not.
To place these patterns in larger historical context, none of what we are seeing is on its face surprising. Censorship has been the norm rather than the exception throughout human history, from the time of the book-burning of the Qin dynasty in 213-212 BCE through the ritualized destruction of books in the Roman Empire to the recasting of history under Stalin and Hitler to today's cancel culture. Even in the United States, the robust protection of the First Amendment is a blip on the screen of American political history. Yet while the Constitution's First Amendment protects citizens against government encroachments on their freedom of expression, it is mute on the question of protection from corporate censorship. Big Tech's power to shape the bounds of permissible discourse in the public square on a global scale is therefore wholly unprecedented and understudied. The problem is that Big Tech is a new kind of entity that currently escapes responsibility for the content it propagates, and it manipulates how that content is disseminated in less than transparent ways. The combination is dangerous for democratic sustainability.
A good place to begin wrapping one's head around this is with what has changed. Today's corporations, unlike their imperialist predecessors, are not national enterprises; they are multinational in scope. The East India Company, by contrast, was an extension of the British state. American social media companies operate globally, transcending national borders and moderating content in an increasing number of languages. Their global content moderation operations depend on third-party private contractors to do the heavy lifting of excising hate speech and incitement to violence from the platforms. Since the companies prioritize growth above oversight, and stability is a necessary condition for commerce, dissenting voices in authoritarian regimes are often suppressed.6 To cite just one example, the American company Zoom terminated the accounts of China-based activists who created a Tiananmen Square commemorative account in 2020. Zoom responded to its critics by insisting that it was only following local Chinese law.7 Big Tech's misadventures in China provide many more such examples of bowing to government and Party requests for censorship.
The content moderation challenge is only further complicated by the advent of ChatGPT and other flavors of generative artificial intelligence (AI), the sort that produces art, music, prose, and poetry in a particular style based on users' prompts and the model's training data, which includes creative work made by humans, often without the artists' consent. It generates information and also hallucinates misinformation without flagging the difference between the two. Generative AI requires massive computing power and enormous data sets, and guess who has exclusive access to both? The power of large multinational technology companies is set to grow exponentially vis-à-vis government in the age of AI. Add in Moore's law (the prediction that the number of transistors on a microprocessor chip will double roughly every two years, which has largely held since Gordon Moore, who went on to cofound Intel, first articulated it in 1965), as well as recent breakthroughs in quantum computing, and it becomes clear that computing power in the hands of Big Tech is poised for continuous exponential growth, impossible for our human minds to fathom fully. We are on the brink of a wholly new and unanticipated world.
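To make the arithmetic concrete, here is a back-of-the-envelope extrapolation of my own (the baseline is the roughly 2,300 transistors of Intel's first microprocessor, the 4004, in 1971; the figures are illustrative, not a forecast):

```latex
% Moore's law as a formula: transistor count N after t years, starting from a
% baseline N_0 and doubling every two years.
\[
  N(t) = N_0 \cdot 2^{t/2}, \qquad
  N(52) = 2{,}300 \cdot 2^{26} \approx 1.5 \times 10^{11}.
\]
```

That is on the order of 150 billion transistors on a single chip by 2023, which is roughly the neighborhood in which the largest commercial processors now sit.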
In May 2023, the stakes were raised still further when the U.S. surgeon general sounded a national alarm about the connection between social media usage and child and teen mental health problems.8 Of course, technological change has influenced society since the creation and use of the first stone tools. We humans have agency and can respond to our own creations. The difference with generative AI is that nobody knows, not even AI developers, how or why each model arrives at its outputs. Meaningful language simply emerges from a veritable black box.
To be sure, we all own technology that exceeds our understanding. But large language models are different in kind. For example, you may drive a car safely and not know how to repair it when it breaks down, but there is someone who does. With generative AI, the tool's very creators don't know how or why it generates its particular results. That is why its own inventors have simultaneously called for its regulation.9 The worries regarding black box AI vary wildly, from AI researcher Eliezer Yudkowsky's repeated assertions that “we are all going to die” to the milder warning of Sam Altman, cofounder of OpenAI (the creator of ChatGPT), who told members of Congress that “if this technology goes wrong, it can go quite wrong.”10
What does this have to do with free speech? In short, everything. The ad-driven social media Moloch, with all its democracy-compromising and free speech-mystifying elements, remains. To that dangerous brew we have now added generative AI: the possibility of automated disinformation that can spread virally, as well as automated lobbying and persuasion, which combine to increase exponentially the power gap between the haves and the have-nots, the elites and the masses. Privacy and equality before the law cannot be luxury goods in a rights-based democracy. Put another way, if politics is the art of persuasion, and the Big Tech firms have a monopoly on turbo-charged machine-powered manipulation, then what oxygen is going to remain for the rest of us? The widening knowledge gap between the powerful and the powerless would seem to render democracy itself untenable.
Over twenty years ago, in his pathbreaking book Code and Other Laws of Cyberspace, legal scholar Larry Lessig pointed out that technological innovation was outstripping our legal and regulatory frameworks; in his memorable words, code is law. When code functions as law, a shift in power from the public to the private sector is its natural auxiliary. Innovation has always driven changes in state and society, but the velocity of that change in our new post-epistemic era is unprecedented. The Telecommunications Act of 1996, which was designed to complete the breakup of Ma Bell so that Washington could encourage innovation and free market competition in telecommunications, is the very same law, slightly revised, that governs social media today. Yet in the intervening twenty-seven years, telecommunications have transformed dramatically and are now global in scope. Big Tech's deplatforming of an American president only highlighted the extent to which technological change continues to outpace existing laws and regulatory apparatuses.
Tech giants can rise and fall without government intervention. A case in point is the current state of IBM. Recall that at the time of Licklider's writing, IBM was a monolithic giant, akin to what Google, Apple, and Microsoft are today. Yet it lost its throne through market dynamics rather than government intervention.
But giants can also be shaped by government intervention. This was the case with the antitrust suit against Microsoft in the 1990s. Technically, Microsoft lost, yet it never took the nosedive that IBM experienced on its own. Thus, antitrust may not bring about its intended consequences; rolling back enabling legislation, however, may be another matter. Which brings us to Section 230.
Section 230 is the foundation of the ad-based business model that drives Facebook and drove Twitter (now X, which is presently in disarray). Section 230 also enables electoral interference and contentious debates about free speech and about who is and is not granted a platform. It does not deal with antitrust issues but rather with who bears responsibility for content posted online. Section 230 was adopted for three main reasons, some of which are in retrospect rather ironic. First, there was the desire to promote free expression online by protecting platforms from liability for user-generated content. Platforms could then host a variety of viewpoints without fear of being held responsible. The general idea was to let a thousand flowers bloom, and what could possibly go wrong? Second, Section 230 sought to protect minors from obscene content online. This gave platforms carte blanche to moderate objectionable content without government involvement. Perhaps most important, Section 230 was designed to encourage internet innovations, which it did in spades. Companies from Twitter to Facebook to Amazon to eBay all owe their success to Section 230 protection.
To summarize, Section 230 was an effort to promote free speech and enable content moderation. This also meant that when things did not go as anticipated, government had limited intervention options, even in cases of emergency. Perhaps the time has come to reconsider its value. In the next sections, I describe two case studies that both illuminate the contours of the problem and allow us to imagine what the implications might be for free speech if Section 230 were to be repealed.
What has been perhaps most striking about the public debate over generative AI is the emerging consensus that this technology is a force that needs supervision. In part, this is nothing new. Tech titans have repeatedly expressed apprehension about the lack of leadership from Washington on establishing clear rules of the game. Mark Zuckerberg was perhaps a trendsetter in this regard when he deployed targeted advertising on Friday, April 30, 2021, in the Daily 202, a Washington Post newsletter for Washington insiders, trumpeting Facebook's support for updating the Telecommunications Act of 1996 on its twenty-fifth anniversary.
Five days later, on Wednesday, May 5, Facebook's Oversight Board announced its verdict on the January suspension of Donald Trump's account: Facebook's actions had been justified under the specific circumstances, but its decision-making processes were insufficiently transparent. Indeed, rules spelling out when a public figure's account could be shut down did not exist. The Oversight Board gave Facebook six months to publish rules governing its actions, since “‘Indefinite’ suspensions are not described in the company's content policies.”11
On June 4, 2021, Facebook announced that it would ban Donald Trump from its platform for two years and would reinstate him “only if the risk to public safety has receded.” What was not apparent at the time was that this decree amounted to a repudiation of the Oversight Board's ruling that what the company needed were transparent rules applied to all parties equally.
What exactly is the Facebook Oversight Board (now the Meta Oversight Board)? It, too, was an unprecedented creation, whose current status is unclear.
Facebook's Global Oversight Board on content moderation was announced in September 2019 and is funded by an independent trust. The forty-member board was devised after months of public consultation with experts and institutions around the world. In terms of design, it is something wholly new. It is a paragovernmental organization, with no governmental or legal representation, that floats above individual countries yet renders judgments with local ramifications. Its composition circumvents nation-states entirely; it is not the European Union, nor the United States, nor the World Trade Organization. It looks like a court, but Facebook is not a country. Nor does it aspire to serve law in any way, either domestic or international. It exists as a body of ultimate appeal, and in that sense resembles something old: the private government of the monarch's court. Yet Zuckerberg insists Facebook will abide by its decisions, which is not how kings and queens typically behave.12
The six-month deadline came and went in November 2021, and Facebook did not deliver the transparent rules that its Oversight Board had demanded. The media seemingly did not notice. Trump remained deplatformed. Perpetually distracted, the public did not push Zuckerberg to abide by the oversight body he himself had brought into being. Zuckerberg seized the opportunity to change the narrative, and on October 28, 2021, Facebook was rebranded as Meta, a moniker meant simultaneously to distract and to demonstrate that Zuck was betting on the Metaverse as his company's future.13 At the time of this writing, despite Zuckerberg's pledge to abide by the Oversight Board's decisions, Facebook has yet to deliver on the May 2021 demand for clear rules that would justify deplatforming Donald Trump.
Perhaps in part because the public has proven to be so disengaged from these issues, the chapter on the role of social media in fanning the flames of the January 6 insurrection was omitted from the Final Report of the House Select Committee on January 6. Whatever the reason, the complete segment and the depositions on which it was based were leaked to The Washington Post. Together, they provide a remarkable indictment of the self-interested behavior of Big Tech in the run-up to January 6.14
The Committee's analysis showed that the January 6 insurrectionists were consumers, not creators, of political disinformation on social media. In total, the Committee identified over seventeen hundred Facebook groups that contained at least one defendant in a January 6 prosecution.15 What went wrong at Facebook, according to the Committee?
First, the company's organizational structure subordinates integrity teams to the policy team, which oversees both content policy and public policy, a clash of incentives that compromises decision-making on integrity issues in ways that may be unique to Facebook or are at least unusual among its peers. Second, the company feared allegations of bias from right-wing politicians, and for years the desire to avoid political reprisals has shaped Facebook policy choices in ways which reverberate across the political and media landscape.16
This fear of being accused of political bias against the right produced extraordinary events and had significant costs. For example, the Committee reports that Zuckerberg even had private telephone conversations with President Trump regarding his online behavior, requesting that Trump dial down his rhetoric, with the president then responding favorably via tweet. “Facebook's tolerance of increasingly radical speech and hyper-partisan media may have accelerated polarization and extremism in the United States,” concludes the report.17
Facebook's Break the Glass (BTG) emergency response after the January 6 insurrection was more impressive. BTG measures were in place until January 29. Yet the next month, just weeks after the Capitol siege, Facebook's Growth Team “urgently” requested the rollback of all BTG emergency measures because of their negative impact on membership expansion. It is impossible to avoid concluding that Facebook's content moderation policy remains hostage to its bottom line rather than governed by a clear set of rules for users' online behavior.
Yet that is only a small fraction of the problem. What the leaked Facebook files demonstrate is that the version of Facebook we have in the United States is, ironically, Meta's best face. The documents show that Facebook is aware that its platform is being used to foment hatred in the Middle East, to advance the interests of violent cartels in Mexico, to instigate ethnic cleansing in Ethiopia, to promote extremist anti-Muslim rhetoric in India, and to foster sex trafficking in Dubai. And the company is doing very little about it. As Facebook is the gateway to the internet for many citizens in the Global South, the Facebook files are a topic of international concern.18 Frances Haugen is not Meta's first whistleblower, nor will she be its last.19
In the end, Facebook never produced the new content moderation rules that could have retroactively justified deplatforming an elected U.S. president. Instead, it restored Donald Trump's account on February 9, 2023, again without providing justification.20 It did so even though Trump's posts on his current platform Truth Social continue to be his standard incendiary fare. The Valentine's Day story in The New York Times reporting the return of the former president to Facebook garnered just 125 comments.21
As the case of President Trump and his allies' online behavior illustrates, social media platforms are vulnerable to hacking (defined as “an activity allowed by the system that subverts the goal or intent of the system”).22 Facebook faced another variety of hack on March 15, 2019, when its platform was hijacked to spread hate and terrorize fellow humans in ways that had previously been difficult to imagine. A shooter used a head-mounted camera to livestream a mass shooting at two mosques in Christchurch, New Zealand. The footage quickly spread across the internet, despite frantic efforts by Facebook and others to remove the video. It was reuploaded and shared countless times, racking up millions of views worldwide.
The Christchurch mass murder broadcast was an early warning signal of the structural weaknesses of existing content moderation policy, the same weaknesses that the president and his team would later exploit. Once the horse had left the barn, it proved impossible to rein in. The incident highlighted the incendiary capacity of social media platforms for distributing extremist content on a massive scale, and it raised serious concerns about the inadequacies of real-time content moderation, concerns that have only grown exponentially since Meta's gated open-source release of its variant of generative AI (LLaMA, or Large Language Model Meta AI) to trusted users.23 It took just four days for the model and its weights to be uploaded to the anonymous imaging and messaging website 4chan, giving anyone in the world access to a powerful large language model that can be customized and run on a laptop.
In the ChatGPT era, a new set of questions should command our attention. What responsibilities does Big Tech have in combating harmful online content, and what could government do to assist them? Because of the First Amendment, it has never been the business of government to censor expression. Yet on social media, extremism generates views and thereby revenue, raising challenging new questions about hate speech and incitement to violence. These issues met their embodiment in Donald J. Trump.
Donald Trump's misuse of Twitter to spread his trademark excess led journalists to argue that he had violated Twitter's terms of service as far back as 2017. As one article in GQ put it, “threatening to nuke someone is a ‘violent threat,’ no?”24 Calls for removing Donald Trump's account would have been valid at the time, had he not also been president of the United States. But Trump's magnetic capability to draw other users into his orbit was good for company growth, and so the violations went unpunished until after the calamity of January 6, 2021.
One particularly clever Twitter user conducted an experiment in which they tweeted Trump's tweets verbatim from a separate account to see if the content would get their account suspended. It did, while Trump tweeted on.25 Trump piled outrage upon outrage until Twitter first suspended his account for twelve hours on the day of the Capitol siege, and then suspended it permanently on January 8, 2021. At the time, President Trump had eighty-eight million followers.
Twitter cofounder Jack Dorsey was quite worried, and rightfully so, that the ban of a sitting U.S. president would have a chilling effect on freedom of expression. In a lengthy tweetstorm of his own, Dorsey wrote that he took no pride in shutting down the president's account. Wearing a pandemic beard that made him look like a blue-eyed Rasputin, he described his decision as a “failure” to create a service that could sustain healthy conversations and promote civic discourse.26
The House Select Committee's embargoed report on social media sheds light on the factors influencing the decision to pull the trigger at long last and deplatform the president. Put simply, those factors were largely fear-driven rather than rules-driven. After the January 6 attacks, chatter on Twitter suggested that Trump's tweets were inciting further violence by those who felt they had missed out on the D.C. action. The FBI was warning that plans were underway for a coordinated armed attack against state capitols on January 17. After hearing this, Twitter's Site Integrity team recommended Trump's permanent suspension.27
The content moderation roller coaster that Twitter was riding with President Trump never sat well with Twitter employees, who saw its inconsistency and bias as enabling the already powerful and giving them a louder voice on the platform. Anika Collier Navaroli was a member of the policy team that designed Twitter's content moderation rules. Navaroli was educated at the University of Florida, where she developed a keen interest in media law and technology. She earned a master's degree in journalism at Columbia University and received her law degree from the University of North Carolina. Her Columbia master's thesis was titled “The Revolution Will Be Tweeted.”28
Navaroli began working at Twitter in 2019. On January 6, 2021, she was the most senior member of the U.S. Safety Policy Team. The team was responsible for writing both the external policy (the Twitter rules that the public can see) and the internal policy that determines how those rules should apply (the Twitter rules that the public cannot see).29 Twitter's coded incitement policy, which sought to flag dog whistles, was a direct response to Trump's message to the Proud Boys in the first presidential debate to “stand back and stand by.” The coded incitement policy was devised “to prevent Donald Trump or someone in a similar situation, from tweeting ‘stand back and stand by’ to a White supremacist group on the platform.”30 When Trump said in 2020 that “when the looting starts, the shooting starts,” and his followers tweeted that message, Navaroli's team brought this to the attention of Twitter's leadership as a clear violation of Twitter's internal rules and as part of a pattern of coded incitement crying out for a policy response. Leadership's response was to place a warning message on Trump's tweet. No other action was taken.
When all was said and done, the senior leadership was unconvinced that a coded incitement policy was even necessary in the first place. Indeed, the safety team had never been authorized to use the policy to delete content. The result? Coded incitement was not part of the team's toolbox in the time between the election and January 6.31 Navaroli's testimony to the House Select Committee on January 6 revealed that Twitter did not have any sort of data-driven content moderation engine in place at the time, although it could have. All the content moderation was being done through user reports and internal discussion on the policy team, with Navaroli personally monitoring Trump's tweets. In short, the platform very much erred on the side of laissez-faire content moderation and enforcement of its own rules. Navaroli said she often referred to Twitter as being held together by “Google docs and duct tape.”32
For years, Twitter had dismissed calls to suspend Trump's account. The president's tweets were deemed too newsworthy to take down. But on January 6, Navaroli told the Committee, she was suddenly ordered to “stop the insurrection” and to find a reason to suspend Donald Trump's account.33 She had been warning for months about what was brewing, but only after the storming of the Capitol did her superiors give her the green light to find reasons to suspend the president.34 Flabbergasted, according to her deposition, Navaroli told her supervisors, “I would like to express my frustration with you because I told you this was going to happen … now it is happening, and you are asking me to clean it up…. He's not doing anything differently than he's already done.”35
Navaroli told the Committee that if Trump had been “any other user on Twitter … he would have been personally suspended a very long time ago.” Navaroli testified, “Twitter bears the responsibility for being the main platform and service through which Donald Trump's rhetoric and incitement to violence was not only posted but was amplified in ways that stoked flames and created a megaphone like we have never seen before within communications history.”
In response, Twitter executives argued that Navaroli's testimony had failed to mention the significant steps that had been taken to curtail hateful content on the platform.36 Navaroli left Twitter in March 2021 because she “could no longer be complicit” in what she saw to be “a company and a product that was wantonly allowing violence to occur.”37 Thus, Twitter followed Facebook and Google's YouTube in suspending Trump's account after the January 6 insurrection. Twitter's ban was “permanent” until Elon Musk, Twitter's new owner, said it wasn’t.
After Musk's takeover, Yoel Roth was promoted to lead Twitter's (now X's) Trust and Safety team. Roth had been in the content moderation business since he joined Twitter in 2015, but he had not “pressed the button” on the decision to deplatform the president. As he told the podcast This American Life in April 2023, after assuming the role of Twitter's head of Trust and Safety, he wrote down the red lines that he would not cross in his new job: “I will not break the law. I will not lie for him [Elon Musk]. I will not undermine the integrity of an election…. I will not take arbitrary or unilateral content-moderation action.”38 As Roth put it, if you find yourself in a job where you have to make a list like this, “your job is insane.”39
For a time, Roth's conversations with Musk suggested that the new owner would be reasonable about online moderation. That quickly proved not to be the case, with Musk favoring no moderation whatsoever. Roth wound up resigning when Musk wanted him to implement his Twitter Blue revenue enhancement scheme, in which users would pay $8 a month for certification as a legitimate source (among other visibility-increasing benefits), a verification badge that had previously been bestowed for free on accounts to help users distinguish between reliable and unreliable sources, between truth and falsehood. The idea that free speech belongs to those of means and that everyone else should not have the same megaphone (the underlying message of Twitter Blue being that democratic legitimacy is a product, not an aspirational ideal) is obviously at odds with a functioning democracy comprising equal citizens.40
After Roth's resignation, Musk led an online doxing campaign that ultimately forced Roth and his family to leave their home. Musk's release of “The Twitter Files,” a collection of in-house emails and Slack messages that Musk claimed revealed “bias” against President Trump, suggested that Roth was Musk's enemy number one, even though Roth had played no role in making the call to deplatform the president.41
What do we see from reviewing these two cases? When the Section 230 ecosystem met Donald Trump, Mark Zuckerberg, and Elon Musk, the results were not pretty. Companies had content moderation rules, but some speakers were privileged over others, especially those who generated growth and revenue. To be sure, since Donald Trump was president of the United States, the threshold for taking action against his speech should have been, and was, much higher. But when a president uses a private platform as his shield to incite insurrection and terrorize his opponents, he cannot be above the law without the political system sliding into oligarchy. No citizen in a democracy should be above the law if we are to preserve the fragile consensus on democratic norms and values on which the republic and the Constitution depend. In what is perhaps good news, Musk's erratic actions have not yielded profits. As of early 2024, Twitter's value is less than one-third of what it was when Musk purchased it.42
Writing in 1979, Licklider highlighted the opportunity, which again exists in 2024, “for the government to regain public trust by sponsoring research and development in unambiguously prosocial technology.”43 He contended that the “broadest and most obvious conclusion about the future interaction of computers and government is that it will depend critically upon how far computer technology advances … and upon how much initiative government takes in shaping the socioeconomic system and the culture.”44 How might this best be done?
One clear takeaway from the material covered here is that our current virtual public square is not one in which all voices have an equal volume. Because of the repeal of net neutrality, the reality that we have allowed privacy and Fourth-Amendment protections to become luxury goods, and the entire orientation of the ad-driven social media business model to maximize revenue at the expense of individual rights and even democracy itself, tinkering at the margins will not get us closer to parity in whose voices are heard. This is especially so with generative AI already in the hands of malicious actors.
The editors of this Dædalus issue, Lee C. Bollinger and Geoffrey R. Stone, convened a special commission and published an excellent volume in 2022 that examined issues of the First Amendment and social media, just before the launch of ChatGPT. Their five core premises were that: 1) the results of the platforms' recommender systems, which replicate and amplify speech, should be considered distinct from the content of speech and therefore subject to lesser First Amendment protections; 2) technology is constantly evolving, so any static regulatory framework will quickly become obsolete; 3) self-regulation alone is insufficient; 4) promoting user privacy and transparency on platforms, while advisable, will not by itself address social media's harmful consequences; and 5) solutions must be global in scope.45 The commission concluded that doing away with Section 230 would be a cure worse than the disease, privileging large platforms. Instead, they recommended reforming Section 230 by making its liability shield contingent on participation in a self-regulatory organization, overseen by a new federal agency, that would notify platforms of hateful content and punish those who do not comply. This solution mirrors the European Union's limited liability approach to content moderation: we cannot stop all harmful content at the source, but we can insist that it be removed immediately, before it spreads virally.46
What a difference the last year has made. The age of generative AI and automated disinformation has rendered what was a reasonable and thoughtful recommendation just a year ago no longer adequate. The stakes have increased dramatically, and the velocity of change, coupled with the proliferation of open-source capabilities, will produce an avalanche that is impossible for any new federal agency to oversee. We are poised at the brink of a world in which it is increasingly difficult, even with excellent technical fixes like watermarking, for ordinary citizens to distinguish truth from falsehood. The worry is that people will no longer know what is true, and will therefore believe everything and nothing. “If everybody always lies to you,” Hannah Arendt warned, “the consequence is not that you believe the lies, but rather that nobody believes anything any longer.”47 I wonder whether the commission members would accept my more radical proposal were they writing today.
A wake-up call for the entire social media ecosystem would be to repeal Section 230 and concurrently build a new public internet, one in which Web 3.0 decentralized autonomous organizations (DAOs) might flourish, citizens would own their personal data, and the ad-driven business model would be transcended, enabling a new, more constructive, democracy-compatible free internet to come into being.48 Repealing Section 230 would disrupt the existing ecosystem, clearing a pathway for a democracy-sustaining internet to put down roots.
Critics of Section 230 repeal have made good arguments as to how this might destroy social media as we presently know it.49 From the picture that emerges in these pages, would this be such a bad thing? If social media produces groupthink, standing as a serious impediment to humans thinking for themselves and deliberating together as equal citizens, the key to combating its deleterious effects would be to let the First Amendment govern, hold platforms responsible for the hatred and incitement to violence that they foster, and just generally slow things down so that people have time and space to recover their capacity to think for themselves.
Repealing Section 230 sounds like radical action, rife with unintended consequences, but it is important to ask: for whom? Meta would take a big hit, but Meta's Threads embraces a decentralized social media protocol with data portability built for the Web's next iteration, as does Twitter cofounder Jack Dorsey's Bluesky. Would the world be irreparably damaged without Facebook? Yes, the social media platform has its benefits, and older people especially find meaningful interactions there, but should such benefits trump democratic sustainability? It would be easy enough to build a new and improved ad-free platform with opt-in recommendations on a new internet and garner the same rewards. Change is always taxing, but the costs of Section 230 now outweigh the benefits. It should be retired, both to address the pathologies it now enables and to unleash internet innovation.
Some argue that such a move would favor large platforms over smaller ones, since they have the means to deploy an elaborate content moderation system while smaller startups do not. With generative AI as a content moderation assistant, is this really still the case? One could argue that the smaller enterprises would have an easier time moderating their platforms than the global colossi that presently rule cyberspace. And for those worried that companies would err on the side of censorship to avoid penalty, I would point to the case studies of content moderation rehearsed above as evidence that there would be plenty of room for legal action against private companies for breach of contract if their censorship were politically motivated. Perhaps the result would be many frivolous lawsuits, but is that worse than allowing companies to knowingly endanger the lives of teenagers? And for those concerned that the marginalized, the unusual, or the queer would lose their spaces for finding others like them and all the life-affirming meaning that represents, surely there are better ways for that to happen than on large free-for-all platforms where trolls harass anyone who steps outside the lines of social conformity with America's two present polarities.
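On that feasibility question, consider a minimal sketch, under stated assumptions, of what AI-assisted triage might look like for a small platform. Everything named here is illustrative rather than any company's actual system: the classifier is one publicly available open-source model (unitary/toxic-bert on Hugging Face), and its label schema and the 0.9 threshold are assumptions; a generative model with a moderation prompt could fill the same role.

```python
# A minimal sketch of AI-assisted moderation triage for a small platform.
# Illustrative only: the model choice, label names, and threshold are
# assumptions, not a description of any platform's production pipeline.
from transformers import pipeline

# An off-the-shelf open-source toxicity classifier stands in for a full
# generative moderation assistant.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def triage(posts, threshold=0.9):
    """Return (post, score) pairs flagged for human review, not auto-removal."""
    flagged = []
    for post in posts:
        # truncation=True keeps long posts within the model's input limit.
        result = classifier(post, truncation=True)[0]
        # Label names vary by model; "toxic" is this model's positive class.
        if result["label"] == "toxic" and result["score"] >= threshold:
            flagged.append((post, result["score"]))
    return flagged
```

The design point is triage rather than automated removal: the model surfaces candidates and humans make the call, a workflow that was out of reach for small teams when Section 230 was written.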
Instead of allowing Section 230 to give Big Tech carte blanche, why not mandate content moderation oversight by constituent assemblies that are representative of all Americans, not just the wealthiest?50 In preparing this essay, I asked GPT-4 to steelman the argument against repealing Section 230, wanting to be sure there were no good arguments I had overlooked. We had a thought-provoking extended exchange, but in the end, GPT-4 revealed its strong bias for the status quo, which is wholly understandable, because it is a model trained on the past (its cutoff date for training data is September 2021), with all of history's biases and misconceptions part of its learning experience. To give you a flavor of GPT-4's arguments:
The interests of online platforms and respect for human dignity are not mutually exclusive, and a balanced approach to internet regulation should seek to uphold both. Moreover, many believe that online platforms should take more responsibility for ensuring their services respect human dignity and foster a healthy online environment … while there could potentially be benefits to repealing Section 230 and shifting more power over online content to governments, there are also significant risks and challenges to consider. It's a complex issue that requires careful consideration and debate.
Translation: allow online platforms to continue to self-regulate. This is the standard argument of the legions of lobbyists for Big Tech, whose voluminous texts the large language model powering GPT-4's pronouncements has ingested. If we have learned anything, it is that this is a recipe for disaster; the makers of generative AI are themselves acknowledging as much. Limited to knowledge of the past alone and deprived of the human capacity for imagination and transgressive frontier thinking, GPT-4's defense of Section 230 fell flat. We humans can do better.
More generally, government needs to take action to level the playing field for human participation in our virtual public square, a realm where bots should have no place. Repealing Section 230 would move us in that direction. As cryptographer Bruce Schneier and data scientist Nathan Sanders have argued, “the best way to fight an AI that can lobby for monied interests is to help the little guy lobby for theirs.”51
In the end, the First Amendment remains a stroke of genius, but it needs additional guardrails for the information age. Section 230 succeeded in encouraging innovation in human communication, but it has outlived its purpose. Fortunately, bipartisan legislation to sunset Section 230 as of December 31, 2025, was introduced in the House in May 2024.52 The bill's cosponsors, Republican Representative Cathy McMorris Rodgers of Washington and Democratic Representative Frank Pallone of New Jersey, deserve our unwavering support and gratitude for their courage in joining forces for the sake of our children in challenging times.
Repealing Section 230 would move the United States in the right direction, and with it the entire free world. Taiwan's former digital minister at-large Audrey Tang has perhaps put it best:
When we see “internet of things,” let's make it an internet of beings.
When we see “virtual reality,” let's make it a shared reality.
When we see “machine learning,” let's make it collaborative learning.
When we see “user experience,” let's make it about human experience.
When we hear “the singularity is near,” let us remember: the plurality is here.53
The First Amendment originates in the interests of plurality. We honor it by restoring the balance of power between Big Tech and the disempowered.
ENDNOTES
1. “Hobson's choice” refers to a situation in which there is an illusion of choice, but only one option is offered.
2. J. C. R. Licklider and Robert W. Taylor, “The Computer as a Communication Device,” Science and Technology 76 (2) (1968): 1-3.
3. See J. C. R. Licklider, “Computers and Government,” in The Computer Age: A Twenty-Year View, ed. Michael L. Dertouzos and Joel Moses (Cambridge, Mass.: The MIT Press, 1979), 87-126. Thank you, Glen Weyl, for bringing this piece to my attention.
4. Though the phrase “information superhighway” is frequently associated with Gore, due to a speech he gave in Buenos Aires in 1994 that brought it into the popular lexicon, many used it before him, dating back to the 1960s and 1970s.
5. Michael Scherer and Sarah Ellison, “How a Billionaires Boys’ Club Came to Dominate the Public Square,” The Washington Post, May 1, 2022, https://www.washingtonpost.com/politics/2022/05/01/billionaires-politics.
6. Jillian C. York, Silicon Values: The Future of Free Speech Under Surveillance Capitalism (London: Verso Books, 2021), 129-130.
7. Aynne Kokas, Trafficking Data: How China Is Winning the Battle for Digital Sovereignty (Oxford: Oxford University Press, 2022).
8. Matt Richtel, Catherine Pearson, and Michael Levenson, “Surgeon General Warns that Social Media May Harm Children and Adolescents,” The New York Times, May 23, 2023, https://www.nytimes.com/2023/05/23/well/family/social-media-mental-health-surgeon-general.html.
9. Center for AI Safety, “Statement on AI Risk,” https://www.safe.ai/statement-on-ai-risk (accessed May 24, 2024); and Future of Life Institute, “Pause Giant AI Experiments: An Open Letter,” https://futureoflife.org/open-letter/pause-giant-ai-experiments (accessed May 24, 2024).
10. “Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization,” Lex Fridman Podcast #368, March 30, 2023, https://www.youtube.com/watch?v=AaTRHFaaPG8; and Cecilia Kang, “OpenAI's Sam Altman Urges A.I. Regulation in Senate Hearing,” The New York Times, May 16, 2023, https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html.
11. Afia Asantewaa Asare-Kyei, Evelyn Aswad, Endy Bayuni, et al., “Oversight Board Upholds Former President Trump's Suspension, Finds Facebook Failed to Impose Proper Penalty,” Oversight Board, May 5, 2021, https://oversightboard.com/news/226612455899839-oversight-board-upholds-former-president-trump-s-suspension-finds-facebook-failed-to-impose-proper-penalty.
12. See Steven Levy, “Why Mark Zuckerberg's Oversight Board May Kill His Political Ad Policy,” Wired, January 28, 2020, https://www.wired.com/story/facebook-oversight-board-bylaws; and Jeff Horwitz, “Facebook Forms Independent Board to Oversee Content Decisions,” The Wall Street Journal, September 17, 2019, https://www.wsj.com/articles/facebook-forms-independent-board-to-oversee-content-decisions-11568752897. Zuckerberg has said that the Oversight Board is intended to protect free expression, since he is “not always going to be here.” See Tony Romm, “Zuckerberg: Standing for Voice and Free Expression,” The Washington Post, October 17, 2019, https://www.washingtonpost.com/technology/2019/10/17/zuckerberg-standing-voice-free-expression.
13. According to psychologist Jeremy Bailenson, Facebook owns all the data generated by Oculus (Meta's virtual reality headset) users, including camera views of the surrounding room that are necessary to render realistic virtual movements. Oculus runs on a computer in the headset, so all that private data belongs to Zuckerberg. Facebook put $40 billion into virtual reality research in 2022 alone. Author interview with Jeremy Bailenson and Brian Beems in Bailenson's Stanford lab on June 9, 2022; and several off-the-record conversations by the author with current Facebook Oversight Board members. See also Afia Asantewaa Asare-Kyei, Evelyn Aswad, Endy Bayuni, et al., “Oversight Board Demands More Transparency from Facebook,” Oversight Board, October 21, 2021, https://www.oversightboard.com/news/215139350722703-oversight-board-demands-more-transparency-from-facebook.
14. Cat Zakrzewski, Cristiano Lima-Strong, and Drew Harwell, “What the January 6 Probe Found Out about Social Media, but Didn’t Report,” The Washington Post, January 17, 2023, https://www.washingtonpost.com/technology/2023/01/17/jan6-committee-report-social-media.
15. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, “Social Media & the January 6th Attack on the U.S. Capitol: Summary of Investigative Findings,” draft publication, https://www.washingtonpost.com/documents/5bfed332-d350-47c0-8562-0137a4435c68.pdf (accessed May 24, 2024).
16. Ibid., 58.
17. Ibid., 65.
18. Ellen Cushing, “How Facebook Fails 90 Percent of Its Users,” The Atlantic, October 25, 2021, https://www.theatlantic.com/ideas/archive/2021/10/facebook-failed-the-world/620479.
19. Elizabeth Dwoskin and Craig Timberg, “Like Whistleblower Frances Haugen, These Facebook Employees Warned about the Company's Problems for Years. No One Listened,” The Washington Post, October 8, 2021, https://www.washingtonpost.com/technology/2021/10/08/facebook-whistleblowers-public-integrity-haugen.
20. President Trump was supposed to be banned for two years and then reconsidered. The two-year mark would have been in June 2023, not February.
21. Steven Lee Myers and Nico Grant, “Combating Disinformation Wanes at Social Media Giants,” The New York Times, February 14, 2023, https://www.nytimes.com/2023/02/14/technology/disinformation-moderation-social-media.html.
22. Bruce Schneier, A Hacker's Mind: How the Powerful Bend Society's Rules, and How to Bend Them Back (New York: W. W. Norton, 2023).
23. Meta, “Introducing LLaMA: A Foundational 65-Billion-Parameter Large Language Model,” February 24, 2023, https://ai.facebook.com/blog/large-language-model-llama-meta-ai.
24. Jack Moore, “Donald Trump's Twitter Account Is Very Much in Violation of Twitter's Terms of Service,” GQ, August 11, 2017, https://www.gq.com/story/donald-trump-twitter-account-violation-of-twitter-terms-of-service; and Ashley Feinberg, “Twitter's Never Going to Ban Donald Trump,” Wired, July 17, 2017, https://www.wired.com/story/twitter-ban-donald-trump.
25. @SuspendThePres, X, January 9, 2021, 1:33 a.m., https://twitter.com/SuspendThePres/status/1347702888970084354; and Carlie Porterfield, “Twitter Suspends Account Copying Trump's Tweets For ‘Glorifying Violence,’” Forbes, June 3, 2020, https://www.forbes.com/sites/carlieporterfield/2020/06/03/twitter-suspends-account-copying-trumps-tweets-for-glorifying-violence.
26. Elizabeth Dwoskin, “Twitter CEO Jack Dorsey said the Trump Ban Reflected ‘A Failure’ to Police Online Discourse,” The Washington Post, January 13, 2021, https://www.washingtonpost.com/technology/2021/01/13/twitter-trump-ban.
27. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, “Social Media & the January 6th Attack on the U.S. Capitol,” 21-22.
28. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, “Deposition of: Anika Collier Navaroli,” September 1, 2022, https://www.govinfo.gov/content/pkg/GPO-J6-TRANSCRIPT-CTRL0000917176/pdf/GPO-J6-TRANSCRIPT-CTRL0000917176.pdf.
29. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, “Social Media & the January 6th Attack on the U.S. Capitol,” 6.
30. Ibid., 22.
31. Ibid., 32.
32. Ibid., 45.
33. Ibid., 135-136.
34. Ibid., 91.
35. Ibid., 92.
36. Drew Harwell, “Jan. 6 Twitter Witness: Failure to Curb Trump Spurred ‘Terrifying’ Choice,” The Washington Post, September 22, 2022, https://www.washingtonpost.com/technology/2022/09/22/jan6-committee-twitter-witness-navaroli.
37. House Select Committee to Investigate the January 6th Attack on the U.S. Capitol, “Deposition of: Anika Collier Navaroli,” 11.
38. Ira Glass, “What I Was Thinking as We Were Sinking,” This American Life, https://www.thisamericanlife.org/797/transcript.
39. Ibid.
40. Ibid.
41. Shannon Bond, “Elon Musk Is Using the Twitter Files to Discredit Foes and Push Conspiracy Theories,” NPR, December 14, 2022, https://www.npr.org/2022/12/14/1142666067/elon-musk-is-using-the-twitter-files-to-discredit-foes-and-push-conspiracy-theor.
42. Adam Gabbat, “Value of X Has Fallen 71% Since Purchase by Musk and Name Change from Twitter,” The Guardian, January 2, 2024, https://www.theguardian.com/technology/2024/jan/02/x-twitter-stock-falls-elon-musk.
43. Licklider, “Computers and Government,” 118.
44. Ibid., 125.
45. Katherine Adams, Martin Baron, Lee C. Bollinger, et al., “Report of the Commission,” in Social Media, Freedom of Speech, and the Future of Our Democracy, ed. Lee C. Bollinger and Geoffrey R. Stone (Oxford: Oxford University Press, 2022), 316-317.
46. Ibid., 323.
47. Hannah Arendt, “Hannah Arendt: From an Interview,” interview by Roger Errera, The New York Review of Books, October 26, 1978, https://www.nybooks.com/articles/1978/10/26/hannah-arendt-from-an-interview.
48. For those interested in my argument for Section 230 repeal, see Jaron Lanier and Allison Stanger, “The One Internet Hack That Could Save Everything,” Wired, February 13, 2024, https://www.wired.com/story/the-one-internet-hack-that-could-save-everything-section-230. See also “Hearings on ‘Where Are We Now? Section 230 of the Communications Decency Act of 1996,’” Before the House Committee on Energy and Commerce, April 11, 2024, 118th Cong. (2023-2024), testimony of Allison Stanger, Russell Leng ’60 Professor of International Politics and Economics, Middlebury College, https://www.congress.gov/118/meeting/house/117099/witnesses/HHRG-118-IF16-Wstate-StangerA-20240411.pdf. For one promising proposal to accomplish building a new public internet, see Project Liberty and Frank McCourt's effort to assemble the public financing to purchase TikTok. See also Frank McCourt, Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age (New York: Crown, 2024).
49. Christopher Cox and Ron Wyden, “Buy This Legislation or We’ll Kill the Internet,” The Wall Street Journal, May 21, 2024, https://www.wsj.com/articles/buy-this-legislation-or-well-kill-the-internet-9184202b.
50. Aviv Ovadya, “‘Platform Democracy’: A Very Different Way to Govern Powerful Tech,” Reimagining Technology, November 16, 2022, https://aviv.substack.com/p/platform-democracy-a-different-way-to-govern.
51. Bruce Schneier and Nathan Sanders, “We Don’t Need to Reinvent our Democracy to Save It from AI,” Belfer Center for Science and International Affairs at Harvard University, February 9, 2023, https://www.belfercenter.org/publication/we-dont-need-reinvent-our-democracy-save-it-ai.
52. House Energy and Commerce Committee, “Energy and Commerce Leaders Unveil Bipartisan Draft Legislation to Sunset Section 230,” May 12, 2024, https://energycommerce.house.gov/posts/energy-and-commerce-leaders-unveil-bipartisan-draft-legislation-to-sunset-section-230.
53. Audrey Tang, “We Have to Keep Defining What Is the Inter in Internet,” Framer Framed, November 20, 2019, https://framerframed.nl/en/dossier/audrey-tang-we-have-to-keep-defining-what-is-the-inter-in-internet.