
No Law Can Ban White Supremacy From the Internet


It wasn’t long after mass shootings in El Paso and Dayton killed more than 30 people that a chorus of commentators, reporters, and even the president himself offered a plaintive response. These near-simultaneous tragedies, they all said, force the country to consider reining in the websites that have, as the chorus sees it, permitted hateful content to flourish and inspire such massacres. What will likely follow are calls to pass legislation so that online platforms—from social media giants that trade data to third parties without meaningful consent, to whatever obscure internet forum supplants 8chan, the current hate cauldron of choice—can be held liable for the content published by their users.

To do this would mean changing the law that has thus far been credited with making an open internet possible: Section 230 of the Communications Decency Act.

That paradoxical name comes with an instructive history. A panic over online porn in the mid-1990s prompted the Communications Decency Act (CDA), a set of amendments to the more sweeping Telecommunications Act. Meanwhile, then-Representative Ron Wyden, along with (now former) Representative Chris Cox, worried about efforts to hold platforms like the early email and bulletin board behemoth Prodigy accountable for anything their users published. The congressmen feared that this push to rein in platforms would end up discouraging such companies from moderating content at all. In response, Wyden and Cox proposed what became “Section 230” of the CDA.

Section 230 states that an “interactive computer service” (to use the parlance of the time) cannot be held liable for content posted by its users, whom the law considers “publishers” and therefore responsible for their own content. Section 230 also protects platforms from being treated as “publishers” should they moderate or remove content posted by their users. Since its passage in 1996, the CDA’s “decency” provisions have been ruled unconstitutional by the Supreme Court, but Section 230 remains.

Section 230 has often been misrepresented as a mandate for platforms to remain “neutral”—which it isn’t—and this has made much recent public debate about 230 next to impossible. Take Republicans like Missouri Senator Josh Hawley, who is on a quest to gut 230 to “protect” conservative speech from phantom censors. As New York Times editorial board member Sarah Jeong wrote this July, it’s tough to take on such proposed legislation in any serious way because the law its foes are angry about does not exist. “The debate is not focused on the real issues with C.D.A. 230,” Jeong wrote. “Indeed, it is not focused on the actual text of C.D.A. 230.”

Nor is Section 230 a license for an internet-user free-for-all in which platforms’ hands are tied and their conduct blameless—though Hawley is not alone in making these kinds of claims. Some advocates against revenge porn have also taken aim at 230, like one prominent lawyer fighting such cases, who called 230 “the single greatest enabler of every asshole, troll, psycho, and perv on the internet.” The New York Times—in the wake of Gilroy, El Paso, and Dayton—even called 230 the law that protects “hate speech” on the internet. (It is not, as the Times later acknowledged in a correction: “The First Amendment, not Section 230 of the Communications Decency Act, protects it.”) But the paper of record was not incorrect when it described 230 as a “legal shield.”

Still, having a shield doesn’t mean platforms can’t do anything. Late Sunday night, the content delivery network Cloudflare decided to deny 8chan use of its services, and 230 didn’t stand in the way. In fact, 230 can also serve as protection against lawsuits from those upset over moderation decisions by online platforms and service providers. (See, for example, the suit from Representative Devin Nunes against his perceived tormenters: Twitter, @DevinNunesMom, @DevinCow, et al.)

“If platforms couldn’t enforce content policies while retaining immunity, communications today would look a lot like they did in 1965,” argued Daphne Keller, who works on platform regulation and internet users’ rights at Stanford University’s Center for Internet and Society, in The Washington Post. If platforms were responsible for everything their users post, then platforms would either have to vet all of that content or bar it entirely. As some have said, the immunity 230 affords platforms is not only a shield, but also a sword.

Social media companies already do a lot of policing. They also aren’t doing a great job at it. Just like the work carried out by actual law enforcement agencies, platform policing often increases mistrust and divisions in the community, doesn’t keep people safe, and tends to fall hardest on people who have the least power to push back when authorities get it wrong.

In fact, as can be the case with much of the American law enforcement system, platform policing seems designed to make people who are already marginalized even more so. Kim Kardashian gets to keep her nudes up on Instagram, but when queer and trans magazines use the app to promote their cover stars, Instagram refuses their paid posts with the terse and inaccurate message, “we don’t allow ads for escort services.” Instagram acknowledges it uses an algorithm to bury posts it deems “sexually suggestive,” allowing them to remain on the app but making them significantly harder to find. But few moderation decisions come with clear communication or an appeals process. (Take the Twitter users banned after being targeted by a harassment campaign, or those silenced after promoting their own work investigating the alt-right.)

This current post-massacre moment will not be the first time someone has tried to hold websites accountable for gun violence. In 2012, Radcliffe Haughton purchased a gun from a website called Armslist and used it to kill his wife Zina, two bystanders, and himself—just two days after Zina was granted a restraining order against her husband. Like Craigslist, Armslist didn’t sell anything directly; it was a platform for buyers and sellers to arrange their own sales. Haughton’s daughter, Yasmeen, brought multiple suits against Armslist, arguing in part that the site was culpable for being a platform where Haughton could purchase a gun he would otherwise be barred from buying. The case made it to the Wisconsin Supreme Court, where Armslist triumphed: Under Section 230, according to the decision, Armslist was a platform, not a publisher, and was therefore immune.

There is no 230 carveout for guns. Neither is there a 230 carveout for hate crimes, white supremacist extremism, or domestic terrorism (or however we describe this terrifying moment). But this is what Congress would inevitably have to broach should it respond to calls to “hold accountable” websites like 8chan, where in the last six months alone three men have posted their sadly now-unremarkable racist screeds before perpetrating mass killings. To make such content an exception to 230, lawmakers would also have to decide: What do we call this?


There is already one 230 exception on the books, for prostitution, which President Trump signed into law last year. That is the law known as SESTA-FOSTA (short for the “Stop Enabling Sex Traffickers Act” and the “Allow States and Victims to Fight Online Sex Trafficking Act”). For online platforms, SESTA makes real the threat of civil suits over content related to prostitution and trafficking—and few stopped to ask questions before quickly hitting the “ban” button. One example: Cloudflare, which took longer to jettison the white supremacist website Daily Stormer than it did to kick off Switter, a social networking space running on the open-source Mastodon platform, created by sex workers who feared being banned from Twitter.

Now, as a response to white supremacist violence, what might be sold as simple fixes to Section 230 will likely have similarly immense collateral consequences.

Look back to SESTA. Billed as a way to fight human trafficking, it codified an exception to Section 230 for prostitution. Within minutes of the Senate passing SESTA, websites that sex workers used to advertise, along with forums where they shared concerns about dangerous clients and workplace safety, began shutting down. The result, by many accounts, was an increase in exploitation and violence against this community. Ad sites once allowed many sex workers to work independently; now those who sought to control or profit from them could rush in to take advantage of the absence. People who were actually trafficked also lost the digital evidence trail that online ads and bad-client reports generate, which could have helped them hold their traffickers accountable. This was just one reason why both sex workers and people who have been trafficked opposed SESTA.

Whether SESTA has decreased trafficking—allegedly, the whole point—remains to be seen. What is known is that the number of child sex-trafficking cases prosecuted in the U.S. has dropped considerably since SESTA became law. The only certainty is that trafficking cases involving the ad site Backpage have gone down—but SESTA didn’t kill off Backpage. The Department of Justice knocked backpage.com offline, seizing its servers and other assets, before SESTA was signed into law.


This is not the first time in American history that racist terror and new technology have converged. Jessie Daniels, a sociologist who has studied white supremacy extensively, looks back to the early part of the 20th century, when the Ku Klux Klan saw opportunity in motion pictures. “Capitalizing on this new technology, the KKK created film companies and produced their own feature films, with titles like The Toll of Justice (1923) and The Traitor Within (1924), screening them at outdoor events, churches, and schools,” wrote Daniels. “By the middle of the 1920s, the Klan had an estimated five million members. This growth was aided by White supremacists’ recognition of the opportunity to use the new technology of motion pictures to spread their message.” In this vein, Daniels calls the alt-right “innovation opportunists.”

In the face of profound legal challenges, very powerful new media, and daily threats to people of color, immigrants, Muslims (and anyone else Trump may have targeted with his Twitter feed), what can be done? One potential answer, according to Daniels, lies with Virginia v. Black. In that 2003 case, the Supreme Court established that laws prohibiting burning crosses “with the intent to intimidate” were not a violation of the First Amendment. “Burning a cross in the United States is inextricably intertwined with the history of the Ku Klux Klan,” the Court wrote, “which, following its formation in 1866, imposed a reign of terror throughout the South, whipping, threatening, and murdering blacks, southern whites who disagreed with the Klan, and ‘carpetbagger’ northern whites.”

The Court clarified that “intent to intimidate” is the relevant test as to whether or not cross burning is protected expression: “While cross burning does not inevitably convey a message of intimidation, often the cross burner intends that the recipients of the message fear for their lives. And when a cross burning is used to intimidate, few if any messages are more powerful.” Some lawyers see this decision as a way to deal with the online aspects of terrorism and revenge porn.

For caution and guidance, it is also worth turning to one of the authors of Section 230, now-Senator Ron Wyden, who has been clear for more than 20 years about what removing this law would mean. “Tech companies certainly need to continue to be far more vigorous about identifying, fingerprinting and blocking content and individuals who incite hate and violence,” he said in a statement in March. “If politicians want to restrict the First Amendment or eliminate the tools with which much of the world communicates in real time they should understand they are also taking away the tools that bear witness to government brutality, war crimes, corporate lawlessness, and incidents of racial bias.”

Wyden’s statement continued, as if anticipating this moment, “So often in the wake of horrible events politicians grasp for knee-jerk responses that won’t solve real problems, and may even make them worse. Focusing on restricting speech only deflects from the core rot of white supremacy that our country and the world needs to address.” And the senator added this, in a statement to The New Republic: “Right now politicians are desperate to blame anything besides Republicans’ blockade of any and all gun safety legislation for the spate of mass shootings. Video games and social media didn’t cause this violence, and neither did Section 230.”

White supremacy, not a carved-up amendment to a 23-year-old law, is the fundamental problem. Daniels, the sociologist, is hopeful politicians won’t just turn to easy fixes, and that this moment could guide the nation to a more measured response. And that’s going to require platforms to consider much more challenging questions—and, for all our sakes, to do that considering and questioning in public. As Daniels told The New Republic on Tuesday, the day the House Homeland Security Committee called on 8chan’s owner to testify before Congress, “The question becomes, what’s a burning cross in the digital era?”

