Supreme Court to Weigh if YouTube, Twitter, Facebook Are Liable for Users’ Content

From The Wall Street Journal:

The Supreme Court has agreed to decide whether social-media platforms can be held liable for terrorist propaganda uploaded by users, opening a new challenge to the broad legal immunity provided to internet companies by the law known as Section 230.

The court on Monday took up a set of cases in which families of terrorism victims allege Twitter, Facebook and YouTube bear some responsibility for attacks by Islamic State, based on content posted on those sites.

Section 230 of the Communications Decency Act has come under intense scrutiny from lawmakers in recent years, but this is the first time the Supreme Court has moved to weigh in on the foundational internet law.

The eventual ruling could have repercussions for businesses and internet users worldwide, said Anupam Chander, a professor at Georgetown University Law Center.

At issue are the “algorithmic processes for information dissemination that all internet platforms use,” Mr. Chander said.

The court agreed to take up Gonzalez v. Google, an appeal by the family of Nohemi Gonzalez, a young woman killed in an ISIS attack in Paris in 2015. Ms. Gonzalez’s family alleges YouTube, a subsidiary of Google owner Alphabet Inc., aided ISIS by recommending the terrorist group’s videos to users.

The court also agreed to hear a similar appeal, Twitter Inc. v. Taamneh, brought by family members of Nawras Alassaf, who was killed in an ISIS attack at an Istanbul nightclub in 2017. Mr. Alassaf’s relatives allege Twitter, Google and Facebook parent company Meta Platforms Inc. all provided material support to ISIS and are “the vehicle of choice in spreading propaganda.”

Lawyers for Google, Twitter and Facebook have said in court filings that they have made extensive efforts to remove ISIS content and that there is no direct causal link between the websites and the Paris and Istanbul attacks.

. . . .

Section 230 helped build the modern-day internet. The statute acts as a shield, saying that internet companies generally aren’t liable for harmful content users post on their sites. Section 230 also allows companies to remove content they deem objectionable without liability, as long as they act in good faith.

In the Gonzalez case, the plaintiffs alleged Google knowingly allowed its algorithms to recommend and target ISIS recruitment videos to users, allowing the group to spread its message.

The case raises the question of whether Section 230 grants immunity for recommendations made by algorithms or applies only to editorial decisions, such as removing content, made by representatives of internet companies.

“[W]hether Section 230 applies to these algorithm-generated recommendations is of enormous practical importance,” the family argued in their petition to the high court. “Interactive computer services constantly direct such recommendations, in one form or another, at virtually every adult and child in the United States who uses social media.”

Link to the rest at The Wall Street Journal

4 thoughts on “Supreme Court to Weigh if YouTube, Twitter, Facebook Are Liable for Users’ Content”

  1. I wonder how the composition of the current and highly controversial Supreme Court has affected the timing of these suits – since the bias of this Court is so visible and broad.

    As an ordinary citizen, I don’t trust them to handle anything any more. Sad. Another thing destroyed by a previous administration.

    • What is the bias of Sotomayor, Kagan, and Jackson?

      But how about some fun with alternative history?

      McConnell allows a vote on Obama’s 2016 nomination of Garland, and Garland is approved.
      Clinton wins in 2016.
      Clinton appoints liberal justices to replace Ginsburg and Kennedy.
      Who would trust such a court to handle anything any more? The bias would be visible and broad. Sad.

    • Meanwhile, I, as an ordinary citizen, trust this court far more than previously “unbiased” SCOTUSes that gave us such legal travesties as Kelo v. New London.

  2. As usual, the WSJ oversimplifies matters.

    These cases have nothing to do with “liability.” That’s a complicated, intensely fact-driven inquiry. That’s the legal “thingy” that comes down to “which side’s lawyer tells a more-convincing story to the jury?”

    These cases are about absolute immunity — that is, whether they should be cut off before anyone looks at the facts at all. (And it’s not a “qualified immunity” because it’s a statutory provision not turning on whether a “reasonable person” would have known there might be something wrong.)

    That sounds like a too-subtle distinction, but it matters. There’s a yuuuuuuuuuuuuuuuuuuuge difference, going forward (and the guy best known for “yuuuuuuuuuuuuuuuuuuuuge” has just filed a defamation case against CNN that will be affected by the result), in whether one can even get the opportunity to present a case that “ISP X hurt me by excessive/lack of ‘censorship’ on its services.” If there’s immunity, no court will ever hear that argument; will ever rule on what kind of evidence is admissible to evaluate that argument; will ever charge a jury, or take a jury verdict, on the merits of a particular instance. Instead, everything will be cut off at the pleading stage: Plaintiffs will file suit, defendants will say “That’s a very sad story but we’re immune,” and the court will agree with the defendants and end the lawsuit.

    All of which is actually very bad for business, bad for the ‘net, and bad for everyone else except intentional bad actors… not just because it will result in kneejerk overreactions later, but because immunity means never having to say you’re sorry, and that’s what’s bad because it means mistakes (intentional or otherwise) get perpetuated.
