“The Internet’s Not Written in Pencil, It’s Written in Ink” … Yet Content Removal Can Be Done on a Worldwide Basis, Says AG Szpunar


From IP Kat:

When it comes to content removal in the context of an injunction, how is this to be done in order to comply with the prohibition of a general monitoring obligation, as per Article 15 of the E-commerce Directive?

This, in a nutshell, is the issue at stake in Facebook, C-18/18, a referral for a preliminary ruling from the Austrian Supreme Court made in the context of national proceedings concerning defamatory comments published on Facebook.

Yesterday, Advocate General (AG) Szpunar delivered his Opinion, which opens with a quote from The Social Network (the film about the beginning of Facebook): “The internet’s not written in pencil, it’s written in ink”. Indeed, as the AG effectively summed up, this case concerns:
whether a host which operates an online social network platform may be required to delete, with the help of a metaphorical ink eraser, certain content placed online by users of that platform.

. . . .

In his Opinion, AG Szpunar advised the Court of Justice of the European Union (CJEU) to rule that Article 15 of the E-commerce Directive does not preclude a host provider from being ordered – by means of an injunction – to seek and identify, among all the information disseminated by users of its service, information identical to the information that has been found to be illegal by the court that issued that injunction. With regard to equivalent information, the host provider’s duty to search for and identify such information applies only to information disseminated by the user who disseminated the illegal information. To this end, the effects of the relevant injunction must be clear, precise and foreseeable, and the authority that issues such an injunction must also take into account and balance the different fundamental rights at stake, as well as comply with the principle of proportionality.

In relation to the territorial scope of an injunction, the AG noted that the E-commerce Directive is silent on this point. Hence, an injunction can also impose removal of information on a worldwide basis.

. . . .

The background national proceedings relate to an application for an injunction that an Austrian politician sought against Facebook, following the latter’s failure to remove information posted by a user containing disparaging comments relating to the politician. The Vienna Commercial Court issued the requested injunction and ordered Facebook to remove the relevant content. Facebook complied with the injunction, but only disabled access to the content in Austria.

On appeal, the Vienna Higher Regional Court upheld the order made at first instance as regards identical allegations, and dismissed Facebook’s request that the injunction be limited to Austria. However, that court ordered that the removal of equivalent content should only be carried out in relation to content notified by the applicant to Facebook.

. . . .

These are the questions referred to the highest EU court:
(1) Does Article 15(1) of Directive [2000/31] generally preclude any of the obligations listed below of a host provider which has not expeditiously removed illegal information, specifically not just this illegal information within the meaning of Article 14(1)(a) of [that] directive, but also other identically worded items of information:

(a) worldwide?
(b) in the relevant Member State?
(c) of the relevant user worldwide?
(d) of the relevant user in the relevant Member State?

(2) In so far as Question 1 is answered in the negative: Does this also apply in each case for information with an equivalent meaning?
(3) Does this also apply for information with an equivalent meaning as soon as the operator has become aware of this circumstance?

. . . .

Facebook may be ordered to:

  • Seek and remove information identical to the information characterized as illegal when this is also disseminated by other users of the platform;
  • Seek and remove information equivalent to the information characterized as illegal only when this is disseminated by the user who disseminated said information. Holding otherwise, and extending Facebook’s obligation to information disseminated by other users, would entail a general monitoring on the side of the provider (that could no longer be considered neutral), and would also fail to achieve a fair balance of different rights and interests.

Link to the rest at IP Kat

PG says this matter illustrates the substantial tension between freedom of speech and defamation online. It also illustrates the substantial difficulty of providing a means of returning an individual’s reputation to a state which it enjoyed (rightly or wrongly) prior to the publication of defamatory information.

PG is not an expert on EU laws relating to freedom of speech either on or offline.

In the US, however, speaking generally, in the first instance, an individual is free to say anything she/he desires about another individual. That’s basic First Amendment stuff.

Most speech is protected by the First Amendment to the United States Constitution:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

Defamatory speech is, however, an exception to this protected status of speech. “Speech” doesn’t mean only vocal statements to others, but also applies to writings, including writings that are widely disseminated.

Nolo.com provides a good high-level summary:

“Defamation of character” is a catch-all term for any statement that hurts someone’s reputation. Written defamation is called “libel,” while spoken defamation is called “slander.” Defamation is not a crime, but it is a “tort” (a civil wrong, rather than a criminal wrong). A person who has been defamed can sue the person who did the defaming for damages.

Defamation law tries to balance competing interests: On the one hand, people should not ruin others’ lives by telling lies about them; but on the other hand, people should be able to speak freely without fear of litigation over every insult, disagreement, or mistake. Political and social disagreement is important in a free society, and we obviously don’t all share the same opinions or beliefs. For instance, political opponents often reach opposite conclusions from the same facts, and editorial cartoonists often exaggerate facts to make their point.

Link to the rest at Nolo

Facebook certainly has the ability to remove a specific posting that is found to be defamatory, and it regularly removes postings that violate its terms of service or Community Standards (although PG understands FB is more likely to deactivate a Facebook account than remove only a handful of messages).

However, in the US, there are different defamation standards for a “public figure” (elected officials, movie and sports stars, etc.) vs. a private citizen.

Generally speaking, a public figure has to prove actual malice – that the speaker/writer knew the statements were false, or acted with reckless disregard for whether the defamatory statement was true.

A private citizen need not prove any malicious intention or reckless disregard of the truth, but only negligence on the part of the speaker/writer in failing to consider or confirm whether a statement was false prior to publication.

Here’s a portion of Facebook’s Community Standards:

We distinguish between public figures and private individuals because we want to allow discussion, which often includes critical commentary of people who are featured in the news or who have a large public audience. For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment. For private individuals, our protection goes further: we remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity. We recognize that bullying and harassment can have more of an emotional impact on minors, which is why our policies provide heightened protection for users between the ages of 13 and 18.

PG isn’t certain whether Facebook has the ability to search all messages or other parts of its content on a world-wide basis to locate and remove specific words about an individual, however.

According to some quick and dirty online research, there are about 3.5 billion social media users around the world.

If each of these users posted on Facebook an average of once per week, Facebook would have to screen 182 billion posts per year.

If Facebook ran its search for defamatory information once per week, objectionable posts might be publicly available on Facebook for up to 7 days.

PG doesn’t know if Facebook can screen its posts by the date the posts were created or not. Additionally, PG doesn’t know if Facebook can screen new posts only or if Facebook can scan new and modified posts in a single or related operation.

In any case, performing a phrase search on a large database consisting of all Facebook postings is a very large job and would require a lot of computing power.

If 100 politicians complained and Facebook was required to search for defamatory language for each one, you can see how giant this task would become.
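PG’s back-of-the-envelope numbers can be checked with a quick calculation. The per-week posting rate, the 3.5 billion user figure and the 100-politician scenario are the post’s own illustrative assumptions, not actual Facebook data:

```python
# Rough scale of the screening task PG describes.
# All inputs are the post's illustrative assumptions.
users = 3_500_000_000          # ~3.5 billion social media users worldwide
posts_per_user_per_week = 1    # assumed average posting rate
weeks_per_year = 52

posts_per_year = users * posts_per_user_per_week * weeks_per_year
print(posts_per_year)          # 182_000_000_000 -> 182 billion posts

# One phrase search per injunction: 100 complaining politicians means
# 100 separate passes over the same corpus.
injunctions = 100
checks_per_year = posts_per_year * injunctions
print(checks_per_year)         # 18_200_000_000_000 -> 18.2 trillion checks
```

Even before accounting for modified variants of each phrase, the workload scales linearly with both the posting volume and the number of outstanding injunctions.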

If online defamers posted a new defamation, or a version of a prior defamation modified just enough to evade Facebook’s identical-language searches, and the politicians wanted Facebook to also search for all the modified versions of the defamatory language, PG says you could begin to take up a larger and larger portion of the world’s computing capacity.

PG thinks it wouldn’t be very difficult for someone to create a defamation message assembly program that would allow a user to enter the name of the politician or other person they wished to defame, and would thereafter generate a wide range of defamatory messages different enough from one another to evade any identical-word search Facebook was obligated to undertake.

For example:

Joe Blow is a crook.
Joe Blow is a racketeer.
Joe Blow is a rogue.
Joe Blow is a swindler.
Joe Blow is a villain.
Joe Blow is a shyster.

Congressman Joe Blow is a crook.
Congressman Joe Blow is a racketeer.
Congressman Joe Blow is a rogue.
Congressman Joe Blow is a swindler.
Congressman Joe Blow is a villain.
Congressman Joe Blow is a shyster.

Illinois Congressman Joe Blow is a crook.
Illinois Congressman Joe Blow is a racketeer.
Illinois Congressman Joe Blow is a rogue.
Illinois Congressman Joe Blow is a swindler.
Illinois Congressman Joe Blow is a villain.
Illinois Congressman Joe Blow is a shyster.

Illinois Congressman Joe Blow is a big crook.
Illinois Congressman Joe Blow is a big racketeer.
Illinois Congressman Joe Blow is a big rogue.
Illinois Congressman Joe Blow is a big swindler.
Illinois Congressman Joe Blow is a big villain.
Illinois Congressman Joe Blow is a big shyster.

On the other hand, the victim of defamation would not want Facebook deleting posts like the following:

Joe Blow is not a crook.
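The sample variants above are just the cross product of a few interchangeable parts, which is trivial to automate. A sketch, using hypothetical word lists mirroring PG’s examples, also illustrates the last point: an exact-match filter keyed to one variant misses all the others, while a looser substring filter would wrongly flag the exculpatory “Joe Blow is not a crook.”

```python
from itertools import product

# Hypothetical template parts mirroring PG's examples.
prefixes = ["", "Congressman ", "Illinois Congressman "]
intensifiers = ["", "big "]
epithets = ["crook", "racketeer", "rogue", "swindler", "villain", "shyster"]

variants = [f"{p}Joe Blow is a {i}{e}."
            for p, i, e in product(prefixes, intensifiers, epithets)]
print(len(variants))  # 3 * 2 * 6 = 36 distinct messages from tiny word lists

# An exact-match filter keyed to one variant catches only that one ...
banned = "Joe Blow is a crook."
print(variants.count(banned))  # 1 of 36

# ... while a looser substring filter on "a crook" also flags the
# post a defamation victim would want left alone.
print("a crook" in "Joe Blow is not a crook.")  # True
```

Each added word list multiplies the variant count, which is why PG expects the search burden to grow so quickly once evasion starts.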


6 thoughts on ““The Internet’s Not Written in Pencil, It’s Written in Ink” … Yet Content Removal Can Be Done on a Worldwide Basis, Says AG Szpunar”

  1. One issue that I rarely see brought up is that of permitting one group of people to control what is seen by all people around the world.

    We might agree that the postings about this Austrian politician are falsehoods and therefore should be removed. However – what is to prevent a pliable court from demanding that truths be removed? A court in Red China could easily demand that any references to “Tiananmen Massacre” be removed – a Russian court could demand removal of references to “Holodomor,” a Turkish court could demand removal of “Armenian Genocide,” etc.

    Or, closer to home for most PG readers, a sufficiently pliable US court could have references to “Monica’s dress” or “Grab them by the…” suppressed. (One could hope that such would be overturned on appeal – but not in the foreign cases I listed – and not before the desired suppression of public discourse was achieved, as in the Citizens United case.)

  2. I dislike when a major online media presence decides to mute someone, relying upon their market share to cause a person to lose the ability to communicate as effectively as others.

    That said, this might be an exception. I also dislike, strongly, one party telling another party how they MUST manage their service. Maybe Facebook should wrap these easily bruised politicians in a blanket so they can’t see or respond to their constituents, or be hurt by their words.
    I’m sure these incumbent public figures don’t need Facebook. Surely no one would miss their messages. (Unfortunately, they probably don’t. If anyone can get their message out, without relying upon Facebook and others, it would be those with the reins of government.)

    In any case. If I were Facebook, I’d be looking for ways to deep six the careers of these meddlesome politicians. Some way to leverage what I had to advantage new blood over the old.

    • “I dislike when a major online media presence decides to mute someone, relying upon their market share to cause a person to lose the ability to communicate as effectively as others.”

      And that’s where they made their mistake because if they can mute what they don’t like then there’s no reason they can’t mute what any government tells them to mute.

  3. Another problematic example would be…

    I do not agree with the proposition that Joe Blow is a crook.

  4. I suspect the offending post would trigger thousands of defamatory posts about the defamed. Or maybe just thousands saying, “Joe Blow is not an idiot.”
