EU and Article 13: the Dystopia That Never Was and Never Will Be


From The Trichordist:

The “Declaration of the Independence of Cyberspace” published in 1996 by John Perry Barlow begins with the words “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.” One reading of this text entirely rejects the possibility that processes of making and enforcing collectively binding decisions – political processes – apply on the Internet. Another possible reading sees the Internet as a public space governed by rules that must be established through democratic process while also holding that certain sub-spaces belong to the private rather than the public sphere. The distinction between public and private affairs, res publicae and res privata, is essential for the functioning of social spaces. The concept of the “res publicae” as “space concerning us all” led – and not only etymologically – to the idea of the republic as a form of statehood and, later, as a legitimate space for democratic policymaking.

On the Internet, this essential separation of private and public space has been utterly undermined, and the dividing lines between public and private spaces are becoming ever more blurred. We now have public spaces lacking in enforcement mechanisms and transparency and private spaces inadequately protected from surveillance and the misuse of data. Data protection is one obvious field this conflict is playing out on, and copyright is another.

The new EU Directive on Copyright seeks to establish democratic rules governing the public dissemination of works. Its detractors have not only been vociferous – they have also resorted to misleading forms of framing. The concepts of upload filters, censorship machines and link taxes have been injected into the discussion. They are based on false premises.

. . . .

What campaigners against copyright reform term “upload filters” are not invariably filters with a blocking function; they can be simple identification systems. Content can be scanned at the time of uploading to compare it to patterns from other known content. Such a system could, for example, recognize Aloe Blacc’s retro-soul hit “I Need a Dollar.” Such software systems can be compared to dictation software capable of identifying the spoken words in audio files. At this point in time, systems that can identify music tracks on the basis of moderately noisy audio signals can be programmed as coursework projects by fourth-semester students drawing on open-source code libraries. Stylizing such systems as prohibitively expensive or as a kind of “alien technology” underestimates both the dystopian potential of advanced pattern-recognition systems (in common parlance: artificial intelligence) in surveillance software and similar use cases and the feasibility of programming legitimate and helpful systems. The music discovery app “Shazam,” to take a specific example, was created by a startup with only a handful of developers and a modest budget and is now available on millions of smartphones and tablets – for free. The myth that only tech giants can afford such systems is false, as the examples of Shazam and of enterprises like Audible Magic show. Identifying works is a basic prerequisite for a reformed copyright regime, and large platforms will not be able to avoid doing so. Without an identification process in place, the use of licensed works cannot be matched to license holders. Such systems are, however, not filters.
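To put the coursework claim in perspective, here is a minimal sketch of the “constellation map” style of audio fingerprinting that Shazam popularized, written against the open-source numpy and scipy libraries. The window sizes, thresholds, and function names are illustrative assumptions made for this sketch, not any vendor’s actual pipeline.

```python
# A minimal sketch of constellation-map audio fingerprinting: find prominent
# spectrogram peaks, pair nearby peaks, and hash each pair. Parameters and
# names are illustrative assumptions, not a real product's implementation.
import hashlib
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def fingerprint(samples: np.ndarray, rate: int, fan_out: int = 5) -> set:
    """Return a set of hashes describing spectral peak pairs in an audio clip."""
    # Short-time spectrum of the signal.
    freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=1024, noverlap=512)
    log_sxx = np.log1p(sxx)

    # Keep only local maxima that also stand out from the overall energy.
    local_max = maximum_filter(log_sxx, size=20) == log_sxx
    strong = log_sxx > np.percentile(log_sxx, 99)
    peak_f, peak_t = np.where(local_max & strong)

    # Pair each peak with the next few peaks in time and hash the pair.
    order = np.argsort(peak_t)
    peak_f, peak_t = peak_f[order], peak_t[order]
    hashes = set()
    for i in range(len(peak_t)):
        for j in range(i + 1, min(i + 1 + fan_out, len(peak_t))):
            dt = peak_t[j] - peak_t[i]
            key = f"{peak_f[i]}|{peak_f[j]}|{dt}".encode()
            hashes.add(hashlib.sha1(key).hexdigest()[:16])
    return hashes

def match_score(query: set, reference: set) -> float:
    """Fraction of the query's hashes that also occur in a reference track."""
    return len(query & reference) / max(len(query), 1)
```

Matching an upload then amounts to comparing its hash set against each reference fingerprint and flagging scores above some threshold; a production system would keep the hashes in an inverted index rather than compare sets pairwise, but the underlying signal processing is no more exotic than this.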

. . . .

The principal argument of critics intent on frustrating digital copyright reforms that had already appeared to be on the home stretch is their charge that the disproportionate blocking of uploads would represent a wholesale assault on freedom of speech or, indeed, a form of censorship. Here, too, it is necessary to look more closely at the feasibility and potential of available options for monitoring uploads – and especially to consider the degree of efficiency that can be achieved by linking human and automated monitoring. In a first step, identification systems could automatically block secure matches or allow them to pass by comparing them against data supplied by collecting societies. Licensed content could readily be uploaded and its use would be electronically registered. Collecting societies would distribute license revenue raised to originators and artists. Non-licensed uses could automatically be blocked.
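Read this way, the “first step” is essentially a three-way routing decision. A minimal sketch in Python, assuming a hypothetical identification result and a set of licensed work identifiers supplied by collecting societies; the class names, fields, and confidence threshold are all illustrative, not anything prescribed by the Directive or used by a real platform:

```python
# Illustrative three-way routing of an upload, based on an identification
# match and licensing data supplied by collecting societies.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Decision(Enum):
    PUBLISH = auto()               # no confident match: treat as original content
    PUBLISH_AND_REGISTER = auto()  # licensed work: publish and log the use for royalties
    BLOCK = auto()                 # confident match with no licence on file

@dataclass
class Match:
    work_id: str        # identifier of the matched reference work
    confidence: float   # 0.0 .. 1.0, as reported by the identification system

def route_upload(match: Optional[Match], licensed_works: set,
                 threshold: float = 0.9) -> Decision:
    """Decide what happens to an upload given an identification result."""
    if match is None or match.confidence < threshold:
        return Decision.PUBLISH
    if match.work_id in licensed_works:
        return Decision.PUBLISH_AND_REGISTER
    return Decision.BLOCK
```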

. . . .

Humans can recognize parodies or incidental uses, such as purely decorative uses of works, that do not constitute breaches of copyright.

The process of analysis could be simplified further by uploaders stating the context of use at the time works are uploaded. Notes such as “This video contains a parody and/or uses a copyrighted work for decorative purposes” could be helpful to analysts. The Network Enforcement Act (NetzDG) in Germany provides a good example of how automatic recognition and human analysis can work in tandem to analyze vast volumes of information. A few hundred people in Germany are currently tasked with deciding whether statements made on Facebook constitute incitement to hatred and violence against certain groups or are otherwise in breach of community rules. These judgments are significantly more complex than detecting impermissible uses of copyrighted works.
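One way such a declaration could feed into the process: an upload that matches a protected work but carries a parody or decorative-use note is queued for a human analyst instead of being blocked automatically. A small illustrative sketch, with hypothetical flag names and a stand-in queue:

```python
# How an uploader-declared context could interact with automatic matching:
# a confident, unlicensed match is queued for human review when the uploader
# has flagged a parody or decorative/incidental use, and blocked otherwise.
# All names and flag values are assumptions made for this sketch.
from collections import deque
from typing import Optional

ACCEPTED_FLAGS = {"parody", "decorative", "incidental"}
review_queue: deque = deque()   # stand-in for a real moderation queue

def handle_unlicensed_match(work_id: str, declared_context: Optional[str]) -> str:
    """Choose between automatic blocking and human review for an unlicensed match."""
    if declared_context in ACCEPTED_FLAGS:
        review_queue.append((work_id, declared_context))  # a human analyst decides
        return "pending_review"
    return "blocked"

# Example: an upload matching a known track, flagged by the uploader as a parody.
print(handle_unlicensed_match("track:i-need-a-dollar", "parody"))  # -> pending_review
```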

. . . .

Being obliged to implement human monitoring will, of course, impose certain demands on platforms. But those most affected will be the platforms with the largest number of uploads. These major platforms will have the highest personnel requirements because they can host content of almost every kind: music, texts, video etc. Protecting sites like a small photo forum will be much simpler. If only a modest number of uploads is involved, the forum operator can easily check them personally at the end of the working day. In that case, uploaders will simply have to wait for a brief period for their content to appear online. Or operators can opt to engage a service center like Acamar instead of adding these checks to their own workloads. Efficient monitoring is possible.

Link to the rest at The Trichordist

PG understands and sympathizes with the concerns of copyright owners about improper use of their property.

However, not every online use of copyrighted material represents a loss of income to the copyright owner. Had there been a price tag attached to the use of such material, it could have been omitted entirely or a substitute without a price tag could have been selected.

While some uses of copyrighted material can be harmful, a great many such uses may be viewed by only 25 people, who are unlikely to be paying consumers of that material.

Under US copyright law, whether a use of copyrighted material qualifies as protected fair use is often not a clear-cut matter. Reasonable people can disagree about whether a use is covered by fair use or not.

A significant number of owners of large catalogs of copyrighted material are extremely aggressive in their interpretation of what is protected by those copyrights. Disney and Mickey Mouse are but one example.

A couple of statements in the OP raised further concerns:

  • If only a modest number of uploads is involved, the forum operator can easily check them personally at the end of the working day.
  • In that case, uploaders will simply have to wait for a brief period for their content to appear online.

Exactly how is a forum operator who runs a small online site and supports it by working a day job supposed to conduct an analysis of, say, 30 uploads to determine whether they may be subject to anyone’s copyright and, if they are, whether the use of the works was fair use or not? If a photo shows up in the uploads, how is the operator to determine who the creator of the photo is/was? If a photo has been modified by the person posting it, how is the operator to determine who the creator of the original photo was?

As for “uploaders” waiting “for a brief period for their content to appear online”, PG suggests such delays may well adversely impact the quality of the online discussion. If an original post triggers a lot of responses, but those responses are held in moderation, are visitors to the online forum going to assume the post is irrelevant or of no interest and perhaps leave the forum for good?

The killer among the breezy thoughts in the OP is, “Being obliged to implement human monitoring will, of course, impose certain demands on platforms.”

It will impose a serious and significant demand on platforms. If one were designing regulations to substantially reduce the amount of online dialogue about a wide range of subjects and the number of places where that dialogue occurs, imposing “certain demands” on those who sponsor such communities would be a perfect way to make anything other than standard mainstream destinations and opinions go away and to rob the Internet of much of its innovative energy and independent thought.

If one were designing a system to ensure corporate control of online interaction, one might certainly do so on the pretense of protecting the words and pictures of copyright holders.

8 thoughts on “EU and Article 13: the Dystopia That Never Was and Never Will Be”

  1. “Minor detail”: any content ID matching system needs constant access to the original content. Music licenses are accessed through a handful of compulsory license organizations.

    Books aren’t. There is neither compulsory licensing nor a unified licensing agency. The same is true for video. Games. News.

    In fact, it is only true for…music.

    Details, details.

    Who cares how easy or hard the matching software might be to code! It’s the backend database that matters!

    Yeah, let’s see:
    1- force a need to license the matching content
    2- refuse to license said content
    3- it’s 1968 all over again! No online content to compete against!

    • “3- it’s 1968 all over again! No online content to compete against!”

      At least in the EU. 😉

      “Content can be scanned at the time of uploading to compare it to patterns from other known content.”

      Really? The EU thinks that’s even possible? Google might be the closest to getting there but not even they have ‘every’ piece of content. (And if they did – does that mean all EU internet companies will now need to pay Google to check to see if their content is anywhere else?)

      And if ‘copies’ of any content are found – can Google tell who actually owns the rights to them? I only ask after hearing of NASA pictures being taken down because someone else used them and claimed them as their own.

      If they were hoping to turn the EU internet into a black hole where nothing can reach the light of day I think they’re on the right track.

      And I thank the EU for making it easier for me to sell my ebooks by hiding/blocking their own. 😉

  2. In a first step, identification systems could automatically block secure matches or allow them to pass by comparing them against data supplied by collecting societies.

    That is indeed a filter.

    • The first part of any big lie is to change what words mean. Then you can use them to form your lie: people see the words as saying one thing while the new meaning of those same words spells out something else.

  3. Even content owners can’t reliably identify legally uploaded content. They have had one arm issue takedown orders (or attempt to via the courts) against content put up by another arm.

    If the rights owners can’t tell if something was put up with permission (completely ignoring the fair use issue), how should the poor site owner be able to tell?

    And then there are the Prenda Law folks who put up content for the purpose of suing people who download it.
