From Hugh Stephens Blog:
My wife and I had just been visiting our daughter in her new home when we turned on the car radio. It was an interview on CBC with Andy Parker, whose daughter Alison had been murdered, live on TV, by a deranged former employee, back in 2015. The killer recorded video of Alison's death and later uploaded it to the internet, in addition to the live broadcast of the tragedy. The radio story was about the trials of a father who was being trolled by hate-mongers and conspiracy theorists, and about his ongoing efforts to get the videos of his daughter's murder taken down by YouTube.
. . . .
One wonders why a corporation of this size and influence, one with such reach and the power to influence people's lives for the better, doesn't get it. When Parker first learned that videos of Alison's death were circulating on YouTube, he contacted the company and was informed that according to its "moment of death" policy, the content could be removed. There is an online form available that states:
“If you’ve identified content showing your family member during moment of death or critical injury, and you wish to request removal of this content, please fill in the information below. We carefully review each request, but please note that we take public interest and newsworthiness into account when determining if content will be removed. Once you’ve submitted your report, we’ll investigate and take action as appropriate.”
So far, so good. But then Parker found out that he would have to flag each and every posting of the atrocity in order to get YouTube to act. Videos taken down today could be up again tomorrow, posted by people ranging from conspiracy theorists to plain vindictive sociopaths. YouTube refused to institute a blanket ban on the video, even though it had the technical means to do so. Moreover, the algorithms that recommend content to viewers continue to serve up content related to the video. In frustration, Parker is now bringing a lawsuit against YouTube.
One has to ask why YouTube could not take the necessary steps to police its own content. Under pressure from copyright owners, it has instituted a system of sorts that will take down all videos of a proven copyrighted work. While the system is unsatisfactory to many, at least there is a functioning system to take down copyright-infringing works, as YouTube is required to do under the DMCA in order to keep its safe harbour. There is other content that YouTube is required by law to block, such as child pornography and sex trafficking, and by and large it manages to do so. In addition, there are other forms of undesirable content that the platforms, YouTube among them, ought to block as a matter of common sense, but here they do a poor job. Facebook's slow-off-the-mark response in blocking the dissemination of the filmed violence against the mosque and worshippers in Christchurch, New Zealand, is but one example, as is the ongoing issue of hate speech and incitement to violence and terrorism as witnessed on the website 8chan.
What really upsets Mr. Parker is that YouTube not only requires him to constantly police its site to identify postings of his daughter's death (just as copyright owners must spend time notifying YouTube of infringements, though some of this is automated through Content ID), but also monetizes the hateful video through the clicks it attracts.
Link to the rest at Hugh Stephens Blog
PG understands that digital fingerprinting has reached the point where it can identify specific digital content – video, audio, ebooks, etc. – with a high degree of accuracy, and that at least some providers of this service do so in a way that cannot be defeated by altering the digital characteristics of the file – its hash, its length, additions, deletions, and so on. He understands that Audible Magic is a major player in digital fingerprinting.
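The distinction between a file hash and a content fingerprint is worth illustrating. A cryptographic hash changes completely when even a single byte of a file changes, which is why naive hash-based blocking is trivially defeated by re-encoding or trimming a video. A content fingerprint is designed to survive such edits. The sketch below is a deliberately simplified toy illustration of that idea – it is not Audible Magic's method or any real fingerprinting scheme:

```python
import hashlib

def crypto_hash(data: bytes) -> str:
    # A cryptographic hash: any one-byte change produces a completely
    # different digest, so it only matches bit-identical files.
    return hashlib.sha256(data).hexdigest()

def toy_fingerprint(samples: bytes, buckets: int = 8) -> tuple:
    # Toy "perceptual" fingerprint: coarse, quantized averages of
    # equal-sized chunks. Small local edits leave it unchanged.
    chunk = max(1, len(samples) // buckets)
    return tuple(sum(samples[i:i + chunk]) // chunk // 32
                 for i in range(0, chunk * buckets, chunk))

original = bytes(range(256)) * 4     # stand-in for media content
altered = bytearray(original)
altered[130] ^= 0xFF                 # flip one byte, as a re-upload might

# The cryptographic hashes no longer match...
print(crypto_hash(original) == crypto_hash(bytes(altered)))   # False
# ...but the coarse fingerprint survives the edit.
print(toy_fingerprint(original) == toy_fingerprint(bytes(altered)))  # True
```

Real systems fingerprint perceptual features (audio spectra, video frames) rather than raw bytes, but the principle is the same: match what the content looks and sounds like, not its exact bits.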
Undoubtedly, licensing digital fingerprinting software and solutions costs money, but, particularly in situations like the one described in the OP, PG would think that YouTube might take action, if only to avoid future legislative or regulatory supervision over the company’s policies and business practices.
Many indie authors understand that the Digital Millennium Copyright Act allows them to send take-down notices when someone offers their book online without the author's consent. However, one of the significant benefits the DMCA provided to online information providers like YouTube, Google, and Facebook was freedom from claims by copyright holders if those providers published copyright-protected content, so long as they responded to takedown notices with reasonable speed.
For reasons that seemed sensible at the time (1998 for the US version of the DMCA), the onus of discovering copyright infringement was placed on the copyright owners, with no requirement that the online information provider notify the copyright holder, or make it easy for the copyright holder to discover, that infringing copies had been published on YouTube, etc.
PG will note that the fast-moving pace of innovation and change typical of the online world is a terrible match for the ponderous and sometimes witless process of getting legislation passed in the US and elsewhere.
Despite that unfortunate fact, PG suggests an amendment to the DMCA that permits efficient and accurate technical methods of identifying copyrighted works to be used by creators as and when they become commercially available. This would include methods for creating digital fingerprints or other tamper-resistant means of identifying protected works.
When such methods become reasonably available, online information providers would be required to check uploaded content against databases of digital fingerprints generally published or made available by third parties for the purpose of identifying infringing uses. Alternatively, copyright owners could submit digital fingerprints in a standard form directly to the online information providers so such organizations could create and host such databases themselves.
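The upload check PG describes could, in principle, look something like the sketch below: rights holders register fingerprints, and each upload's fingerprint is compared against the registry with some tolerance for minor alterations. Everything here is a hypothetical illustration – the registry class, the tolerance-based match, and the work identifiers are all invented for the example, not drawn from any real platform's system:

```python
def near_match(fp_a: tuple, fp_b: tuple, tolerance: int = 1) -> bool:
    # Two fingerprints "match" if they differ in at most `tolerance`
    # positions -- a stand-in for a real similarity threshold.
    return len(fp_a) == len(fp_b) and \
        sum(a != b for a, b in zip(fp_a, fp_b)) <= tolerance

class FingerprintRegistry:
    """Hypothetical registry of fingerprints submitted by rights holders."""

    def __init__(self):
        self._entries = []  # list of (fingerprint, work_id) pairs

    def register(self, fingerprint, work_id: str) -> None:
        self._entries.append((tuple(fingerprint), work_id))

    def match(self, fingerprint):
        # Linear scan for the sketch; a production system would need
        # an indexed lookup to handle millions of entries.
        for fp, work_id in self._entries:
            if near_match(fp, tuple(fingerprint)):
                return work_id
        return None

registry = FingerprintRegistry()
registry.register((1, 5, 1, 5), "protected-work-001")

# An upload whose fingerprint nearly matches is flagged before publication;
# an unrelated upload passes through.
print(registry.match((1, 5, 2, 5)))  # protected-work-001
print(registry.match((9, 9, 9, 9)))  # None
```

The policy question in the OP – who hosts the database, the platform or a third party – does not change this basic shape; it only changes who runs the `match` step and who is accountable when it is skipped.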