From Hugh Stephens Blog:
My wife and I had just been visiting our daughter in her new home when we turned on the car radio. It was an interview on CBC with Andy Parker, whose daughter Alison had been murdered, live on TV, by a deranged former employee, back in 2015. The killer recorded and later uploaded video of Parker’s death to the internet, in addition to the live broadcast of the tragedy. The radio story was about the trials of a father who was being trolled by hate-mongers and conspiracy theorists, and about his ongoing efforts to get the videos of his daughter’s murder taken down by YouTube.
. . . .
One wonders why a corporation of this size and influence, one with such reach and the power to influence people’s lives for the better, doesn’t get it. When Parker first learned that videos of Alison’s death were circulating on YouTube, he contacted them and was informed that according to the company’s “moment of death” policy, the content could be removed. There is an online form available that states:
“If you’ve identified content showing your family member during moment of death or critical injury, and you wish to request removal of this content, please fill in the information below. We carefully review each request, but please note that we take public interest and newsworthiness into account when determining if content will be removed. Once you’ve submitted your report, we’ll investigate and take action as appropriate.”
So far, so good. But then Parker found out that he would have to flag each and every posting of the atrocity in order to get YouTube to act. Videos taken down today could be up again tomorrow, posted by people ranging from conspiracy theorists to plain vindictive sociopaths. YouTube refused to institute a blanket ban on the video, even though it had the technical means to do so. Moreover the algorithms that recommend content to viewers continue to serve up content related to the video. In frustration, Parker is now bringing a lawsuit against YouTube.
One has to ask why YouTube could not take the necessary steps to police its own content. Under pressure from copyright owners it has instituted a system of sorts that will take down all videos of a proven copyrighted work. While the system is unsatisfactory to many, at least there is a functioning system to take down copyright-infringing works, as YouTube is required to do under the DMCA in order to keep its safe harbour. And there is other content that YouTube is required by law to block, such as child pornography and sex trafficking, and by and large it manages to do so. In addition, there are other forms of undesirable content that the platforms, YouTube among them, ought to block as a matter of common sense, but here they do a poor job. Facebook’s slow-off-the-mark response to block the dissemination of the filmed violence against the mosque and worshippers in Christchurch, New Zealand, is but one example, as is the ongoing issue of hate speech and incitement to violence and terrorism as witnessed on the website 8Chan.
What really upsets Mr. Parker is that not only does YouTube require him to constantly police its site to identify postings of his daughter’s death (just as copyright owners have to spend time notifying YouTube of infractions, though some of this is automated through Content ID), but the clicks the video attracts also enable YouTube to monetize it.
Link to the rest at Hugh Stephens Blog
PG understands that digital fingerprinting has reached the point where it can identify specific digital content – video, audio, ebooks, etc. – with a high degree of accuracy, and that at least some providers of this service do so in a way that can’t be defeated by altering the file’s digital characteristics – its hash, length, additions, deletions, etc. He understands that Audible Magic is a major player in digital fingerprinting.
Undoubtedly, licensing digital fingerprinting software and solutions costs money, but, particularly in situations like the one described in the OP, PG would think that YouTube might take action, if only to avoid future legislative or regulatory supervision over the company’s policies and business practices.
Many indie authors understand that the Digital Millennium Copyright Act allows them to send take-down notices when someone offers their book online without the author’s consent. However, one of the significant benefits the DMCA provided to online information providers like YouTube, Google, Facebook, etc., was freedom from claims by copyright holders if those providers published copyright-protected content, so long as they responded to takedown notices with reasonable speed.
For reasons that seemed sensible at the time (1998 for the US version of the DMCA), the onus of discovering copyright infringement was placed on the owners of the copyright, with no requirement that the online information provider notify the copyright holder or make it easy for the copyright holder to discover the publication of infringing copies on YouTube, etc.
PG will note that the fast-moving pace of innovation and change typical of the online world is a terrible match for the ponderous and sometimes witless process of getting legislation passed in the US and elsewhere.
Despite that unfortunate fact, PG suggests an amendment to the DMCA that permits efficient and accurate technical methods of identifying copyrighted works to be used by creators as and when they become commercially available. This would include methods for creating digital fingerprints or other tamper-resistant means of identifying protected works.
When such methods become reasonably available, online information providers would be required to check uploaded content against databases of digital fingerprints generally published or made available by third parties for the purpose of identifying infringing uses. Alternatively, copyright owners could submit digital fingerprints in a standard form directly to the online information providers so such organizations could create and host such databases themselves.
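The upload-time check PG describes can be sketched in a few lines. This is a minimal, hypothetical illustration of the control flow only: the function and database names are invented, and the stand-in fingerprint function is a plain cryptographic hash, which (unlike a real perceptual fingerprint) would be defeated by any re-encoding of the file.

```python
import hashlib

# Hypothetical registry of fingerprints submitted by rights holders
# (or recorded at takedown time). In practice this would be a shared
# third-party database, not an in-memory set.
blocked_fingerprints = set()

def fingerprint(media_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint.

    A real system derives the fingerprint from perceptual features
    (spectrogram peaks, keyframes), so re-encoding or trimming the
    file does not change it. A plain SHA-256 hash, used here only to
    keep the sketch runnable, has no such robustness.
    """
    return hashlib.sha256(media_bytes).hexdigest()

def register_takedown(media_bytes: bytes) -> None:
    """Record the fingerprint of content that has been taken down."""
    blocked_fingerprints.add(fingerprint(media_bytes))

def screen_upload(media_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches
    previously taken-down content."""
    return fingerprint(media_bytes) not in blocked_fingerprints
```

Once a work’s fingerprint is registered, every subsequent upload of matching content is refused automatically, instead of the rights holder having to flag each re-posting by hand.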
5 thoughts on “Google Is Monetizing Human Tragedy: Why Aren’t They Held Accountable?”
Does a digital fingerprint have to be placed into a file, or is it developed from the existing file content?
My understanding (which may be faulty) is that a digital fingerprint is separate from the original file.
Of its video fingerprinting, Audible Magic says:
Positive identification rates exceed 99% with false positive rates of less than 10⁻⁶. Accurate identification occurs with clips as small as 5 seconds. Precision is within 25ms, which is frame accurate for video media. Identification is possible with most audio and video formats and codecs, when the original media has been transcoded or manipulated, or when audio is captured via a microphone with background noise.
The company describes its process for fingerprinting audio files as follows:
Content identification is based on the perceptual characteristics of the audio, which allows it to accurately identify content across file formats, codecs, bit rates, and compression techniques. This approach is highly accurate and requires no dependence on metadata, watermarks or file hashes. It is also immune to many typical transformations, compression techniques and background noise.
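The key idea behind this robustness can be shown with a toy example. This is not Audible Magic’s actual algorithm, just a teaching sketch: it quantizes a signal’s coarse energy envelope into a bit string, so two versions of the same content that differ by gain changes and low-level noise (a crude stand-in for transcoding) still produce matching fingerprints, even though their bytes – and hence any file hash – differ completely.

```python
import math
import random

def energy_fingerprint(samples, frames=16):
    """Quantize the signal's energy envelope into a bit string.

    Each bit records whether a frame's mean absolute amplitude is
    above or below the overall mean - a (very) crude perceptual
    feature that survives small distortions of the waveform.
    """
    n = len(samples) // frames
    energies = [sum(abs(s) for s in samples[i * n:(i + 1) * n]) / n
                for i in range(frames)]
    mean = sum(energies) / frames
    return ''.join('1' if e > mean else '0' for e in energies)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# An amplitude-modulated tone standing in for "the original media".
original = [math.sin(0.05 * t) * (1 + 0.5 * math.sin(0.001 * t))
            for t in range(16000)]

# Simulate lossy re-encoding: a gain change plus small noise.
random.seed(0)
transcoded = [0.8 * s + random.uniform(-0.01, 0.01) for s in original]

fp1 = energy_fingerprint(original)
fp2 = energy_fingerprint(transcoded)
# fp1 and fp2 agree in (nearly) every bit, because the comparison is
# relative to the signal's own envelope, not to its raw byte values.
```

A matcher would then accept any upload whose fingerprint is within a small Hamming distance of a registered one, which is why trivial edits to the file do not evade detection the way they evade hash-based comparison.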
“Google Is Monetizing Human Tragedy: Why Aren’t They Held Accountable?”
So’s every large TV so-called ‘news’ channel, why are we even talking about Google?
Why are they doing it? Because it sells advertising – in other words, ‘any’ news that gets blasted at you is – wait for it – monetized.
As to making take-downs even easier or forcing Google to somehow do it; just consider how it could be misused by those that wish to. DMCA is bad enough because there’s no penalty for false take-downs. As for an automatic magical Google take-down tool? Forget it, there are too many ways to make any number of minor edits to something so it no longer looks like the file they’ve been told to take down.
Err, no. Fingerprinting of files has come a long way since simple checksums and hashing. PG noted the Audible Magic system.
The money angle is not valid, either, to my mind. Even if they have to license the algorithm patents (it would surprise me if they do not own many of the patents themselves) – Google, Amazon, Facebook, etc. have the deep pockets to do so. They just don’t have the legal liability to provide the motivation.
I would add one thing to Passive Guy’s comments. The solution he outlines is perfectly good for IP material (although I do think the onus should be on the distributor, not the creator, unless it becomes really inexpensive). However, it does nothing for the poor man in the OP. What needs to be added in any potential legislation is a requirement for the distributor to take a fingerprint of the material that is taken down and ban any attempt to upload a file with that fingerprint for perpetuity (or until the takedown is successfully challenged).
I listen to an appalling number of podcasts, many on YouTube. Most of the podcasters live in terror of YouTube’s various content algorithms, which will demonetize them for mild swear words, words that may be used in violent content even if not used in violent content, and words that may in some context be sexual. One poor guy changed “asphalt” to “gravel” because the word started with “ass.” And still YouTube takes away their income. So, YouTube claiming their inability to patrol their content is absolute nonsense.