PG’s Thoughts (such as they are)

‘Restoring the Promise’ Review: High Cost, Low Yield

25 June 2019

Not exactly about books, but PG would bet that over 80% of those who read the books written by regular visitors to TPV (excepting authors of children’s and YA books) are college graduates.

From The Wall Street Journal:

We are at the end of an era in American higher education. It is an era that began in the decades after the Civil War, when colleges and universities gradually stopped being preparatory schools for ministers and lawyers and embraced the ideals of research and academic professionalism. It reached full bloom after World War II, when the spigots of public funding were opened in full, and eventually became an overpriced caricature of itself, bloated by a mix of irrelevance and complacency and facing declining enrollments and a contracting market. No one has better explained the economics of this decline—and its broad cultural effects—than Richard Vedder.

Mr. Vedder is an academic lifer—a Ph.D. from the University of Illinois and a long career teaching economic history at Ohio University. In 2004 he brought out “Going Broke by Degree: Why College Costs Too Much,” and in 2005 he was appointed to the Commission on the Future of Higher Education, a group convened by Margaret Spellings, the U.S. education secretary. “Restoring the Promise: Higher Education in America” is a summary of the arguments he has been making since then as the Cassandra of American colleges and universities. Despite the optimistic tilt of the book’s title, Mr. Vedder has little to offer in the way of comfort.

As late as 1940, American higher education was a modest affair. Less than 5% of adults held a college degree, and the collegiate population amounted to about 1.5 million students. This scale changed with the first federal subsidies, Mr. Vedder notes, beginning in 1944 with the Servicemen’s Readjustment Act (the “GI Bill”). Within three years, veterans accounted for 49% of all undergraduate enrollment—some 2.2 million students. Having earned degrees themselves, the veterans expected their own children to do likewise.

Such expectations were supported by still further subsidies, through the National Defense Education Act (1958) and the Higher Education Act of 1965. By the 1970s, there would be 12 million students in the American college and university system; by 2017, there would be 20 million. Meanwhile, more and more federal research dollars poured into campus budgets—reaching some $50 billion in direct funding by 2016—and set off infrastructure binges. To pay for them, as Mr. Vedder documents, tuition and fees vaulted upward, while the federal programs that were intended to ease the financial burden—especially low-interest student loans—only enticed institutions to jack up their prices still higher and spend the increased revenue on useless but attention-getting flimflam (from lavish facilities to outsize athletic programs). At Mr. Vedder’s alma mater, Northwestern, tuition rose from 16% of median family income in 1958 to almost 70% in 2016. Over time, armies of administrators wrested the direction of their institutions away from the hands of faculties and trustees.

Today a college degree has become so common that 30% of adult Americans hold one. Its role as a bridge to middle-class success is assumed—though bourgeois comfort is rather hard to achieve these days with a B.A. in English literature or a degree in, say, sociology. The modern economy, says Mr. Vedder, simply doesn’t possess the number of jobs commensurate with the expectations of all the degree-holders.

The over-educated barista is one of the standing jokes of American society, but the laughter hasn’t eased the loan burden that the barista probably took on to get his degree. Mr. Vedder says that student loans have injected a kind of social acid into a generation of young adults who, over time, manifest a “decline in household formation, birth rates, and . . . the purchase of homes.” Pajama Boy was born, and took up residence in his parents’ basement.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

And a quote from economist Herbert Stein:

What economists know seems to consist entirely of a list of things that cannot go on forever . . . . But if it can’t go on forever it will stop.

PG suspects that this practice may have become impolite or illegal, but when he was interviewing for his first job out of college (before he went to law school) one of the last questions he was asked by the final interviewer, the head of the department in which the job opening existed, was, “What were your SAT scores?”

Evidently PG’s answer was satisfactory because he was hired for the position despite having absolutely no training or education that might lead a reasonable person to conclude he was prepared for the specific tasks involved in carrying out his job responsibilities.

What the interviewer was trying to ascertain was whether PG might be smart enough to learn how to do the job if he was hired. (PG was, and received a promotion after about a year, but left the company when a better job beckoned.)

PG has read that the SAT and ACT tests (for visitors to TPV from outside of the United States, these are standardized tests required for entry into virtually any college or university in the country) are effectively proxies for IQ tests.

IQ tests were first developed during the early part of the 20th Century for the purpose of identifying intellectual disabilities in school children. During World War I, an intelligence test was devised to help quickly screen soldiers coming into the US Army for assignment to either basic training or officer training. (At the start of the war, the US ground forces included about 9,000 officers. By the end of the war, there were over 200,000 officers.)

After World War I, IQ testing became much more widespread in both education and business. Unfortunately, it also became entangled with the eugenics movement during the 1920s and 1930s.

As a general matter, there is a correlation between educational attainment and IQ: MDs, JDs, and PhDs have higher IQs on average than college graduates, who in turn have higher IQs than those who attended college but did not graduate, who in turn have higher average IQs than those who graduated from high school but received no additional education.

In this as in countless other things, correlation is not causation. There are plenty of people who possess the inherent intelligence and ability to become MDs, JDs and PhDs who choose not to pursue that educational/occupational path. Such individuals do not, of course, become less intelligent if they go in another direction. From personal experience, PG can attest that there is no shortage of attorneys who do stupid things.

A US Supreme Court case titled Griggs v. Duke Power Co., decided in 1971, effectively forbade employers from using arbitrary tests—such as those for measuring IQ or literacy—to evaluate an employee or a potential employee, a practice that some companies at the time were using as a way to get around rules that prohibited outright racial discrimination.

Griggs began when African American workers at the Duke Power Company in North Carolina sued the company because of a rule that required employees who wished to transfer between different departments to have a high-school diploma or pass an intelligence test.

By a unanimous decision, the Supreme Court held that the tests given by Duke Power were artificial and unnecessary and that the requirements for transfer had a disparate impact on African-Americans. Furthermore, the court ruled that, even if the motive for the requirements had nothing to do with racial discrimination, they were nonetheless discriminatory and therefore illegal. In its ruling, the Supreme Court held that employment tests must be “related to job performance.”

Griggs and resulting civil rights laws notwithstanding, prospective employers still want the best evidence available that a job applicant possesses the abilities (including intelligence) to succeed in the position that needs to be filled.

Given the regulatory environment in which employers operate, post-high school education is a common (and legal) requirement specified in a great many job descriptions. In the US business world, a bachelor’s or advanced degree is often a hard and fast must-have. Written or online job applications always include a section for the applicant to list undergraduate and post-graduate degrees and the institutions that granted such degree(s).

In addition to a degree, the identity of the college/university the applicant attended is often regarded as a proxy for the applicant’s intelligence and ability. The holder of a bachelor’s degree from Harvard will generally be assumed to be more intelligent than someone who graduated from Northeast Hickville State College and Welding School regardless of the personal abilities, talents, work ethic and raw intelligence of the latter.

So, back to the OP,

  • A college degree from an institution generally known for its selective nature is becoming more and more expensive because there is no indication that increased tuition and other costs will have any adverse impact on the number and general qualifications (intelligence) of its applicants; and
  • A college degree from some institution, high-quality or not-so-high-quality, as a proxy for intelligence, regardless of the field of study, is a requirement for obtaining a job with a reasonable salary or even getting a foot in the door at a very large number of employers; and
  • Government and other loans are available to any student who wishes to attend almost any college, regardless of a student’s field of study or ability to pay; and
  • As a general proposition under US bankruptcy laws, it is difficult or impossible to avoid the obligation to repay student loans, especially for recent college graduates or graduates who have obtained jobs, regardless of the amount of their current income.

PG wonders whether one of the ways to address this problem would be to permit employers to receive the results of an IQ test or a quasi-IQ test like the SAT or ACT from a job applicant without risking litigation or other penalties for doing so.

Memorial Day

27 May 2019

Repost from Memorial Day, 2013:

For readers outside the United States, today is Memorial Day in the US.

While for many, the holiday is only a long weekend marking the beginning of summer, Memorial Day, originally called Decoration Day because flowers were used to decorate gravesites, was established in 1868, following the American Civil War, to commemorate men and women who died while in military service.

PG took this photograph of the American military cemetery in rural Tuscany near Florence. Most soldiers buried there died in World War II, fighting in Italy.

Are You Self-Publishing Audio Books?

21 May 2019

From Just Publishing Advice:

It takes total concentration to read a book or an ebook. But with an audio book, a listener can multitask.

This is the key attraction for so many younger readers in particular, as it allows for the consumption of a book while driving, commuting, playing a game on a smartphone, knitting, or even grinding out the hours at work.

Audiobook popularity is growing, and according to recent statistics, audiobooks are now a multi-billion-dollar industry in the US alone.

. . . .

Another report estimates that one in ten readers now listens to audiobooks.

While the data offers only a limited insight into the market, it is easy to conclude that audio is the next logical step for self-publishing authors and small presses.

Ebook publishing is now the number one form of self-publishing. Many indie authors then take the next step and publish a paperback version.

. . . .

An audio version offers an opportunity for self-publishing authors to extend their sales potential, and at the same time, diversify revenue streams.

Well, only a little at present, as the retail market is dominated by Amazon Audible and Apple iTunes. However, this may change in the future.

. . . .

If you live in the US, you are in luck.

Amazon offers production and publishing through Audio Creation Exchange, ACX.

For authors outside of the US, things are not quite so easy.

. . . .

This is a very common complaint about Amazon and its US-centric approach, which creates so many hurdles for non-US self-publishers.

The following quote is taken from Amazon’s help topic regarding ACX.

At this time, ACX is open only to residents of the United States and United Kingdom who have a US or UK mailing address, and a valid US or UK Taxpayer Identification Number (TIN). For more information on Taxpayer Identification Numbers (TIN), please visit the IRS website. We hope to increase our availability to a more global audience in the future.

If you live in the UK, Amazon can help you, but you will need to have a TIN. If you are already publishing with KDP, you probably have one.

For the rest of the world, well, Amazon, as it so often does, leaves you out in the cold.

. . . .

There is a growing number of small presses and independent publishers who offer to produce and publish audiobooks.

Distribution is most often on Amazon Audible and iTunes.

Do your research and look for publishers who accept submissions or offer a production service using professional narrators and producers.

As with any decision to use a small publisher, be careful, do your background research and don’t rush into signing a contract until you are totally convinced it is a fair arrangement concerning your audio rights.

While some may charge you for the service, it is worth looking for a publisher that offers a revenue split. This is usually 50-50 of net audio royalty earnings.

It might seem a bit steep, but Amazon ACX offers between 20 and 40% net royalties, so 50-50 is not too bad.

Link to the rest at Just Publishing Advice

As with any publishing contract, PG suggests you check out the contract terms carefully before you enter into a publishing agreement for audiobooks.

Speaking generally (and, yes, there are a few exceptions), the traditional publishing industry has fallen into a bad habit (in PG’s persistently humble opinion) of using standard agreements that last longer than any other business contracts with which PG is familiar (and he has seen a lot).

He refers, of course, to publishing contracts that continue “for the full term of the copyright.”

Regular visitors to TPV will know that, in the United States, for works created after January 1, 1978, the full term of the copyright is the rest of the author’s life plus 70 years. Due to their participation in The Berne Convention (an international copyright treaty), the copyright laws of many other nations provide for copyright protections of similar durations — the author’s life plus 50 years is common.

PG can’t think of any other types of business agreements involving individuals that last for the life of one of the parties without any obvious exit opportunities. The long period of copyright protection was sold to the US Congress as a great boon to creators. However, under the terms of typical publishing contracts, the chief beneficiaries are corporate publishers.

While it is important for authors to read their publishing agreements thoroughly (yes, PG knows it’s not fun; he has read far more publishing agreements than you have or ever will and understands what it is like), if you are looking for a way to perform a quick, preliminary check for provisions that mean you will die before your publishing agreement does, search for phrases like:

  • “full term of the copyright”
  • “term”
  • “copyright”
  • “continue”

Those searches may help you immediately locate objectionable provisions that allow you to put the publisher into the reject pile without looking for other nasties. However, if the searches don’t disclose anything, you will most definitely have to read the whole thing. The quoted terms are not magic incantations which must be used. Other language can accomplish the same thing.
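
For authors comfortable with a little scripting, the quick check described above can be automated. Below is a minimal sketch in Python, assuming the agreement has already been saved as a plain text file; the file name and phrase list are illustrative only, and a clean result is a reason to keep reading, not a legal opinion.

    # Quick, preliminary scan of a publishing agreement for red-flag phrases.
    # The phrase list is illustrative, not exhaustive; other wording can reach
    # the same result, so a clean scan still means reading the whole contract.

    RED_FLAGS = [
        "full term of the copyright",
        "term of copyright",
        "in perpetuity",
        "life of the author",
    ]

    def flag_clauses(path):
        """Print each line of a plain-text contract containing a red-flag phrase."""
        with open(path, encoding="utf-8") as f:
            for number, line in enumerate(f, start=1):
                lowered = line.lower()
                if any(phrase in lowered for phrase in RED_FLAGS):
                    print(f"line {number}: {line.strip()}")

    if __name__ == "__main__":
        flag_clauses("publishing_agreement.txt")  # hypothetical file name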

Until the advent of ebooks, book publishing contracts used Out of Print clauses to give the author the ability to retrieve rights to his/her book if the publisher wasn’t doing anything with it.

With printed books, even dribs and drabs of sales would eventually deplete the publisher’s stock of physical books. At this point, the publisher would likely consider whether the cost it would pay for another printing of an author’s book was economically justified or not. If the publisher was concerned about ending up with a pile of unsold printed books in its warehouse for a long time, the publisher might decide not to print any more.

Once the publisher’s existing stock was sold, the book was out of print – it was not for sale in any normal trade channels. The author (or the author’s heirs) could then retrieve her/his rights to the book and do something else with them.

Of course, once an electronic file is created, an ebook costs the publisher nothing to offer for sale on Amazon or any other online bookstore with which PG is familiar.

The disk space necessary to store an individual epub or mobi file is essentially free for Amazon and it doesn’t charge anything to maintain the listing almost forever. (There may be a giant digital housecleaning in Seattle at some time in the distant future, but don’t count on it happening during your lifetime.) Print on demand hardcopy books are just another kind of file that’s stored on disk.

So, in 2019 and into the foreseeable future, an effectively infinite supply of an author’s ebooks is for sale, and the books are never “out of print.”

So, the traditional exit provision for an author – the out of print clause – remains in existence in almost all publishing contracts PG has reviewed, but it gives the author no practical way to get out of a publishing agreement, even one that has not paid more than $5.00 in annual royalties in over ten years.

 

Public Knowledge Wants to Solve the Misinformation Problem

9 May 2019

From The Illusion of More:

On Tuesday, Meredith Filak Rose of Public Knowledge posted a blog suggesting that a solution to rampant misinformation is to “bring libraries online.” Not surprisingly, she identifies copyright law as the barrier currently preventing access to quality information that could otherwise help solve the problem …

“High-quality, vetted, peer-reviewed secondary sources are, unfortunately, increasingly hard to come by, online or off. Scientific and medical research is frequently locked behind paywalls and in expensive journals; legal documents are stuck in the pay-per-page hell that is the PACER filing system; and digital-only information can be erased, placing it out of public reach for good (absent some industrious archivists).”

Really?  We’re just a few peer-reviewed papers away from addressing the social cancer of misinformation?

. . . .

The funny thing is that Rose does a pretty decent job of summing up how misinformation can be effectively deployed online, but her description could easily be the Public Knowledge Primer for Writing About Copyright Law:

Misinformation exploits this basic fact of human nature — that no one can be an expert in everything — by meeting people where they naturally are, and filling in the gaps in their knowledge with assertions that seem “plausible enough.” Sometimes, these assertions are misleading, false, or flatly self-serving.  In aggregate, these gap-fillers add up to construct a totally alternate reality whose politics, science, law, and history bear only a passing resemblance to our own.

. . . .

Having said all that, Meredith Rose’s article does not say anything categorically false. It is a sincere editorial whose main flaw is that it is sincerely naïve.  “…in the absence of accessible, high-quality, primary source information, it’s next to impossible to convince people that what they’ve been told isn’t true,” she writes.

Yeah. That psychological human frailty is not going to be cured by putting even more information online, regardless of how “good” it may be, or how copyright figures in the equation.  To the contrary, more information is exactly why we’re wandering in a landscape of free-range ignorance in the first place.

. . . .

Speaking as someone schooled in what we might call traditional liberal academia, I believe Rose reiterates a classically liberal, academic fallacy, which assumes that if just enough horses are led to just enough water, then reason based on empirical evidence will prevail over ignorance.  That’s not even true among the smartest horses who choose to drink. Humans tend to make decisions based on emotion more than information, and it is axiomatic that truth is in the eye of the beholder.

But if galloping bullshit is the disease, the catalyst causing it to spread is not copyright law keeping content off the internet, but the nature of the internet platforms themselves.  By democratizing information with a billion soapboxes it was inevitable that this would foster bespoke realities occupied by warrens of subcultures that inoculate themselves against counter-narratives (i.e. facts) with an assortment of talismanic phrases used to dismiss the peer-reviewed scientist, journalist, doctor, et al, as part of a conspiracy who “don’t want us to know the truth.”

Link to the rest at The Illusion of More

While PG didn’t particularly like the tone of the OP, if you’re going to have an open Internet and if you’re going to have freedom of speech, it is all but certain that some people who operate their own blogs, participate in online discussion groups, write for newspapers, appear on television, publish books, have a Twitter account, etc., etc., are going to communicate ideas that either are wrong or seem wrong.

Ever since cave persons of various genders first collected around an open fire to drink and talk, incorrect information has been passed from one person to at least one other person, then disseminated from there.

“If Rockie kills a brontosaurus and examines its entrails, he can tell whether it will rain in three days or not.”

Pretty soon, everyone is harassing Rockie to go dinosaur hunting so they could know whether to schedule the prom for next Thursday or not.

From that day until this, regardless of their political persuasion, someone is passing on false information, believing it to be the truth. Someone else is passing on false information for the greater good, knowing it is false. Someone else is creating false information because they have just discovered a great truth which isn’t.

A large majority of Americans regard Adolf Hitler and Nazism as an obvious and indisputable evil. However, this was not always so.

Charles Lindbergh was one of the greatest American heroes of the 1920s. He gained even more public stature and enormous public sympathy in 1932, when his 20-month-old son was kidnapped. The most prominent journalist of the period, H. L. Mencken, called the kidnapping and trial “the biggest story since the Resurrection.”

Responding to the kidnapping, the United States Congress passed the Federal Kidnapping Act, commonly called the “Lindbergh Law.” In the middle of the Great Depression, rewards equivalent to more than one million dollars in 2018 currency were offered for information leading to the safe return of the child.

A ransom of $50,000 (the equivalent of nearly $1 million today) was demanded for the safe return of the child and was paid. Unfortunately, the Lindbergh baby was killed before he could be found.

Back to the certainty of public opinion: in 1940, the America First Committee was established for the purpose of supporting Adolf Hitler and the Nazis by keeping the United States out of the war in Europe. It quickly gained more than 800,000 members, including a large number of prominent business figures. The pressure of the organization caused President Franklin Roosevelt to pledge that he would keep America out of war.

Lindbergh was greatly admired in Germany and, at the invitation of Hermann Göring, took a high-profile trip to Germany in 1936 where he was treated as a great hero and shown the highly-sophisticated airplanes developed for the German air force. Lindbergh was a high-profile visitor to the 1936 Olympic Games in Berlin, a huge Nazi propaganda exercise.

The visit was a press sensation with daily articles covering Lindbergh’s activities published in The New York Times. On his return, Lindbergh met with President Roosevelt to report on his observations and opinions. Lindbergh would return to Germany on two more occasions prior to the entry into the war by the United States.

Here’s a short video account of the America First movement and Lindbergh’s opposition to war with Germany from The Smithsonian.

Circling back to the OP, had the Internet existed in 1936, what would “high-quality, peer-reviewed” articles have said about Germany and America’s best path forward? What would prominent academics, the owners of major media conglomerates and other prominent world leaders, have posted about Hitler and his supporters?

Prior to the outbreak of hostilities with Germany and Japan, the New York Times, Christian Science Monitor, Chicago Tribune, New York Herald Tribune, Philadelphia Evening Bulletin and many more publications reported the great economic progress Hitler-led Germany was making as it pulled itself out of the Depression and downplayed the extent and nature of the nation’s attacks on the Jews. Indeed, in the view of many of these publications, Hitler was providing the West with important benefits by vigorously attacking Bolshevism and imprisoning Communist supporters.

In Britain, The Daily Mail was a strong supporter of Germany. Harold Harmsworth, the first Viscount Rothermere, was the founder of the Daily Mail and owned 14 other papers. His influence was on a par with Lord Beaverbrook’s.

Rothermere was a strong supporter of Mussolini’s version of fascism. “He is the greatest figure of the age,” Rothermere proclaimed in 1928. “Mussolini will probably dominate the history of the 20th century as Napoleon dominated that of the early 19th.”

“[The Nazis] represent the rebirth of Germany as a nation,” Rothermere wrote in the Mail. The election, he correctly prophesied, would come to be seen as “a landmark of this time.”

The Nazis’ “Jew-baiting,” Rothermere warned, was “a stupid survival of medieval prejudice.” Of course, he also added, the Jews had brought the Nazis’ displeasure on themselves, having shown “conspicuous political unwisdom since the war.”

Germany had been “falling under the control of alien elements,” Rothermere argued. There were, he claimed, 20 times as many Jews in government positions as there had been before the war.

“Israelites of international attachments were insinuating themselves into key positions in the German administrative machine,” he noted darkly. “It is from such abuses that Hitler has freed Germany.”

The Jews were not just a problem in Germany. The menace they posed was much more widespread, he felt.

“The Jews are everywhere, controlling everything,” Rothermere wrote in private correspondence.

See The Times of Israel for more.

Back to the “problem” with fake news on the Internet, PG suggests that the online disputes between right and left are a feature, not a bug, in a free society.

An Appeal to Authority (“experts agree,” “science says,” “academic publications clearly demonstrate”) is a classic logical fallacy.

Whether the proposal comes in the form of “bringing libraries online,” curating “high-quality, vetted, peer-reviewed secondary sources,” or “keeping content off the internet,” PG is very much a supporter of free and open dispute and argument as the best way of preserving the rights of all individuals, debunking fallacies and ensuring that no one group can control and limit the spread of information, whether fake news or real news.

The Golden Age of YouTube Is Over

13 April 2019

From The Verge:

The platform was built on the backs of independent creators, but now YouTube is abandoning them for more traditional content.

. . . .

Danny Philippou is mad.

He’s practically standing on top of his chair as his twin brother and fellow YouTube creator Michael stares on in amusement. Logan Paul, perhaps YouTube’s most notorious character, laughs on the other side of the desk that they’re all sitting around for an episode of his popular podcast Impaulsive. Anyone who’s watched the Philippous’ channel, RackaRacka, won’t be surprised by Danny’s antics. This is how he gets when he’s excited or angry. This time, he’s both.

“It’s not fair what they’re doing to us,” Danny yells. “It’s just not fair.”

Danny, like many other creators, is proclaiming the death of YouTube — or, at least, the YouTube that they grew up with. That YouTube seemed to welcome the wonderfully weird, innovative, and earnest, instead of turning them away in favor of late-night show clips and music videos.

The Philippou twins hover between stunt doubles and actors, with a penchant for the macabre. But YouTube, the platform where they built their audience base, doesn’t seem to want them anymore.

. . . .

The Philippous’ story is part of a long-brewing conflict between how creators view YouTube and how YouTube positions itself to advertisers and press. YouTube relies on creators to differentiate itself from streaming services like Netflix and Hulu, it tells creators it wants to promote their original content, and it hosts conferences dedicated to bettering the creator community. Those same creators often feel abandoned and confused about why their videos are buried in search results, don’t appear on the trending page, or are being quietly demonetized.

At the same time, YouTube’s pitch decks to advertisers increasingly seem to feature videos from household celebrity names, not creative amateurs. And the creators who have found the most success playing into the platform’s algorithms have all demonstrated profound errors in judgment, turning themselves into cultural villains instead of YouTube’s most cherished assets.

. . . .

YouTube was founded on the promise of creating a user-generated video platform, but it was something else that helped the site explode in popularity: piracy.

When Google bought YouTube in 2006 for $1.65 billion, the platform had to clean up its massive piracy problems. It was far too easy to watch anything and everything on YouTube, and movie studios, television conglomerates, and record labels were seething. Under Google, YouTube had to change. So YouTube’s executives focused on lifting up the very content the platform’s founders had designed it for: original videos.

The focus on creator culture defined YouTube culture from its earliest days. The platform was a stage for creators who didn’t quite fit into Hollywood’s restrictions.

. . . .

Between 2008 and 2011, the volume of videos uploaded to YouTube jumped from 10 hours every minute to 72 hours a minute. By 2011, YouTube had generated more than 1 trillion views; people were watching over 3 billion hours of video every month, and creators were earning real money via Google AdSense — a lot of money. Jenna Marbles was making more than six figures by late 2011. (In 2018, a select group of creators working within YouTube’s top-tier advertising platform would make more than $1 million a month.)

By 2012, creators like Kjellberg were leaving school or their jobs to focus on YouTube full-time. He told a Swedish news outlet that he was getting more than 2 million views a month, boasting just over 300,000 subscribers.

. . . .

Between 2011 and 2015, YouTube was a haven for comedians, filmmakers, writers, and performers who were able to make the work they wanted and earn money in the process. It gave birth to an entirely new culture that crossed over into the mainstream: Issa Rae’s Awkward Black Girl series would eventually lead to HBO’s Insecure. Creators like the Rooster Teeth team and Tyler Oakley went on tour to meet fans after generating massive followings online. YouTube had reached mainstream success, but in many ways, it still felt wide open. Anyone could still upload almost anything they wanted without much input from YouTube itself.

. . . .

Behind the scenes, things were changing. YouTube had begun tinkering with its algorithm to increase engagement and experimenting with ways to bring flashier, produced content to the platform to keep up with growing threats like Netflix.

In October 2012, YouTube announced that its algorithm had shifted to prefer videos with longer watch times over higher view counts. “This should benefit your channel if your videos drive more viewing time across YouTube,” the company wrote in a blog post to creators.

This meant viral videos like “David After Dentist” and “Charlie Bit My Finger,” which defined YouTube in its earliest days, weren’t going to be recommended as much as longer videos that kept people glued to the site. In response, the YouTube community began creating videos that were over 10 minutes in length as a way to try to appease the system.

. . . .

In 2011, YouTube invested $100 million into more than 50 “premium” channels from celebrities and news organizations, betting that adding Hollywood talent and authoritative news sources to the platform would drive up advertising revenue and expand YouTube to an even wider audience. It failed less than two years later, with what appeared to be a clear lesson: talent native to YouTube was far more popular than any big names from the outside.

. . . .

Then, suddenly, creators started encountering problems on the platform. In 2016, personalities like Philip DeFranco, comedians like Jesse Ridgway, and dozens of other popular creators started noticing that their videos were being demonetized, a term popularized by the community to indicate when something had triggered YouTube’s system to remove advertisements from a video, depriving them of revenue. No one was quite sure why, and it prompted complaints about bigger algorithm changes that appeared to be happening.

Kjellberg posted a video detailing how changes had dropped his viewership numbers. He’d been getting 30 percent of his traffic from YouTube’s suggested feed, but after the apparent algorithm update, the number fell to less than 1 percent. Kjellberg jokingly threatened to delete his channel as a result, which was enough to get YouTube to issue a statement denying that anything had changed. (The denial sidestepped questions of the algorithm specifically, and spoke instead to subscriber counts.)

These perceived, secretive changes instilled creators with a distrust of the platform. It also led to questions about their own self-worth and whether the energy they were spending on creating and editing videos — sometimes north of 80 hours a week — was worth it.

. . . .

YouTube was exerting more control over what users saw and what videos would make money. Once again, the community would adapt. But how it adapted was far more problematic than anyone would have guessed.

. . . .

By the beginning of 2017, YouTube was already battling some of its biggest problems in more than a decade. YouTube’s founders didn’t prepare for the onslaught of disturbing and dangerous content that comes from people being able to anonymously share videos without consequence. Add in a moderation team that couldn’t keep up with the 450 hours of video that were being uploaded every minute, and it was a house of cards waiting to fall.

YouTube had come under fire in Europe and the United States for letting extremists publish terrorism recruitment videos to its platform and for letting ads run on those videos. In response, YouTube outlined the steps it was taking to remove extremist content, and it told advertisers it would be careful about where their ads were placed. It highlighted many creators as a safe option.

But neither YouTube nor Google was prepared for what Felix “PewDiePie” Kjellberg — one of YouTube’s wealthiest independently made creators — would do.

. . . .

In mid-February 2017, The Wall Street Journal discovered an older video from Kjellberg that included him reacting to a sign held up by two men that said, “Death to all Jews.” The anti-Semitic comment was included in one of his “react” videos about Fiverr, made after he had pivoted to more of a variety channel instead of focusing just on games.

His video, along with reports of ads appearing on terrorist content, led to advertisers abandoning YouTube. Kjellberg was dropped from Disney’s Maker Studios, he lost his YouTube Red series, Scare PewDiePie, and he was removed from his spot in Google Preferred, the top-tier ad platform for YouTube’s most prominent creators.

“A lot of people loved the video and a lot of people didn’t, and it’s almost like two generations of people arguing if this is okay or not,” Kjellberg said in an 11-minute video about the situation. “I’m sorry for the words that I used, as I know they offended people, and I admit the joke itself went too far.”

The attention Kjellberg brought to YouTube kickstarted the first “adpocalypse,” a term popularized within the creator community that refers to YouTube aggressively demonetizing videos that might be problematic, in an effort to prevent companies from halting their ad spending.

Aggressively demonetizing videos would become YouTube’s go-to move.

. . . .

The January 2017 closure of Vine, a platform for looping six-second videos, left a number of creators and influencers without a platform, and many of those stars moved over to YouTube. David Dobrik, Liza Koshy, Lele Pons, Danny Gonzalez, and, of course, Jake and Logan Paul became instant successes on YouTube — even though many of them had started YouTube channels years before their success on Vine.

YouTube’s biggest front-facing stars began following in the footsteps of over-the-top, “bro” prank culture. (Think: Jackass but more extreme and hosted by attractive 20-somethings.) Logan Paul pretended to be shot and killed in front of young fans; Jake Paul rode dirt bikes into pools; David Dobrik’s friends jumped out of moving cars. The antics were dangerous, but they caught people’s attention.

. . . .

Jake and Logan Paul became the biggest stars of this new wave, performing dangerous stunts, putting shocking footage in their vlogs, and selling merchandise to their young audiences. Although they teetered on the edge of what was acceptable and what wasn’t, they never really crossed the line into creating totally reprehensible content.

. . . .

It wasn’t a sustainable form of entertainment, and it seemed like everyone understood that except for YouTube. The Paul brothers were on their way to burning out; all it would take was one grand mistake. Even critics of the Pauls, like Kjellberg, empathized with their position. Kjellberg, who faced controversy after controversy, spoke about feeling as though right or wrong ceased to exist when trying to keep up with the YouTube machine.

“The problem with being a YouTuber or an online entertainer is that you constantly have to outdo yourself,” Kjellberg said in a 2018 video. “I think a lot of people get swept up in that … that they have to keep outdoing themselves, and I think it’s a good reflection of what happened with Logan Paul. If you make videos every single day, it’s really tough to keep people interested and keep them coming back.”

Still, Logan Paul was small potatoes compared to YouTube’s bigger problems, including disturbing children’s content that had been discovered by The New York Times and more terrorism content surfacing on the site. Who cared about what two brothers from Ohio were doing? The breaking point would be when Logan Paul visited Japan.

. . . .

Logan Paul’s “suicide forest” video irrevocably changed YouTube.

In it, Paul and his friends tour Japan’s Aokigahara forest, where they encountered a man’s body. Based on the video, it appears that he had recently died by suicide. Instead of turning the camera off, Paul walks up to the body. He doesn’t stop there. He zooms in on the man’s hands and pockets. In post-production, Paul blurred the man’s face, but it’s hard to see the video as anything but an egregious gesture of disrespect.

Within hours of posting the video, Paul’s name began trending. Actors like Aaron Paul (no relation), influencers like Chrissy Teigen, and prominent YouTubers called out Paul for his atrocious behavior.

YouTube reacted with a familiar strategy: it imposed heavy restrictions on its Partner Program (which recognizes creators who can earn ad revenue on their videos), sharply limiting the number of videos that were monetized with ads. In a January 2018 blog post announcing the changes, Robert Kyncl, YouTube’s head of business, said the move would “allow us to significantly improve our ability to identify creators who contribute positively to the community,” adding that “these higher standards will also help us prevent potentially inappropriate videos from monetizing which can hurt revenue for everyone.”

. . . .

The only people who didn’t receive blame were YouTube executives themselves — something that commentators like Philip DeFranco took issue with after the controversy first occurred. “We’re talking about the biggest creator on YouTube posting a video that had over 6 million views, was trending on YouTube, that no doubt had to be flagged by tons of people,” DeFranco said.

“The only reason it was taken down is Logan or his team took it down, and YouTube didn’t do a damn thing. Part of the Logan Paul problem is that YouTube is either complicit or ignorant.”

. . . .

[B]y the middle of 2018, lifestyle vloggers like Carrie Crista, who has just under 40,000 subscribers, were proclaiming how the community felt: forgotten. “YouTube seems to have forgotten who made the platform what it is,” Crista told PR Week. In its attempt to compete with Netflix, Hulu, and Amazon, she said, YouTube is “pushing content creators away instead of inviting them to a social platform that encourages them to be creative in a way that other platforms can’t.”

Even people outside of YouTube saw what was happening. “YouTube is inevitably heading towards being like television, but they never told their creators this,” Jamie Cohen, a professor of new media at Molloy College, told USA Today in 2018.

By promoting videos that meet certain criteria, YouTube tips the scales in favor of organizations or creators — big ones, mostly — that can meet those standards. “Editing, creating thumbnails, it takes time,” Juliana Sabo, a creator with fewer than 1,000 subscribers, said in 2018 after the YouTube Partner Program changes. “You’re just prioritizing a very specific type of person — the type of person that has the time and money to churn out that content.”

Individual YouTube creators couldn’t keep up with the pace YouTube’s algorithm set. But traditional, mainstream outlets could: late-night shows began to dominate YouTube, along with music videos from major labels. The platform now looked the way it had when it started, but with the stamp of Hollywood approval.

. . . .

The RackaRacka brothers are tired.

“We loved it before when it was like, ‘Oh, you guys are doing something unique and different. Let’s help you guys so you can get views and get eyes on it,’” Danny says. “I’d love to go back to that. We have so many big, awesome ideas that we’d love to do, but there’s no point in doing it on YouTube.”

Link to the rest at The Verge

The OP is a very long article, and PG has excerpted more of it than he might have from an article on a different topic.

While reading the article, PG was struck by parallels between how dependent indie videographers were on YouTube and how dependent indie authors are on Amazon.

A year ago, PG doesn’t believe he would have had the same response. The amateurism and arrogance demonstrated by YouTube management in the OP contrast sharply with the maturity and steady hand at the top levels of Amazon. Amazon has not made many dumb mistakes. Amazon has also treated indie authors with respect and generosity beyond that shown by any other publisher/distributor/bookstore in the US (and probably elsewhere).

This is not to say Amazon is a perfect company or that it hasn’t made some mistakes, but Amazon has demonstrated good business judgment, done a pretty good job of fixing its errors and hasn’t changed the way it operates in a manner that has harmed indie authors in a serious way.

Obviously, Jeff Bezos’ attitudes, judgment and approach to dealing with others have imprinted themselves up and down the corporate hierarchy at Amazon. That sure hand on the corporate helm has caused PG to trust Amazon more than he does any other large tech company.

Additionally, Amazon has been leagues beyond any other organization in the book publishing and bookselling business in attracting smart adults as managers, making intelligent business decisions, treating partners well and managing the business as if it wanted long-term success as a publisher and bookseller (see, as only one example of business as usual in the publishing world, Barnes & Noble).

However.

PG admits his faith in Jeff Bezos’ solid judgment took a big hit with the disclosure of Bezos’ marital misconduct and divorce.

This struck him as an immature example of the runaway hubris that has brought down quite a few large companies, particularly in the tech world.

PG is old-fashioned in his belief that the behavior of a virtuous individual will manifest itself in all parts of that individual’s life. He understands the common explanation for such behavior – that a person can segment his life into business and personal spheres and continue to excel in public while making serious mistakes in private.

PG also understands that marriages can fail for a wide variety of reasons and that assigning blame for such failure (if there is blame to be assigned) is impossible for someone who is not privy to the personal lives of each party. That said, PG suggests that at least a separation, if not a divorce, would have been the more stand-up approach for a mature adult exercising good judgment when a marriage has declined to the point of breaking up.

A secret affair that is leaked to the press is not, in PG’s admittedly traditional eyes, up to the standards he has come to expect from Bezos. The general reaction PG has seen in the press leads PG to believe he is not alone in his opinion.

Apple Felt like a Totally Different Company Today

26 March 2019

From Fast Company:

While I sat inside the Steve Jobs Theater watching Big Bird talk to a hand puppet on the stage, I realized Apple was not the same company I knew not long ago.

No new devices were announced. There were no slides filled with impressive specs or performance metrics. No oohs and ahhs. No “one more thing.”

Yeah, yeah, I know: Apple, under CEO Tim Cook, is becoming a services company to account for flagging iPhone sales growth. What we saw today, at Apple’s “It’s show time” event in Cupertino–maybe for the first time–is the public face of that new company.

Part of the reason the presentation felt so different is because it was as much about other companies as it was about Apple. It was about Apple putting an Apple wrapper on a bunch of content and services made by third parties.

. . . .

All these announcements came in the first hour of the presentation. With that much time left I wondered if Apple had some tricks up its sleeve after all. But no: It had simply reserved an entire hour to talk about its original video content, which it has branded “TV+,” and which won’t be available until next fall.

What followed was a string of Hollywood people talking about the shows and movies they’re making for Apple. The uneasy mix of Hollywood and Silicon Valley cultures was on full display. Reese Witherspoon, Jennifer Aniston, and Steve Carell were there to boost a show they’re making about TV news personalities, but they came off like they were trapped under glass.

Steven Spielberg came out to a warm welcome and talked about his reboot of the Amazing Stories series for television. A dramatic video came on about how we desperately need more conversation among people with different viewpoints. Then the lights went down, and when they came up Oprah Winfrey was there.

. . . .

The question is the company’s identity. At Apple events we’re used to seeing people like Kevin Lynch (Apple Watch) and Craig Federighi (iOS) who you know live and breathe core “Designed in California” products.

Today the company made a big deal of announcing a bunch of third-party content and services, with only passing references to the hardware that made it famous. Should Apple really identify itself with products that its own creative hand never really gets close to?

Link to the rest at Fast Company

TPV isn’t a tech blog, but PG has worked with a variety of tech companies in the past and, although he’s a Windows guy, has always admired Apple’s sense of mission and used iPhones almost forever.

The successor of a talented and creative CEO has a tough job in Silicon Valley. After a quick mental review, PG thinks far more successors at significant tech companies have failed than have succeeded.

Steve Jobs took Apple through some perilous times, but he always pushed the envelope and announced interesting new products. Under Jobs, Apple certainly had some product failures, but it never seemed like a company that was resorting to lame strategies. When things got tough, Apple thought big.

As the OP reflected, after Apple stumbled with the pricing and features of its latest iPhones, yesterday’s announcement seemed to say, “We’ve got to do something! Let’s copy what other companies are doing, but use Apple branding. Apple has a great brand that we need to exploit.”

PG suggests that brand equity is a precious commodity that needs to be preserved and cultivated with impressive new accomplishments, fostering the assurance that customers can continue to receive great benefits from the company and its products. It needs to feel cool by the standards of its industry.

In the tech world, where real technology talent is always in short supply, newly-graduated engineers from top universities are often attracted to employers who promise the opportunity to work on the cutting edge.

For all of Tesla’s financial ups and downs, and for all the antics of Elon Musk, its frenetic CEO, engineers working there feel like they’re inventing the future. Amazon has felt like a serious innovator for a long time and can attract tech and marketing talent based upon that reputation and the opportunity to work on something new and different. (PG hopes Bezos’ marital problems aren’t Amazon’s version of Jobs’ pancreatic cancer.)

If Apple’s reputation becomes, “The company is not what it used to be and shows no signs of turning around,” adverse consequences will appear from many different directions.

 

How Printers Can Capitalize on Book Publishing Trends in 2019

20 March 2019

From Printing Impressions:

As technology continues to disrupt and transform the book market, publishers are responding by changing business models that affect how media is produced, distributed and consumed in the book publishing industry. As dramatic technology shifts continue, book publishers, authors and printers need to adapt to benefit from new opportunities.

With the start of another year, book publishers and manufacturers are evaluating what the future might hold.

. . . .

For those in the printing industry, Walter highlighted that there was modest growth in print book sales in 2018 with volume climbing 1.3% — in a year where there were no major blockbuster bestsellers like “Fifty Shades of Grey” or “Harry Potter.” Walter expects the market to remain relatively flat but stable. The key is the migration to more and more digitally printed books.

. . . .

The Book Industry Study Group (BISG) is a leading book industry trade association that offers standardized industry best practices, research and information. O’Leary said one of the biggest issues facing the book market is the management of the supply chain and shared results of BISG’s year-end “State of the Supply Chain” survey. O’Leary highlighted that the three top priorities respondents were focused on in 2019 when it came to supply chain management were:

  1. Making data-driven decisions
  2. Timely, high-quality metadata to improve discovery and sales (At its most basic level, metadata is how people find your book. This includes the ISBN, keywords, the author name, pub date, BISAC code, reviews, author bios and more.)
  3. Keeping up with new technologies to improve workflow and supply chain management

. . . .

IBPA CEO Angela Bole explained that three publishing models continue to exist: traditional publishing; self-publishing, where authors can be assisted or unassisted by vanity press organizations; and hybrid or partner publishing.

Bole says that in 2019, the industry will experience a rise in hybrid publishing — a gray zone between traditional publishing and self-publishing that is still being defined. Bole described hybrid publishers as behaving like traditional publishing companies in all respects, except that they publish books using an author-subsidized business model, as opposed to financing all costs themselves, and, in exchange, return a higher-than-standard share of sales proceeds to the author. In other words, a hybrid publisher makes income from a combination of publishing services and book sales. Hybrid publishers provide a range of services for the author and are expected to meet criteria such as:

  • Vet submissions.
  • Publish under its own imprint(s) and ISBN(s).
  • Publish to industry standards.
  • Ensure editorial, design and production quality.
  • Pursue and manage a range of publishing rights.
  • Provide distribution services.
  • Demonstrate respectable sales.
  • Pay authors.

Link to the rest at Printing Impressions

PG won’t spend time venting, but he will suggest that traditional publishing is already author-subsidized in that authors receive only a small percentage of the money generated by their books while publishers receive a significantly larger share.

EU and Article 13: the Dystopia That Never Was and Never Will Be

15 March 2019

From The Trichordist:

The “Declaration of the Independence of Cyberspace,” published in 1996 by John Perry Barlow, begins with the words “Governments of the Industrial World, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.” One reading of this text entirely rejects the possibility that processes of making and enforcing collectively binding decisions – political processes – apply on the Internet. Another possible reading sees the Internet as a public space governed by rules that must be established through democratic process while also holding that certain sub-spaces belong to the private rather than the public sphere. The distinction between public and private affairs, res publicae and res privata, is essential for the functioning of social spaces. The concept of the “res publicae” as “space concerning us all” led – and not only etymologically – to the idea of the republic as a form of statehood and, later, as a legitimate space for democratic policymaking.

On the Internet, this essential separation of private and public space has been utterly undermined, and the dividing lines between public and private spaces are becoming ever more blurred. We now have public spaces lacking in enforcement mechanisms and transparency and private spaces inadequately protected from surveillance and the misuse of data. Data protection is one obvious field this conflict is playing out on, and copyright is another.

The new EU Directive on Copyright seeks to establish democratic rules governing the public dissemination of works. Its detractors have not only been vociferous – they have also resorted to misleading forms of framing. The concepts of upload filters, censorship machines and link taxes have been injected into the discussion. They are based on false premises.

. . . .

What campaigners against copyright reform term “upload filters” are not invariably filters with a blocking function; they can be simple identification systems. Content can be scanned at the time of uploading to compare it to patterns from other known content. Such a system could, for example, recognize Aloe Blacc’s retro-soul hit “I Need a Dollar.” Such software systems can be compared to dictation software capable of identifying the spoken words in audio files. At this point in time, systems that can identify music tracks on the basis of moderately noisy audio signals can be programmed as coursework projects by fourth-semester students drawing on open-source code libraries. Stylizing such systems as prohibitively expensive or as a kind of “alien technology” underestimates both the dystopian potential of advanced pattern-recognition systems (in common parlance: artificial intelligence) in surveillance software and similar use cases and the feasibility of programming legitimate and helpful systems. The music discovery app “Shazam,” to take a specific example, was created by a startup with only a handful of developers and a modest budget and is now available on millions of smartphones and tablets – for free. The myth that only tech giants can afford such systems is false, as the example of Shazam or of enterprises like Audible Magic shows. Identifying works is a basic prerequisite for a reformed copyright regime, and large platforms will not be able to avoid doing so. Without an identification process in place, the use of licensed works cannot be matched to license holders. Such systems are, however, not filters.
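
To give a concrete sense of why such identification is within reach of a student project, here is a minimal sketch of the spectral-peak (“landmark”) approach used by many open-source fingerprinting tools, written in Python with numpy and scipy. The parameters and the tiny in-memory catalog are illustrative assumptions, not a description of Shazam, Audible Magic, or any other product.

    import numpy as np
    from scipy import signal

    def fingerprint(audio, sample_rate, peaks_per_frame=3, fan_out=5):
        """Return a set of hashes built from pairs of spectrogram peaks.

        Noisy copies of the same track tend to share many of the same
        (freq1, freq2, time_delta) hashes, which is what makes matching robust.
        """
        freqs, times, spec = signal.spectrogram(audio, fs=sample_rate, nperseg=2048)
        peaks = []  # (time_index, frequency_bin) of the strongest bins in each frame
        for t in range(spec.shape[1]):
            for f in np.argsort(spec[:, t])[-peaks_per_frame:]:
                peaks.append((t, int(f)))
        hashes = set()
        for i, (t1, f1) in enumerate(peaks):
            for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:  # pair each peak with a few later ones
                hashes.add((f1, f2, t2 - t1))
        return hashes

    def best_match(query_hashes, catalog):
        """catalog maps track name -> hash set; return the name with the most overlap."""
        return max(catalog, key=lambda name: len(query_hashes & catalog[name]), default=None)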

. . . .

The principal argument of critics intent on frustrating digital copyright reforms that had already appeared to be on the home stretch is their charge that the disproportionate blocking of uploads would represent a wholesale assault on freedom of speech or, indeed, a form of censorship. Here, too, it is necessary to look more closely at the feasibility and potential of available options for monitoring uploads – and especially to consider the degree of efficiency that can be achieved by linking human and automated monitoring. In a first step, identification systems could automatically block secure matches or allow them to pass by comparing them against data supplied by collecting societies. Licensed content could readily be uploaded and its use would be electronically registered. Collecting societies would distribute license revenue raised to originators and artists. Non-licensed uses could automatically be blocked.

. . . .

Humans can recognize parodies or incidental uses, such as purely decorative uses of works, in ways that do not constitute breaches of copyright.

The process of analysis could be simplified further by uploaders stating the context of use at the time works are uploaded. Notes such as “This video contains a parody and/or uses a copyrighted work for decorative purposes” could be helpful to analysts. The Network Enforcement Act (NetzDG) in Germany provides a good example of how automatic recognition and human analysis can work in tandem to analyze vast volumes of information. A few hundred people in Germany are currently tasked with deciding whether statements made on Facebook constitute incitement to hatred and violence against certain groups or are otherwise in breach of community rules. These judgments are significantly more complex than detecting impermissible uses of copyrighted works.
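
Taken together, the last few paragraphs describe a fairly simple triage rule: a confident match against licensed material is registered and allowed through, a confident match with no license is blocked, and anything the uploader has declared to be a parody or a decorative use is routed to a human reviewer. Below is a minimal sketch of that decision logic; the function names, the threshold, and the uploader-supplied context note are all illustrative assumptions rather than a description of any existing system.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        content_hashes: set            # fingerprint hashes for the uploaded file
        declared_context: str = ""     # uploader-supplied note, e.g. "parody" or "decorative"

    def triage(upload, licensed_catalog, unlicensed_catalog, threshold=50):
        """Return 'publish', 'register_and_publish', 'block', or 'human_review'.

        The two catalogs map work IDs to hash sets, standing in for data a
        collecting society might supply. threshold is the minimum number of
        shared hashes treated as a confident match (an arbitrary illustrative value).
        """
        def best_overlap(catalog):
            return max((len(upload.content_hashes & h) for h in catalog.values()), default=0)

        if best_overlap(licensed_catalog) >= threshold:
            return "register_and_publish"      # licensed use: publish it and report the use
        if best_overlap(unlicensed_catalog) >= threshold:
            if upload.declared_context in ("parody", "decorative"):
                return "human_review"          # claimed exception: a person decides
            return "block"                     # confident, undeclared, unlicensed match
        return "publish"                       # nothing recognized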

. . . .

Being obliged to implement human monitoring will, of course, impose certain demands on platforms. But those most affected will be the platforms with the largest number of uploads. These major platforms will have the highest personnel requirements because they can host content of almost every kind: music, texts, video etc. Protecting sites like a small photo forum will be much simpler. If only a modest number of uploads is involved, the forum operator can easily check them personally at the end of the working day. In that case, uploaders will simply have to wait for a brief period for their content to appear online. Or operators can opt to engage a service center like Acamar instead of adding these checks to their own workloads. Efficient monitoring is possible.

Link to the rest at The Trichordist

PG understands and sympathizes with the concerns of copyright owners about improper use of their property.

However, not every online use of copyrighted material represents a loss of income to the copyright owner. Had there been a price tag associated with the use of such material, the material could have been omitted entirely or a substitute without a price tag could have been selected.

While some uses of copyrighted material can be harmful, a great many such uses may be viewed by 25 people who were never likely to be paying consumers of that material.

Under US copyright law, the protected fair use of copyrighted material is often not a clear-cut matter. Reasonable people can disagree about whether a use is covered by fair use or not.

A significant number of owners of large catalogs of copyrighted material are extremely aggressive in their interpretation of what is protected by those copyrights. Disney and Mickey Mouse are but one example.

A couple of statements in the OP raised further concerns:

  • If only a modest number of uploads is involved, the forum operator can easily check them personally at the end of the working day.
  • In that case, uploaders will simply have to wait for a brief period for their content to appear online.

Exactly how is a forum operator who runs a small online site and supports it with a day job supposed to conduct an analysis of, say, 30 uploads to determine whether they may be subject to anyone’s copyright and, if they are, whether the use of the works was fair use or not? If a photo shows up in the uploads, how is the operator to determine who the creator of the photo is or was? If a photo has been modified by the person posting it, how is the operator to determine who the creator of the original photo was?

As for “uploaders” waiting “for a brief period for their content to appear online,” PG suggests such delays may well adversely impact the quality of the online discussion. If an original post triggers a lot of responses, but those responses are held in moderation, are visitors to the online forum going to assume the post is irrelevant or of no interest and perhaps leave the forum for good?

The killer among the breezy thoughts in the OP is, “Being obliged to implement human monitoring will, of course, impose certain demands on platforms.”

It will impose a serious and significant demand on platforms. If one were designing regulations to substantially reduce the amount of online dialogue about a wide range of subjects and the number of places where that dialogue occurs, imposing “certain demands” on those who sponsor such communities would be a perfect way to make anything other than standard mainstream destinations and opinions go away and to rob the Internet of much of its innovative energy and independent thought.

If one were designing a system to ensure corporate control of online interaction, one might certainly do so on the pretense of protecting the words and pictures of copyright holders.
