PG’s Thoughts (such as they are)

Nike Nixes ‘Betsy Ross Flag’ Sneaker

3 July 2019

Tomorrow, July 4, is a major American holiday, Independence Day.

The holiday commemorates the Declaration of Independence of the United States on July 4, 1776.

The Continental Congress declared that the thirteen American colonies were no longer subject (and subordinate) to the monarch of Britain and were now united, free, and independent states. The Congress had voted for independence two days earlier, on July 2, but the Declaration itself was not adopted until July 4.

From The Wall Street Journal:

Nike Inc. is yanking a U.S.A.-themed sneaker featuring an early American flag after NFL star-turned-activist Colin Kaepernick told the company it shouldn’t sell a shoe with a symbol that he and others consider offensive, according to people familiar with the matter.

The sneaker giant created the Air Max 1 USA in celebration of the July Fourth holiday, and it was slated to go on sale this week. The heel of the shoe featured a U.S. flag with 13 white stars in a circle, a design created during the American Revolution and commonly referred to as the Betsy Ross flag.

After shipping the shoes to retailers, Nike asked for them to be returned without explaining why, the people said. The shoes aren’t available on Nike’s own apps and websites.

“Nike has chosen not to release the Air Max 1 Quick Strike Fourth of July as it featured the old version of the American flag,” a Nike spokeswoman said.

After images of the shoe were posted online, Mr. Kaepernick, a Nike endorser, reached out to company officials saying that he and others felt the Betsy Ross flag is an offensive symbol because of its connection to an era of slavery, the people said. Some users on social media responded to posts about the shoe with similar concerns. Mr. Kaepernick declined to comment.

The design was created in the 1770s to represent the 13 original colonies, though there were many early versions of the American flag, according to the Smithsonian. In 1795, stars were added to reflect the addition of Vermont and Kentucky as states.

In 2016, the superintendent of a Michigan school district apologized after students waved the Betsy Ross flag at a high-school football game, saying that for some it is a symbol of white supremacy and nationalism, according to Mlive.com, a local news outlet. While the flag’s use isn’t widespread, the local chapter of the National Association for the Advancement of Colored People said at the time that it has been appropriated by some extremist groups opposed to America’s increasing diversity.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Here’s a replica of the offending flag:

Here’s the story behind this flag per The History Channel:

Perhaps the best-known figure from the American Revolutionary era who wasn’t a president, general or statesman, Betsy Ross (1752-1836) became a patriotic icon in the late 19th century when stories surfaced that she had sewn the first “stars and stripes” U.S. flag in 1776. Though that story is likely apocryphal, Ross is known to have sewn flags during the Revolutionary War.

. . . .

Elizabeth Griscom was born on January 1, 1752, in the bustling colonial city of Philadelphia. She was the eighth of 17 children. Her parents, Rebecca James Griscom and Samuel Griscom, were both Quakers. The daughter of generations of craftsmen (her father was a house carpenter), young Betsy attended a Quaker school and was then apprenticed to William Webster, an upholsterer. In Webster’s workshop she learned to sew mattresses, chair covers and window blinds.

. . . .

In 1773, at age 21, Betsy crossed the river to New Jersey to elope with John Ross, a fellow apprentice of Webster’s and the son of an Episcopal rector—a double act of defiance that got her expelled from the Quaker church. The Rosses started their own upholstery shop, and John joined the militia. He died after barely two years of marriage. Though family legend would attribute John’s death to a gunpowder explosion, illness is a more likely culprit.

. . . .

In the summer of 1776 (or possibly 1777) Betsy Ross, newly widowed, is said to have received a visit from General George Washington regarding a design for a flag for the new nation. Washington and the Continental Congress had come up with the basic layout, but, according to legend, Betsy allegedly finalized the design, arguing for stars with five points (Washington had suggested six) because the cloth could be folded and cut out with a single snip.

The tale of Washington’s visit to Ross was first made public in 1870, nearly a century later, by Betsy Ross’s grandson. However, the flag’s design was not settled until well after 1776 or 1777. Charles Willson Peale’s 1779 painting of George Washington following the 1777 Battle of Princeton features a flag with six-pointed stars.

Betsy Ross was making flags around that time—a receipt shows that the Pennsylvania State Navy Board paid her 15 pounds for sewing ship’s standards. But similar receipts exist for Philadelphia seamstresses Margaret Manning (from as early as 1775), Cornelia Bridges (1776) and Rebecca Young, whose daughter Mary Pickersgill would sew the mammoth flag that later inspired Francis Scott Key to write “The Star-Spangled Banner.”

. . . .

In June 1777, Betsy married Joseph Ashburn, a sailor, with whom she had two daughters. In 1782 Ashburn was apprehended while working as a privateer in the West Indies and died in a British prison. A year later, Betsy married John Claypoole, a man who had grown up with her in Philadelphia’s Quaker community and had been imprisoned in England with Ashburn. A few months after their wedding, the Treaty of Paris was signed, ending the Revolutionary War. They went on to have five daughters.

Link to the rest at The History Channel

PG will note that Ms. Ross’ connection with the Quakers is particularly ironic. Per Wikipedia:

The Religious Society of Friends (Quakers) played a major role in the abolition movement against slavery in both the United Kingdom and in the United States of America. Quakers were among the first white people to denounce slavery in the American colonies and Europe, and the Society of Friends became the first organization to take a collective stand against both slavery and the slave trade, later spearheading the international and ecumenical campaigns against slavery.

. . . .

Quaker colonists began questioning slavery in Barbados in the 1670s, but first openly denounced it in 1688. In that year, four German settlers (the Lutheran Francis Daniel Pastorius and three Quakers) issued a protest from Germantown, close to Philadelphia in the newly founded American colony of Pennsylvania. This action, although seemingly overlooked at the time, ushered in almost a century of active debate among Pennsylvanian Quakers about the morality of slavery which saw energetic anti-slavery writing and direct action from several Quakers, including William Southeby, John Hepburn, Ralph Sandiford, and Benjamin Lay.

During the 1740s and 50s, anti-slavery sentiment took a firmer hold. A new generation of Quakers, including John Woolman, Anthony Benezet and David Cooper, protested against slavery, and demanded that Quaker society cut ties with the slave trade. They were able to carry popular Quaker sentiment with them and, beginning in the 1750s, Pennsylvanian Quakers tightened their rules, by 1758 making it effectively an act of misconduct to engage in slave trading. The London Yearly Meeting soon followed, issuing a ‘strong minute’ against slave trading in 1761. On paper at least, global politics would intervene. The American Revolution would divide Quakers across the Atlantic.

. . . .

In the United Kingdom, Quakers would be foremost in the Society for Effecting the Abolition of the Slave Trade in 1787 which, with some setbacks, would be responsible for forcing the end of the British slave trade in 1807 and the end of slavery throughout the British Empire by 1838.

Link to the rest at Wikipedia

So Kaepernick and Nike managed a woke twofer, smearing one of the most famous women in the history of the early United States and a religious group (during an era when religious groups were quite influential in American public life) that was the single most prominent early religious force urging the abolition of slavery.

 

Amazon Gets Bulk of Complaint in AAP Filing with US Trade Commission

27 June 2019

From Publishing Perspectives:

For years, many in the publishing industry of the United States and other parts of the world have wanted to see Amazon examined by American governmental regulators for potential anti-competitive practices.

And, as various elements of Washington’s apparatus now address issues in terms of the major tech platforms, the Association of American Publishers (AAP) today (June 27) is filing a 12-page statement with the Federal Trade Commission (FTC), urging the commission to more closely scrutinize the behavior of dominant online platforms that “pervade every aspect of the economy.”

. . . .

And while we find 12 references to Google in AAP’s commentary, it will surprise few in the book business that Amazon is mentioned 33 times.

Today’s filing from the Washington-based AAP, in fact, references that Streitfeld article from the Times’ June 23 edition, though not the Amazon answer, and is responsive to the FTC’s hearings near the close of a long cycle called “Competition and Consumer Protection in the 21st Century.”

. . . .

A distinctively international element is engaged at points in which AAP relies on the European Commission’s investigations and action on Amazon’s use of “most favored nation” clauses (MFNs) and the May 2017 acceptance by the EU of Amazon’s commitment to stop using those clauses in distribution agreements with book publishers in Europe.

. . . .

AAP president and CEO Maria A. Pallante is quoted as saying, “Unfortunately, the marketplace of ideas is now at risk for serious if not irreparable damage because of the unprecedented dominance of a very small number of technology platforms.

“In order to mitigate this crisis and protect the public interest, AAP urges the FTC to exercise much-needed oversight and regulation, particularly as to circumstances where technology platforms stifle competition and manipulate consumer outcomes.”

. . . .

The formulation used by AAP in setting up its commentary rests in two key areas: book distribution and search.

Regarding search, Google is naturally the key interest and AAP’s messaging to the media flags this, saying, “AAP notes that Google’s complete and untouchable dominance is highly problematic [quoting now from its own FTC filing] ‘because its business model is largely indifferent to whether consumers arrive at legitimate or pirated goods.’”

But in reference to book distribution, of course, it’s Amazon that comes in for the lion’s share of complaint. In its media announcements, the association finds something of a thesis statement in its commentary: “No publisher can avoid distributing through Amazon and, for all intents and purposes, Amazon dictates the economic terms, with publishers paying more for Amazon’s services each year and receiving less in return.”

The association delineates five “primary areas of concern” that structure its commentary this way (we’re quoting here):

  • “Platforms exercising extraordinary market power in the markets for book distribution and Internet search
  • “The threat to competition when platforms act as both producers and suppliers to the marketplaces they operate
  • “Platforms’ imposition of most-favored nation clauses and other parity provisions that stifle competition, market entry, and innovation
  • “Platforms’ use of non-transparent search algorithms and manipulated discovery tools that facilitate infringement and deceive consumers
  • “Platforms’ tying of distribution services to the purchase of advertising services.”

. . . .

In its introductory comments, AAP asks the FTC to consider ways in which tech platforms differ from other players in dominant market operation. It’s here that the association starts grappling with the traditional idea that if prices are low, then anti-competitive harm to the consumer isn’t a factor.

“First,” the association writes, “the assumptions that consumers will purchase goods at the lowest available price and that competition for market share will exert downward pressure on market prices depend on consumers receiving timely and accurate information about prices and quality. … That is often not the case in markets in which one or a handful of platforms use proprietary search algorithms and manipulated discovery tools to tilt the playing field toward particular suppliers or their own distribution channels or products.

“Second, modern technology platforms benefit from—and in some cases depend on—network effects. The larger the network, the greater the competitive advantage over rivals and potential rivals and, once entrenched, the platform has a greater ability to preserve and extend its market power in ways that are not available in markets that are not characterized by network effects.

“Third, in markets dominated by modern technology platforms, an analysis of consumer welfare must not overemphasize retail price levels relative to other critically important factors. The analysis of consumer welfare also must account appropriately for factors such as decreases in quality, consumer choice, and innovation, and a corresponding rise in consumer deception. Nowhere are these considerations more important than in the marketplace for information and ideas.”

Link to the rest at Publishing Perspectives

 

‘Restoring the Promise’ Review: High Cost, Low Yield

25 June 2019

Not exactly about books, but PG would bet that over 80% of those who read the books written by regular visitors to TPV (excepting authors of children’s and YA books) are college graduates.

From The Wall Street Journal:

We are at the end of an era in American higher education. It is an era that began in the decades after the Civil War, when colleges and universities gradually stopped being preparatory schools for ministers and lawyers and embraced the ideals of research and academic professionalism. It reached full bloom after World War II, when the spigots of public funding were opened in full, and eventually became an overpriced caricature of itself, bloated by a mix of irrelevance and complacency and facing declining enrollments and a contracting market. No one has better explained the economics of this decline—and its broad cultural effects—than Richard Vedder.

Mr. Vedder is an academic lifer—a Ph.D. from the University of Illinois and a long career teaching economic history at Ohio University. In 2004 he brought out “Going Broke by Degree: Why College Costs Too Much,” and in 2005 he was appointed to the Commission on the Future of Higher Education, a group convened by Margaret Spellings, the U.S. education secretary. “Restoring the Promise: Higher Education in America” is a summary of the arguments he has been making since then as the Cassandra of American colleges and universities. Despite the optimistic tilt of the book’s title, Mr. Vedder has little to offer in the way of comfort.

As late as 1940, American higher education was a modest affair. Less than 5% of adults held a college degree, and the collegiate population amounted to about 1.5 million students. This scale changed with the first federal subsidies, Mr. Vedder notes, beginning in 1944 with the Servicemen’s Readjustment Act (the “GI Bill”). Within three years, veterans accounted for 49% of all undergraduate enrollment—some 2.2 million students. Having earned degrees themselves, the veterans expected their own children to do likewise.

Such expectations were supported by still further subsidies, through the National Defense Education Act (1958) and the Higher Education Act of 1965. By the 1970s, there would be 12 million students in the American college and university system; by 2017, there would be 20 million. Meanwhile, more and more federal research dollars poured into campus budgets—reaching some $50 billion in direct funding by 2016—and set off infrastructure binges. To pay for them, as Mr. Vedder documents, tuition and fees vaulted upward, while the federal programs that were intended to ease the financial burden—especially low-interest student loans—only enticed institutions to jack up their prices still higher and spend the increased revenue on useless but attention-getting flimflam (from lavish facilities to outsize athletic programs). At Mr. Vedder’s alma mater, Northwestern, tuition rose from 16% of median family income in 1958 to almost 70% in 2016. Over time, armies of administrators wrested the direction of their institutions away from the hands of faculties and trustees.

Today a college degree has become so common that 30% of adult Americans hold one. Its role as a bridge to middle-class success is assumed—though bourgeois comfort is rather hard to achieve these days with a B.A. in English literature or a degree in, say, sociology. The modern economy, says Mr. Vedder, simply doesn’t possess the number of jobs commensurate with the expectations of all the degree-holders.

The over-educated barista is one of the standing jokes of American society, but the laughter hasn’t eased the loan burden that the barista probably took on to get his degree. Mr. Vedder says that student loans have injected a kind of social acid into a generation of young adults who, over time, manifest a “decline in household formation, birth rates, and . . . the purchase of homes.” Pajama Boy was born, and took up residence in his parents’ basement.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

And a quote from economist Herbert Stein:

What economists know seems to consist entirely of a list of things that cannot go on forever . . . . But if it can’t go on forever it will stop.

PG suspects that this practice may have since become impolite or illegal, but when he was interviewing for his first job out of college (before he went to law school), one of the last questions he was asked by the final interviewer, the head of the department in which the job opening existed, was, “What were your SAT scores?”

Evidently PG’s answer was satisfactory because he was hired for the position despite having absolutely no training or education that might lead a reasonable person to conclude he was prepared for the specific tasks involved in carrying out his job responsibilities.

What the interviewer was trying to ascertain was whether PG might be smart enough to learn how to do the job if he was hired. (PG was, and received a promotion after about a year, but left the company when a better job beckoned.)

PG has read that the SAT and ACT tests (for visitors to TPV from outside of the United States, these are standardized tests required for entry into virtually any college or university in the country) are effectively proxies for IQ tests.

IQ tests were first developed during the early part of the 20th century for the purpose of identifying retardation in school children. During World War I, an intelligence test was devised to help quickly screen soldiers coming into the US Army for assignment to either basic training or officer training. (At the start of the war, the US ground forces included about 9,000 officers. At the end of the war, there were over 200,000 officers.)

After World War I, IQ testing became much more widespread in both education and business. Unfortunately, it also became entangled with the eugenics movement during the 1920’s and 1930’s.

On a general basis, there is a correlation between educational attainment and IQ: MDs, JDs, and PhDs have higher IQs on average than college graduates, who in turn have higher IQs than those who attended college but did not graduate, who in turn have higher average IQs than those who graduated from high school but received no additional education.

In this as in countless other things, correlation is not causation. There are plenty of people who possess the inherent intelligence and ability to become MDs, JDs and PhDs who choose not to pursue that educational/occupational path. Such individuals do not, of course, become less intelligent if they go in another direction. From personal experience, PG can attest that there is no shortage of attorneys who do stupid things.

A US Supreme Court case titled Griggs v. Duke Power Co., decided in 1971, effectively forbade employers from using arbitrary tests—such as those for measuring IQ or literacy—to evaluate an employee or a potential employee, a practice that some companies at the time were using as a way to get around rules that prohibited outright racial discrimination.

Griggs began when African American workers at the Duke Power Company in North Carolina sued the company because of a rule that required employees who wished to transfer between different departments to have a high-school diploma or pass an intelligence test.

By a unanimous decision, the Supreme Court held that the tests given by Duke Power were artificial and unnecessary and that the requirements for transfer had a disparate impact on African-Americans. Furthermore, the court ruled that, even if the motive for the requirements had nothing to do with racial discrimination, they were nonetheless discriminatory and therefore illegal. In its ruling, the Supreme Court held that employment tests must be “related to job performance.”

Griggs and resulting civil rights laws notwithstanding, prospective employers still want the best evidence available that a job applicant possesses the abilities (including intelligence) to succeed in the position that needs to be filled.

Given the regulatory environment in which employers operate, post-high school education is a common (and legal) requirement specified in a great many job descriptions. In the US business world, a bachelor’s or advanced degree is often a hard and fast must-have. Written or online job applications always include a section for the applicant to list undergraduate and post-graduate degrees and the institutions that granted such degree(s).

In addition to a degree, the identity of the college/university the applicant attended is often regarded as a proxy for the applicant’s intelligence and ability. The holder of a bachelor’s degree from Harvard will generally be assumed to be more intelligent than someone who graduated from Northeast Hickville State College and Welding School regardless of the personal abilities, talents, work ethic and raw intelligence of the latter.

So, back to the OP,

  • A college degree from an institution generally known for its selective nature is becoming more and more expensive because there is no indication that increased tuition and other costs will have any adverse impact on the number and general qualifications (intelligence) of its applicants; and
  • A college degree from some institution, high-quality or not-so-high-quality, as a proxy for intelligence, regardless of the field of study, is a requirement for obtaining a job with a reasonable salary or even getting a foot in the door at a very large number of employers; and
  • Government and other loans are available to any student who wishes to attend almost any college, regardless of a student’s field of study or ability to pay; and
  • As a general proposition under US bankruptcy laws, it is difficult or impossible to avoid the obligation to repay student loans, especially for recent college graduates or graduates who have obtained jobs, regardless of the amount of their current income.

PG wonders whether one of the ways to address this problem would be to permit employers to receive the results of an IQ test or quasi-IQ test like the SAT or ACT from a job applicant without risking litigation or other penalties for doing so.

Memorial Day

27 May 2019

Repost from Memorial Day, 2013:

For readers outside the United States, today is Memorial Day in the US.

While for many, the holiday is only a long weekend marking the beginning of summer, Memorial Day, originally called Decoration Day because flowers were used to decorate gravesites, was established in 1868, following the American Civil War, to commemorate men and women who died while in military service.

PG took this photograph of the American military cemetery in rural Tuscany near Florence. Most soldiers buried there died in World War II, fighting in Italy.

Are You Self-Publishing Audio Books?

21 May 2019

From Just Publishing Advice:

It takes total concentration to read a book or an ebook. But with an audio book, a listener can multitask.

This is the key attraction for so many younger readers in particular, as it allows for the consumption of a book while driving, commuting and playing a game on a smartphone, knitting or even while grinding out the hours at work.

The popularity is on the move and according to recent statistics, audiobooks are now a multi-billion dollar industry in the US alone.

. . . .

Another report estimates that one in ten readers is now listening to audiobooks.

While the data offers only a small insight into the market, it is easy to assume that audio is the next logical step for self-publishing authors and small presses.

Ebook publishing is now the number one form of self-publishing. Many Indie authors then take the next step and publish a paperback version.

. . . .

An audio version offers an opportunity for self-publishing authors to extend their sales potential, and at the same time, diversify revenue streams.

Well, only a little at present as it is really an Amazon Audible and Apple iTunes dominated retail market. However, in the future, this may change.

. . . .

If you live in the US, you are in luck.

Amazon offers production and publishing through the Audiobook Creation Exchange, ACX.

For authors outside of the US, things are not quite so easy.

This is a very common complaint about Amazon and its US-centric approach, which creates so many hurdles for non-US self-publishers.

The following quote is taken from Amazon’s help topic regarding ACX.

At this time, ACX is open only to residents of the United States and United Kingdom who have a US or UK mailing address, and a valid US or UK Taxpayer Identification Number (TIN). For more information on Taxpayer Identification Numbers (TIN), please visit the IRS website. We hope to increase our availability to a more global audience in the future.

If you live in the UK, Amazon can help you, but you will need to have a TIN. If you are already publishing with KDP, you probably have one.

For the rest of the world, well, Amazon, as it so often does, leaves you out in the cold.

. . . .

There are a growing number of small press and independent publishers who offer to produce and publish audio books.

Distribution is most often on Amazon Audible and iTunes.

Do your research and look for publishers who accept submissions or offer a production service using professional narrators and producers.

As with any decision to use a small publisher, be careful, do your background research and don’t rush into signing a contract until you are totally convinced it is a fair arrangement concerning your audio rights.

While some may charge you for the service, it is worth looking for a publisher that offers a revenue split. This is usually 50-50 of net audio royalty earnings.

It might seem a bit steep, but Amazon ACX offers between 20 and 40% net royalties, so 50-50 is not too bad.
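
To make that comparison concrete, here is a quick, hypothetical back-of-the-envelope calculation. The list price below is invented, and the 40% figure is simply the top of the 20-40% range quoted in the OP, not the actual rate for any particular title or retailer:

```python
# Hypothetical comparison: going direct vs. a 50-50 royalty split.
# All numbers are illustrative assumptions; actual rates vary by retailer and contract.

list_price = 20.00        # assumed audiobook list price
direct_rate = 0.40        # top of the 20-40% range quoted in the OP
net_royalty = list_price * direct_rate   # what the retailer pays out per sale

direct_author_take = net_royalty          # going direct: author keeps it all, but pays production
split_author_take = net_royalty * 0.50    # 50-50 split: publisher covers production

print(f"Going direct:  ${direct_author_take:.2f} per sale (author pays narration/production)")
print(f"50-50 split:   ${split_author_take:.2f} per sale (publisher pays narration/production)")
```

In other words, under these assumed numbers, a 50-50 split of a 40% net royalty works out to roughly the bottom of the range an author might earn going direct, with the production bill shifted to the publisher.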

Link to the rest at Just Publishing Advice

As with any publishing contract, PG suggests you check out the contract terms carefully before you enter into a publishing agreement for audiobooks.

Speaking generally (and, yes, there are a few exceptions), the traditional publishing industry has fallen into a bad habit (in PG’s persistently humble opinion) of using standard agreements that last longer than any other business contracts with which PG is familiar (and he has seen a lot).

He refers, of course, to publishing contracts that continue “for the full term of the copyright.”

Regular visitors to TPV will know that, in the United States, for works created on or after January 1, 1978, the full term of the copyright is the rest of the author’s life plus 70 years. Due to their participation in the Berne Convention (an international copyright treaty), the copyright laws of many other nations provide for copyright protections of similar durations — the author’s life plus 50 years is common.
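
To put a rough number on how long “the full term of the copyright” can run, here is a hypothetical illustration under the US rule; the signing year, the author’s age, and the assumed lifespan are all invented for the example:

```python
# Hypothetical illustration of a contract lasting "for the full term of the copyright"
# under the US rule for post-1977 works: the author's life plus 70 years.
# All figures are invented for the example.

signing_year = 2019
author_age_at_signing = 40
assumed_age_at_death = 85    # an assumption, not a prediction

year_of_death = signing_year + (assumed_age_at_death - author_age_at_signing)  # 2064
copyright_expires = year_of_death + 70                                         # 2134

print(f"The contract could run until about {copyright_expires}, "
      f"roughly {copyright_expires - signing_year} years after signing.")
```

Under those assumed figures, a contract signed today could bind the author's estate for more than a century.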

PG can’t think of any other types of business agreements involving individuals that last for the life of one of the parties without any obvious exit opportunities. The long period of copyright protection was sold to the US Congress as a great boon to creators. However, under the terms of typical publishing contracts, the chief beneficiaries are corporate publishers.

While it is important for authors to read their publishing agreements thoroughly (Yes, PG knows it’s not fun. He has read far more publishing agreements than you have or ever will and understands what it is like.), if you are looking for a method of performing a quick, preliminary check for provisions that mean you will die before your publishing agreement does, search for phrases like:

  • “full term of the copyright”
  • “term”
  • “copyright”
  • “continue”

Those searches may help you immediately locate objectionable provisions, letting you put the publisher into the reject pile without looking for other nasties. However, if the searches don’t disclose anything, you will most definitely have to read the whole thing. The quoted terms are not magic incantations that must be used; other language can accomplish the same thing.
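
For the technically inclined, below is a minimal sketch of how that quick, preliminary search might be automated against a plain-text copy of a contract. The filename and the phrase list are illustrative assumptions, and, as noted above, a clean result proves nothing; you still have to read the whole agreement.

```python
# Minimal sketch (not legal advice): flag lines of a plain-text contract
# that contain any of the red-flag phrases PG lists above.
# "contract.txt" is a placeholder filename.

red_flags = [
    "full term of the copyright",
    "term",
    "copyright",
    "continue",
]

with open("contract.txt", encoding="utf-8") as contract:
    for line_number, line in enumerate(contract, start=1):
        lowered = line.lower()
        for phrase in red_flags:
            if phrase in lowered:
                print(f"line {line_number}: found '{phrase}': {line.strip()}")
```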

Until the advent of ebooks, book publishing contracts used Out of Print clauses to give the author the ability to retrieve rights to his/her book if the publisher wasn’t doing anything with it.

With printed books, even dribs and drabs of sales would eventually deplete the publisher’s stock of physical books. At this point, the publisher would likely consider whether the cost it would pay for another printing of an author’s book was economically justified or not. If the publisher was concerned about ending up with a pile of unsold printed books in its warehouse for a long time, the publisher might decide not to print any more.

Once the publisher’s existing stock was sold, the book was out of print – it was not for sale in any normal trade channels. The author (or the author’s heirs) could then retrieve her/his rights to the book and do something else with them.

Of course, once an electronic file is created, an ebook costs the publisher nothing to offer for sale on Amazon or any other online bookstore with which PG is familiar.

The disk space necessary to store an individual epub or mobi file is essentially free for Amazon and it doesn’t charge anything to maintain the listing almost forever. (There may be a giant digital housecleaning in Seattle at some time in the distant future, but don’t count on it happening during your lifetime.) Print on demand hardcopy books are just another kind of file that’s stored on disk.

So, in 2019 and into the foreseeable future, an effectively unlimited supply of an author’s ebooks remains for sale and never goes “out of print”.

So, the traditional exit provision for an author – the out of print clause – remains in almost all publishing contracts PG has reviewed, but it gives the author no real opportunity to get out of a publishing agreement that has not paid more than $5.00 in annual royalties in over ten years.

 

Public Knowledge Wants to Solve the Misinformation Problem

9 May 2019

From The Illusion of More:

On Tuesday, Meredith Filak Rose of Public Knowledge posted a blog suggesting that a solution to rampant misinformation is to “bring libraries online.” Not surprisingly, she identifies copyright law as the barrier currently preventing access to quality information that could otherwise help solve the problem …

“High-quality, vetted, peer-reviewed secondary sources are, unfortunately, increasingly hard to come by, online or off. Scientific and medical research is frequently locked behind paywalls and in expensive journals; legal documents are stuck in the pay-per-page hell that is the PACER filing system; and digital-only information can be erased, placing it out of public reach for good (absent some industrious archivists).”

Really?  We’re just a few peer-reviewed papers away from addressing the social cancer of misinformation?

. . . .

The funny thing is that Rose does a pretty decent job of summing up how misinformation can be effectively deployed online, but her description could easily be the Public Knowledge Primer for Writing About Copyright Law:

Misinformation exploits this basic fact of human nature — that no one can be an expert in everything — by meeting people where they naturally are, and filling in the gaps in their knowledge with assertions that seem “plausible enough.” Sometimes, these assertions are misleading, false, or flatly self-serving.  In aggregate, these gap-fillers add up to construct a totally alternate reality whose politics, science, law, and history bear only a passing resemblance to our own.

. . . .

Having said all that, Meredith Rose’s article does not say anything categorically false. It is a sincere editorial whose main flaw is that it is sincerely naïve.  “…in the absence of accessible, high-quality, primary source information, it’s next to impossible to convince people that what they’ve been told isn’t true,” she writes.

Yeah. That psychological human frailty is not going to be cured by putting even more information online, regardless of how “good” it may be, or how copyright figures in the equation.  To the contrary, more information is exactly why we’re wandering in a landscape of free-range ignorance in the first place.

. . . .

Speaking as someone schooled in what we might call traditional liberal academia, I believe Rose reiterates a classically liberal, academic fallacy, which assumes that if just enough horses are led to just enough water, then reason based on empirical evidence will prevail over ignorance.  That’s not even true among the smartest horses who choose to drink. Humans tend to make decisions based on emotion more than information, and it is axiomatic that truth is in the eye of the beholder.

But if galloping bullshit is the disease, the catalyst causing it to spread is not copyright law keeping content off the internet, but the nature of the internet platforms themselves.  By democratizing information with a billion soapboxes it was inevitable that this would foster bespoke realities occupied by warrens of subcultures that inoculate themselves against counter-narratives (i.e. facts) with an assortment of talismanic phrases used to dismiss the peer-reviewed scientist, journalist, doctor, et al, as part of a conspiracy who “don’t want us to know the truth.”

Link to the rest at The Illusion of More

While PG didn’t particularly like the tone of the OP, if you’re going to have an open Internet and if you’re going to have freedom of speech, it is all but certain that some people who operate their own blogs, participate in online discussion groups, write for newspapers, appear on television, publish books, have a Twitter account, etc., etc., are going to communicate ideas that either are wrong or seem wrong.

Ever since cave persons of various genders first collected around an open fire to drink and talk, incorrect information has been passed from one person to at least one other person, then disseminated from there.

“If Rockie kills a brontosaurus and examines its entrails, he can tell whether it will rain in three days or not.”

Pretty soon, everyone is harassing Rockie to go dinosaur hunting so they can know whether to schedule the prom for next Thursday or not.

From that day until this, regardless of their political persuasion, someone is passing on false information, believing it to be the truth. Someone else is passing on false information for the greater good, knowing it is false. Someone else is creating false information because they have just discovered a great truth which isn’t.

A large majority of Americans regard Adolf Hitler and Nazism as an obvious and indisputable evil. However, this was not always so.

Charles Lindbergh was one of the greatest American heroes of the 1920’s. He gained even more public stature and enormous public sympathy in 1932, when his 20-month-old son was kidnapped. The most prominent journalist of the period, H. L. Mencken, called the kidnapping and trial “the biggest story since the Resurrection.”

Responding to the kidnapping, the United States Congress passed the Federal Kidnapping Act, commonly called the “Lindbergh Law.” In the middle of the Great Depression, rewards equivalent to more than one million dollars in 2018 currency were offered for information leading to the safe return of the child.

A ransom of $50,000 (the equivalent of nearly $1 million today) was demanded for the safe return of the child and was paid. Unfortunately, the Lindbergh baby was killed before he could be found.

Back to the certainty of public opinion: in 1940, the America First Committee was established for the purpose of supporting Adolf Hitler and the Nazis by keeping the United States out of the war in Europe. It quickly gained more than 800,000 members, including a large number of prominent business figures. Pressure from the organization caused President Franklin Roosevelt to pledge that he would keep America out of war.

Lindbergh was greatly admired in Germany and, at the invitation of Hermann Göring, took a high-profile trip to Germany in 1936, where he was treated as a great hero and shown the highly sophisticated airplanes developed for the German air force. Lindbergh was also a prominent visitor to the 1936 Olympic Games in Berlin, a huge Nazi propaganda exercise.

The visit was a press sensation with daily articles covering Lindbergh’s activities published in The New York Times. On his return, Lindbergh met with President Roosevelt to report on his observations and opinions. Lindbergh would return to Germany on two more occasions prior to the entry into the war by the United States.

Here’s a short video account of the America First movement and Lindbergh’s opposition to war with Germany from The Smithsonian.

Circling back to the OP, had the Internet existed in 1936, what would “high-quality, peer-reviewed” articles have said about Germany and America’s best path forward? What would prominent academics, the owners of major media conglomerates and other prominent world leaders, have posted about Hitler and his supporters?

Prior to the outbreak of hostilities with Germany and Japan, the New York Times, Christian Science Monitor, Chicago Tribune, New York Herald Tribune, Philadelphia Evening Bulletin and many more publications reported the great economic progress Hitler-led Germany was making as it pulled itself out of the Depression and downplayed the extent and nature of the nation’s attacks on the Jews. Indeed, in such coverage, Hitler was providing the West with important benefits by vigorously attacking Bolshevism and imprisoning Communist supporters.

In Britain, The Daily Mail was a strong supporter of Germany. Harold Harmsworth, the first Viscount Rothermere, was the founder of the Daily Mail and owned 14 other papers. His influence was on a par with Lord Beaverbrook’s.

Rothermere was a strong supporter of Mussolini’s version of fascism. “He is the greatest figure of the age,” Rothermere proclaimed in 1928. “Mussolini will probably dominate the history of the 20th century as Napoleon dominated that of the early 19th.”

“[The Nazis] represent the rebirth of Germany as a nation,” Rothermere wrote in the Mail. The election, he correctly prophesied, would come to be seen as “a landmark of this time.”

The Nazis’ “Jew-baiting,” Rothermere warned, was “a stupid survival of medieval prejudice.” Of course, he also added, the Jews had brought the Nazis’ displeasure on themselves, having shown “conspicuous political unwisdom since the war.”

Germany had been “falling under the control of alien elements,” Rothermere argued. There were 20 times as many Jews in government positions as there had been before the war.

“Israelites of international attachments were insinuating themselves into key positions in the German administrative machine,” he noted darkly. “It is from such abuses that Hitler has freed Germany.”

The Jews were not just a problem in Germany. The menace they posed was much more widespread, he felt.

“The Jews are everywhere, controlling everything,” Rothermere wrote in private correspondence.

See The Times of Israel for more.

Back to the “problem” with fake news on the Internet, PG suggests that the online disputes between right and left are a feature, not a bug, in a free society.

An Appeal to Authority (“experts agree,” “science says,” “academic publications clearly demonstrate”) is a classic logical fallacy.

Whatever form the proposed cure takes (“bringing libraries online,” “high-quality, vetted, peer-reviewed secondary sources,” or keeping content off the internet), PG is very much a supporter of free and open dispute and argument as the best way of preserving the rights of all individuals, debunking fallacies, and ensuring that no one group can control and limit the spread of information, whether fake news or real.

The Golden Age of YouTube Is Over

13 April 2019

From The Verge:

The platform was built on the backs of independent creators, but now YouTube is abandoning them for more traditional content.

. . . .

Danny Philippou is mad.

He’s practically standing on top of his chair as his twin brother and fellow YouTube creator Michael stares on in amusement. Logan Paul, perhaps YouTube’s most notorious character, laughs on the other side of the desk that they’re all sitting around for an episode of his popular podcast Impaulsive. Anyone who’s watched the Philippous’ channel, RackaRacka, won’t be surprised by Danny’s antics. This is how he gets when he’s excited or angry. This time, he’s both.

“It’s not fair what they’re doing to us,” Danny yells. “It’s just not fair.”

Danny, like many other creators, is proclaiming the death of YouTube — or, at least, the YouTube that they grew up with. That YouTube seemed to welcome the wonderfully weird, innovative, and earnest, instead of turning them away in favor of late-night show clips and music videos.

The Philippou twins hover between stunt doubles and actors, with a penchant for the macabre. But YouTube, the platform where they built their audience base, doesn’t seem to want them anymore.

. . . .

The Philippous’ story is part of a long-brewing conflict between how creators view YouTube and how YouTube positions itself to advertisers and press. YouTube relies on creators to differentiate itself from streaming services like Netflix and Hulu, it tells creators it wants to promote their original content, and it hosts conferences dedicated to bettering the creator community. Those same creators often feel abandoned and confused about why their videos are buried in search results, don’t appear on the trending page, or are being quietly demonetized.

At the same time, YouTube’s pitch decks to advertisers increasingly seem to feature videos from household celebrity names, not creative amateurs. And the creators who have found the most success playing into the platform’s algorithms have all demonstrated profound errors in judgment, turning themselves into cultural villains instead of YouTube’s most cherished assets.

. . . .

YouTube was founded on the promise of creating a user-generated video platform, but it was something else that helped the site explode in popularity: piracy.

When Google bought YouTube in 2006 for $1.65 billion, the platform had to clean up its massive piracy problems. It was far too easy to watch anything and everything on YouTube, and movie studios, television conglomerates, and record labels were seething. Under Google, YouTube had to change. So YouTube’s executives focused on lifting up the very content its founders had designed the platform around: original videos.

The focus on creator culture defined YouTube culture from its earliest days. The platform was a stage for creators who didn’t quite fit into Hollywood’s restrictions.

. . . .

Between 2008 and 2011, the volume of videos uploaded to YouTube jumped from 10 hours every minute to 72 hours a minute. By 2011, YouTube had generated more than 1 trillion views; people were watching over 3 billion hours of video every month, and creators were earning real money via Google AdSense — a lot of money. Jenna Marbles was making more than six figures by late 2011. (In 2018, a select group of creators working within YouTube’s top-tier advertising platform would make more than $1 million a month.)

By 2012, creators like Felix “PewDiePie” Kjellberg were leaving school or their jobs to focus on YouTube full-time. He told a Swedish news outlet that he was getting more than 2 million views a month, boasting just over 300,000 subscribers.

. . . .

Between 2011 and 2015, YouTube was a haven for comedians, filmmakers, writers, and performers who were able to make the work they wanted and earn money in the process. It gave birth to an entirely new culture that crossed over into the mainstream: Issa Rae’s Awkward Black Girl series would eventually lead to HBO’s Insecure. Creators like the Rooster Teeth team and Tyler Oakley went on tour to meet fans after generating massive followings online. YouTube had reached mainstream success, but in many ways, it still felt wide open. Anyone could still upload almost anything they wanted without much input from YouTube itself.

. . . .

Behind the scenes, things were changing. YouTube had begun tinkering with its algorithm to increase engagement and experimenting with ways to bring flashier, produced content to the platform to keep up with growing threats like Netflix.

In October 2012, YouTube announced that its algorithm had shifted to prefer videos with longer watch times over higher view counts. “This should benefit your channel if your videos drive more viewing time across YouTube,” the company wrote in a blog post to creators.

This meant viral videos like “David After Dentist” and “Charlie Bit My Finger,” which defined YouTube in its earliest days, weren’t going to be recommended as much as longer videos that kept people glued to the site. In response, the YouTube community began creating videos that were over 10 minutes in length as a way to try to appease the system.

. . . .

In 2011, YouTube invested $100 million into more than 50 “premium” channels from celebrities and news organizations, betting that adding Hollywood talent and authoritative news sources to the platform would drive up advertising revenue and expand YouTube to an even wider audience. It failed less than two years later, with what appeared to be a clear lesson: talent native to YouTube was far more popular than any big names from the outside.

. . . .

Then, suddenly, creators started encountering problems on the platform. In 2016, personalities like Philip DeFranco, comedians like Jesse Ridgway, and dozens of other popular creators started noticing that their videos were being demonetized, a term popularized by the community to indicate when something had triggered YouTube’s system to remove advertisements from a video, depriving them of revenue. No one was quite sure why, and it prompted complaints about bigger algorithm changes that appeared to be happening.

Kjellberg posted a video detailing how changes had dropped his viewership numbers. He’d been getting 30 percent of his traffic from YouTube’s suggested feed, but after the apparent algorithm update, the number fell to less than 1 percent. Kjellberg jokingly threatened to delete his channel as a result, which was enough to get YouTube to issue a statement denying that anything had changed. (The denial sidestepped questions of the algorithm specifically, and spoke instead to subscriber counts.)

These perceived, secretive changes instilled in creators a distrust of the platform. They also led to questions about creators’ own self-worth and whether the energy they were spending on creating and editing videos — sometimes north of 80 hours a week — was worth it.

. . . .

YouTube was exerting more control over what users saw and what videos would make money. Once again, the community would adapt. But how it adapted was far more problematic than anyone would have guessed.

. . . .

By the beginning of 2017, YouTube was already battling some of its biggest problems in more than a decade. YouTube’s founders didn’t prepare for the onslaught of disturbing and dangerous content that comes from people being able to anonymously share videos without consequence. Add in a moderation team that couldn’t keep up with the 450 hours of video that were being uploaded every minute, and it was a house of cards waiting to fall.

YouTube had come under fire in Europe and the United States for letting extremists publish terrorism recruitment videos to its platform and for letting ads run on those videos. In response, YouTube outlined the steps it was taking to remove extremist content, and it told advertisers it would be careful about where their ads were placed. It highlighted many creators as a safe option.

But neither YouTube nor Google was prepared for what Felix “PewDiePie” Kjellberg — one of YouTube’s wealthiest independently made creators — would do.

. . . .

In mid-February 2017, The Wall Street Journal discovered an older video from Kjellberg that included him reacting to a sign held up by two kids that said, “Death to all Jews.” The anti-Semitic comment was included in one of his “react” videos about Fiverr, made after he had pivoted to more of a variety channel instead of focusing just on games.

His video, along with reports of ads appearing on terrorist content, led to advertisers abandoning YouTube. Kjellberg was dropped from Disney’s Maker Studios, he lost his YouTube Red series, Scare PewDiePie, and he was removed from his spot in Google Preferred, the top-tier ad platform for YouTube’s most prominent creators.

“A lot of people loved the video and a lot of people didn’t, and it’s almost like two generations of people arguing if this is okay or not,” Kjellberg said in an 11-minute video about the situation. “I’m sorry for the words that I used, as I know they offended people, and I admit the joke itself went too far.”

The attention Kjellberg brought to YouTube kickstarted the first “adpocalypse,” a term popularized within the creator community that refers to YouTube aggressively demonetizing videos that might be problematic, in an effort to prevent companies from halting their ad spending.

Aggressively demonetizing videos would become YouTube’s go-to move.

. . . .

The January 2017 closure of Vine, a platform for looping six-second videos, left a number of creators and influencers without a platform, and many of those stars moved over to YouTube. David Dobrik, Liza Koshy, Lele Pons, Danny Gonzalez, and, of course, Jake and Logan Paul became instant successes on YouTube — even though many of them had started YouTube channels years before their success on Vine.

YouTube’s biggest front-facing stars began following in the footsteps of over-the-top, “bro” prank culture. (Think: Jackass but more extreme and hosted by attractive 20-somethings.) Logan Paul pretended to be shot and killed in front of young fans; Jake Paul rode dirt bikes into pools; David Dobrik’s friends jumped out of moving cars. The antics were dangerous, but they caught people’s attention.

. . . .

Jake and Logan Paul became the biggest stars of this new wave, performing dangerous stunts, putting shocking footage in their vlogs, and selling merchandise to their young audiences. Although they teetered on the edge of what was acceptable and what wasn’t, they never really crossed the line into creating totally reprehensible content.

. . . .

It wasn’t a sustainable form of entertainment, and it seemed like everyone understood that except for YouTube. The Paul brothers were on their way to burning out; all it would take was one grand mistake. Even critics of the Pauls, like Kjellberg, empathized with their position. Kjellberg, who faced controversy after controversy, spoke about feeling as though right or wrong ceased to exist when trying to keep up with the YouTube machine.

“The problem with being a YouTuber or an online entertainer is that you constantly have to outdo yourself,” Kjellberg said in a 2018 video. “I think a lot of people get swept up in that … that they have to keep outdoing themselves, and I think it’s a good reflection of what happened with Logan Paul. If you make videos every single day, it’s really tough to keep people interested and keep them coming back.”

Still, Logan Paul was small potatoes compared to YouTube’s bigger problems, including disturbing children’s content that had been discovered by The New York Times and more terrorism content surfacing on the site. Who cared about what two brothers from Ohio were doing? The breaking point would be when Logan Paul visited Japan.

. . . .

Logan Paul’s “suicide forest” video irrevocably changed YouTube.

In it, Paul and his friends tour Japan’s Aokigahara forest, where they encountered a man’s body. Based on the video, it appears that he had recently died by suicide. Instead of turning the camera off, Paul walks up to the body. He doesn’t stop there. He zooms in on the man’s hands and pockets. In post-production, Paul blurred the man’s face, but it’s hard to see the video as anything but an egregious gesture of disrespect.

Within hours of posting the video, Paul’s name began trending. Actors like Aaron Paul (no relation), influencers like Chrissy Teigen, and prominent YouTubers called out Paul for his atrocious behavior.

YouTube reacted with a familiar strategy: it imposed heavy restrictions on its Partner Program (which recognizes creators who can earn ad revenue on their videos), sharply limiting the number of videos that were monetized with ads. In a January 2018 blog post announcing the changes, Robert Kyncl, YouTube’s head of business, said the move would “allow us to significantly improve our ability to identify creators who contribute positively to the community,” adding that “these higher standards will also help us prevent potentially inappropriate videos from monetizing which can hurt revenue for everyone.”

. . . .

The only people who didn’t receive blame were YouTube executives themselves — something that commentators like Philip DeFranco took issue with after the controversy first occurred. “We’re talking about the biggest creator on YouTube posting a video that had over 6 million views, was trending on YouTube, that no doubt had to be flagged by tons of people,” DeFranco said.

“The only reason it was taken down is Logan or his team took it down, and YouTube didn’t do a damn thing. Part of the Logan Paul problem is that YouTube is either complicit or ignorant.”

. . . .

[B]y the middle of 2018, lifestyle vloggers like Carrie Crista, who has just under 40,000 subscribers, were proclaiming how the community felt: forgotten. “YouTube seems to have forgotten who made the platform what it is,” Crista told PR Week. In its attempt to compete with Netflix, Hulu, and Amazon, she said, YouTube is “pushing content creators away instead of inviting them to a social platform that encourages them to be creative in a way that other platforms can’t.”

Even people outside of YouTube saw what was happening. “YouTube is inevitably heading towards being like television, but they never told their creators this,” Jamie Cohen, a professor of new media at Molloy College, told USA Today in 2018.

By promoting videos that meet certain criteria, YouTube tips the scales in favor of organizations or creators — big ones, mostly — that can meet those standards. “Editing, creating thumbnails, it takes time,” Juliana Sabo, a creator with fewer than 1,000 subscribers, said in 2018 after the YouTube Partner Program changes. “You’re just prioritizing a very specific type of person — the type of person that has the time and money to churn out that content.”

Individual YouTube creators couldn’t keep up with the pace YouTube’s algorithm set. But traditional, mainstream outlets could: late-night shows began to dominate YouTube, along with music videos from major labels. The platform now looked the way it had when it started, but with the stamp of Hollywood approval.

. . . .

The RackaRacka brothers are tired.

“We loved it before when it was like, ‘Oh, you guys are doing something unique and different. Let’s help you guys so you can get views and get eyes on it,’” Danny says. “I’d love to go back to that. We have so many big, awesome ideas that we’d love to do, but there’s no point in doing it on YouTube.”

Link to the rest at The Verge

The OP is a very long article; however, PG has excerpted more of it than he would have from an article on a different topic.

While reading the article, PG was struck by the parallels between how dependent indie videographers are on YouTube and how dependent indie authors are on Amazon.

PG doesn’t believe he would have had the same response a year ago. The amateurism and arrogance demonstrated by YouTube management in the OP contrast sharply with the maturity and steady hand at the top levels of Amazon. Amazon has not made many dumb mistakes. Amazon has also treated indie authors with respect and generosity beyond that shown by any other publisher/distributor/bookstore in the US (and probably elsewhere).

This is not to say that Amazon is a perfect company or that it hasn’t made some mistakes, but it has demonstrated good business judgment, done a pretty good job of fixing its errors, and hasn’t changed the way it operates in any manner that seriously harms indie authors.

Obviously, Jeff Bezos’ attitudes, judgment, and approach to dealing with others have imprinted themselves up and down the corporate hierarchy at Amazon. That sure hand on the corporate helm has caused PG to trust Amazon more than he does any other large tech company.

Additionally, Amazon has been leagues beyond any other organization in the book publishing and bookselling business in attracting smart adults as managers, making intelligent business decisions, treating partners well and managing the business as if it wanted long-term success as a publisher and bookseller (see, as only one example of business as usual in the publishing world, Barnes & Noble).

However.

PG admits his faith in Jeff Bezos’ solid judgment took a big hit with the disclosure of Bezos’ marital misconduct and divorce.

This struck him as an immature display of the runaway hubris that has brought down quite a few large companies, particularly in the tech world.

PG is old-fashioned in his belief that the behavior of a virtuous individual will manifest itself in all parts of that individual’s life. He understands the common explanation for such behavior: that a person can segment his life into business and personal spheres, continuing to excel in public while making serious mistakes in private.

PG also understands that marriages can fail for a wide variety of reasons and that assigning blame for such a failure (if there is blame to be assigned) is impossible for someone who is not privy to the personal lives of the parties. That said, PG suggests that, for a mature adult exercising good judgment, at least a separation, if not a divorce, would be the more stand-up approach to a marriage that has declined to the point of a breakup.

A secret affair that is leaked to the press is not, in PG’s admittedly traditional eyes, up to the standards he has come to expect from Bezos. The general reaction PG has seen in the press leads PG to believe he is not alone in his opinion.

Apple Felt Like a Totally Different Company Today

26 March 2019

From Fast Company:

While I sat inside the Steve Jobs Theater watching Big Bird talk to a hand puppet on the stage, I realized Apple was not the same company I knew not long ago.

No new devices were announced. There were no slides filled with impressive specs or performance metrics. No oohs and ahhs. No “one more thing.”

Yeah, yeah, I know: Apple, under CEO Tim Cook, is becoming a services company to account for flagging iPhone sales growth. What we saw today, at Apple’s “It’s show time” event in Cupertino, maybe for the first time, is the public face of that new company.

Part of the reason the presentation felt so different is because it was as much about other companies as it was about Apple. It was about Apple putting an Apple wrapper on a bunch of content and services made by third parties.

. . . .

All these announcements came in the first hour of the presentation. With that much time left I wondered if Apple had some tricks up its sleeve after all. But no: It had simply reserved an entire hour to talk about its original video content, which it has branded “TV+,” and which won’t be available until next fall.

What followed was a string of Hollywood people talking about the shows and movies they’re making for Apple. The uneasy mix of Hollywood and Silicon Valley cultures was on full display. Reese Witherspoon, Jennifer Aniston, and Steve Carell were there to boost a show they’re making about TV news personalities, but they came off like they were trapped under glass.

Steven Spielberg came out to a warm welcome and talked about his reboot of the Amazing Stories series for television. A dramatic video came on about how we desperately need more conversation among people with different viewpoints. Then the lights went down, and when they came up Oprah Winfrey was there.

. . . .

The question is the company’s identity. At Apple events we’re used to seeing people like Kevin Lynch (Apple Watch) and Craig Federighi (iOS) who you know live and breathe core “Designed in California” products.

Today the company made a big deal of announcing a bunch of third-party content and services, with only passing references to the hardware that made it famous. Should Apple really identify itself with products that its own creative hand never really gets close to?

Link to the rest at Fast Company

TPV isn’t a tech blog, but PG has worked with a variety of tech companies in the past and, although he’s a Windows guy, has always admired Apple’s sense of mission and used iPhones almost forever.

The successor of a talented and creative CEO has a tough job in Silicon Valley. After a quick mental review, PG thinks far more successors at significant tech companies have failed than have succeeded.

Steve Jobs took Apple through some perilous times, but he always pushed the envelope and announced interesting new products. Under Jobs, Apple certainly had some product failures, but it never seemed like a company that was resorting to lame strategies. When things got tough, Apple thought big.

As the OP reflected, after Apple stumbled with the pricing and features of its latest iPhones, yesterday’s announcement seemed to say, “We’ve got to do something! Let’s copy what other companies are doing, but use Apple branding. Apple has a great brand that we need to exploit.”

PG suggests that brand equity is a precious commodity that needs to be preserved and cultivated with impressive new accomplishments, fostering the assurance that customers can continue to receive great benefits from the company and its products. The company needs to feel cool by the standards of its industry.

In the tech world, where real technology talent is always in short supply, newly-graduated engineers from top universities are often attracted to employers who promise the opportunity to work on the cutting edge.

For all of Tesla’s financial ups and downs, and for all of Elon Musk, its frenetic CEO, engineers working there feel like they’re inventing the future. Amazon has felt like a serious innovator for a long time and can attract tech and marketing talent based on that reputation and the opportunity to work on something new and different. (PG hopes Bezos’ marital problems aren’t Amazon’s version of Jobs’ pancreatic cancer.)

If Apple’s reputation becomes, “The company is not what it used to be and shows no signs of turning around,” adverse consequences will appear from many different directions.

 
