The tech industry pays programmers handsomely to tap the right keys in the right order, but earlier this month entrepreneur Sharif Shameem tested an alternative way to write code.
First he wrote a short description of a simple app to add items to a to-do list and check them off once completed. Then he submitted it to an artificial intelligence system called GPT-3 that has digested large swaths of the web, including coding tutorials. Seconds later, the system spat out functioning code. “I got chills down my spine,” says Shameem. “I was like, ‘Woah something is different.’”
GPT-3, created by research lab OpenAI, is provoking chills across Silicon Valley. The company launched the service in beta last month and has gradually widened access. In the past week, the service went viral among entrepreneurs and investors, who excitedly took to Twitter to share and discuss results from prodding GPT-3 to generate memes, poems, tweets, and guitar tabs.
The software’s viral moment is an experiment in what happens when new artificial intelligence research is packaged and placed in the hands of people who are tech-savvy but not AI experts. OpenAI’s system has been tested and feted in ways it didn’t expect. The results show the technology’s potential usefulness but also its limitations—and how it can lead people astray.
. . . .
Other experiments have explored more creative terrain. Denver entrepreneur Elliot Turner found that GPT-3 can rephrase rude comments into polite ones—or vice versa to insert insults. An independent researcher known as Gwern Branwen generated a trove of literary GPT-3 content, including pastiches of Harry Potter in the styles of Ernest Hemingway and Jane Austen. It is a truth universally acknowledged that a broken Harry is in want of a book—or so says GPT-3 before going on to reference the magical bookstore in Diagon Alley.
Have we just witnessed a quantum leap in artificial intelligence? When WIRED prompted GPT-3 with questions about why it has so entranced the tech community, this was one of its responses:
“I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world.”
The response encapsulated two of the system’s most notable features: GPT-3 can generate impressively fluid text, but it is often unmoored from reality.
. . . .
When a WIRED reporter generated his own obituary using examples from a newspaper as prompts, GPT-3 reliably repeated the format and combined true details like past employers with fabrications like a deadly climbing accident and the names of surviving family members. It was surprisingly moving to read that one died at the (future) age of 47 and was considered “well-liked, hard-working, and highly respected in his field.”
. . . .
Francis Jervis, founder of Augrented, which helps tenants research prospective landlords, has started experimenting with using GPT-3 to summarize legal notices or other sources in plain English to help tenants defend their rights. The results have been promising, although he plans to have an attorney review output before using it, and says entrepreneurs still have much to learn about how to constrain GPT-3’s broad capabilities into a reliable component of a business.
More certain, Jervis says, is that GPT-3 will keep generating fodder for fun tweets. He’s been prompting it to describe art house movies that don’t exist, such as a documentary in which “werner herzog [sic] must bribe his prison guards with wild german ferret meat and cigarettes.” “The sheer Freudian quality of some of the outputs is astounding,” Jervis says. “I keep dissolving into uncontrollable giggles.”
Link to the rest at Wired
Jodie Archer had always been puzzled by the success of The Da Vinci Code. She’d worked for Penguin UK in the mid-2000s, when Dan Brown’s thriller had become a massive hit, and knew there was no way marketing alone would have led to 80 million copies sold. So what was it, then? Something magical about the words that Brown had strung together? Dumb luck? The questions stuck with her even after she left Penguin in 2007 to get a PhD in English at Stanford. There she met Matthew L. Jockers, a cofounder of the Stanford Literary Lab, whose work in text analysis had convinced him that computers could peer into books in a way that people never could.
Soon the two of them went to work on the “bestseller” problem: How could you know which books would be blockbusters and which would flop, and why? Over four years, Archer and Jockers fed 5,000 fiction titles published over the last 30 years into computers and trained them to “read”—to determine where sentences begin and end, to identify parts of speech, to map out plots. They then used so-called machine classification algorithms to isolate the features most common in bestsellers.
The result of their work—detailed in The Bestseller Code, out this month—is an algorithm built to predict, with 80 percent accuracy, which novels will become mega-bestsellers. What does it like? Young, strong heroines who are also misfits (the type found in The Girl on the Train, Gone Girl, and The Girl with the Dragon Tattoo). No sex, just “human closeness.” Frequent use of the verb “need.” Lots of contractions. Not a lot of exclamation marks. Dogs, yes; cats, meh. In all, the “bestseller-ometer” has identified 2,799 features strongly associated with bestsellers.
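The features described above lend themselves to a toy illustration. The sketch below is emphatically not Archer and Jockers’ model (their classifier weighs 2,799 features learned from 5,000 titles); it is a hypothetical three-feature linear score with invented weights, showing only the general shape of feature-based text scoring:

```python
import re

# Hypothetical stand-ins for a few of the stylistic features the article
# mentions; the weights below are invented for illustration, not drawn
# from the real "bestseller-ometer."
def extract_features(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "need_rate": words.count("need") / n,                  # frequent "need": favored
        "contraction_rate": sum("'" in w for w in words) / n,  # contractions: favored
        "exclamation_rate": text.count("!") / n,               # exclamation marks: penalized
    }

def bestseller_score(text):
    f = extract_features(text)
    # Toy linear model: positive weights for favored features, negative otherwise.
    return 10 * f["need_rate"] + 5 * f["contraction_rate"] - 20 * f["exclamation_rate"]

plain = "She knew what she wanted. I need you, she said. He couldn't answer."
shouty = "Amazing! Incredible! You will not believe this! Wow! Buy now!"
assert bestseller_score(plain) > bestseller_score(shouty)
```

A real system would learn such weights from labeled sales data with a machine-classification algorithm, rather than hand-picking them.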
What Archer and Jockers have done is just one part of a larger movement in the publishing industry to replace gut instinct and wishful thinking with data. A handful of startups in the US and abroad claim to have created their own algorithms or other data-driven approaches that can help them pick novels and nonfiction topics that readers will love, as well as understand which books work for which audiences. Meanwhile, traditional publishers are doing their own experiments: Simon & Schuster hired its first data scientist last year; in May, Macmillan Publishers acquired the digital book publishing platform Pronoun, in part for its data and analytics capabilities.
While these efforts could bring more profit to an oft-struggling industry, the effect for readers is unclear.
“Part of the beautiful thing about books, unlike refrigerators or something, is that sometimes you pick up a book that you don’t know,” says Katherine Flynn, a partner at Boston-based literary agency Kneerim & Williams. “You get exposed to things you wouldn’t have necessarily thought you liked. You thought you liked tennis, but you can read a book about basketball. It’s sad to think that data could narrow our tastes and possibilities.”
They Know What You Did Last Night
Once, publishers had to rely on unit sales to figure out what readers wanted. Digital reading changed that. Publishers can know that you raced through a novel to the end, or that you abandoned it after 20 pages. They can know where and when you’re reading. On some reading sites and apps, users sign in with their Facebook accounts, opening up more personal data. There’s a wrinkle, though: Companies such as Amazon and Apple have the data for books read on their devices, and they aren’t sharing it with publishers.
The ability to know who reads what and how fast is also driving Berlin-based startup Inkitt. Founded by Ali Albazaz, who started coding at age 10, the English-language website invites writers to post their novels for all to see. Inkitt’s algorithms examine reading patterns and engagement levels. For the best performers, Inkitt offers to act as literary agent, pitching the works to traditional publishers and keeping the standard 15 percent commission if a deal results. The site went public in January 2015 and now has 80,000 stories and more than half a million readers around the world.
Albazaz, now 26, sees himself as democratizing the publishing world. “We never, ever, ever judge the books. That’s not our job. We check that the formatting is correct, the grammar is in place, we make sure that the cover is not pixelated,” he says. “Who are we to judge if the plot is good? That’s the job of the market. That’s the job of the readers.”
. . . .
The Data Scare
As Archer and Jockers shopped the Bestseller Code manuscript to acquisitions editors, word of their powerful algorithm spread—as did worry and suspicion among those in the publishing profession. “The fear is we can homogenize the market or try and somehow take their jobs away from them, and the answer is no and no,” says Archer. “What the bestseller-ometer is trying to do is say, ‘Hey, pick this new author that you might not dare take a risk on with your acquisitions budget. Their chance is really good.’” Archer, now a writer in Boulder, Colorado, insists that she and Jockers, now an English professor at the University of Nebraska-Lincoln, are “literature-friendly” and want good books to succeed.
Andrew Weber, the global chief operating officer for Macmillan Publishers—whose St. Martin’s Press is publishing The Bestseller Code—thinks algorithms should be viewed as an additional piece of information, rather than as an excuse to fire the editors. “Whether it’s in acquisition, whether it’s in pricing, whether it’s in marketing, whether it’s in distribution, there just seem to be many, many, many opportunities to improve the quality of our decision-making—and therefore hopefully our results—by bringing data into the equation,” says Weber. “I would say we are still in the early days of that journey, but that’s the direction we’re headed.”
Archer and Jockers watched eagerly to see which novel would be their algorithm’s favorite. It turned out to be The Circle, a 2013 technothriller by Dave Eggers about working for a massively powerful Internet company. The Circle spent multiple weeks on both The New York Times hardcover fiction and paperback trade fiction bestseller lists. A movie version starring Emma Watson and Tom Hanks is expected in theaters this year.
Link to the rest at Wired
It appears that PG missed this when it first appeared in 2016.
He suspects the almost-universal phobia towards computers, algorithms, quantitative analysis, sophisticated metrics, etc., among the indwellers of traditional publishing is related to the widespread incidence of innumeracy among English majors.
Worship of The Golden Gut is the state religion of this group. For them, no collection of numbers and formulae can ever replace The Hunch. That’s one reason why so many books fail to earn out their advances, and why so many mega-sellers are first rejected by every major publisher before stumbling into the market and finding success.
Indie authors include a much wider slice of humanity than either publishers or traditionally-published authors. That diversity of talent and background combined with Amazon’s relentless pursuit of customers and, thus, numbers, analytics, categories, sub-categories and sub-sub categories fosters the creation of niches within niches all the way down to the micro-reader level.
PG just checked a random book on the Zon and discovered that it encouraged drill-down and discovery as follows:
* Mystery, Thriller & Suspense
* Thrillers & Suspense

With broad categories mentioned:

* Book Fiction Moods
* Book Mystery Characters
(PG is not certain how much of this collection of information is presented as a result of PG’s and Mrs. PG’s past buying habits.)
Finally, if you prefer, you could check out 383 different categories, series, spinoffs, heroes/heroines, etc., etc., etc. (including 盗墓笔记, El cementerio de los libros, Svartåsen and Die Krimi-Serie in den Zwanzigern) as follows:
From The Scholarly Kitchen:
I recently spent the better part of a work day reading Richard Poynder’s 87-page treatise on the current status of open access. Even as I printed it out, so as to protect myself from any digital distraction while reading, I wondered whether reading the full text was in fact the best use of my time. Was there an executive summary that might suffice? Could I skim it and just pick up the general gist of his argument? Truthfully, the response to both questions turned out to be No. It was a substantive piece and thoroughly documented, via footnotes as well as embedded links. Clearly, a thorough reading was going to require attention and time. Did I have either?
I was not the only person who reacted to the length of Poynder’s “ebook.” Others were having to make the same decision about whether the time spent reading would be well-invested. Although I hadn’t realized something was in the works at the Scholarly Kitchen, Rick Anderson, Associate University Librarian at the University of Utah, had done some of the heavy lifting in evaluation (see here). On Twitter, a researcher asked for the TL;DR version and the author quickly referred him to Digital Koans where the selection of a single concluding paragraph summed up what the author felt was covered in the meaty essay.
. . . .
But even so, someone else tweeted out that, no matter how worthwhile the content, he or she could hardly hand over a document of 87 pages to their provost and expect them to read something of that length. The time commitment required to consume the dense material would not seem justifiable, unless the topic was one with which the provost was already deeply concerned.
This gives me pause, because how we view the task of reading, how much time we allocate to reading, and the criteria for determining what is worthy of being read continue to be a challenge. It is something with which many professionals wrestle on a daily or weekly basis.
. . . .
Given the demands of real life, how much reading is feasible? The group included Verity Archer of Federation University in Australia, who referenced the concept of “time privilege”: the fact that those with the greatest flexibility in their schedules are usually the most privileged when it comes to reading. Those early career researchers who most need the time to read and absorb the literature are generally the ones most weighted down with teaching and administrative tasks. Women who are primary care-givers outside of the office will tend to push the work of reading into their leisure evening or weekend hours. If reading is part of one’s day job, in Archer’s view, then the available hours in the workday should allow for it.
. . . .
Others referenced irregular reading habits unless faced with a grant or syllabus deadline, at which point they would do a spell of binge-reading. The hesitation associated with that practice was summed up well by David A. Sanders of Purdue when he wrote, “We should…resist the urge to promote research results that we have not personally evaluated.”
A humanist quoted in the Times Higher Ed piece wrote that the question of determining what to read was “now infinitely more complex in the age of digital and computational possibilities”.
. . . .
The Danish AI company, UNSILO, recently reported on results from their 2019 survey on the acceptance and usage of AI in academic publishing. They found that publishers have hitherto focused on how AI might solve their own problems rather than those of the research community. As noted on page 8 of the report, “The primary perceived benefit of AI was that it could save time. This could be seen as evidence of a new realism among publishers, since the thinking is presumably to apply AI tools to relatively straightforward processes that could be completed faster with the aid of a machine, such as the identification of relevant articles for a manuscript submission, or finding potential peer reviewers who have authored papers on similar topics to a manuscript submission.” While I see this as a sensible use of AI by content and platform providers, the pragmatic reality suggests an uncomfortable possibility. There is no magic solution. AI isn’t currently up to the task.
In the earlier instance of the librarian reading for purposes of peer review, there was a quick response from one of the founders of Scholarcy, an application offering summaries of full-text articles to the researcher. The tagline for the company is blunt: “Read less, learn more,” and it springs from the founders’ own frustrations in trying to handle the volume of content to be read in the PhD process. Among other functionalities noted in its marketing text, Scholarcy will highlight the important findings in a paper, eliminating the need for the reader to print out and laboriously highlight critical segments or sentences. The reader can customize specific aspects — the number of words, the level of highlighting, and the level of language variation (this last allows you to more easily cite the finding in your paper). Scholarcy will navigate the user to Google Scholar, to arXiv, and to other open source material referenced in the paper. There are additional functionalities, and Scholarcy invites the visitor to their site to engage with their demo, a worthwhile use of 15 minutes. Their tool is recommended for researchers, librarians, publishers, students, journalists, and even policy wonks.
Link to the rest at The Scholarly Kitchen
PG is not an expert on academic writing, but in the legal world, there is a lot of poor writing. Sentences and paragraphs are structured according to standard practice, citations are perfect (thanks, in part, to some computer assistance), but the thought behind the expression often seems to be haphazard and poorly-realized.
Contracts written by lawyers working for or in large business organizations are the worst. You stack poorly-organized thinking upon legal necessities upon boilerplate mindlessly copied and pasted and you end up with an extraordinary mess that sometimes contradicts itself and requires that you go to paragraph 54 to understand something written in paragraph 29, which is later modified in paragraph 62(a)(iii).
Imagine you go to a bookstore and notice an exciting cover. You pick up the book, read the summary on the back and the rave reviews. The plot seems intriguing enough, but when you check the author, it says “by AI-something.” Would you buy the book, or would you think it a waste of money? We will face those decisions in the future, and who will be responsible for such writings? You may as well decide now whether you will purchase content written by AI. That is what the future will bring — AI is learning to play with words.
All of us have gotten used to chatbots and their limited capacity, but it appears their boundaries will be surpassed. Dario Amodei, OpenAI’s research director, informs us they have created a language modeling program which is very imaginative, to say the least. Its latest achievement was creating counterarguments and discussions with the researchers.
The program was fed a variety of articles, blogs, websites, and other content from the internet. Surprisingly, it managed to produce an essay worthy of any reputable writing service, and on a particularly challenging topic, by the way (Why Recycling Is Bad for the World).
Did the researchers do anything to help the program by providing specific, additional input? Certainly not. GPT-2, OpenAI’s new algorithm, did everything on its own. It excelled in different tests, such as storytelling and predicting the next word in a sentence. Admittedly, it’s still far from inventing an utterly gripping story from beginning to end, as it tends to stray off topic — but it has great potential.
What sets GPT-2 apart from other similar AI programs is its versatility. Typically, such programs are skilled only in certain areas and can complete only specific tasks. However, this AI language model uses its input and successfully deals with a variety of topics.
Link to the rest at ReadWrite
There has long been a chasm between what we perceive artificial intelligence to be and what it can actually do. Our films, literature, and video game representations of “intelligent machines” depict AI as detached but highly intuitive interfaces. We will find communication re-imagined with emotion AI.
. . . .
As these artificial systems are being integrated into our commerce, entertainment, and logistics networks, we are witnessing them develop a measure of emotional intelligence. These smarter systems have a better understanding of how humans feel and why they feel that way.
The result is a “re-imagining” of how people and businesses can communicate and operate. These smart systems are drastically improving the voice user interface of voice-activated systems in our homes. AI is improving not only facial recognition but changing what is done with that data.
. . . .
Humans use thousands of subverbal cues when they communicate. The tone of their voice, the speed at which someone speaks: these are all hugely important parts of a conversation, but they aren’t part of the “raw data” of that conversation.
New systems designed to measure these verbal interactions are now able to look at emotions like anger, fear, sadness, happiness, or surprise based on dozens of metrics related to specific cues and expressions. Algorithms are being trained to evaluate the minutiae of speech in relation to one another, building a map of how we read each other in social situations.
Systems are increasingly able to analyze the subtext of language based on the tone, volume, speed, or clarity of what is being said. Not only does this help these systems to identify the gender and age of the speaker better, but they are growing increasingly sophisticated in recognizing when someone is excited, worried, sad, angry, or tired. While real-time integration of these systems is still in development, voice analysis algorithms are better able to identify critical concerns and emotions as they get smarter.
. . . .
The result of this is a striking uptick in the ability of artificial intelligence to replicate a fundamental human behavior. We have Alexa developers actively working to teach the voice assistant to hold conversations that recognize emotional distress, the US Government using tone detection technology to detect the symptoms and signs of PTSD in active duty soldiers and veterans and increasingly advanced research into the impact of specific physical ailments like Parkinson’s on someone’s voice.
While done at a small scale, it shows that the data behind someone’s outward expression of emotion can be cataloged and used to evaluate their current mood.
Link to the rest at ReadWrite
From Seeking Alpha:
- Takeover of Barnes & Noble highlights the importance of technology change in media retailing.
- Lessons from Borders and Blockbuster bankruptcies are still relevant.
- Loyal customer base supports ongoing Barnes & Noble mall presence.
Barnes & Noble, the largest US book retailer with a total of 620 stores, announced plans this month to be acquired by Elliott Management (a $34 billion New York private equity hedge fund) for $683 million (including transfer of debt).
. . . .
The important benefit of this takeover for Barnes & Noble shareholders (as well as Barnes & Noble’s landlords, the Retail REITs) is that this is a takeover in anticipation of a turnaround. Elliott Management also owns UK book retailer Waterstones and plans to put Waterstones’ successful CEO, James Daunt, in charge of both companies. It appears that Barnes & Noble has found a good home.
With 627 Barnes & Noble stores in the US and 280 Waterstones locations in the UK, Elliott Management is facing off against Amazon, the online juggernaut that is believed to sell as much as 50% of all new hard copy books as well as a large share of e-books and used books. Barnes & Noble has a successful website allowing loyal customers to purchase books, movies, music, toys, and games, but it cannot compete with Amazon in size or selection, customer history, or ability to take advantage of cross-selling and financing opportunities.
Still, Barnes & Noble knows their customer base well, having used loyalty programs to reach out to their frequent shoppers and should be able to take advantage of their friendly environment for book lovers at well-established stores. I think we won’t see many Barnes & Noble stores close, at least not at first; we are far more likely to see discounting and special offers at Barnes & Noble. Customers should feel upgraded.
. . . .
Although the greatest threat to Barnes & Noble’s future remains Amazon (both for online sales of hard copy books and e-books sold on Nook), I think the true threat is technology change, as we have seen over the past 12 years of change in the way media is delivered and consumed by today’s shoppers. These two retail failures – Blockbuster Video and Borders – still have something to tell us about current retail challenges.
Link to the rest at Seeking Alpha
From The Guardian:
Will androids write novels about electric sheep? The dream, or nightmare, of totally machine-generated prose seemed to have come one step closer with the recent announcement of an artificial intelligence that could produce, all by itself, plausible news stories or fiction. It was the brainchild of OpenAI – a nonprofit lab backed by Elon Musk and other tech entrepreneurs – which slyly alarmed the literati by announcing that the AI (called GPT2) was too dangerous for them to release into the wild, because it could be employed to create “deepfakes for text”. “Due to our concerns about malicious applications of the technology,” they said, “we are not releasing the trained model.” Are machine-learning entities going to be the new weapons of information terrorism, or will they just put humble midlist novelists out of business?
. . . .
GPT2 is just using methods of statistical analysis, trained on huge amounts of human-written text – 40GB of web pages, in this case, that received recommendations from Reddit readers – to predict what ought to come next. This probabilistic approach is how Google Translate works, and also the method behind Gmail’s automatic replies (“OK.” “See you then.” “That’s fine!”). It can be eerily good, but it is not as intelligent as, say, a bee.
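The “predict what ought to come next” idea scales down to a few lines. The sketch below is a simple bigram counter, not GPT2’s neural network, and its training “corpus” is one invented sentence rather than 40GB of web pages:

```python
from collections import defaultdict, Counter

# Count, for every word, which words follow it and how often.
def train_bigrams(text):
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

# Greedy "decoding": return the most frequent continuation seen in training.
def predict_next(counts, word):
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "it was a bright cold day and it was a dark cold night"
model = train_bigrams(corpus)
assert predict_next(model, "was") == "a"        # "was" is always followed by "a"
assert predict_next(model, "cold") in ("day", "night")
```

GPT2 performs the same kind of prediction over subword tokens with a very large neural network instead of a lookup table, which is where both the fluency and the appetite for training data come from.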
Right now, novelists don’t seem to have much to fear. Fed the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – the machine continued the narrative as follows: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”
. . . .
Did the AI do any better with Jane Austen? The opening phrase of Pride and Prejudice – “It is a truth universally acknowledged” – provoked the machine to gabble on: “that when a nation is in a condition of civilization, that it is in a great measure the business of its leaders to encourage the habits of virtue, and of industry, and of good order among its people.” This does sound rather like some 19th-century political bloviator, even if a slightly broken version. (The second “that” is redundant, and it should read “in great measure” without the indefinite article.)
. . . .
Is there greater cause to worry further down the literary food chain? There have for a while already been “AI bots” that can, we hear, “write” news stories. All these are, though, giant automated plagiarism machines that mash together bits of news stories written by human beings. As so often, what is promoted as a magical technological advance depends on appropriating the labour of humans, rendered invisible by AI rhetoric. When a human writer commits plagiarism, that is a serious matter. But when humans get together and write a computer program that commits plagiarism, that is progress.
. . . .
The makers’ announcement that this program is too dangerous to be released is excellent PR, then, but hardly persuasive. Such code, OpenAI warns, could be used to “generate misleading news articles”, but there is no shortage of made-up news written by actual humans working for troll factories. The point of the term “deepfakes” is that they are fakes that go deeper than prose, which anyone can fake. Much more dangerous than disinformation clumsily written by a computer are the real “deepfakes” in visual media that respectable researchers are eagerly working on right now. When video of any kind can be generated that is indistinguishable from real documentary evidence – so that a public figure, for example, can be made to say words they never said – then we’ll be in a world of trouble.
. . . .
Perhaps a more realistic hope for a text-only program such as GPT2, meanwhile, is simply as a kind of automated amanuensis that can come up with a messy first draft of a tedious business report – or, why not, of an airport thriller about famous symbologists caught up in perilous global conspiracy theories alongside lissome young women half their age. There is, after all, a long history of desperate artists trying rule-based ruses to generate the elusive raw material that they can then edit and polish. The “musical dice game” attributed to Mozart enabled fragments to be combined to generate innumerable different waltzes, while the total serialism of mid-20th‑century music was an algorithmic approach that attempted as far as possible to offload aesthetic judgments by the composer on to a system of mathematical manipulations.
. . . .
But until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories. And if one day they could, would we even be able to follow them? As Wittgenstein observed: “If a lion could speak, we would not understand him”. Being a lion in the world is (presumably) so different from being a human in the world that there might be no points of mutual comprehension at all. It’s entirely possible, too, that if a conscious machine could speak, we wouldn’t understand it either.
Link to the rest at The Guardian
PG says, “We have a lot of rain in June. Is the buzz dead better than the couple? The maddening kill crawls into the wealthy box. When does the zesty liquid critique the representative?”
After leaving the crumpled planet Abydos, a group of girls fly toward a distant speck. The speck gradually resolves into a contented, space tower.
Civil war strikes the galaxy, which is ruled by Brad Willis, a derelict wizard capable of lust and even murder.
Terrified, an enchanted alien known as Michelle Thornton flees the Empire, with her protector, Chloe Noris.
They head for Philadelphia on the planet Saturn. When they finally arrive, a fight breaks out. Noris uses her giant knife to defend Michelle.
Finally, a blurb for a romance novel:
In this story, a serene police chief ends up on the run with a realistic witch-hunter. What starts as professional courtesy unexpectedly turns into a passionate affair.
From The Wall Street Journal:
Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016, and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.
Three years prior, Mr. Loomis was found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (aka Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It considers factors including indications a person abuses drugs, whether or not they have family support, and age at first arrest, with the intent to determine how likely someone is to commit a crime again.
The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.
An algorithm is just a set of instructions for how to accomplish a task. They range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems, trained on terabytes of data. Either way, human bias is part of their programming. Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
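The training-data point can be made concrete with a toy model. Everything below is invented for illustration: a one-feature threshold “classifier” fit mostly on one group’s examples picks a boundary that serves the underrepresented group poorly, even though no one programmed the bias in explicitly:

```python
# Labeled examples as (feature_value, label) pairs; all numbers are made up.
group_a = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)] * 25  # well represented in training
group_b = [(0.85, 0), (0.95, 1)]                          # barely represented

training = group_a + group_b

def fit_threshold(data):
    # Pick the cutoff that maximizes overall training accuracy.
    best_t, best_acc = 0.0, 0.0
    for t in (x / 100 for x in range(101)):
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(data, t):
    return sum((x >= t) == bool(y) for x, y in data) / len(data)

t = fit_threshold(training)
# The boundary lands where the majority group separates cleanly,
# so group B's 0.85 example is misclassified.
assert accuracy(group_a, t) == 1.0
assert accuracy(group_b, t) < 1.0
```

The remedy is not discarding the model but collecting representative data, which is exactly the diversity problem the facial recognition example above describes.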
. . . .
No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.
As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The aspects of society that computers are often used to facilitate have a history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.
“Some people talk about getting rid of bias from algorithms, but that’s not what we’d be doing even in an ideal state,” says Cathy O’Neil, a former Wall Street quant turned self-described algorithm auditor, who wrote the book “Weapons of Math Destruction.”
“There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?” she adds.
. . . .
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
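A minimal sketch of that asymmetry (the field names and weights are invented for illustration; this is not the actual system described above): the score can only count what the public system records, so an identical history held behind private insurance contributes nothing.

```python
# Invented toy risk score: points accrue only for factors the public
# system can observe, so privately insured families look "lower risk."
def risk_score(record):
    score = 0
    if record.get("used_public_mental_health_services"):
        score += 2   # logged only for users of government services
    if record.get("receives_public_benefits"):
        score += 1   # likewise visible only for low-income families
    return score

# Two parents with the same underlying history; one used private care,
# so that history never enters the data at hand.
low_income_parent = {"used_public_mental_health_services": True,
                     "receives_public_benefits": True}
insured_parent = {}  # same history, but invisible to the system

print(risk_score(low_income_parent), risk_score(insured_parent))  # 3 0
```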
The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. Ms. Richardson says panels that determine the bias of computers should include not only data scientists and technologists, but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law.
Link to the rest at The Wall Street Journal
As PG was opening a couple of packages of hardcopy books for Mrs. PG (she does read a lot of ebooks, but, in some cases, used books are less expensive and some books she wants in hardcopy to share with family and/or friends), it occurred to him that Amazon has almost certainly given used booksellers an opportunity to reach a far wider group of prospective purchasers than were ever available to them in physical used bookstores.
Most of the hardcopy used books that arrive in the mail come well-packaged and most are clearly packed by more sophisticated equipment than a roll of stamps and a stack of envelopes.
So, is PG correct about Amazon and used booksellers?
Has the ability to sell to a much wider online audience affected the pricing of used books?
Has the used book business undergone consolidation with small used bookstores closing and selling their inventory to large, online-focused used booksellers?
Are there people who are paid by larger used booksellers to be scouts for large quantities of available used books?
Not necessarily to do with books, but two of PG’s offspring are hearing-impaired, so he follows topics like this. He’s also interested in developments in artificial intelligence, so it’s a double win for him.
From The Google AI Blog:
The World Health Organization (WHO) estimates that there are 466 million people globally who are deaf and hard of hearing. A crucial technology in empowering communication and inclusive access to the world’s information for this population is automatic speech recognition (ASR), which enables computers to detect audible languages and transcribe them into text for reading. Google’s ASR is behind automated captions in YouTube, presentations in Slides, and also phone calls. However, while ASR has seen multiple improvements in the past couple of years, the deaf and hard of hearing still mainly rely on manual transcription services like CART in the US, Palantypist in the UK, or STTR in other countries. These services can be prohibitively expensive and often need to be scheduled far in advance, diminishing the opportunities for the deaf and hard of hearing to participate in impromptu conversations as well as social occasions. We believe that technology can bridge this gap and empower this community.
Today, we’re announcing Live Transcribe, a free Android service that makes real-world conversations more accessible by bringing the power of automatic captioning into everyday, conversational use. Powered by Google Cloud, Live Transcribe captions conversations in real-time, supporting over 70 languages and more than 80% of the world’s population. You can launch it with a single tap from within any app, directly from the accessibility icon on the system tray.
Link to the rest at The Google AI Blog
“Hey, Dad. I want to show you a song.”
The speaker was my 16-year-old daughter. Music for her? Primarily visual and to be enjoyed in video clips. Video clips that did not always feature videos. Sometimes it was just some clip art and the music. But no record store, no record album, no tape — reel-to-reel, eight track, cassette or otherwise — and finally no compact disc. And she’s not alone in how she’s digging on the music she digs on.
According to Nielsen’s music report, digital and physical album sales declined (again) last year, from about 205 million copies in 2016 to 169 million in 2017, down 17 percent. Over the past five years, right up to Nielsen’s mid-year report, sales had fallen by roughly 75 percent. That decline is coinciding with a streaming juggernaut that continues to grow. How much so? Last year streaming skated, quite easily, beyond 400 billion streams. Include video streams and the figure tops 618 billion. Look back at the year before and you see a 58 percent increase in audio streams.
While this buoyed the damned-near-moribund music industry to the tune of 12.5 percent growth from 2016 to last year, the music business is now, as it has been, all about discovering the music that can generate all of those streams. And that’s where things get curious because record labels that are used to creating heat now have to go places where the heat is being created to stay viable and vibrant.
. . . .
With a number of presently high-profile artists — Odd Future, Lil Yachty, Post Malone, etc. — being “discovered” on places like SoundCloud over the past five years, entire communities of music fans can beat both the hype and the Spotify/Pandora/SiriusXM radio/Amazon algorithms that suggest if you liked this, you might also like that, by starting there, and branching out. First stop: Instagram.
“People come in all the time and play me stuff from their IG feeds,” says Mark Thompson, founder of Los Angeles-based Vacation Vinyl (that sells, yes, primarily vinyl). “So I’m hearing bands that it soon becomes pretty clear have no label, no representation, nothing but an IG feed and maybe some music recorded on their laptops.”
To put this in perspective, in July 2018, Instagram added the music mode in Stories, and just that quickly streaming started to feel … old. Because from the musicians’ mouths to our ears, unmediated music finds its way from the creator to the consumer. Spotify is trying to adapt too — it has over the past year begun to sign deals with independent musicians to give them access to the platform.
. . . .
“It’s free,” she says, having endured speeches about listening to unpaid/stolen music. Since she and her friends don’t ever listen to more than 60 seconds of any song, at least while I am around, this raises the question: Is it a business and is it sustainable in the same way that Apple Music, Tidal, Deezer or iHeartRadio have managed to be?
“Unknown,” says former promoter and music industry executive Mark Weiss. “But the business is where the ears are. And if the business is any damn good it’ll figure out how to stay in the conversation.”
. . . .
Flash-forward to record contracts from the mid-1990s that covered cassette tapes, vinyl, compact discs and “future technologies not yet known.” The digitization of analog music had already changed the landscape for everything from crime to interior design.
Whereas previously you’d have needed a turntable, an amplifier, maybe a preamp, a tape player, a receiver, speakers and a subwoofer to listen to the music that you’d be playing off of tapes, vinyl or CDs, after everything was digitized you just needed a phone and speakers.
Link to the rest at OZY
From Kristine Kathryn Rusch:
For years now, I’ve done a year-end review, examining what happened and where the industry stands.
. . . .
I wrote down lists and links and reviewed notes and thought long and hard about things…and still couldn’t figure out how to wrap my arms around what I wanted to talk about.
I initially thought about combining the different parts of the industry under topics, and examine the topic rather than that part of the industry. But the industry is diverging in some important ways, making that way of writing these blogs exceedingly difficult.
This afternoon, it struck me: I write the year-end reviews so that I can focus on what to expect from the year to come.
So rather than look in detail at what happened in 2018, I’ll be looking at what happened with an eye toward the future.
. . . .
A reminder: I write these weekly business blogs for other writers who want to make or already have a long-term career. If you’re just starting out, some of this stuff won’t apply to you. If you’re a hobbyist who never wants to quit your day job, again, some of this stuff won’t apply to you. Don’t ask me to bend the blog toward you. There are a number of sites that cater to the beginner or the writer who doesn’t really care if she makes a living.
. . . .
For the most part, however, dealing with beginner and hobbyist issues doesn’t interest me. I’m a long-term professional writer who has made money as a writer since I was 16, who has made a living at it since I was 25, and who started making a heck of a great living at it by the time I was 35. I started writing these weekly blogs to make some kind of sense out of the disruption in the publishing industry in 2009. I did it for me, because I think better when I am writing things down.
The disruption continues, albeit in a new phase (part of what I’ll discuss below), and so I am focusing on what I need to focus on for my long-term writing career. I hope that some of these insights will help the rest of you.
. . . .
The disruption in the publishing industry will continue for some time now. Years, most likely. I don’t have a good crystal ball for how long it will go on, but we are past the gold rush years in the indie publishing world and have moved into a more consistent business model. It’s at least predictable, now. We know some patterns and how they’re going to work.
. . . .
The disruption in traditional publishing has gone on for nearly two decades now. It began before the Kindle made self-publishing easy by giving writers an easily accessible audience. Traditional publishing became ripe for disruption in the 1990s when the old distribution model collapsed.
Many of you saw it from the outside—the decline of the small bookstore, the loss of bookstores in small towns, the rise of the bestseller only in chain bookstores. All of that came from a collapse in the distribution system, from hundreds of regional distributors down to about five. (I don’t off the top of my head recall the actual number.) That made publishers panic. They couldn’t figure out what kinds of books sold best in the Pacific Northwest as opposed to what sold well in the Southeast, and worse, they didn’t have time to figure it out.
(When I came into the business, a top sales person for a major book company would know that science fiction sold well in California and quest fantasy sold well in Georgia, that the Midwest really enjoyed regional books, while New Yorkers often didn’t.)
Bestsellers sold everywhere, so publishers ramped up the production of already-established authors and sent those books all over the nation. Then, when the crisis leveled out, the publishers did not return to the old ways, scared of what to do. They continued to push for huge sellers rather than grow newer books.
Writer after writer after writer got dumped by their publisher in this period, while some new writers made fortunes because they wrote books that were similar to existing bestsellers.
When the Kindle came around and disrupted publishing, both writers and readers were ready for something new. That combination of forces created the blockbuster indie sellers—which were not blockbuster to traditional publishers. (The writers were making significantly more money, but selling fewer units than trad pub bestsellers.)
Hold that thought for a moment while I remind you that another disruption—a different one—was hitting publishing at the same time. Audiobooks went digital, and exploded. It became easy to download an audiobook and listen to it on your iPod (remember those?) or your favorite MP3 player. Some cars made it easy to hook those players up to the car’s sound system.
And thus, commuters wanted everything on audio, and the demand in audio grew exponentially. As so many industry analysts said five or six years ago, if the Kindle hadn’t come around, the big story in publishing would have been the audiobook.
And here’s another publisher problem: most publishers never secured audio rights to the books they published. That money went directly to the authors.
. . . .
For years now, those of us who watch business trends have predicted that book sales would plateau. In reality, “plateau” is the wrong word for overall book sales. Those continue to grow, sometimes in ways that aren’t entirely measurable. New markets are opening all the time, bringing in new readers.
The system for measuring both readers and sales is so inadequate that we can’t count the readers we have, let alone the new readers who are coming into the book industry sideways. However, there is a lot of evidence—scattered, of course—that new readers are coming in. (I’ll deal with this in future weeks.)
Readership is growing, but individual sales are mostly declining. Traditional publishing’s fiction sales are down 16% since 2013. Traditional publishing has a lot of theories about this, delineated out in the Publishers Weekly article I linked to.
Indie writers believe a lot of the trad pub sales migrated to them. Maybe.
But some of what happened here was the inevitable decline from the gold rush of a disruptive technology.
Let’s look at traditional publishing for a moment. Traditional publishing moved to the blockbuster model at the turn of the century, meaning that the books that were published had to have a guaranteed level of sales or the author’s contract wouldn’t be renewed. The sales rose, partly because traditional publishing was the only game in town.
In that period, if you went to bookstores all over the country, and followed that up with a visit to the grocery store, as well as a visit to a store like Walmart or Target, you’d find the same group of books on the shelves. A few more in Target than in the grocery store, and certainly more in the bookstore, but still, the same books. And the airport bookstores were the same way.
If a reader needed reading material, he only had a few hundred titles at any given time in the stores to choose from. So the reader read the best of what he found, not necessarily what he wanted to read.
Then the disruption happened. Kindles and ereaders proliferated. Readers found books they’d been searching for, often for years. The readers also found some genres and subgenres that they hadn’t seen in a decade or more, usually books by indie writers that couldn’t sell to the big traditional companies.
The boom in ebooks grew and grew and grew. (And if traditional publishing hadn’t dicked around with pricing, their book sales would have grown even more.) That’s why the S-curves on that graph grow precipitously between Stages Two and Three. Adoption increases revenue for a very, very, very short period of time.
That kind of growth is not sustainable for years, though. That’s why I say it was an inevitable plateau. If you’ll look on that graph again, though, you’ll see that both curves end higher on the y-axis—the profit axis—than they were at the beginning.
But hitting that plateau after years of rapid growth and, in the case of traditional publishing, a near-monopoly on the market, is painful. And that’s what we’re experiencing.
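The graph Kris mentions isn’t reproduced here, but the general shape she describes is the standard logistic adoption curve. A quick sketch (generic parameters, not her actual data) shows the pattern: slow start, steep middle, then a plateau well above the starting level.

```python
import math

# Generic logistic (S-curve) adoption sketch; cap, rate and midpoint are
# arbitrary illustrative values, not publishing-industry figures.
def adoption(t, cap=100.0, rate=1.0, midpoint=5.0):
    return cap / (1 + math.exp(-rate * (t - midpoint)))

early = adoption(0)   # barely off the floor
mid = adoption(5)     # 50.0: growth is fastest right at the midpoint
late = adoption(10)   # the plateau, far above where the curve began
print(early, mid, late)
```

The steep segment in the middle is the gold-rush phase; the flattening at the top is the plateau, which nonetheless ends higher on the y-axis than the starting point.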
Also, sales are spreading out. I’ll talk about this a bit more in the next couple of weeks. But think of it this way. Instead of a lot of readers reluctantly reading the latest blockbuster because they’re trapped in the airport and can’t find anything else to read, those readers are now downloading dozens of books on their phones, and reading a variety of things—some of which we don’t have measurements of. Those readers have left the blockbusters they barely liked behind and found books/authors they like better.
So the money that would have gone to five different authors at three different publishing companies is now going to twenty authors, and only two of those authors are with traditional publishing companies. The books the readers are reading, though, aren’t the latest blockbuster by that author, but an older book that came out a decade ago. The price is lower, and the companies aren’t interested in those sales. They want the newest book to sell the most copies.
The consumer spends the same amount of money, but spreads it out over a wider range. Many of these sales are untrackable. Not all of those twenty authors report their sales to anyone, and not all of those sales were made through traditional channels. A few of the authors sold on their own websites. Some of those books came out of bundles. And some came out of a subscription service like Amazon. The traditional publishing companies lost most of the revenue, because their book sales have legitimately declined.
But that doesn’t mean people are reading less or that fiction reading is declining.
I’m not the only one who sees this. Mark Williams of The New Publishing Standard had the same reaction to the traditional publishing fiction numbers that I did. He wrote on November 18:
The big problem we have is that the fiction market, much more so than the wider book market, is so fragmented now, thanks to digital (by which I mean not just ebooks and audiobooks but online POD and most of all social media democratising the promotion of fiction titles), such that it seems like fewer people are reading fiction, but the reality is likely just the opposite.
The fragmented market is but one thing we’ll talk about in the next few weeks. We’ll look at how writers can use that market to their own advantage.
Link to the rest at Kristine Kathryn Rusch
PG always appreciates the analysis Kris and Dean bring to the publishing world, traditional and indie. He was going to add a few of his thoughts to Kris’ excellent post, but, perhaps as a result of holiday hangover (not the alcoholic kind), his little gray cells are not as well-regimented as usual.
Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.
Several years ago, I was sitting in the audience at a big tech conference, learning about a startup that made it easy for people to rent rooms in other people’s houses for short stays. In a world where people can now travel to any part of the world and share someone else’s home, could we hope, the CEO asked, for greater cross-cultural understanding? “Would nations have less war if the residents lived together?”
I closed my eyes, breathed deeply, and felt an immense sense of peace and hope for humanity wash over me.
Then I opened my eyes and thought, “Isn’t this basically a hotel in someone’s house — a cool, convenient, unregulated hotel?”
When it was my turn to take the stage, I too had a grandiose proclamation: Our startup, I declared, was helping people make meaningful connections in the real world.
What I really should have said was: We help people hook up.
On the plane ride home, I began to write what would eventually become The Big Disruption, a satirical novel based on my experience working at both a startup and one of the biggest tech companies in the world. I had no goal at the time other than to provide a bit of cathartic escape from the tech industry, where, on the surface, things seemed really important and exciting.
We were doing big things!
Bringing the internet to the developing world!
Singing songs to orphans!
But also, on some level, it all felt a bit off.
So, where to begin?
. . . .
To be sure, Silicon Valley has built some great products that have truly changed our lives for the better. And I do think that in many, many ways, it has taken noble stands during difficult times and helped redefine what people expect from companies, well beyond just the tech industry. It has also led me to some of my best friends and greatest opportunities, for which I am very grateful. There is so much I really do love about this world.
But there is also what drove me to leave the big tech company last fall and take a break. The issues that I got tired of defending at parties. The endless use of “scale” as an excuse for being unable to solve problems in a human way. The faux earnestness, the self-righteousness. All those cheery product ads set to ukulele music.
I wrote this book for two reasons. First, I wanted to explore what drives the insatiable expansion of the big tech companies. Despite how the industry is sometimes portrayed in the media, I don’t really think the management teams at Facebook, Google, Apple, Uber, or Amazon wake up each morning thinking about how to steal more user data or drive us all out of our jobs. Those are real consequences, but not the root cause. Rather, it’s the desperation to stay on top and avoid being relegated to a dusty corner of the Computer History Museum that pushes these companies into further and further reaches of our lives.
Second, I wrote this book because we should be able to love and celebrate the products that we build — but without ignoring the hard questions they raise. We need to end the self-delusion and either fess up to the reality we are creating or live up to the vision we market to the world. Because if you’re going to tell people you’re their savior, you better be ready to be held to a higher standard. This book is my small way of trying to push us all to be better. Meaning…
You can’t tell your advertisers that you can target users down to the tiniest pixel but then throw your hands up before the politicians and say your machines can’t figure out if bad actors are using your platform.
. . . .
You can’t buy up a big bookstore and then a big diaper store and a big pet supply store and, finally, a big grocery store, national newspaper, and rocket ship and then act surprised when people start wondering if maybe you’re a bit too powerful.
And you can’t really claim that you’re building for everyone in the world when your own workforce doesn’t remotely resemble the outside world.
Link to the rest at Medium
PG notes that this book was published on Medium and can be read in the Medium app or on the web.
From SSRN (footnotes omitted, a few paragraph breaks added):
“Data is the new gold. It’s the new oil. It’s the new plastics.”
— Mark Cuban, 2017
Over the last decade the music, motion picture, and publishing industries have faced what many have characterized as a crisis. Online piracy and the digital technologies that enable it are said to have destroyed traditional models of content creation and distribution.
The music industry is most often offered as the leading example. In the nearly two decades since the digital file-sharing service Napster burst on the scene, recording company revenues have plunged by approximately 72% in the U.S., or almost 80% adjusted for inflation.
A great deal of that decline in revenue can be traced to the ability to distribute and share content digitally without either legal permission or much chance of consequence.
The story appears to be dire, and yet it is increasingly obvious that the crisis narrative obscures more than it reveals. To be sure, the shift to digital and the related upsurge in online piracy — a phenomenon we refer to here as the “first digital disruption” — dramatically re-organized power within the music industry and transformed the ways in which the industry does business and makes (or does not make) money. But the industry adjusted, and the disruption did not fundamentally change the way music is created.
The first digital disruption mainly undermined a particular set of music industry business models. Most of the impact fell on middlemen (record labels, publishing companies, and retailers) who saw their revenues sink. And even there, the story has been as much about creation as disruption. Record labels, formerly the dominant force in the industry, are much diminished today.
But streaming services, such as Spotify, Apple Music, and Tidal, once tiny, are now important players. Turning the destructive potential of digital distribution on its head, they have utilized the internet to pioneer new and lucrative modes of content dissemination. Indeed, the total revenue of digital distributors now exceeds the total revenues of recording companies.
The U.S. live music industry has also grown substantially, and is expected to continue to grow at about twice the rate of the overall economy. And even as record company revenues have shrunk, the best evidence suggests that more music is being produced than ever before.
On the other side of the market, consumers pay less, and have more access to, that cornucopia of music than ever before.
The next digital disruption is going to reach deeper. It will re-order how creative work is produced, and not simply how it is promoted and sold. It will transform our notions of authorship. It will raise fundamental questions about the nature and value of human creativity. And, perhaps less consequentially for the world at large—but of central importance to lawyers—it may shift how we think about the value and utility of, and even the moral justification for, intellectual property rules.
What is this second digital disruption? We can see its onset in the high-stakes merger between AT&T, which owns digital cable and satellite networks for distributing video programming, and Time Warner, which produces film and television content. The Department of Justice challenged the merger, arguing that it would harm competition in video programming and distribution markets. In its pre-trial brief, Time Warner argued for the merger by noting that, as a stand-alone content producer it faced a competitive disadvantage versus rivals, such as Netflix, Google, and Facebook, that produce content but also own a digital distribution platform. As Time Warner argued:
First, unlike Google and Facebook, Time Warner has no access to meaningful data about its customers and their needs, interests, and preferences. In most cases, Time Warner does not even know its viewers’ names. This data gap impedes its ability to compete with Google, Facebook, and other digital companies in advertising sales, which are critical to Turner [Broadcasting, a Time Warner subsidiary]’s viability, and which allow Turner to keep subscription fees much lower than they otherwise would be. Whereas digital companies have the data and the technology to deliver advertisements that are both specifically addressed (shown) to a particular viewer and tailored to that viewer’s specific needs and interests, Time Warner cannot target its television advertising in those ways, creating an increasing competitive disadvantage for the company. The data gap also gives online video programmers a competitive advantage in the production and aggregation of content based on extensive data about the content preferences of their viewers.
This spring Judge Richard Leon of the United States District Court for the District of Columbia agreed, holding that “traditional programmers and distributors are experiencing increased competition from innovative, over-the-top content services [i.e., companies that provide video programming over the Internet] …. Those web-based companies are harnessing the power of the internet and data to provide lower-cost, better-tailored programming content directly to consumers. The dramatic growth of the leading [Internet video providers] in particular, including Netflix, Hulu, and Amazon Prime, can be traced in part to the value conferred by vertical integration — that is, to having content creation and aggregation as well as content distribution under the same roof.”
Data is at the core of the second digital disruption. In Mark Cuban’s words, data is “the new gold”: the resource that will create, and likely destroy, fortunes in the content business.
The “data gap” Time Warner spoke of is not just a competitive disadvantage for firms that produce many different types of creative content. Access to data about consumer preferences is rapidly becoming a competitive necessity, and the inability to gather such data, on a massive scale, is a fundamental disability.
Increasingly, we will see the rise of firms that own large and even dominant digital distribution platforms but also produce content for those platforms. Indeed, this trend is visible already. Netflix, Amazon, and, not yet but perhaps soon, Spotify, use the data they collect on consumer preferences and usage to make decisions about advertisements. All now use this data to decide how to organize and recommend content to users.
And some use their data to produce content that is more effectively targeted to consumer preferences. It is this last twist — the use of data to shape content creation, which we refer to as “data-driven authorship” — that is ultimately the most interesting feature of this new model.
Link to the rest at SSRN
PG says indie authors are conducting a variation on the concept in the OP, with increasingly sophisticated salting of keywords within their promotional materials in order to attract the types of people who will want to purchase their books.
One example is the more frequent use of author or title comparisons in book descriptions, such as, “If you like Penelope Blunderbuss, you’ll love ________”
When Amazon’s algorithms are trying to present books a reader will want to purchase, if that reader has just finished a book by Penelope, the algorithms may bump a book that includes Penelope’s name up near the top of its suggestions for that reader.
This is the great, great, great grand-descendant of Search Engine Optimization, first used by PG about 15 years ago to push his company’s products higher in the Google search results when people searched for those products.
Search algorithms have become enormously more sophisticated during the intervening years, particularly at Amazon, where they know both what you’ve searched for and what you’ve purchased, but the first principle of a successful search engine – show the customer what the customer wants to see – hasn’t changed.
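The keyword-salting idea can be sketched in a few lines of toy Python (the scoring rule is invented; Amazon’s actual ranking is vastly more sophisticated and entirely opaque): a description that name-drops an author the shopper just read gets a boost.

```python
# Invented toy ranker: boost a book whose description mentions an author
# the shopper recently read. Not Amazon's algorithm, just the principle.
def score(description, recent_authors):
    text = description.lower()
    return sum(1 for author in recent_authors if author.lower() in text)

books = {
    "Sky Pirates": "A swashbuckling airship fantasy adventure.",
    "Blunder and Lightning":
        "If you like Penelope Blunderbuss, you'll love this rollicking tale!",
}
recent = ["Penelope Blunderbuss"]

ranked = sorted(books, key=lambda title: score(books[title], recent),
                reverse=True)
print(ranked)  # the salted description floats to the top
```

The same first principle PG mentions applies here: the ranker is just trying to show the customer what the customer (apparently) wants to see.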
From Seeking Alpha:
Amazon’s interest in the $450 billion US pharmaceutical market is long-standing. The company already sells over-the-counter medicines like aspirin and antihistamines, to go along with its copious offerings of supplements and vitamins on its worldwide platform. It already has licensing to sell pharmaceuticals in 12 states (Nevada, Arizona, North Dakota, Louisiana, Alabama, New Jersey, Michigan, Connecticut, Idaho, New Hampshire, Oregon, Tennessee, with an application pending in Maine). And now, with the purchase of the closely held, online, Manchester, New Hampshire-based PillPack, which will clear in the latter part of the year, Amazon with the stroke of a pen has the necessary licensing to sell pharmaceuticals in 49 states. Investors were certainly quick to notice, and the reaction hit markets with tsunami force. With the 28th of June announcement, brick-and-mortar stalwarts of the pharmaceutical retail world like Walgreens Boots Alliance, CVS Health and Rite Aid collectively shed about $11 billion in market value.
Amazon’s renowned logistical efficiencies and willingness to sacrifice short-term margins for long-term market share were at the fore of the market move. Most prescriptions in the US are still filled in person and the delivery of scripts remains highly fragmented. If Amazon did nothing else but centralize the distribution of pharmaceuticals, this alone could likely apply enough downward price pressure on the cost of drugs to deliver real savings to US consumers. Centralizing and organizing existing data is the most likely front of significant cost savings and returns for both the end user and company alike. It could provide augmented negotiating clout to force better pricing deals on drug manufacturers. Even if mere convenience becomes the watchword of the makeover, laying the groundwork for cost reductions in the delivery of pharmaceuticals to those in medical need is a worthy endeavor for a country that spends almost 18% of its total annual output on healthcare. While the ordering, dispensing and delivery of pharmaceuticals to consumers is heavily regulated by myriad agencies at almost every level of government, the logistics of the operation plays to Amazon’s well-honed strengths.
Link to the rest at Seeking Alpha
If the majors don’t play ball and give in to Target’s new sale terms, it could considerably hasten the phase down of the CD format.
Even though digital is on the upswing, physical is still performing relatively well on a global basis — if not in the U.S. market, where CD sales were down 18.5 percent last year. But things are about to get worse here, if some of the noise coming out of the big-box retailers comes to fruition.
Best Buy has just told music suppliers that it will pull CDs from its stores come July 1. At one point, Best Buy was the most powerful music merchandiser in the U.S., but nowadays it’s a shadow of its former self, with a reduced and shoddy offering of CDs. Sources suggest that the company’s CD business is nowadays only generating about $40 million annually. While it says it’s planning to pull out CDs, Best Buy will continue to carry vinyl for the next two years, keeping a commitment it made to vendors. The vinyl will now be merchandised with the turntables, sources suggest.
Meanwhile, sources say that Target has told music suppliers that it wants to be supplied on what amounts to a consignment basis. Currently, Target takes the inventory risk by agreeing to pay for any goods it is shipped within 60 days, and must pay to ship back unsold CDs for credit. With consignment, the inventory risk shifts back to the labels.
Link to the rest at Billboard and thanks to Dave for the tip.
From today’s letters to the editor of the Adirondack Daily Enterprise:
This is in reply to the May 22 article titled “$10 million wish list.” As a business owner, here are my thoughts.
We’re going after grant money to revitalize our village and particularly the business district. But has anyone bothered to ask themselves how we got this way in the first place?
Over the years we’ve lost a fair amount of mom-and-pop stores. Anybody know why? Do you suppose the business climate as a whole has changed drastically over the years? And if it has, what can we do about it?
In my industry, Christian bookstores that have been in business for over 30 years are closing. Cedar Springs Christian Bookstore in Knoxville, Tennessee, a bookstore larger than the old Newberry’s store we used to have in town, is now closed. Why? Because of competition from the internet. Unfortunately, they didn’t have enough loyal customers anymore, and competition from the internet was killing them. So these good folks just threw in the towel after more than 30 years.
Christian bookstores everywhere are blaming internet competition for hurting their business. Some are finding their own ways of fighting back, but others are just tired of fighting and they’re closing their doors. It seems that Amazon, eBay and various other websites have found that there’s money to be made in selling Christian product.
Now let’s look at our little village. What have we lost over the last few years? Has anybody bothered to ask why these stores closed? Granted, some of the owners either wanted to retire or do something else. But what about those who faced stiff competition from the internet and elsewhere, and decided they’d had enough?
Let’s take a look at the empty storefronts in town. First, most of them are too small. Many are under 1,000 square feet. How much inventory can you sell in that small a space? Not much. And as a small business owner, you have to pay top wholesale prices to get that inventory. You can’t purchase in large lots to get better prices. So your business ends up with a small selection and at what some folks would consider high prices. And since most people shop by price these days, you’re at a huge disadvantage compared to shopping for those same things online.
Secondly, most of those storefronts need work. Who pays for the needed renovations? Paint, paneling, flooring and whatever else that space needs isn’t cheap. And that’s just the stores’ renovations. You haven’t purchased store shelving, fixtures, displays or inventory! Fixtures and displays aren’t cheap. I have a two-sided lockable jewelry display. That display alone cost $900, and that’s just the display. I also have to pay for shipping to get it here. And remember this is just the display and not the product that goes inside. A four-sided display will run me over $1,000, and that doesn’t include the shipping. So anyone who thinks that they can throw a store together for a few bucks knows nothing of the real costs.
Thirdly, most who want to go into business have no idea how much money it’ll take. It’s been recommended that you have enough money to live off of for one to two years in the bank. Why? Because you won’t be able to write yourself a paycheck for at least that long, if not longer. The money people spend in your store will go for expenses, inventory and advertising. As one business owner said to a customer, “First you pay your rent. Then you pay your utilities. Then you pay your suppliers, and if there’s anything left over, you get it.”
There were other goals in this article, such as diversifying shopping opportunities and providing low-cost retail space to encourage new businesses. But how do we diversify our shopping opportunities when we have small storefronts that cannot hold a lot of product? What kind of variety can a business offer in less than 1,000 square feet? And who provides low-cost retail space to encourage new businesses? Building owners have expenses to pay, too. And how long do those low costs last?
Think business hasn’t changed? When was the last time you went to the Plattsburgh mall? Today’s mall in Plattsburgh is a shell of what it used to be. In the food court we had a Philly steak place, Taco Bell, Subway, a pizza place, a Chinese eatery, a Mediterranean eatery, a chicken place and a Burger King. Today the steak place, Chinese and Mediterranean eatery, pizza place, chicken place and Burger King are gone. You had to look hard to find a place to sit and eat, but not anymore. On a Saturday afternoon the food court is empty. And you can almost drive a car down the main walkway and not hit anyone. It’s sad, but a reminder of how things have changed.
. . . .
Long gone are the days when customers would break your doors down if you just opened your store. If today’s retailers can’t get creative and find ways to bring in customers, they’re dead. When I was growing up, a 10 percent off storewide sale got everyone’s attention. Today 10 percent off gets nobody’s attention. Folks can find all the bargains they want in the comfort of their own home, and many do. I’ve also heard stories of folks who split up when they go shopping, and talk back and forth comparing prices on their phones.
So if we think that by putting in street trees, decorative light poles and sidewalks, we’ll somehow fill our empty storefronts, think again. If we’re not careful, we’ll be grasping at straws in order to fix a problem that we can’t fix. The internet has taken business away from every retailer in town, and if they tell you that’s not true, they’re probably lying.
And if you think that by hopefully getting a $10 million grant, we can fix things, I don’t think we’re being realistic. Perhaps instead of a grant, today’s retailers or would-be retailers need lessons on how to deal with competition, particularly on the internet. That’s more realistic.
Link to the rest at the Adirondack Daily Enterprise and thanks to DM for the tip.
From Publishing Perspectives:
‘Your competitors like Netflix, Amazon Prime and Audible,’ publishers will hear this year at BookExpo and the rival rights fair, ‘are more than willing to fill the gap.’
. . . .
The reality, he says, is that “big data” is not really the stuff of most publishers’ future traction in a digital world. Something that may well seem like “little data” is, because it’s more available, readable, and actionable than the “big data” operations of major tech forces in the marketplace.
And the “invitation to a wild ride” he’s talking about is one that some will not accept gladly. It requires studying and analyzing many available “tracks” and trends at once, right down to what’s in a publisher’s “own backyard,” as we might say. “Who on your staff and around your own house reports back, in some structured way,” he asks, “on what they read, or how their kids operate their smartphones?”
What Wischenbart says he’s seeing is that even in the largest houses, such as Penguin Random House with its armada of imprints “acting like little companies,” the corporation can certainly engage in larger data activities, “but they don’t have the tool set,” he says, “to listen to what their employees are doing.”
. . . .
“[E]ven traditional readers—a majority of them urban, well-educated and older than 40—have seen their ‘mobile time’ rising from a modest 26 minutes in 2012 to more than one hour in 2017.”
Among Millennials, he says, “mobile time” may be expanding to as much as three hours per day.
But look at corresponding numbers in publishing markets that Wischenbart cites in his new article.
In Germany, data in Wischenbart’s report shows more than 6 million book buyers disappearing in the past five years . . . . Today, publishers there, he says, see a maximum audience of some 30 million in a total population of 80 million.
. . . .
Wischenbart has his fictitious publisher say to herself, “We need to stick to our bread and butter, to the rare books that hit the top of the charts, the well-established authors. Well, we even need the copy-cat income, or other cheap thrills, to simply secure a continuous income.”
But is that true? Wischenbart agrees in an interview with Publishing Perspectives that the blockbuster isn’t where publishers can afford to focus today, and not only because we’re in a largely blockbuster-less drought in the US market.
Wischenbart agrees that the buyer of the biggest blockbuster may do no more for the industry and for reading than pay for her or his one copy: these are generally not habitual readers. They’re novelty readers, readers drawn to the occasional breakthrough phenomenon, entertainment patrons who drop in on the world of books to catch a peak moment, then sail off to cinema, video, games, and music.
“I would phrase it this way,” Wischenbart says from his office in Austria. “First, the transformation that has been predicted now is here. It has arrived. We’re not talking about the future.
“And the transformation is much deeper” than many who became fixated on ebooks and perhaps today are transfixed by audiobooks’ uptake might think. “It’s a transformation of consumer behavior and habits.
“Second, such rough waters of transformation are creating higher risk” than publishers may have realized, not least because they’ve thought of “digital” as being about formats and largely now accomplished.
. . . .
“I do see a difference in the US and UK markets and the rest of the world,” he says, in terms of how in the big US and UK markets, publishing has an upbeat sense that it knows where it’s going. “Hardly anyone in the industry in continental Europe or elsewhere feels so comfortable.”
The sense of greater comfort, command, and solidity in the UK and American markets, he agrees, may come from a plethora of self-congratulatory awards programs and morale-boosting coverage. “They’re always winning,” he says about such trends, which can lead a market to believe that all is going better than may be the reality.
. . . .
“Right now, my inkling is that a lot of truly critical information sits in drawers and on hard disks, underused, if noticed at all.
“We see, day by day, how publishing is getting ever more segmented. From formerly three distinct sectors, trade or consumer versus educational versus professional or academic, we have moved into an ever-thinner slicing of the cake that used to be served in the business of books.”
Link to the rest at Publishing Perspectives
PG has long noted that the traditional book business lacks even rudimentary data skills.
Its reliance on Nielsen and other data sources that do not include data from Amazon, by far the world’s largest bookstore, is Exhibit A.
Exhibit B is Big Publishing’s schizoid frenemy attitude toward Amazon, its largest customer.
For those who are newcomers to the recent history of Big Publishing’s strategies for dealing with ebooks and Amazon, in 2012, the United States Department of Justice charged Hachette, HarperCollins, Penguin, Simon & Schuster and Macmillan with illegally conspiring with Apple to fix ebook prices in the United States.
This group was conspiring to keep ebook prices high to prop up sales of printed books. Amazon, which was selling ebooks at low prices to help sell Kindle devices and expand the ebook market, was the target of this conspiracy.
In 2013, after each of these large publishers had admitted to acting in violation of antitrust laws, a trial judge found Apple guilty of participating in this same illegal price-fixing conspiracy. Apple appealed and the trial court’s decision was affirmed in 2015.
Exhibit C is Author Earnings, a small organization that does have people with good data skills.
Beginning in 2014, Author Earnings began to release a series of reports that detailed ebook sales on Amazon by both traditional publishers and by individual self-publishers working through the Kindle Direct Publishing program. This series of reports demonstrated that ebook sales by indie authors were a large and growing segment of the overall ebook market.
As additional Author Earnings reports were released periodically, they reflected the continuing growth in the market for indie-published ebooks. Indie authors came to dominate ebook sales in the romance, fantasy and science fiction genres.
Had Big Publishing been willing to hire employees with any sort of data skills, it could have duplicated the work of Author Earnings and developed even more sophisticated analyses because of access to its own ebook sales data (which was not made available to Author Earnings).
Big Publishing has consistently elected to base its business decisions on hunches generated by a small group of former English majors running its businesses in Manhattan. The “golden gut” school of publishing management has resulted in Big Publishing missing the ebook train and failing to treat Amazon as a potential window into the rapidly-changing and ever-growing ebook market.
Another disadvantage Big Publishing has is that, by New York City standards, it doesn’t pay very well. A twenty-something with data skills can receive a much larger salary from any number of other employers who are not in the publishing business.
PG will restrain himself from commenting on the blinkered view of the world common in the large European holding companies that own all but one of the largest US publishers. Suffice to say, New York publishing executives are not receiving a lot of phone calls and emails from Europe urging them to invest more in technologies and people that will position the publisher favorably for a new and different future.
Those of us who may be short on time and haven’t been able to get to that autobiography we’ve been meaning to write need worry no longer: Artillect Publishing will do the work for you by scanning your online presence, merging some ancillary information and producing your sure-to-be best-selling biography. Using artificial intelligence in its production process, Artillect is just one example of the increasing number of applications of artificial intelligence (AI) replacing inefficient processes, creating new products and adding insight to publishing.
News organizations including the Washington Post and Associated Press have been using artificial intelligence tools to create news reports for weather, sports and financial reporting where interpretation of the day’s (or game’s) activity can be fairly straightforward. As these uses have grown in acceptance and utility, the use of AI to deliver more complex products is also growing. AI tools can analyze text, images and data and deliver to a journalist sufficiently structured content around which they can build articles and stories. AI tools can do this faster, more comprehensively and with greater accuracy than traditional research methods. In analyzing text or images, AI tools can characterize the content: Positive/negative, liberal/conservative, for example. Journalists have even used this capability to change editorial content to match specific political viewpoints, creating liberal, center and/or conservative versions of the same article.
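The positive/negative characterization mentioned above can be illustrated with a toy lexicon-based scorer. Real newsroom AI tools use trained models; this word-counting sketch, with an invented word list, only shows the shape of the idea.

```python
# Toy lexicon-based scorer illustrating how a tool can "characterize"
# text as positive or negative. Real newsroom systems use trained
# models; this invented word list only shows the shape of the idea.

POSITIVE = {"growth", "success", "triumph", "improve", "win"}
NEGATIVE = {"decline", "failure", "loss", "worse", "crisis"}

def polarity(text):
    """Classify text by counting positive vs. negative lexicon hits."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(polarity("Quarterly growth and a big win!"))      # positive
print(polarity("Another crisis deepens the decline."))  # negative
```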
. . . .
[J]ournal publisher Taylor & Francis (T&F) announced two partnerships with AI companies to add AI tools to their editorial processes. In the first of these, T&F are working with Katalyst Technologies to create “contextual copyediting” using AI and natural language processing to assess and score the language quality of articles accepted into their journal’s workflow. This use of AI is designed to make their editorial process more efficient by identifying and classifying journal submissions.
. . . .
StoryFit: Uses machine learning and data analysis to predict content marketability, improve discovery and drive sales for publishers and movie studios.
. . . .
Ross Intelligence: Using a combination of IBM Watson and proprietary algorithms, ROSS is the AI-driven successor to tools like LexisNexis and supports both legal discovery and legal research findings.
Link to the rest at Personanondata
From National Public Radio:
In 1984, two men were thinking a lot about the Internet. One of them invented it. The other is an artist who would see its impact on society with uncanny prescience.
First is the man often called “the father of the Internet,” Vint Cerf. Between the early 1970s and early ’80s, he led a team of scientists supported by research from the Defense Department.
Initially, Cerf was trying to create an Internet through which scientists and academics from all over the world could share data and research.
Then, one day in 1988, Cerf says he went to a conference for commercial vendors where they were selling products for the Internet.
“I just stood there thinking, ‘My God! Somebody thinks they’re going to make money out of the Internet.’ ” Cerf was surprised and happy. “I was a big proponent of that. My friends in the community thought I was nuts. ‘Why would you let the unwashed masses get access to the Internet?’ And I said, ‘Because I want everybody to take advantage of its capability.’ ”
. . . .
Cerf admits all that dark stuff never crossed his mind. “And we have to cope with that — I mean, welcome to the real world,” he says.
. . . .
While Cerf and his colleagues were busy inventing, the young aspiring science fiction writer William Gibson was looking for a place to set his first novel. Gibson was living in Seattle, and he had friends who worked in the budding tech industry. They told him about computers and the Internet, “and I was sitting with a yellow legal pad trying to come up with trippy names for a new arena in which science fiction could be staged.”
The name Gibson came up with: cyberspace. And for a guy who had never seen it, he did a great job describing it in that 1984 book, Neuromancer: “A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”
. . . .
But, it isn’t just the Internet that Gibson saw coming. In Neuromancer, the Internet has become dominated by huge multinational corporations fighting off hackers. The main character is a washed-up criminal hacker who goes to work for an ex-military officer to regain his glory. And get this: The ex-military guy is deeply involved in cyber-espionage between the U.S. and Russia.
Gibson says he didn’t need to try a computer or see the Internet to imagine this future. “The first people to embrace a technology are the first to lose the ability to see it objectively,” he says.
Link to the rest at National Public Radio
From The Register:
A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.
As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything. Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.
A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.
A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.
However, the computer-generated sentences are simple and short; they lack the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.
. . . .
The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article. Most of the selected pages are used for training, and a few are kept back to develop and test the system.
The paragraphs from each page are ranked, and the text from all the pages is combined into one long document. The text is then encoded and truncated to the first 32,000 words, which are used as input.
This is then fed into an abstractive model, where the long sentences in the input are cut shorter. It’s a clever trick used to both create and summarize text. The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff.
Link to the rest at The Register
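The extract-then-abstract pipeline described in the excerpt can be sketched roughly as follows. The keyword-overlap ranking and all data here are simplified stand-ins; Google Brain's system uses learned ranking and a neural model for the abstractive stage.

```python
# Sketch of the extract-then-abstract pipeline: rank paragraphs from the
# source pages, concatenate them, truncate to a fixed input budget, then
# hand the result to an abstractive summarizer. The keyword-overlap
# ranking below is a toy stand-in for the learned ranking in the paper.

def rank_paragraphs(paragraphs, topic):
    """Rank paragraphs by how many topic words they share (toy extractor)."""
    topic_words = set(topic.lower().split())
    def overlap(p):
        return len(topic_words & set(p.lower().split()))
    return sorted(paragraphs, key=overlap, reverse=True)

def build_input(pages, topic, budget=32000):
    """Concatenate ranked paragraphs and truncate to `budget` words."""
    paragraphs = [p for page in pages for p in page]
    ranked = rank_paragraphs(paragraphs, topic)
    words = " ".join(ranked).split()
    return " ".join(words[:budget])

pages = [
    ["The internet is a pile of posts and memes.",
     "Text summarization condenses long documents into snippets."],
    ["Summarization research uses extractive and abstractive models."],
]
doc = build_input(pages, "text summarization")
# `doc` would now go to the abstractive model, which shortens it further.
print(doc)
```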
PG thinks an easier job would be to create an algorithm that would produce interview quotes from European publishing executives.
He suggests seeding the algorithm with words like stupid, price, protect, booksellers, kill, enriched, Amazon, obscene, amuck, insane, predatory, greedy, la fréquence, répugnant, sale américain, dégradé, goulu, aliéné and vorace.
PG spent a few minutes re-familiarizing himself with websites that generate random words, sentences, etc., to see if he could locate one to inspire him with potential quotes from European publishing executives.
He did not find exactly the right tool for that task, but he did discover InspiroBot, a lovely site to help you create beauteous and profound social media posts. (Yes, it can be addictive.)
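For the curious, PG's proposed quote generator needs no machine learning at all; a Mad Libs-style sketch seeded with a few of his suggested words will do. The templates and the trimmed seed list are invented here for the joke.

```python
# A toy version of PG's proposed executive-quote generator, seeded with
# a few of his suggested words. Pure Mad Libs -- no machine learning.
import random

SEEDS = ["stupid", "predatory", "greedy", "insane", "obscene", "vorace"]
TEMPLATES = [
    "Amazon's {} pricing will kill booksellers.",
    "This {} discounting has run amuck.",
    "We must protect readers from such {} tactics.",
]

def executive_quote():
    """Return one randomly assembled 'executive' quote."""
    return random.choice(TEMPLATES).format(random.choice(SEEDS))

print(executive_quote())
```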
The newspaper printing presses may have another decade of life in them, New York Times CEO Mark Thompson told CNBC on Monday.
“I believe at least 10 years is what we can see in the U.S. for our print products,” Thompson said on “Power Lunch.” He said he’d like to have the print edition “survive and thrive as long as it can,” but admitted it might face an expiration date.
“We’ll decide that simply on the economics,” he said. “There may come a point when the economics of [the print paper] no longer make sense for us.”
“The key thing for us is that we’re pivoting,” Thompson said. “Our plan is to go on serving our loyal print subscribers as long as we can. But meanwhile to build up the digital business, so that we have a successful growing company and a successful news operation long after print is gone.”
. . . .
Revenue from digital subscriptions increased more than 51 percent in the quarter compared with a year earlier. Overall subscription revenue increased 19.2 percent.
Link to the rest at CNBC and thanks to Delores for the tip.
From Good EReader:
Augmented reality is the hot new trend and many developers are focusing on the gaming aspect. I think with the advent of the new Intel Vaunt and similar products, AR will not only be the future of shopping, but of bookselling as well.
. . . .
They’re just regular old prescription or non-prescription glasses you would wear during the day and charge at night. There’s not a computer attached to your head or some weird bulky attachment to your existing glasses. They pair with your smartphone, which will keep costs low. Intel has stated that they are not going to market the glasses directly to consumers, but will partner with a firm to bring them to market, likely Amazon.
. . . .
I think the future of shopping in bookstores will be with AR glasses. When you walk into your average bookshop, such as Amazon Books, Barnes and Noble or Indigo Books and Music, there are precious few books that are front facing and the rest are stacked side by side. How do you know if a book is new, old, or highly rated? I can imagine a day when your glasses will have customized features for bookstore shopping, such as reviews that float over a book with the GoodReads API framework, and clicking on a virtual review will give you everything you need to know.
Link to the rest at Good EReader
PG wonders whether anyone will be physically walking into a structure at a fixed location with AR glasses. He bets Amazon (or a future Amazon competitor) will have incredible virtual bookstores you can walk through wherever you are with your AR glasses (or AR helmet or AR bodysuit). Alexa will probably be walking around Amazon’s AR bookstore, too, reminding people that their timers have expired.
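A minimal sketch of the floating-review idea, assuming a hypothetical `lookup_rating` function in place of a real reviews API (the OP mentions Goodreads); the interface shape, ISBN, and rating here are all invented for illustration.

```python
# Hypothetical sketch of the floating-review overlay. `lookup_rating`
# stands in for a real reviews API; the data and interface are invented.

RATINGS = {"9780441569595": 4.2}  # sample data keyed by ISBN

def lookup_rating(isbn):
    """Pretend API call; returns an average rating or None."""
    return RATINGS.get(isbn)

def overlay_label(isbn, title):
    """Build the text the glasses would float over a recognized spine."""
    rating = lookup_rating(isbn)
    if rating is None:
        return f"{title}: no reviews found"
    return f"{title}: {rating:.1f}\u2605"

print(overlay_label("9780441569595", "Neuromancer"))  # Neuromancer: 4.2★
```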
From the Sierra Nevada College Eagle’s Eye:
Nick Visconti reminisces about his first two-page spread in Transworld Snowboarding. Months after the trick was shot, he held a physical copy of his scrapes, bruises, and triumph. The photo of him, upside down, one hand holding him against the wall and the other grabbing his board, 20 feet above the ground lives in print. Now, Visconti, a former pro, X Games medalist and social media influencer, navigates an industry where progression and style are measured through the immediacy of likes and followers.
“In one viral video or photo, anyone or any brand can far surpass traditional publications’ ability to connect to active buyers, potential customers or interested audiences,” Visconti said.
The media system and the way we gather information are in a drastic state of change, and in snowboarding the culture has moved from the pages of print magazines to the less tangible virtual world of digital media. The trends, tricks, and brand marketing strategies that were once tracked in the pages of Transworld, SNOWBOARDER, and Snowboard are now relics of snowboarding’s nascent past. Now, Snowboard Magazine exists only digitally, and other mainstream publications have slashed their print runs to as infrequently as once a year.
. . . .
“The majority of content consumed is in a digital platform, be it on a computer or a phone,” Tom Monterosso, senior editor and photographer at SNOWBOARDER Magazine, said. “Fewer kids are buying magazines and more kids have access to a phone or a computer than ever, so the content that comes out is consumed, digested and passed through much more quickly.”
Content is also conforming to readers’ dwindling attention. Marketing specialists for Boreal mountain resort and Woodward Tahoe who track analytics have discovered the average watch time for any video or photo across all social networks is 22-23 seconds.
“For web, it’s short, sweet and digestible,” Monterosso said. “Read it and keep browsing. For social, it’s ‘viral,’ which I personally believe is a terrible term.”
. . . .
As a brand that deals with marketing in this new world, Boreal has pulled out of most billboards, newspapers, and magazines and shifted its advertising to social media.
“I see much more value in getting 10 Instagram posts on SNOWBOARDER Magazine’s Instagram, than running a full-page ad,” Tucker Norred, Boreal marketing and communications manager, said. SNOWBOARDER Magazine has a following of more than 1 million on Instagram. Brands want access to that far-reaching platform.
Link to the rest at Eagle’s Eye
From veteran publishing consultant Mike Shatzkin:
When Amazon showed a willingness to sell ebooks for Kindle at prices below the costs publishers charged them, the big legacy publishers became alarmed. They could see no end to the switch to ebooks, and it seemed logical to figure out a way to encourage competition across ebook ecosystems.
Their solution, aided and abetted by the new Apple iBooks ecosystem that debuted in April of 2010, was to move from “wholesale” pricing, where the retailer controlled the ultimate price to the consumer, to “agency”, where the publisher was the seller to the consumer and controlled the price. The intermediary — the retailer — was just an “agent” without pricing power.
This led to anti-trust action by the US government by which agency pricing was allowed, but only by newly negotiated agreements between each of the major publishers and their vendors, including Amazon. And the DOJ made sure that those agreements entitled the retail “agent” to discount from the publisher’s agency price, as long as the aggregated discounts to consumers didn’t exceed the retailers’ aggregate margin on those ebooks.
They needn’t have bothered. Amazon was essentially done with the strategy of discounting big publishers’ ebooks. And big publishers are left wondering whether they should be glad they got what they wished for. Let’s remember that those discounts from Amazon came from their share of the price; now with agency protocols, publishers can only discount ebooks by reducing their own take!
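The margin shift Shatzkin describes can be made concrete with a worked example, using illustrative numbers rather than actual contract terms.

```python
# Worked example of the margin shift from "wholesale" to "agency"
# pricing, using illustrative numbers (not actual contract terms).

def wholesale(list_price, wholesale_rate, sale_price):
    """Retailer sets the price; publisher is paid off the list price."""
    publisher_take = list_price * wholesale_rate
    retailer_take = sale_price - publisher_take  # retailer absorbs any discount
    return publisher_take, retailer_take

def agency(sale_price, agent_commission=0.30):
    """Publisher sets the price; retailer keeps a fixed commission."""
    retailer_take = sale_price * agent_commission
    publisher_take = sale_price - retailer_take  # discounts shrink the publisher's take
    return publisher_take, retailer_take

# A $14.99-list ebook discounted to $9.99 at retail:
print(wholesale(14.99, 0.50, 9.99))  # publisher ~$7.50, retailer ~$2.49
print(agency(9.99))                  # publisher ~$6.99, retailer ~$3.00
```

Under wholesale, the five-dollar discount from list comes out of the retailer's pocket while the publisher's take is untouched; under agency, a lower price directly shrinks the publisher's 70 percent share, which is the point of Shatzkin's closing observation.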
. . . .
Author-driven publishing just continued to grow as Kindle and the other ebook installed base grew faster and faster when smartphones and tablets both spread like wildfire and removed the need for a dedicated ebook device. With Amazon establishing a royalty rate for its own self-published authors of 70 percent of the selling price, equivalent to what agency publishers collected, successful self-publishers could make substantial money with very low-priced ebooks and zero or near-zero revenues from print.
. . . .
[E]ach week now, a handful of those genre Amazon Publishing ebook titles are handily selling more units than most of the titles on the NYT and USA Today’s best seller lists. Amazon found it relatively easy to grow market share in those areas where the bookstore sale, and even the online print sale, was diminishing in favor of the ebook.
. . . .
That has produced the world where big publishers with their agency-priced ebooks tell us that ebook sales have flattened or declined and that print book sales are holding their own, but Amazon says ebook sales are continuing to grow. And it is also a world where the big publishers are working feverishly, and largely futilely, to make their non-Amazon sales grow.
. . . .
Data Guy, first encouraged by indie author star Hugh Howey (one of the early beneficiaries of the changed marketplace), is now one of the principals behind Bookstat.com, an online-sales database built by scraping Amazon and other major online retailers. Bookstat’s real-time dashboard presents a consolidated, title-level view of the online US market, current through yesterday. It includes Amazon sales. It separates out Amazon Publishing from the indie authors Amazon enables. And, when used alongside data from Bookscan, Bookstat now lets us back out how brick-and-mortar sales alone are faring in relation to online.
. . . .
1. Amazon continues to grow its share of print and digital sales. It appears to be approaching half of all print sales and more than 90% of ebook sales.
Data Guy says:
On the print front, Amazon is indeed very close to half the US market: Our own Bookstat-derived total of 312 million print units sold by Amazon in 2017 is 45.5% of Bookscan’s total reported 2017 print sales of 687 million, which means Amazon sales now comprise the majority of Bookscan’s “Club & Retail” share. Even allowing for the other 15%-20% of US print sales that remain untracked by Bookscan, that puts Amazon’s US print share at 40% or more. And that’s ignoring another 10-15 million unreported Amazon print sales a year from CreateSpace titles that aren’t trackable through Ingram “expanded distribution.”
Amazon’s share of US print sales is still growing rapidly. In the prior year, 2016, the 280 million Amazon online print sales Bookstat reports were only 41.7% of 674 million total units, and in 2015 Bookstat’s 246 million print unit total for Amazon was only 37.7% of Bookscan’s 653 million reported units. So Amazon’s online print sales continue to grow by a double-digit percentage each year.
Barnes & Noble, the next largest retailer of print books, was by our math (working from its public financial reporting) contributing 23% of Bookscan’s total in 2017. That means B&N has shrunk to where it now moves only half as many print books a year as Amazon, and B&N’s own financials show those remaining sales are shrinking by 4% a year.
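Data Guy’s share arithmetic is easy to recheck. Here is a minimal Python sketch using only the unit totals quoted in the excerpt (the computed shares land within a few tenths of a point of the quoted 37.7%, 41.7%, and 45.5% figures, the small differences presumably reflecting rounding in the reported totals):

```python
# Recompute Amazon's share of Bookscan-tracked US print sales
# from the unit totals quoted above (millions of units).
amazon_units = {2015: 246, 2016: 280, 2017: 312}    # Bookstat-derived Amazon print sales
bookscan_units = {2015: 653, 2016: 674, 2017: 687}  # Bookscan total reported print sales

def amazon_share(year):
    """Amazon's percentage of Bookscan's reported total for a given year."""
    return 100 * amazon_units[year] / bookscan_units[year]

for year in sorted(amazon_units):
    print(f"{year}: {amazon_share(year):.1f}%")
```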
. . . .
2. The overall market is growing, but Amazon Publishing and indies are the growing segments. All legacy publishers, including the Big Five, are sharing a diminishing pool of “what’s left” after that growth.
. . . .
3. Legacy publishing below the Big Five is suffering more, seeing its market share at Amazon erode even faster than the major houses’ does.
. . . .
In ebook sales, both Big Five and non-Big Five legacy publishers have ceded a huge chunk of market share to non-traditional players over the last several years; roughly half of the ebook market in unit terms, and nearly a third of it in dollar terms.
Link to the rest at The Shatzkin Files
PG has witnessed disruptive change driven by new technologies in several different industries (and participated in more than one of those changes).
From that viewpoint (admittedly personal), PG suggests that no industry has reacted to a disruptive innovation – ecommerce and ebooks – in a more pathetic and self-defeating manner than Big Publishing and its bricks-and-mortar sales and distribution infrastructure.
At every major juncture, when faced with a decision, Big Publishing has chosen the wrong path.
Antitrust violations, understanding ebooks in the marketplace, allying itself with Apple and against Amazon, failing to hire people who might have had a chance to revive a moribund industry (It’s too late now.), engaging in the sleaziest tactics with authors (Hello Harlequin!) etc., etc., etc.
PR spinmeisters (“digital fatigue,” “back to print”), trying to get the Justice Department to bring an antitrust case against Amazon for selling ebooks at low prices, looking at opportunity and seeing dystopia, and so forth.
From The New Yorker:
If you wanted to hear the future in late May, 1968, you might have gone to Abbey Road to hear the Beatles record a new song of John Lennon’s—something called “Revolution.” Or you could have gone to the decidedly less fab midtown Hilton in Manhattan, where a thousand “leaders and future leaders,” ranging from the economist John Kenneth Galbraith to the peace activist Arthur Waskow, were invited to a conference by the Foreign Policy Association. For its fiftieth anniversary, the F.P.A. scheduled a three-day gathering of experts, asking them to gaze fifty years ahead. An accompanying book shared the conference’s far-off title: “Toward the Year 2018.”
The timing was not auspicious. In America, cities were still cleaning up from riots after Martin Luther King, Jr.,’s assassination, in April, and protests were brewing for that summer’s Democratic National Convention. But perhaps the future was the only place left to escape from the present: more than eight hundred attendees arrived at the Hilton.
. . . .
Invitees were carefully split by the F.P.A. between over-thirty-fives and under-thirty-fives—but, less carefully, they didn’t pick any principal speakers from the under-thirty-fives. As their elders mused on a future of plastics and plasma jets, without mention of Vietnam and violence in the streets, there was muttering among the younger attendees. Representatives from Students for a Democratic Society demanded time at the mike and circulated a letter questioning whether the conference was for “discussion or brain washing.”
. . . .
If participants on either side emerged from the Hilton ballroom confident of what 2018 would look like, they soon found themselves disabused about predicting 1968. A week later, Bobby Kennedy was shot dead, and the prospect of grasping the present, let alone the future, seemed further away than ever. And that was about when “Toward the Year 2018” arrived in bookstores.
“more amazing than science fiction,” proclaims the cover, with jacket copy envisioning how “on a summer day in the year 2018, the three-dimensional television screen in your living room” flashes news of “anti-gravity belts,” “a man-made hurricane, launched at an enemy fleet, [that] devastates a neutral country,” and a “citizen’s pocket computer” that averts an air crash. “Will our children in 2018 still be wrestling,” it asks, “with racial problems, economic depressions, other Vietnams?”
. . . .
The Stanford wonk Charles Scarlott predicts, exactly incorrectly, that nuclear breeder reactors will move to the fore of U.S. energy production while natural gas fades. (He concedes that natural gas might make a comeback—through atom-bomb-powered fracking.) The M.I.T. professor Ithiel de Sola Pool foresees an era of outright control of economies by nations—“They will select their levels of employment, of industrialization, of increase in GNP”—and then, for good measure, predicts “a massive loosening of inhibitions on all human impulses save that toward violence.” From the influential meteorologist Thomas F. Malone, we get the intriguing forecast of “the suppression of lightning”—most likely, he figures, “by the late 1980s.”
. . . .
Oettinger is now the sole surviving “Toward the Year 2018” essayist to witness the era he predicted so well fifty years ago. For some attending the accompanying conference, the changes didn’t need nearly that long to unfold: Edwin Yoder saw the news profession digitize with a speed that rendered the most ambitious predictions quaint. “The huge press room at the G.O.P. convention in Kansas City in 1976 was a noisy din of typewriters,” he recalls now. “Four years later, it was eerily quiet, as if cushioned.”
Just as conspicuous, though, is what was missing altogether from the book. Not a single writer predicts the end of the Soviet Union—who in their right mind would have?
. . . .
“We can see it now,” one newspaper mused. “Sane people in the year 2018 will be yearning for a return to simpler times and the ‘good old days’ of the 1970s.”
Link to the rest at The New Yorker
PG suggests that items like the OP should be a source of humility for those living in today’s world.
In every era, large numbers of people believe that, unlike previous generations, their day’s consensus has finally grasped what is important and unimportant, what is true and untrue, and what will and won’t work.
Fifty years ago, in 1968 (part of the era included in the good old days of the 1970s), one of the driving forces for craziness on college campuses was the Vietnam War. It was certainly not the only craziness, but the war and the likelihood of students having to serve in the military amplified the intensity of the craziness.
For male college students, the military draft could be avoided so long as they were full-time students. Those who graduated from or dropped out of college in 1968 lost their student deferments shortly after graduation. More than a few, regardless of their opinion about the war, were required to become soldiers and obey the orders they received to shoot other people, accept the likelihood they would be shot at in return, blow up other people, burn other people, etc., etc.
About 10% of those who served in the military in Vietnam became casualties, either killed or (more likely) wounded so severely they could no longer perform military duties. In part because of the nature of the war, mental illness, either short-term or long-term, was also common. For these and other reasons, avoiding the draft was a frequent topic of male student conversations as graduation neared.
In 1968, PG suspects most men who would be subject to military service knew someone who had been severely wounded, permanently disabled or killed in Vietnam.
Some college students entered the military voluntarily in order to have some control over which branch of the service they joined and how they would serve. A man who joined in the enlisted ranks was at the bottom of the military hierarchy, but he could finish his mandatory service sooner than a man who entered as an officer. Even so, serving as a draftee generally involved a shorter period of service than enlisting did.
The Army and Marines included most foot soldiers directly involved in ground combat. Most draftees served in the Army, almost always as enlisted men. Junior officers in the Army and Marines were more likely than others to be shot at by enemy soldiers because they were directly involved in leading soldiers in battle.
The Navy was a mixture of enlisted personnel serving on ships or in shore installations generally isolated from the fighting and pilot officers who could describe exactly what a surface-to-air or air-to-air missile looked like when it was fired at them and how they tried to avoid being killed by the missile. Plus, the “Brown-Water Navy” consisted of officers and seamen who were assigned to small boats that patrolled marshy areas with heavy vegetation. The men of the Brown-Water Navy were subject to frequent ambushes and intense firefights with enemy soldiers who were hiding only a few yards away.
The Air Force was generally regarded as the safest service in which to serve. Anyone but a pilot could be pretty certain that he would not come under direct enemy fire. For the same reason, the Air Force was also the most difficult service to get into.
Returning from service was also very strange. In prior wars, a group of men would train together, serve together for the duration of the war and return together when the war was over. During the Vietnam war, when a draftee or enlistee finished his term of service, unless he elected to continue his military duties, he would often go home by himself. A few days earlier, he might have been on a dangerous patrol in the jungle and today, he was walking through the airport in San Francisco, trying to resolve the contradictions, still jumpy with his battle reflexes warning him of danger lurking around every corner.
After the Vietnam War was over, the military draft was eliminated, and all members of the military, male and female, are volunteers today. The Afghan and other Middle Eastern wars have been fought solely by volunteer soldiers and, unlike in earlier eras, the demographics of those who serve in the military and those who do not are no longer similar.
Male Privilege would not be a thing on college campuses until many years after 1968. Male students during that era would have found rich irony in the concept.
The OP does not include any mention of the likelihood of large-scale wars during the fifty years between 1968 and 2018. PG suggests this omission is unrealistic.
Here is a timeline of wars that have involved a significant percentage of the US population in the fighting.
American Civil War – 1861-1865 – More US casualties than any other war – 646,000 US casualties (2.385% of US population)
World War One – 1914-1918 – WWI began 49 years after the end of the Civil War – 320,518 US casualties
World War Two – 1939-1945 – WWII began 21 years after the end of WWI – 416,800 US casualties
Korean War – 1950-1953 – Korean War began 5 years after the end of WWII – 128,650 US casualties
Vietnam War – 1964-1975 – (11 year duration) Vietnam War began 11 years after the end of the Korean War – 211,454 US casualties
War in Afghanistan – 2001-present – (17 years and counting) Afghan war began 26 years after the end of the Vietnam War – 20,904 US casualties (to date)
Iraq War – 2003-present – (15 years and counting) Began two years after the beginning of the Afghan war – 36,710 US casualties (to date)
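The between-war gaps in the timeline above can be recomputed directly from the listed start and end years. A short Python sketch (the war names and years are taken from the list above; the ongoing wars are given no end year):

```python
# Inter-war gaps implied by the timeline above, computed from the
# listed start and end years (ongoing wars have end=None, which is
# never used in a subtraction).
wars = [
    ("Civil War",    1861, 1865),
    ("World War I",  1914, 1918),
    ("World War II", 1939, 1945),
    ("Korean War",   1950, 1953),
    ("Vietnam War",  1964, 1975),
    ("Afghanistan",  2001, None),
]

def gaps(war_list):
    """Years between the end of one war and the start of the next."""
    return [war_list[i + 1][1] - war_list[i][2]
            for i in range(len(war_list) - 1)]

print(gaps(wars))  # → [49, 21, 5, 11, 26]
```

The point the gaps make is PG’s: the longest stretch of peace between major US wars in this list is 49 years.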
PG suggests the history of wars in which the US has been and is involved does not suggest a future in which all major problems will be solved and the world can focus its efforts and resources on the improvement of humanity.
While PG is generally optimistic about the future of the human race, he finds it interesting that most major predictions of the type exemplified in the OP say nothing about the armed conflicts of the future, as if such conflicts will have no impact on future society.
From Seeking Alpha:
The tremendous ructions occurring in the retail industry continue and are gaining momentum as Amazon and the rapid growth of e-commerce progress. Already, the number of retail bankruptcies in 2017 thus far has exceeded the total for all of 2016, and there are signs of more to come.
Indeed, even retailers typically perceived to be resistant to the disruptive influence of Amazon and the rapid growth in popularity of e-commerce have proven vulnerable.
The Oracle of Omaha, Warren Buffett, considered by many to be the world’s greatest investor, also chose to weigh in on the debate earlier this year, stating at the Berkshire Hathaway annual meeting:
The department store is online now, . . .
A range of signals indicates that things are only going to get worse for traditional bricks-and-mortar retailing, which makes it foolish for investors to consider investing in the industry.
. . . .
North American retailers are filing for bankruptcy at a record rate this year. According to industry data, over 35 retailers in the U.S. alone have filed for bankruptcy this year, with some of the standout names being Toys R Us, Payless ShoeSource and Radio Shack. It isn’t the first time for Radio Shack: it filed for bankruptcy protection just a little over two years ago because of similar problems, including a challenging operating environment, rising competition and dwindling sales.
. . . .
The bad news doesn’t stop there: many major department store chains have focused on cutting costs by reducing their operational footprint through store closures, because the unprecedented competition created by e-commerce and Amazon has left very few other options.
One-time industry leader Sears is aggressively closing stores in a desperate bid to survive. The embattled retailer closed 180 stores during the fiscal year 2017 and plans to close another 150 by the end of its fiscal third quarter, which amounts to roughly 10% of its remaining Sears and Kmart locations. For the second quarter, revenue fell by a deeply worrying 23% year over year, while comparable store sales declined 11.5%.
. . . .
Department store chain J.C. Penney, which saw second-quarter comparable store sales slip by 1.3% year over year, doesn’t appear to be much healthier. It has also embarked on an ambitious restructuring strategy which involves closing 138 stores over the coming months.
. . . .
Long-time industry stalwart Macy’s is also planning to close 88 stores and lay off thousands of employees.
. . . .
According to the report, grocery shopping’s transition to online will occur at a far more rapid rate than it did in industries that have already made the shift, such as banking or media, because of a greater acceptance of e-commerce among consumers.
Younger, newer and more engaged digital shoppers adopt digital technologies more quickly, and will hasten the expansion of digital grocery shopping further.
. . . .
In a stunning revelation of just how fast e-commerce sales will grow, the National Retail Federation has forecast that, as a proportion of total retail sales, they will expand by 8% to 12% annually. This is around three times the growth rate of total retail sales, indicating that e-commerce’s share of total retail sales will grow at a rapid clip.
. . . .
For the reasons discussed investing in bricks-and-mortar retailers is becoming increasingly unappealing and risky. The depth and breadth of the industry’s transformation coupled with rapidly changing technology as well as an increasing appetite among consumers to accept technological changes places almost every bricks-and-mortar retailer under threat.
Link to the rest at Seeking Alpha
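The compounding implied by the NRF forecast is easy to make concrete. A minimal Python sketch: the 10%-a-year e-commerce growth rate and the one-third-of-that total-retail rate follow the figures quoted above, but the 10% starting share and the projection horizons are purely illustrative assumptions, not numbers from the article.

```python
def project_share(start_share, ecom_growth, retail_growth, years):
    """Project e-commerce's share of total retail (in percent) when
    e-commerce grows faster than the overall market."""
    ecom, total = start_share, 100.0  # index total retail to 100
    for _ in range(years):
        ecom *= 1 + ecom_growth
        total *= 1 + retail_growth
    return 100 * ecom / total

# Illustrative assumptions: 10% starting share, 10%/yr e-commerce
# growth vs. ~3.3%/yr total retail growth (one third the rate).
for years in (5, 10):
    print(f"after {years} years: "
          f"{project_share(10, 0.10, 0.033, years):.1f}%")
```

Under those assumptions the share climbs from 10% to roughly 14% in five years and nearly 19% in ten, which is the “rapid clip” the excerpt describes.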
PG notes that some of the recent discussions about Barnes & Noble on TPV have tended to focus on physical bookstores vs. Amazon as an isolated battle.
As indicated by the OP here and in many other posts on TPV, the movement from bricks & mortar to online sales is a megatrend affecting all sorts of different retailers. If people buy children’s clothing online instead of going to Target and small appliances online instead of going to Sears and office supplies online instead of going to Staples, why would books be any different?
There is one additional factor that does make books special, but not in a way that benefits Barnes & Noble and other physical bookstores.
PG is not aware of eclothing or eappliances, but he and many others are regular consumers of ebooks.
Due to a combination of disastrous decisions by management and incredible ignorance of ecommerce and all other things internet, Barnes & Noble squandered the opportunity to leverage its brand and relationship with millions of longtime Barnes & Noble customers into a dominating online store for ebooks (very high profit margins once properly-designed infrastructure is in place) and physical books.
Competent management of any b&m bookstore chain should have looked at ebooks as a wonderful source of increased revenues and profits. Instead of supporting a business structure to deal with thousands of poorly-paid store employees managed by hundreds of not much better paid store managers, a relative handful of well-compensated technical, design and marketing employees located in one place could have generated expanding revenues with consistently higher profit margins.
PG appears to be suffering from an attack of run-on sentences today, so he will stop. The blindness of the entire traditional book business to the opportunities for online sales, particularly of ebooks, is prime fodder for dozens of business school lectures, case studies and discussions for decades.
From The Wall Street Journal:
Google is rolling out a package of new policies and services to help news publishers increase subscriptions, a move likely to warm its icy relationship with some of the biggest critics of its power over the internet.
Google said it will end this week its decade-old “first click free” policy that required news websites to give readers free access to articles from Google’s search results. The policy upset publishers that require subscriptions, believing it undercut their efforts to get readers to pay for news.
Google, a unit of Alphabet Inc., said it also plans tools to help increase subscriptions, including enabling users to log in with their Google passwords to simplify the subscription process and sharing user data with news organizations to better target potential subscribers.
With billions of people using its search, YouTube and other web properties, Google has an outsize influence on a wealth of industries and modern society.
. . . .
The new publisher rules are good news for the print industry, which has largely struggled to convert its business model to the internet as print advertising sales have plummeted in the digital age. Google and Facebook dominate the internet ad industry, and news organizations are increasingly reliant on those two tech giants for web traffic. Google says it drives 10 billion clicks a month to publishers’ sites.
Some newspapers even asked Congress this year to exempt them from antitrust laws so they could negotiate collectively with the tech giants.
. . . .
“We really recognize the transition to digital for publishers hasn’t been easy,” Google Chief Business Officer Philipp Schindler said in an interview. He said a strong news industry boosts the utility of Google search and helps Google’s ad business, which sells ads on news sites. “The economics are pretty clear: If publishers aren’t successful, we can’t be successful.”
. . . .
Kinsey Wilson, the former executive editor of USA Today who now advises New York Times Co., said publishers must be careful about letting Google be the middleman to their readers. “Google can remove some friction,” he said, “but publishers have to stay vigilant.”
Link to the rest at The Wall Street Journal
From Publishers Weekly:
During its annual meeting, held Tuesday morning at its flagship store in New York City, Barnes & Noble chairman Len Riggio expressed support for the company’s new CEO, Demos Parneros, who was named to his current role in April.
During the meeting, Riggio called Parneros “the perfect fit” to help the company grow its top line and improve profits. Observing that Parneros “has brought lots of energy to the company,” Riggio said he is looking forward to watching the executive over the next few years, noting that Parneros shares his vision and will revive B&N “store by store.”
. . . .
Riggio also assured shareholders that B&N is no longer in the tech business. While the Nook e-reader and e-books will remain a part of the company’s offerings to customers, bricks and mortar stores will be its focus. Riggio explained that when e-book sales began exploding several years ago, B&N felt it had no choice but to enter the digital market. In retrospect, Riggio said, B&N didn’t have the culture or financing to compete with the likes of Amazon and Google.
Instead, according to Riggio, B&N will focus on its physical stores and will partner with technology companies to keep a presence in the digital space. “There is no business model in technology” for B&N, Riggio acknowledged.
Link to the rest at Publishers Weekly and thanks to Nate at The Digital Reader for the tip.
PG says the Nook business was doomed from its earliest days. The big reasons are:
Riggio didn’t want to pay for top online talent.
This was evident from the first time PG visited the Nook Store. Poorly designed and poorly executed. And it never really changed.
Real tech talent is rare and in great demand. In the beginning, for the right money, skilled tech people would have gone to work at Nook, but Barnes & Noble wanted to pay bookstore salaries.
PG has no idea if Nook tried to hire really good talent at the right price after it became clear that the Nook Store was a disaster. Unfortunately, by that time, serious tech talent wouldn’t have come regardless of salary because nobody wants to clean up someone else’s mess and a line mentioning the Nook Store would have been deadly on the résumé.
Besides, nobody would have believed Barnes & Noble stock options would ever make them rich at that point.
The Nook Store set ebook prices at a level designed to support the print book prices in its stores.
One of PG’s least favorite things to hear during a product planning meeting is, “We don’t want to cannibalize our existing business.”
The problem is that, if your business is cannibalizable by you, it’s cannibalizable by somebody else. Jeff Bezos has always been a happy cannibal.
Low ebook prices combined with instant availability fueled Amazon’s early dominance. Over time, by cultivating successful indie authors, in part by using Kindle Unlimited, Amazon has added tens of thousands of high quality titles that Riggio couldn’t sell if he wanted to.
Amazon vs. Big Bookstores and Big Publishing is going to be a classic business case used in MBA programs around the world for decades to come. Brains and speed beat money and size once again.