The tech industry pays programmers handsomely to tap the right keys in the right order, but earlier this month entrepreneur Sharif Shameem tested an alternative way to write code.
First he wrote a short description of a simple app to add items to a to-do list and check them off once completed. Then he submitted it to an artificial intelligence system called GPT-3 that has digested large swaths of the web, including coding tutorials. Seconds later, the system spat out functioning code. “I got chills down my spine,” says Shameem. “I was like, ‘Woah something is different.’”
GPT-3, created by research lab OpenAI, is provoking chills across Silicon Valley. The company launched the service in beta last month and has gradually widened access. In the past week, the service went viral among entrepreneurs and investors, who excitedly took to Twitter to share and discuss results from prodding GPT-3 to generate memes, poems, tweets, and guitar tabs.
The software’s viral moment is an experiment in what happens when new artificial intelligence research is packaged and placed in the hands of people who are tech-savvy but not AI experts. OpenAI’s system has been tested and feted in ways it didn’t expect. The results show the technology’s potential usefulness but also its limitations—and how it can lead people astray.
. . . .
Other experiments have explored more creative terrain. Denver entrepreneur Elliot Turner found that GPT-3 can rephrase rude comments into polite ones—or vice versa to insert insults. An independent researcher known as Gwern Branwen generated a trove of literary GPT-3 content, including pastiches of Harry Potter in the styles of Ernest Hemingway and Jane Austen. It is a truth universally acknowledged that a broken Harry is in want of a book—or so says GPT-3 before going on to reference the magical bookstore in Diagon Alley.
Have we just witnessed a quantum leap in artificial intelligence? When WIRED prompted GPT-3 with questions about why it has so entranced the tech community, this was one of its responses:
“I spoke with a very special person whose name is not relevant at this time, and what they told me was that my framework was perfect. If I remember correctly, they said it was like releasing a tiger into the world.”
The response encapsulated two of the system’s most notable features: GPT-3 can generate impressively fluid text, but it is often unmoored from reality.
. . . .
When a WIRED reporter generated his own obituary using examples from a newspaper as prompts, GPT-3 reliably repeated the format and combined true details like past employers with fabrications like a deadly climbing accident and the names of surviving family members. It was surprisingly moving to read that one died at the (future) age of 47 and was considered “well-liked, hard-working, and highly respected in his field.”
. . . .
Francis Jervis, founder of Augrented, which helps tenants research prospective landlords, has started experimenting with using GPT-3 to summarize legal notices or other sources in plain English to help tenants defend their rights. The results have been promising, although he plans to have an attorney review output before using it, and says entrepreneurs still have much to learn about how to constrain GPT-3’s broad capabilities into a reliable component of a business.
More certain, Jervis says, is that GPT-3 will keep generating fodder for fun tweets. He’s been prompting it to describe art house movies that don’t exist, such as a documentary in which “werner herzog [sic] must bribe his prison guards with wild german ferret meat and cigarettes.” “The sheer Freudian quality of some of the outputs is astounding,” Jervis says. “I keep dissolving into uncontrollable giggles.”
Link to the rest at Wired
Jodie Archer had always been puzzled by the success of The Da Vinci Code. She’d worked for Penguin UK in the mid-2000s, when Dan Brown’s thriller had become a massive hit, and knew there was no way marketing alone would have led to 80 million copies sold. So what was it, then? Something magical about the words that Brown had strung together? Dumb luck? The questions stuck with her even after she left Penguin in 2007 to get a PhD in English at Stanford. There she met Matthew L. Jockers, a cofounder of the Stanford Literary Lab, whose work in text analysis had convinced him that computers could peer into books in a way that people never could.
Soon the two of them went to work on the “bestseller” problem: How could you know which books would be blockbusters and which would flop, and why? Over four years, Archer and Jockers fed 5,000 fiction titles published over the last 30 years into computers and trained them to “read”—to determine where sentences begin and end, to identify parts of speech, to map out plots. They then used so-called machine classification algorithms to isolate the features most common in bestsellers.
The result of their work—detailed in The Bestseller Code, out this month—is an algorithm built to predict, with 80 percent accuracy, which novels will become mega-bestsellers. What does it like? Young, strong heroines who are also misfits (the type found in The Girl on the Train, Gone Girl, and The Girl with the Dragon Tattoo). No sex, just “human closeness.” Frequent use of the verb “need.” Lots of contractions. Not a lot of exclamation marks. Dogs, yes; cats, meh. In all, the “bestseller-ometer” has identified 2,799 features strongly associated with bestsellers.
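The classification step described above can be sketched in miniature. Everything below is illustrative: the real model learned 2,799 features from 5,000 manuscripts, while this toy version hand-codes three of the article’s reported signals (frequent “need,” lots of contractions, few exclamation marks) with invented weights.

```python
def extract_features(text):
    """Count a few crude stylistic signals in a manuscript."""
    words = text.lower().split()
    total = max(len(words), 1)
    return {
        "need_rate": words.count("need") / total,
        "contraction_rate": sum("'" in w for w in words) / total,
        "exclamation_rate": text.count("!") / total,
    }

def bestseller_score(text, weights):
    """Weighted sum of features: higher means more 'bestseller-like'."""
    feats = extract_features(text)
    return sum(weights[name] * value for name, value in feats.items())

# Invented weights: reward "need" and contractions, penalize exclamations.
weights = {"need_rate": 5.0, "contraction_rate": 2.0, "exclamation_rate": -4.0}

terse = "I need you. Don't go. She couldn't stay."
florid = "Behold! Amazing! What a wonderful, glorious day it was!"
assert bestseller_score(terse, weights) > bestseller_score(florid, weights)
```

The point of the sketch is only the shape of the computation: extract stylistic features from the text, then score them against weights a classifier has learned from past bestsellers.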
What Archer and Jockers have done is just one part of a larger movement in the publishing industry to replace gut instinct and wishful thinking with data. A handful of startups in the US and abroad claim to have created their own algorithms or other data-driven approaches that can help them pick novels and nonfiction topics that readers will love, as well as understand which books work for which audiences. Meanwhile, traditional publishers are doing their own experiments: Simon & Schuster hired its first data scientist last year; in May, Macmillan Publishers acquired the digital book publishing platform Pronoun, in part for its data and analytics capabilities.
While these efforts could bring more profit to an oft-struggling industry, the effect for readers is unclear.
“Part of the beautiful thing about books, unlike refrigerators or something, is that sometimes you pick up a book that you don’t know,” says Katherine Flynn, a partner at Boston-based literary agency Kneerim & Williams. “You get exposed to things you wouldn’t have necessarily thought you liked. You thought you liked tennis, but you can read a book about basketball. It’s sad to think that data could narrow our tastes and possibilities.”
They Know What You Did Last Night
Once, publishers had to rely on unit sales to figure out what readers wanted. Digital reading changed that. Publishers can know that you raced through a novel to the end, or that you abandoned it after 20 pages. They can know where and when you’re reading. On some reading sites and apps, users sign in with their Facebook accounts, opening up more personal data. There’s a wrinkle, though: Companies such as Amazon and Apple have the data for books read on their devices, and they aren’t sharing it with publishers.
The ability to know who reads what and how fast is also driving Berlin-based startup Inkitt. Founded by Ali Albazaz, who started coding at age 10, the English-language website invites writers to post their novels for all to see. Inkitt’s algorithms examine reading patterns and engagement levels. For the best performers, Inkitt offers to act as literary agent, pitching the works to traditional publishers and keeping the standard 15 percent commission if a deal results. The site went public in January 2015 and now has 80,000 stories and more than half a million readers around the world.
Albazaz, now 26, sees himself as democratizing the publishing world. “We never, ever, ever judge the books. That’s not our job. We check that the formatting is correct, the grammar is in place, we make sure that the cover is not pixelated,” he says. “Who are we to judge if the plot is good? That’s the job of the market. That’s the job of the readers.”
. . . .
The Data Scare
As Archer and Jockers shopped the Bestseller Code manuscript to acquisitions editors, word of their powerful algorithm spread—as did worry and suspicion among those in the publishing profession. “The fear is we can homogenize the market or try and somehow take their jobs away from them, and the answer is no and no,” says Archer. “What the bestseller-ometer is trying to do is say, ‘Hey, pick this new author that you might not dare take a risk on with your acquisitions budget. Their chance is really good.’” Archer, now a writer in Boulder, Colorado, insists that she and Jockers, now an English professor at the University of Nebraska-Lincoln, are “literature-friendly” and want good books to succeed.
Andrew Weber, the global chief operating officer for Macmillan Publishers—whose St. Martin’s Press is publishing The Bestseller Code—thinks algorithms should be viewed as an additional piece of information, rather than as an excuse to fire the editors. “Whether it’s in acquisition, whether it’s in pricing, whether it’s in marketing, whether it’s in distribution, there just seem to be many, many, many opportunities to improve the quality of our decision-making—and therefore hopefully our results—by bringing data into the equation,” says Weber. “I would say we are still in the early days of that journey, but that’s the direction we’re headed.”
Archer and Jockers watched eagerly to see which novel would be their algorithm’s favorite. It turned out to be The Circle, a 2013 technothriller by Dave Eggers about working for a massively powerful Internet company. The Circle spent multiple weeks on both The New York Times hardcover fiction and paperback trade fiction bestseller lists. A movie version starring Emma Watson and Tom Hanks is expected in theaters this year.
Link to the rest at Wired
It appears that PG missed this when it first appeared in 2016.
He suspects the almost-universal phobia towards computers, algorithms, quantitative analysis, sophisticated metrics, etc., among the indwellers of traditional publishing is related to the widespread incidence of innumeracy among English majors.
Worship of The Golden Gut is the state religion of this group. For them, no collection of numbers and formulae can ever replace The Hunch. That’s one reason why so many books fail to earn out their advances, and why so many mega-sellers are first rejected by every major publisher before stumbling into the market and finding success.
Indie authors include a much wider slice of humanity than either publishers or traditionally-published authors. That diversity of talent and background combined with Amazon’s relentless pursuit of customers and, thus, numbers, analytics, categories, sub-categories and sub-sub categories fosters the creation of niches within niches all the way down to the micro-reader level.
PG just checked a random book on the Zon and discovered that it encouraged drill-down and discovery as follows:
* Mystery, Thriller & Suspense
* Thrillers & Suspense
With broad categories mentioned:
* Book Fiction Moods
* Book Mystery Characters
(PG is not certain how much of this collection of information is presented as a result of PG’s and Mrs. PG’s past buying habits.)
Finally, if you prefer, you could check out 383 different categories, series, spinoffs, heroes/heroines, etc. (including 盗墓笔记, El cementerio de los libros, Svartåsen, and Die Krimi-Serie in den Zwanzigern).
From The Scholarly Kitchen:
I recently spent the better part of a work day reading Richard Poynder’s 87-page treatise on the current status of open access. Even as I printed it out, so as to protect myself from any digital distraction while reading, I wondered whether reading the full text was in fact the best use of my time. Was there an executive summary that might suffice? Could I skim it and just pick up the general gist of his argument? Truthfully, the response to both questions turned out to be No. It was a substantive piece and thoroughly documented, via footnotes as well as embedded links. Clearly, a thorough reading was going to require attention and time. Did I have either?
I was not the only person who reacted to the length of Poynder’s “ebook.” Others were having to make the same decision about whether the time spent reading would be well-invested. Although I hadn’t realized something was in the works at the Scholarly Kitchen, Rick Anderson, Associate University Librarian at the University of Utah, had done some of the heavy lifting in evaluation. On Twitter, a researcher asked for the TL;DR version and the author quickly referred him to Digital Koans, where the selection of a single concluding paragraph summed up what the author felt was covered in the meaty essay.
. . . .
But even so, someone else tweeted out that, no matter how worthwhile the content, he or she could hardly hand over a document of 87 pages to their provost and expect them to read something of that length. The time commitment required to consume the dense material would not seem justifiable, unless the topic was one with which the provost was already deeply concerned.
This gives me pause, because how we view the task of reading, how much time we allocate to reading, and the criteria for determining what is worthy of being read continue to pose a challenge. It is something with which many professionals wrestle on a daily or weekly basis.
. . . .
Given the demands of real life, how much reading is feasible? The group included Verity Archer of Federation University in Australia, who referenced the concept of “time privilege”: the fact that those with the greatest flexibility in their schedules are usually the most privileged when it comes to reading. Those early career researchers who most need the time to read and absorb the literature are generally the ones most weighted down with teaching and administrative tasks. Women who are primary care-givers outside of the office will tend to push the work of reading into their leisure evening or weekend hours. If reading is part of one’s day job, in Archer’s view, then the available hours in the workday should allow for it.
. . . .
Others referenced irregular reading habits unless faced with a grant or syllabus deadline, at which point they would do a spell of binge-reading. The hesitation associated with that practice was summed up well by David A. Sanders of Purdue when he wrote, “We should…resist the urge to promote research results that we have not personally evaluated.”
A humanist quoted in the Times Higher Ed piece wrote that the question of determining what to read was “now infinitely more complex in the age of digital and computational possibilities”.
. . . .
The Danish AI company, UNSILO, recently reported on results from their 2019 survey on the acceptance and usage of AI in academic publishing. They found that publishers have hitherto focused on how AI might solve their own problems rather than those of the research community. As noted on page 8 of the report, “The primary perceived benefit of AI was that it could save time. This could be seen as evidence of a new realism among publishers, since the thinking is presumably to apply AI tools to relatively straightforward processes that could be completed faster with the aid of a machine, such as the identification of relevant articles for a manuscript submission, or finding potential peer reviewers who have authored papers on similar topics to a manuscript submission.” While I see this as a sensible use of AI by content and platform providers, the pragmatic reality suggests an uncomfortable possibility. There is no magic solution. AI isn’t currently up to the task.
In the earlier instance of the librarian reading for purposes of peer review, there was a quick response from one of the founders of Scholarcy, an application offering summaries of full-text articles to the researcher. The tagline for the company is blunt: “Read less, learn more,” and springs from the founders’ own frustrations in trying to handle the volume of content to be read in the PhD process. Among other functionalities noted in its marketing text, Scholarcy will highlight the important findings in a paper, eliminating the need for the reader to print out and laboriously highlight critical segments or sentences. The reader can customize specific aspects — the number of words, the level of highlighting, and the level of language variation (this last allows you to more easily cite the finding in your paper). Scholarcy will navigate the user to Google Scholar, to arXiv, and to other open source material referenced in the paper. There are additional functionalities, and Scholarcy invites the visitor to their site to engage with their demo, a worthwhile use of 15 minutes. Their tool is recommended for researchers, librarians, publishers, students, journalists, and even policy wonks.
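Scholarcy’s actual algorithm is not public, but the classic frequency-based approach to extractive summarization gives the flavor of how a tool can “highlight the important findings”: score each sentence by how frequent its words are across the whole document, then keep the top scorers. This is a generic textbook method, not Scholarcy’s.

```python
import re
from collections import Counter

def summarize(text, num_sentences=1):
    """Keep the sentences whose words are most frequent in the document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    kept = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in kept)

doc = ("Open access changes publishing. "
       "Publishing costs shape open access policy. "
       "The weather was pleasant.")
print(summarize(doc, 1))
```

Sentences made of the document’s most repeated vocabulary score highest, which is why the off-topic weather sentence loses out; production tools layer much more on top (section awareness, citation handling), but the core idea is this kind of sentence scoring.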
Link to the rest at The Scholarly Kitchen
PG is not an expert on academic writing, but in the legal world, there is a lot of poor writing. Sentences and paragraphs are structured according to standard practice, citations are perfect (thanks, in part, to some computer assistance), but the thought behind the expression often seems to be haphazard and poorly-realized.
Contracts written by lawyers working for or in large business organizations are the worst. You stack poorly-organized thinking upon legal necessities upon boilerplate mindlessly copied and pasted and you end up with an extraordinary mess that sometimes contradicts itself and requires that you go to paragraph 54 to understand something written in paragraph 29, which is later modified in paragraph 62(a)(iii).
Imagine you go to a bookstore and notice an exciting cover. You pick up the book and read the summary on the back and the rave reviews. The plot seems intriguing enough, but when you check the writer, it says “by AI-something.” Would you buy the book, or would you think it a waste of money? We will face those decisions moving into the future, and who will be responsible for such writings? But that shows how AI is learning to play with words.
All of us have gotten used to chatbots and their limited capacity, but it appears their boundaries will be surpassed. Dario Amodei, OpenAI’s research director, informs us they have created a language modeling program which is very imaginative, to say the least. Its latest achievement was creating counterarguments and discussions with the researchers.
The program was fed a variety of articles, blogs, websites, and other content from the internet. Surprisingly, it managed to produce an essay worthy of any reputable writing service, and on a particularly challenging topic, by the way (Why Recycling Is Bad for the World).
Did the researchers do anything to help the program by providing specific, additional input? Certainly not. GPT-2, OpenAI’s new algorithm, did everything on its own. It excelled in different tests, such as storytelling and predicting the next word in a sentence. Admittedly, it’s still far from inventing an utterly gripping story from beginning to end, as it tends to stray off topic — but it has great potential.
What sets GPT-2 apart from other similar AI programs is its versatility. Typically, such programs are suited only to certain areas and can complete only specific tasks. However, this AI language model uses its input and successfully deals with a variety of topics.
Link to the rest at ReadWrite
There has long been a chasm between what we perceive artificial intelligence to be and what it can actually do. Our films, literature, and video game representations of “intelligent machines” depict AI as detached but highly intuitive interfaces. Emotion AI promises to re-imagine how we communicate.
. . . .
As these artificial systems are integrated into our commerce, entertainment, and logistics networks, we are witnessing the emergence of emotional intelligence in machines. These smarter systems have a better understanding of how humans feel and why they feel that way.
The result is a “re-imagining” of how people and businesses can communicate and operate. These smart systems are drastically improving the voice user interface of voice-activated systems in our homes. AI is improving not only facial recognition but changing what is done with that data.
. . . .
Humans use thousands of subverbal cues when they communicate. The tone of the voice, the speed at which someone speaks: these are all hugely important parts of a conversation but aren’t part of the “raw data” of that conversation.
New systems designed to measure these verbal interactions are now able to look at emotions like anger, fear, sadness, happiness, or surprise based on dozens of metrics related to specific cues and expressions. Algorithms are being trained to evaluate the minutia of speech in relation to one another, building a map of how we read each other in social situations.
Systems are increasingly able to analyze the subtext of language based on the tone, volume, speed, or clarity of what is being said. Not only does this help these systems to identify the gender and age of the speaker better, but they are growing increasingly sophisticated in recognizing when someone is excited, worried, sad, angry, or tired. While real-time integration of these systems is still in development, voice analysis algorithms are better able to identify critical concerns and emotions as they get smarter.
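The kind of mapping described above can be caricatured with a hand-written rule set. Real systems learn these mappings from labeled speech corpora; the features, units, and thresholds below are entirely hypothetical.

```python
def label_emotion(pitch_hz, volume_db, words_per_min):
    """Crude rule-based mapping from voice measurements to an emotion guess."""
    if volume_db > 70 and words_per_min > 180:   # loud and fast speech
        return "excited" if pitch_hz > 200 else "angry"
    if volume_db < 50 and words_per_min < 110:   # quiet and slow speech
        return "sad" if pitch_hz < 150 else "calm"
    return "neutral"

assert label_emotion(pitch_hz=220, volume_db=75, words_per_min=200) == "excited"
assert label_emotion(pitch_hz=120, volume_db=45, words_per_min=90) == "sad"
```

Production systems replace these hard-coded thresholds with classifiers trained on dozens of acoustic metrics, but the input-to-label structure is the same: measurable properties of the voice in, an emotion estimate out.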
. . . .
The result of this is a striking uptick in the ability of artificial intelligence to replicate a fundamental human behavior. We have Alexa developers actively working to teach the voice assistant to hold conversations that recognize emotional distress, the US Government using tone detection technology to detect the symptoms and signs of PTSD in active duty soldiers and veterans, and increasingly advanced research into the impact of specific physical ailments like Parkinson’s on someone’s voice.
While done at a small scale, it shows that the data behind someone’s outward expression of emotion can be cataloged and used to evaluate their current mood.
Link to the rest at ReadWrite
From Seeking Alpha:
- Takeover of Barnes & Noble highlights the importance of technology change in media retailing.
- Lessons from Borders and Blockbuster bankruptcies are still relevant.
- Loyal customer base supports ongoing Barnes & Noble mall presence.
Barnes & Noble, the largest US book retailer with a total of 620 stores, announced plans this month to be acquired by Elliott Management (a $34 billion New York private equity hedge fund) for $683 million (including transfer of debt).
. . . .
The important benefit of this takeover for Barnes & Noble shareholders (as well as Barnes & Noble’s landlords, the Retail REITs) is that this is a takeover in anticipation of a turnaround. Elliott Management also owns UK book retailer Waterstones and plans to put Waterstones’ successful CEO, James Daunt, in charge of both companies. It appears that Barnes & Noble has found a good home.
With 627 Barnes & Noble stores in the US and 280 Waterstones locations in the UK, Elliott Management is facing off against Amazon, the online juggernaut that is believed to sell as much as 50% of all new hard-copy books as well as a large share of e-books and used books. Barnes & Noble has a successful website allowing loyal customers to purchase books, movies, music, toys, and games, but it cannot compete with Amazon in size, selection, customer history, or ability to take advantage of cross-selling and financing opportunities.
Still, Barnes & Noble knows their customer base well, having used loyalty programs to reach out to their frequent shoppers and should be able to take advantage of their friendly environment for book lovers at well-established stores. I think we won’t see many Barnes & Noble stores close, at least not at first; we are far more likely to see discounting and special offers at Barnes & Noble. Customers should feel upgraded.
. . . .
Although the greatest threat to Barnes & Noble’s future remains Amazon (both for online sales of hard-copy books and e-books sold on Nook), I think the true threat is technology change, as we have seen over the past 12 years of change in the way media is delivered and consumed by today’s shoppers. These two retail failures – Blockbuster Video and Borders – still have something to tell us about current retail challenges.
Link to the rest at Seeking Alpha
From The Guardian:
Will androids write novels about electric sheep? The dream, or nightmare, of totally machine-generated prose seemed to have come one step closer with the recent announcement of an artificial intelligence that could produce, all by itself, plausible news stories or fiction. It was the brainchild of OpenAI – a nonprofit lab backed by Elon Musk and other tech entrepreneurs – which slyly alarmed the literati by announcing that the AI (called GPT2) was too dangerous for them to release into the wild, because it could be employed to create “deepfakes for text”. “Due to our concerns about malicious applications of the technology,” they said, “we are not releasing the trained model.” Are machine-learning entities going to be the new weapons of information terrorism, or will they just put humble midlist novelists out of business?
. . . .
GPT2 is just using methods of statistical analysis, trained on huge amounts of human-written text – 40GB of web pages, in this case, that received recommendations from Reddit readers – to predict what ought to come next. This probabilistic approach is how Google Translate works, and also the method behind Gmail’s automatic replies (“OK.” “See you then.” “That’s fine!”). It can be eerily good, but it is not as intelligent as, say, a bee.
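The predict-what-comes-next principle can be shown at miniature scale with a bigram model: count which word follows which in training text, then continue a prompt by repeatedly emitting the most frequent successor. GPT2 applies the same idea with a large neural network rather than raw counts, which is why this sketch illustrates the principle but none of the fluency.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def continue_text(model, start, length=4):
    """Greedily extend `start` with the most frequent next word each step."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # word never seen: nothing to predict
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(continue_text(model, "the"))
```

After training, “the” is most often followed by “cat,” so that is what the model predicts; everything it “knows” comes from frequencies in the training text, which is also why such models are, in the article’s phrase, unmoored from reality.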
Right now, novelists don’t seem to have much to fear. Fed the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – the machine continued the narrative as follows: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”
. . . .
Did the AI do any better with Jane Austen? The opening phrase of Pride and Prejudice – “It is a truth universally acknowledged” – provoked the machine to gabble on: “that when a nation is in a condition of civilization, that it is in a great measure the business of its leaders to encourage the habits of virtue, and of industry, and of good order among its people.” This does sound rather like some 19th-century political bloviator, even if a slightly broken version. (The second “that” is redundant, and it should read “in great measure” without the indefinite article.)
. . . .
Is there greater cause to worry further down the literary food chain? There have for a while already been “AI bots” that can, we hear, “write” news stories. All these are, though, are giant automated plagiarism machines that mash together bits of news stories written by human beings. As so often, what is promoted as a magical technological advance depends on appropriating the labour of humans, rendered invisible by AI rhetoric. When a human writer commits plagiarism, that is a serious matter. But when humans get together and write a computer program that commits plagiarism, that is progress.
. . . .
The makers’ announcement that this program is too dangerous to be released is excellent PR, then, but hardly persuasive. Such code, OpenAI warns, could be used to “generate misleading news articles”, but there is no shortage of made-up news written by actual humans working for troll factories. The point of the term “deepfakes” is that they are fakes that go deeper than prose, which anyone can fake. Much more dangerous than disinformation clumsily written by a computer are the real “deepfakes” in visual media that respectable researchers are eagerly working on right now. When video of any kind can be generated that is indistinguishable from real documentary evidence – so that a public figure, for example, can be made to say words they never said – then we’ll be in a world of trouble.
. . . .
Perhaps a more realistic hope for a text-only program such as GPT2, meanwhile, is simply as a kind of automated amanuensis that can come up with a messy first draft of a tedious business report – or, why not, of an airport thriller about famous symbologists caught up in perilous global conspiracy theories alongside lissome young women half their age. There is, after all, a long history of desperate artists trying rule-based ruses to generate the elusive raw material that they can then edit and polish. The “musical dice game” attributed to Mozart enabled fragments to be combined to generate innumerable different waltzes, while the total serialism of mid-20th‑century music was an algorithmic approach that attempted as far as possible to offload aesthetic judgments by the composer on to a system of mathematical manipulations.
. . . .
But until robots have rich inner lives and understand the world around them, they won’t be able to tell their own stories. And if one day they could, would we even be able to follow them? As Wittgenstein observed: “If a lion could speak, we would not understand him”. Being a lion in the world is (presumably) so different from being a human in the world that there might be no points of mutual comprehension at all. It’s entirely possible, too, that if a conscious machine could speak, we wouldn’t understand it either.
Link to the rest at The Guardian
PG says, “We have a lot of rain in June. Is the buzz dead better than the couple? The maddening kill crawls into the wealthy box. When does the zesty liquid critique the representative?”
After leaving the crumpled planet Abydos, a group of girls fly toward a distant speck. The speck gradually resolves into a contented, space tower.
Civil war strikes the galaxy, which is ruled by Brad Willis, a derelict wizard capable of lust and even murder.
Terrified, an enchanted alien known as Michelle Thornton flees the Empire, with her protector, Chloe Noris.
They head for Philadelphia on the planet Saturn. When they finally arrive, a fight breaks out. Noris uses her giant knife to defend Michelle.
Finally, a blurb for a romance novel:
In this story, a serene police chief ends up on the run with a realistic witch-hunter. What starts as professional courtesy unexpectedly turns into a passionate affair.
From The Wall Street Journal:
Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016, and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.
Three years prior, Mr. Loomis was found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (aka Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It considers factors including indications a person abuses drugs, whether or not they have family support, and age at first arrest, with the intent to determine how likely someone is to commit a crime again.
The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.
An algorithm is just a set of instructions for how to accomplish a task. Algorithms range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems, trained on terabytes of data. Either way, human bias is part of their programming. Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
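The training-data problem the experts describe can be demonstrated with a toy classifier: fit a single decision threshold to data dominated by one group, then measure accuracy per group. All numbers here are synthetic and illustrative.

```python
def fit_threshold(samples):
    """Midpoint between the mean positive and mean negative score."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold):
    return sum((x >= threshold) == bool(label) for x, label in samples) / len(samples)

# Group A dominates the training set; group B's scores sit lower overall,
# so the single learned threshold fits A and misreads part of B.
group_a = [(1.0, 1), (0.9, 1), (1.1, 1), (0.0, 0), (0.1, 0), (-0.1, 0)] * 20
group_b = [(0.45, 1), (0.5, 1), (-0.5, 0), (-0.4, 0)]

threshold = fit_threshold(group_a + group_b)
print(accuracy(group_a, threshold), accuracy(group_b, threshold))
```

The classifier is perfectly accurate on the well-represented group and measurably worse on the rare one, not because anyone programmed a preference, but because the threshold was fitted almost entirely to group A’s data.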
. . . .
No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.
As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The aspects of society that computers are often used to facilitate have a history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.
“Some people talk about getting rid of bias from algorithms, but that’s not what we’d be doing even in an ideal state,” says Cathy O’Neil, a former Wall Street quant turned self-described algorithm auditor, who wrote the book “Weapons of Math Destruction.”
“There’s no such thing as a non-biased discriminating tool, determining who deserves this job, who deserves this treatment. The algorithm is inherently discriminating, so the question is what bias do you want it to have?” she adds.
. . . .
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
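The missing-data bias Ms. Richardson describes can be sketched in a few lines. The field names and weights here are hypothetical, purely to illustrate the mechanism: families visible to public systems generate records the model can penalize, while families with private insurance generate no data at all:

```python
# Illustrative sketch of missing-data bias (hypothetical fields/weights).
# A parent who used government services leaves a record the system can see;
# a parent with private insurance contributes no data, which the model
# silently treats the same as "no risk".
def child_welfare_risk(record):
    score = 0
    if record.get("used_public_mental_health"):  # absent for privately insured
        score += 2
    if record.get("low_income"):
        score += 1
    return score

public_client = {"used_public_mental_health": True, "low_income": True}
private_client = {}  # same underlying circumstances, invisible to the system

print(child_welfare_risk(public_client))   # 3
print(child_welfare_risk(private_client))  # 0
```

The two families may be identical in every way that matters; the score gap comes entirely from which of them the data-collection apparatus can observe.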
The irony is that, in adopting these modernized systems, communities are resurfacing debates from the past, when the biases and motivations of human decision makers were called into question. Ms. Richardson says panels that determine the bias of computers should include not only data scientists and technologists, but also legal experts familiar with the rich history of laws and cases dealing with identifying and remedying bias, as in employment and housing law.
Link to the rest at The Wall Street Journal
As PG was opening a couple of packages of hardcopy books for Mrs. PG (she does read a lot of ebooks, but, in some cases, used books are less expensive and some books she wants in hardcopy to share with family and/or friends), it occurred to him that Amazon has almost certainly given used booksellers an opportunity to reach a far wider group of prospective purchasers than were ever available to them in physical used bookstores.
Most of the hardcopy used books that arrive in the mail come well-packaged and most are clearly packed by more sophisticated equipment than a roll of stamps and a stack of envelopes.
So, is PG correct about Amazon and used booksellers?
Has the ability to sell to a much wider online audience affected the pricing of used books?
Has the used book business undergone consolidation with small used bookstores closing and selling their inventory to large, online-focused used booksellers?
Are there people who are paid by larger used booksellers to be scouts for large quantities of available used books?
Not necessarily to do with books, but two of PG’s offspring are hearing-impaired, so he follows topics like this. He’s also interested in developments in artificial intelligence, so it’s a double win for him.
From The Google AI Blog:
The World Health Organization (WHO) estimates that there are 466 million people globally who are deaf and hard of hearing. A crucial technology in empowering communication and inclusive access to the world’s information for this population is automatic speech recognition (ASR), which enables computers to detect audible languages and transcribe them into text for reading. Google’s ASR is behind automated captions in YouTube, presentations in Slides and also phone calls. However, while ASR has seen multiple improvements in the past couple of years, the deaf and hard of hearing still mainly rely on manual-transcription services like CART in the US, Palantypist in the UK, or STTR in other countries. These services can be prohibitively expensive and often need to be scheduled far in advance, diminishing the opportunities for the deaf and hard of hearing to participate in impromptu conversations as well as social occasions. We believe that technology can bridge this gap and empower this community.
Today, we’re announcing Live Transcribe, a free Android service that makes real-world conversations more accessible by bringing the power of automatic captioning into everyday, conversational use. Powered by Google Cloud, Live Transcribe captions conversations in real-time, supporting over 70 languages and more than 80% of the world’s population. You can launch it with a single tap from within any app, directly from the accessibility icon on the system tray.
Link to the rest at The Google AI Blog
“Hey, Dad. I want to show you a song.”
The speaker was my 16-year-old daughter. Music for her? Primarily visual and to be enjoyed in video clips. Video clips that did not always feature videos. Sometimes it was just some clip art and the music. But no record store, no record album, no tape — reel-to-reel, eight track, cassette or otherwise — and finally no compact disc. And she’s not alone in how she’s digging on the music she digs on.
According to Nielsen’s music report, digital and physical album sales declined (again) last year — from about 205 million in 2016 to 169 million copies in 2017 — down 17 percent. Over the past five years, right up to Nielsen’s mid-year report, sales had fallen by roughly 75 percent. That decline is coinciding with a streaming juggernaut that continues to grow. How much so? Last year streaming skated, quite easily, beyond 400 billion streams. You include video streams and you have figures over 618 billion. You look back at the year before and you see a 58 percent increase in audio streams.
While this buoyed the damned-near-moribund music industry to the tune of 12.5 percent growth from 2016 to last year, the music business is now, as it has been, all about discovering the music that can generate all of those streams. And that’s where things get curious because record labels that are used to creating heat now have to go places where the heat is being created to stay viable and vibrant.
. . . .
With a number of presently high-profile artists — Odd Future, Lil Yachty, Post Malone, etc. — being “discovered” on places like SoundCloud over the past five years, entire communities of music fans can beat both the hype and the Spotify/Pandora/SiriusXM radio/Amazon algorithms that suggest if you liked this, you might also like that, by starting there, and branching out. First stop: Instagram.
“People come in all the time and play me stuff from their IG feeds,” says Mark Thompson, founder of Los Angeles-based Vacation Vinyl (that sells, yes, primarily vinyl). “So I’m hearing bands that it soon becomes pretty clear have no label, no representation, nothing but an IG feed and maybe some music recorded on their laptops.”
To put this in perspective, in July 2018, Instagram added the music mode in Stories, and just that quickly streaming started to feel … old. Because from the musicians’ mouths to our ears, unmediated music finds its way from the creator to the consumer. Spotify is trying to adapt too — it has over the past year begun to sign deals with independent musicians to give them access to the platform.
. . . .
“It’s free,” she says, having endured speeches about listening to unpaid/stolen music. Since she and her friends don’t ever listen to more than 60 seconds of any song, at least while I am around, this raises the question: Is it a business and is it sustainable in the same way that Apple Music, Tidal, Deezer or iHeartRadio have managed to be?
“Unknown,” says former promoter and music industry executive Mark Weiss. “But the business is where the ears are. And if the business is any damn good it’ll figure out how to stay in the conversation.”
. . . .
Flash-forward to record contracts from the mid-1990s that covered cassette tapes, vinyl, compact discs and “future technologies not yet known.” The digitization of analog music had already changed the landscape for everything from crime to interior design.
Whereas previously you’d have needed a turntable, an amplifier, maybe a preamp, a tape player, a receiver, speakers and a subwoofer to listen to the music that you’d be playing off of tapes, vinyl or CDs, after everything was digitized you just needed a phone and speakers.
Link to the rest at OZY
From Kristine Kathryn Rusch:
For years now, I’ve done a year-end review, examining what happened and where the industry stands.
. . . .
I wrote down lists and links and reviewed notes and thought long and hard about things…and still couldn’t figure out how to wrap my arms around what I wanted to talk about.
I initially thought about combining the different parts of the industry under topics, and examining each topic rather than that part of the industry. But the industry is diverging in some important ways, making that way of writing these blogs exceedingly difficult.
This afternoon, it struck me: I write the year-end reviews so that I can focus on what to expect from the year to come.
So rather than look in detail at what happened in 2018, I’ll be looking at what happened with an eye toward the future.
. . . .
A reminder: I write these weekly business blogs for other writers who want to make or already have a long-term career. If you’re just starting out, some of this stuff won’t apply to you. If you’re a hobbyist who never wants to quit your day job, again, some of this stuff won’t apply to you. Don’t ask me to bend the blog toward you. There are a number of sites that cater to the beginner or the writer who doesn’t really care if she makes a living.
. . . .
For the most part, however, dealing with beginner and hobbyist issues doesn’t interest me. I’m a long-term professional writer who has made money as a writer since I was 16, who has made a living at it since I was 25, and who started making a heck of a great living at it by the time I was 35. I started writing these weekly blogs to make some kind of sense out of the disruption in the publishing industry in 2009. I did it for me, because I think better when I am writing things down.
The disruption continues, albeit in a new phase (part of what I’ll discuss below), and so I am focusing on what I need to focus on for my long-term writing career. I hope that some of these insights will help the rest of you.
. . . .
The disruption in the publishing industry will continue for some time now. Years, most likely. I don’t have a good crystal ball for how long it will go on, but we are past the gold rush years in the indie publishing world and have moved into a more consistent business model. It’s at least predictable, now. We know some patterns and how they’re going to work.
. . . .
The disruption in traditional publishing has gone on for nearly two decades now. It began before the Kindle made self-publishing easy by giving writers an easily accessible audience. Traditional publishing became ripe for disruption in the 1990s when the old distribution model collapsed.
Many of you saw it from the outside—the decline of the small bookstore, the loss of bookstores in small towns, the rise of the bestseller only in chain bookstores. All of that came from a collapse in the distribution system, from hundreds of regional distributors down to about five. (I don’t off the top of my head recall the actual number.) That made publishers panic. They couldn’t figure out what kinds of books sold best in the Pacific Northwest as opposed to what sold well in the Southeast, and worse, they didn’t have time to figure it out.
(When I came into the business, a top sales person for a major book company would know that science fiction sold well in California and quest fantasy sold well in Georgia, that the Midwest really enjoyed regional books, while New Yorkers often didn’t.)
Bestsellers sold everywhere, so publishers ramped up the production of already-established authors and sent those books all over the nation. Then, when the crisis leveled out, the publishers did not return to the old ways, scared of what to do. They continued to push for huge sellers rather than grow newer books.
Writer after writer after writer got dumped by their publisher in this period, while some new writers made fortunes because they wrote books that were similar to existing bestsellers.
When the Kindle came around and disrupted publishing, both writers and readers were ready for something new. That combination of forces created the blockbuster indie sellers—which were not blockbuster to traditional publishers. (The writers were making significantly more money, but selling fewer units than trad pub bestsellers.)
Hold that thought for a moment while I remind you that another disruption—a different one—was hitting publishing at the same time. Audiobooks went digital, and exploded. It became easy to download an audiobook and listen to it on your iPod (remember those?) or your favorite MP3 player. Some cars made it easy to hook up those players to the sound system of the car.
And thus, commuters wanted everything on audio, and the demand in audio grew exponentially. As so many industry analysts said five or six years ago, if the Kindle hadn’t come around, the big story in publishing would have been the audiobook.
And here’s another publisher problem: most publishers never secured audio rights to the books they published. That money went directly to the authors.
. . . .
For years now, those of us who watch business trends have predicted that book sales would plateau. In reality, “plateau” is the wrong word for overall book sales. Those continue to grow, sometimes in ways that aren’t entirely measurable. New markets are opening all the time, bringing in new readers.
The system for measuring both readers and sales is so inadequate that we can’t count the readers we have, let alone the new readers who are coming into the book industry sideways. However, there is a lot of evidence—scattered, of course—that new readers are coming in. (I’ll deal with this in future weeks.)
Readership is growing, but individual sales are mostly declining. Traditional publishing’s fiction sales are down 16% since 2013. Traditional publishing has a lot of theories about this, delineated in the Publishers Weekly article I linked to.
Indie writers believe a lot of the trad pub sales migrated to them. Maybe.
But some of what happened here was the inevitable decline from the gold rush of a disruptive technology.
Let’s look at traditional publishing for a moment. Traditional publishing moved to the blockbuster model at the turn of the century, meaning that the books that were published had to have a guaranteed level of sales or the author’s contract wouldn’t be renewed. The sales rose, partly because traditional publishing was the only game in town.
In that period, if you went to bookstores all over the country, and followed that up with a visit to the grocery store, as well as a visit to a store like WalMart or Target, you’d find the same group of books on the shelves. A few more in Target than in the grocery store, and certainly more in the bookstore, but still, the same books. And the airport bookstores were the same way.
If a reader needed reading material, he only had a few hundred titles at any given time in the stores to choose from. So the reader read the best of what he found, not necessarily what he wanted to read.
Then the disruption happened. Kindles and ereaders proliferated. Readers found books they’d been searching for, often for years. The readers also found some genres and subgenres that they hadn’t seen in a decade or more, usually books by indie writers that couldn’t sell to the big traditional companies.
The boom in ebooks grew and grew and grew. (And if traditional publishing hadn’t dicked around with pricing, their book sales would have grown even more.) That’s why the S-curves on that graph grow precipitously in between Stages Two and Three. Adoption increases revenue for a very very very short period of time.
That kind of growth is not sustainable for years, though. That’s why I say it was an inevitable plateau. If you’ll look on that graph again, though, you’ll see that both curves end higher on the y-axis—the profit axis—than they were at the beginning.
But hitting that plateau after years of rapid growth and, in the case of traditional publishing, a near-monopoly on the market, is painful. And that’s what we’re experiencing.
Also, sales are spreading out. I’ll talk about this a bit more in the next couple of weeks. But think of it this way. Instead of a lot of readers reluctantly reading the latest blockbuster because they’re trapped in the airport and can’t find anything else to read, those readers are now downloading dozens of books on their phones, and reading a variety of things—some of which we don’t have measurements of. Those readers have left the blockbusters they barely liked behind and found books/authors they like better.
So the money that would have gone to five different authors at three different publishing companies is now going to twenty authors, and only two of those authors are with traditional publishing companies. The books the readers are reading, though, aren’t the latest blockbuster by that author, but an older book that came out a decade ago. The price is lower, and the companies aren’t interested in those sales. They want the newest book to sell the most copies.
The consumer spends the same amount of money, but spreads it out over a wider range. Many of these sales are untrackable. Not all of those twenty authors report their sales to anyone, and not all of those sales were made through traditional channels. A few of the authors sold on their own websites. Some of those books came out of bundles. And some came out of a subscription service like Amazon. The traditional publishing companies lost most of the revenue, because their book sales have legitimately declined.
But that doesn’t mean people are reading less or that fiction reading is declining.
I’m not the only one who sees this. Mark Williams of The New Publishing Standard had the same reaction to the traditional publishing fiction numbers that I did. He wrote on November 18:
The big problem we have is that the fiction market, much more so than the wider book market, is so fragmented now, thanks to digital (by which I mean not just ebooks and audiobooks but online POD and most of all social media democratising the promotion of fiction titles), such that it seems like fewer people are reading fiction, but the reality is likely just the opposite.
The fragmented market is but one thing we’ll talk about in the next few weeks. We’ll look at how writers can use that market to their own advantage.
Link to the rest at Kristine Kathryn Rusch
PG always appreciates the analysis Kris and Dean bring to the publishing world, traditional and indie. He was going to add a few of his thoughts to Kris’ excellent post, but, perhaps as a result of holiday hangover (not the alcoholic kind), his little gray cells are not as well-regimented as usual.
Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.
Here is the most recent Kris Rusch book selling on Amazon:
Several years ago, I was sitting in the audience at a big tech conference, learning about a startup that made it easy for people to rent rooms in other people’s houses for short stays. In a world where people can now travel to any part of the world and share someone else’s home, could we hope, the CEO asked, for greater cross-cultural understanding? “Would nations have less war if the residents lived together?”
I closed my eyes, breathed deeply, and felt an immense sense of peace and hope for humanity wash over me.
Then I opened my eyes and thought, “Isn’t this basically a hotel in someone’s house — a cool, convenient, unregulated hotel?”
When it was my turn to take the stage, I too had a grandiose proclamation: Our startup, I declared, was helping people make meaningful connections in the real world.
What I really should have said was: We help people hook up.
On the plane ride home, I began to write what would eventually become The Big Disruption, a satirical novel based on my experience working at both a startup and one of the biggest tech companies in the world. I had no goal at the time other than to provide a bit of cathartic escape from the tech industry, where, on the surface, things seemed really important and exciting.
We were doing big things!
Bringing the internet to the developing world!
Singing songs to orphans!
But also, on some level, it all felt a bit off.
So, where to begin?
. . . .
To be sure, Silicon Valley has built some great products that have truly changed our lives for the better. And I do think that in many, many ways, it has taken noble stands during difficult times and helped redefine what people expect from companies, well beyond just the tech industry. It has also led me to some of my best friends and greatest opportunities, for which I am very grateful. There is so much I really do love about this world.
But there is also what drove me to leave the big tech company last fall and take a break. The issues that I got tired of defending at parties. The endless use of “scale” as an excuse for being unable to solve problems in a human way. The faux earnestness, the self-righteousness. All those cheery product ads set to ukulele music.
I wrote this book for two reasons. First, I wanted to explore what drives the insatiable expansion of the big tech companies. Despite how the industry is sometimes portrayed in the media, I don’t really think the management teams at Facebook, Google, Apple, Uber, or Amazon wake up each morning thinking about how to steal more user data or drive us all out of our jobs. Those are real consequences, but not the root cause. Rather, it’s the desperation to stay on top and avoid being relegated to a dusty corner of the Computer History Museum that pushes these companies into further and further reaches of our lives.
Second, I wrote this book because we should be able to love and celebrate the products that we build — but without ignoring the hard questions they raise. We need to end the self-delusion and either fess up to the reality we are creating or live up to the vision we market to the world. Because if you’re going to tell people you’re their savior, you better be ready to be held to a higher standard. This book is my small way of trying to push us all to be better. Meaning…
You can’t tell your advertisers that you can target users down to the tiniest pixel but then throw your hands up before the politicians and say your machines can’t figure out if bad actors are using your platform.
. . . .
You can’t buy up a big bookstore and then a big diaper store and a big pet supply store and, finally, a big grocery store, national newspaper, and rocket ship and then act surprised when people start wondering if maybe you’re a bit too powerful.
And you can’t really claim that you’re building for everyone in the world when your own workforce doesn’t remotely resemble the outside world.
Link to the rest at Medium
PG notes that this book was published on Medium. Click Here to read in a Medium App or on the web.
From SSRN (footnotes omitted, a few paragraph breaks added):
“Data is the new gold. It’s the new oil. It’s the new plastics.”
— Mark Cuban, 2017
Over the last decade the music, motion picture, and publishing industries have faced what many have characterized as a crisis. Online piracy and the digital technologies that enable it are said to have destroyed traditional models of content creation and distribution.
The music industry is most often offered as the leading example. In the nearly two decades since the digital file-sharing service Napster burst on the scene, recording company revenues have plunged by approximately 72% in the U.S., or almost 80% adjusted for inflation.
A great deal of that decline in revenue can be traced to the ability to distribute and share content digitally without either legal permission or much chance of consequence.
The story appears to be dire, and yet it is increasingly obvious that the crisis narrative obscures more than it reveals. To be sure, the shift to digital and the related upsurge in online piracy — a phenomenon we refer to here as the “first digital disruption” — dramatically re-organized power within the music industry and transformed the ways in which the industry does business and makes (or does not make) money. But the industry adjusted, and the disruption did not fundamentally change the way music is created.
The first digital disruption mainly undermined a particular set of music industry business models. Most of the impact fell on middlemen (record labels, publishing companies, and retailers) who saw their revenues sink. And even there, the story has been as much about creation as disruption. Record labels, formerly the dominant force in the industry, are much diminished today.
But streaming services, such as Spotify, Apple Music, and Tidal, once tiny, are now important players. Turning the destructive potential of digital distribution on its head, they have utilized the internet to pioneer new and lucrative modes of content dissemination. Indeed, the total revenue of digital distributors now exceeds the total revenues of recording companies.
The U.S. live music industry has also grown substantially, and is expected to continue to grow at about twice the rate of the overall economy. And even as record company revenues have shrunk, the best evidence suggests that more music is being produced than ever before.
On the other side of the market, consumers pay less, and have more access to, that cornucopia of music than ever before.
The next digital disruption is going to reach deeper. It will re-order how creative work is produced, and not simply how it is promoted and sold. It will transform our notions of authorship. It will raise fundamental questions about the nature and value of human creativity. And, perhaps less consequentially for the world at large — but of central importance to lawyers — it may shift how we think about the value and utility of, and even the moral justification for, intellectual property rules.
What is this second digital disruption? We can see its onset in the high-stakes merger between AT&T, which owns digital cable and satellite networks for distributing video programming, and Time Warner, which produces film and television content. The Department of Justice challenged the merger, arguing that it would harm competition in video programming and distribution markets. In its pre-trial brief, Time Warner argued for the merger by noting that, as a stand-alone content producer it faced a competitive disadvantage versus rivals, such as Netflix, Google, and Facebook, that produce content but also own a digital distribution platform. As Time Warner argued:
First, unlike Google and Facebook, Time Warner has no access to meaningful data about its customers and their needs, interests, and preferences. In most cases, Time Warner does not even know its viewers’ names. This data gap impedes its ability to compete with Google, Facebook, and other digital companies in advertising sales, which are critical to Turner [Broadcasting, a Time Warner subsidiary]’s viability, and which allow Turner to keep subscription fees much lower than they otherwise would be. Whereas digital companies have the data and the technology to deliver advertisements that are both specifically addressed (shown) to a particular viewer and tailored to that viewer’s specific needs and interests, Time Warner cannot target its television advertising in those ways, creating an increasing competitive disadvantage for the company. The data gap also gives online video programmers a competitive advantage in the production and aggregation of content based on extensive data about the content preferences of their viewers.
This spring Judge Richard Leon of the United States District Court for the District of Columbia agreed, holding that “traditional programmers and distributors are experiencing increased competition from innovative, over-the-top content services [i.e., companies that provide video programming over the Internet] …. Those web-based companies are harnessing the power of the internet and data to provide lower-cost, better-tailored programming content directly to consumers. The dramatic growth of the leading [Internet video providers] in particular, including Netflix, Hulu, and Amazon Prime, can be traced in part to the value conferred by vertical integration — that is, to having content creation and aggregation as well as content distribution under the same roof.”
Data is at the core of the second digital disruption. In Mark Cuban’s words, data is “the new gold”: the resource that will create, and likely destroy, fortunes in the content business.
The “data gap” Time Warner spoke of is not just a competitive disadvantage for firms that produce many different types of creative content. Access to data about consumer preferences is rapidly becoming a competitive necessity, and the inability to gather such data, on a massive scale, is a fundamental disability.
Increasingly, we will see the rise of firms that own large and even dominant digital distribution platforms but also produce content for those platforms. Indeed, this trend is visible already. Netflix, Amazon, and, not yet but perhaps soon, Spotify, use the data they collect on consumer preferences and usage to make decisions about advertisements. All now use this data to decide how to organize and recommend content to users.
And some use their data to produce content that is more effectively targeted to consumer preferences. It is this last twist — the use of data to shape content creation, which we refer to as “data-driven authorship” — that is ultimately the most interesting feature of this new model.
Link to the rest at SSRN
PG says indie authors are conducting a variation on the concept in the OP with increasingly sophisticated salting of key words within their promotional materials in order to attract the types of people who will want to purchase their books.
One example is the more frequent use of author or title comparisons in book descriptions, such as, “If you like Penelope Blunderbuss, you’ll love ________”
When Amazon’s algorithms are trying to present books a reader will want to purchase, if that reader has just finished a book by Penelope, the algorithms may bump a book that includes Penelope’s name up near the top of its suggestions for that reader.
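A minimal sketch of how such a comp-title match might boost a book in ranked suggestions. Amazon's actual ranking system is proprietary; the scoring scheme below is a hypothetical illustration of the mechanism PG describes:

```python
# Hypothetical recommendation-ranking sketch: a book whose description
# mentions an author the reader just finished gets a score boost.
# This is NOT Amazon's algorithm, only an illustration of the idea.
def rank_suggestions(books, recently_read_author):
    def score(book):
        base = book["popularity"]
        if recently_read_author.lower() in book["description"].lower():
            base += 10  # arbitrary boost for the comp-title keyword match
        return base
    return sorted(books, key=score, reverse=True)

books = [
    {"title": "Big Seller", "popularity": 8,
     "description": "A sweeping epic."},
    {"title": "Indie Comp", "popularity": 3,
     "description": "If you like Penelope Blunderbuss, you'll love this."},
]
ranked = rank_suggestions(books, "Penelope Blunderbuss")
print([b["title"] for b in ranked])  # ['Indie Comp', 'Big Seller']
```

The less popular book outranks the bestseller for this particular reader because its description names the author the reader just finished, which is exactly why authors salt their book descriptions with comp titles.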
This is the great, great, great grand-descendant of Search Engine Optimization, first used by PG about 15 years ago to push his company’s products higher in the Google search results when people searched for those products.
Search algorithms have become enormously more sophisticated during the intervening years, particularly at Amazon, where they know both what you’ve searched for and what you’ve purchased, but the first principle of a successful search engine – show the customer what the customer wants to see – hasn’t changed.
From Seeking Alpha:
Amazon’s interest in the $450 billion US pharmaceutical market is long-standing. The company already sells over-the-counter medicines like aspirin and antihistamines, to go along with its copious offerings of supplements and vitamins on its worldwide platform. It already has licensing to sell pharmaceuticals in 12 states (Nevada, Arizona, North Dakota, Louisiana, Alabama, New Jersey, Michigan, Connecticut, Idaho, New Hampshire, Oregon, Tennessee, with an application pending in Maine). And now with the purchase of the closely held, online, Manchester, New Hampshire-based PillPack that will clear in the latter part of the year, Amazon with the stroke of a pen has the necessary licensing to sell pharmaceuticals in 49 states. Investors were certainly quick to notice and the reaction hit markets with tsunami force. With the 28th of June announcement, brick and mortar stalwarts of the pharmaceutical retail world like Walgreens Boots Alliance, CVS Health and Rite Aid collectively shed about $11 billion in market value.
Amazon’s renowned logistical efficiencies and willingness to sacrifice short-term margins for long-term market share were at the fore of the market move. Most prescriptions in the US are still filled in person and the delivery of scripts remains highly fragmented. If Amazon did nothing else but centralize the distribution of pharmaceuticals, this alone could likely apply enough downward price pressure on the cost of drugs to deliver real savings to US consumers. Centralizing and organizing existing data is the most likely front of significant cost savings and returns for both the end user and company alike. It could provide augmented negotiating clout to force better pricing deals on drug manufacturers. Even if mere convenience becomes the watchword of the makeover, laying the groundwork for cost reductions in the delivery of pharmaceuticals to those in medical need is a worthy endeavor for a country that spends almost 18% of its total annual output on healthcare. While the ordering, dispensing and delivery of pharmaceuticals to consumers is heavily regulated by myriad agencies at almost every level of government, the logistics of the operation plays to Amazon’s well-honed strengths.
Link to the rest at Seeking Alpha
If the majors don’t play ball and give in to Target’s new sale terms, it could considerably hasten the phase down of the CD format.
Even though digital is on the upswing, physical is still performing relatively well on a global basis — if not in the U.S. market, where CD sales were down 18.5 percent last year. But things are about to get worse here, if some of the noise coming out of the big-box retailers comes to fruition.
Best Buy has just told music suppliers that it will pull CDs from its stores come July 1. At one point, Best Buy was the most powerful music merchandiser in the U.S., but nowadays it’s a shadow of its former self, with a reduced and shoddy offering of CDs. Sources suggest that the company’s CD business now generates only about $40 million annually. While it plans to pull CDs, Best Buy will continue to carry vinyl for the next two years, keeping a commitment it made to vendors. The vinyl will now be merchandised with the turntables, sources suggest.
Meanwhile, sources say that Target has told music suppliers that it wants to buy CDs on what amounts to a consignment basis. Currently, Target takes the inventory risk: it agrees to pay for any goods it is shipped within 60 days, and must pay to ship back unsold CDs for credit. With consignment, the inventory risk shifts back to the labels.
Link to the rest at Billboard and thanks to Dave for the tip.
From today’s letters to the editor of the Adirondack Daily Enterprise:
This is in reply to the May 22 article titled “$10 million wish list.” As a business owner, here are my thoughts.
We’re going after grant money to revitalize our village and particularly the business district. But has anyone bothered to ask themselves how we got this way in the first place?
Over the years we’ve lost a fair amount of mom-and-pop stores. Anybody know why? Do you suppose the business climate as a whole has changed drastically over the years? And if it has, what can we do about it?
In my industry, Christian bookstores that have been in business for over 30 years are closing. Cedar Springs Christian Bookstore in Knoxville, Tennessee, a bookstore larger than the old Newberry’s store we used to have in town, is now closed. Why? Because of competition from the internet. Unfortunately, they didn’t have enough loyal customers anymore, and competition from the internet was killing them. So these good folks just threw in the towel after more than 30 years.
Christian bookstores everywhere are blaming internet competition for hurting their business. Some are finding their own ways of fighting back, but others are just tired of fighting and they’re closing their doors. It seems that Amazon, eBay and various other websites have found that there’s money to be made in selling Christian product.
Now let’s look at our little village. What have we lost over the last few years? Has anybody bothered to ask why these stores closed? Granted, some of the owners either wanted to retire or do something else. But what about those who faced stiff competition from the internet and elsewhere, and decided they’d had enough?
Let’s take a look at the empty storefronts in town. First, most of them are too small. Many are under 1,000 square feet. How much inventory can you sell in that small a space? Not much. And as a small business owner, you have to pay top wholesale prices to get that inventory. You can’t purchase in large lots to get better prices. So your business ends up with a small selection and at what some folks would consider high prices. And since most people shop by price these days, you’re at a huge disadvantage compared to shopping for those same things online.
Secondly, most of those storefronts need work. Who pays for the needed renovations? Paint, paneling, flooring and whatever else that space needs isn’t cheap. And that’s just the stores’ renovations. You haven’t purchased store shelving, fixtures, displays or inventory! Fixtures and displays aren’t cheap. I have a two-sided lockable jewelry display. That display alone cost $900, and that’s just the display. I also have to pay for shipping to get it here. And remember this is just the display and not the product that goes inside. A four-sided display will run me over $1,000, and that doesn’t include the shipping. So anyone who thinks that they can throw a store together for a few bucks knows nothing of the real costs.
Thirdly, most who want to go into business have no idea how much money it’ll take. It’s been recommended that you have enough money to live off of for one to two years in the bank. Why? Because you won’t be able to write yourself a paycheck for at least that long, if not longer. The money people spend in your store will go for expenses, inventory and advertising. As one business owner said to a customer, “First you pay your rent. Then you pay your utilities. Then you pay your suppliers, and if there’s anything left over, you get it.”
There were other goals in this article like diversity shopping opportunities and providing low-cost retail space to encourage new businesses. But how do we diversify our shopping opportunities when we have small storefronts that cannot hold a lot of product? What kind of variety can a business offer in less than 1,000 square feet? And who provides low-cost retail space to encourage new businesses? Building owners have expenses to pay, too. And how long do those low costs last?
Think business hasn’t changed? When was the last time you went to the Plattsburgh mall? Today’s mall in Plattsburgh is a shell of what it used to be. In the food court we had a Philly steak place, Taco Bell, Subway, a pizza place, a Chinese eatery, a Mediterranean eatery, a chicken place and a Burger King. Today the steak place, Chinese and Mediterranean eatery, pizza place, chicken place and Burger King are gone. You had to look hard to find a place to sit and eat, but not anymore. On a Saturday afternoon the food court is empty. And you can almost drive a car down the main walkway and not hit anyone. It’s sad, but a reminder of how things have changed.
. . . .
Long gone are the days when customers would break your doors down if you just opened your store. If today’s retailers can’t get creative and find ways to bring in customers, they’re dead. When I was growing up, a 10 percent off storewide sale got everyone’s attention. Today 10 percent off gets nobody’s attention. Folks can find all the bargains they want in the comfort of their own home, and many do. I’ve also heard stories of folks who split up when they go shopping, and talk back and forth comparing prices on their phones.
So if we think that by putting in street trees, decorative light poles and sidewalks, we’ll somehow fill our empty storefronts, think again. If we’re not careful, we’ll be grasping at straws in order to fix a problem that we can’t fix. The internet has taken business away from every retailer in town, and if they tell you that’s not true, they’re probably lying.
And if you think that by hopefully getting a $10 million grant, we can fix things, I don’t think we’re being realistic. Perhaps instead of a grant, today’s retailers or would-be retailers need lessons on how to deal with competition, particularly on the internet. That’s more realistic.
Link to the rest at the Adirondack Daily Enterprise and thanks to DM for the tip.
From Publishing Perspectives:
‘Your competitors like Netflix, Amazon Prime and Audible,’ publishers will hear this year at BookExpo and the rival rights fair, ‘are more than willing to fill the gap.’
. . . .
The reality, he says, is that “big data” is not really the stuff of most publishers’ future traction in a digital world. Something that may well seem like “little data” is, because it’s more available, readable, and actionable than the “big data” operations of major tech forces in the marketplace.
And the “invitation to a wild ride” he’s talking about is one that some will not accept gladly. It requires studying and analyzing many available “tracks” and trends at once, right down to what’s in a publisher’s “own backyard,” as we might say. “Who on your staff and around your own house reports back, in some structured way,” he asks, “on what they read, or how their kids operate their smartphones?”
What Wischenbart says he’s seeing is that even in the largest houses, such as Penguin Random House with its armada of imprints “acting like little companies,” the corporation can certainly engage in larger data activities, “but they don’t have the tool set,” he says, “to listen to what their employees are doing.”
. . . .
“[E]ven traditional readers—a majority of them urban, well-educated and older than 40—have seen their ‘mobile time’ rising from a modest 26 minutes in 2012 to more than one hour in 2017.”
Among Millennials, he says, “mobile time” may be expanding to as much as three hours per day.
But look at corresponding numbers in publishing markets that Wischenbart cites in his new article.
In Germany, data in Wischenbart’s report shows more than 6 million book buyers disappearing in the past five years . . . . Today, publishers there, he says, see a maximum audience of some 30 million in a total population of 80 million.
. . . .
Wischenbart has his fictitious publisher say to herself, “We need to stick to our bread and butter, to the rare books that hit the top of the charts, the well-established authors. Well, we even need the copy-cat income, or other cheap thrills, to simply secure a continuous income.”
But is that true? Wischenbart agrees in an interview with Publishing Perspectives that the blockbuster isn’t where publishers can afford to focus today, and not only because we’re in a largely blockbuster-less drought in the US market.
Wischenbart agrees that the buyer of the biggest blockbuster may do no more for the industry and for reading than pay for her or his one copy: these are generally not habitual readers. They’re novelty readers, readers drawn to the occasional breakthrough phenomenon, entertainment patrons who drop in on the world of books to catch a peak moment, then sail off to cinema, video, games, and music.
“I would phrase it this way,” Wischenbart says from his office in Austria. “First, the transformation that has been predicted now is here. It has arrived. We’re not talking about the future.
“And the transformation is much deeper” than many who became fixated on ebooks and perhaps today are transfixed by audiobooks’ uptake might think. “It’s a transformation of consumer behavior and habits.
“Second, such rough waters of transformation are creating higher risk” than publishers may have realized, not least because they’ve thought of “digital” as being about formats and largely now accomplished.
. . . .
“I do see a difference in the US and UK markets and the rest of the world,” he says, in terms of how in the big US and UK markets, publishing has an upbeat sense that it knows where it’s going. “Hardly anyone in the industry in continental Europe or elsewhere feels so comfortable.”
The sense of greater comfort, command, and solidity in the UK and American markets, he agrees, may come from a plethora of self-congratulatory awards programs and morale-boosting coverage. “They’re always winning,” he says about such trends, which can lead a market to believe that all is going better than may be the reality.
. . . .
“Right now, my inkling is that a lot of truly critical information sits in drawers and on hard disks, underused, if noticed at all.
“We see, day by day, how publishing is getting ever more segmented. From formerly three distinct sectors, trade or consumer versus educational versus professional or academic, we have moved into an ever-thinner slicing of the cake that used to be served in the business of books.”
Link to the rest at Publishing Perspectives
PG has long noted that the traditional book business lacks even rudimentary data skills.
Its reliance on Nielsen and other data sources that do not include data from Amazon, by far the world’s largest bookstore, is Exhibit A.
Exhibit B is Big Publishing’s schizoid frenemy attitude toward Amazon, its largest customer.
For those who are newcomers to the recent history of Big Publishing’s strategies for dealing with ebooks and Amazon, in 2012, the United States Department of Justice charged Hachette, HarperCollins, Penguin, Simon & Schuster and Macmillan with illegally conspiring with Apple to fix ebook prices in the United States.
This group was conspiring to keep ebook prices high to prop up sales of printed books. Amazon, which was selling ebooks at low prices to help sell Kindle devices and expand the ebook market, was the target of this conspiracy.
In 2013, after each of these large publishers had admitted to acting in violation of antitrust laws, a trial judge found Apple guilty of participating in this same illegal price-fixing conspiracy. Apple appealed and the trial court’s decision was affirmed in 2015.
Exhibit C is Author Earnings, a small organization that does have people with good data skills.
Beginning in 2014, Author Earnings began to release a series of reports that detailed ebook sales on Amazon by both traditional publishers and by individual self-publishers working through the Kindle Direct Publishing program. This series of reports demonstrated that ebook sales by indie authors were a large and growing segment of the overall ebook market.
As additional Author Earnings reports were released periodically, they reflected the continuing growth in the market for indie-published ebooks. Indie authors came to dominate ebook sales in the romance, fantasy and science fiction genres.
Had Big Publishing been willing to hire employees with any sort of data skills, it could have duplicated the work of Author Earnings and developed even more sophisticated analyses because of access to its own ebook sales data (which was not made available to Author Earnings).
Big Publishing has consistently elected to base its business decisions on hunches generated by a small group of former English majors running its businesses in Manhattan. The “golden gut” school of publishing management has resulted in Big Publishing missing the ebook train and failing to treat Amazon as a potential window into the rapidly-changing and ever-growing ebook market.
Another disadvantage Big Publishing has is that, by New York City standards, it doesn’t pay very well. A twenty-something with data skills can receive a much larger salary from any number of other employers who are not in the publishing business.
PG will restrain himself from commenting on the blinkered view of the world common in the large European holding companies that own all but one of the largest US publishers. Suffice to say, New York publishing executives are not receiving a lot of phone calls and emails from Europe urging them to invest more in technologies and people that will position the publisher favorably for a new and different future.
Those of us who may be short on time and haven’t been able to get to that autobiography we’ve been meaning to write need worry no longer: Artillect Publishing will do the work for you by scanning your online presence, merging some ancillary information and producing your sure-to-be best-selling biography. Using artificial intelligence in its production process, Artillect is just one example of the increasing number of applications of artificial intelligence (AI) replacing inefficient processes, creating new products and adding insight to publishing.
News organizations including the Washington Post and Associated Press have been using artificial intelligence tools to create news reports for weather, sports and financial reporting where interpretation of the day’s (or game’s) activity can be fairly straightforward. As these uses have grown in acceptance and utility, the use of AI to deliver more complex products is also growing. AI tools can analyze text, images and data and deliver to a journalist sufficiently structured content around which they can build articles and stories. AI tools can do this faster, more comprehensively and with greater accuracy than traditional research methods. In analyzing text or images, AI tools can characterize the content: Positive/negative, liberal/conservative, for example. Journalists have even used this capability to change editorial content to match specific political viewpoints, creating liberal, center and/or conservative versions of the same article.
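At its crudest, characterizing text as positive or negative can be done with a word-list scorer. The lists below are illustrative stand-ins, not the statistical models real newsroom tools use, but they show the shape of the task:

```python
# A toy lexicon-based polarity scorer: count positive and negative
# words and report the sign of the difference.

POSITIVE = {"triumph", "growth", "praised", "success", "strong"}
NEGATIVE = {"scandal", "decline", "criticized", "failure", "weak"}

def polarity(text):
    words = {w.strip(".,").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("Analysts praised the company's strong growth."))
```

The same idea, with a richer lexicon or a trained classifier, underlies the liberal/conservative and positive/negative tagging described above.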
. . . .
[J]ournal publisher Taylor & Francis (T&F) announced two partnerships with AI companies to add AI tools to their editorial processes. In the first of these, T&F are working with Katalyst Technologies to create “contextual copyediting” using AI and natural language processing to assess and score the language quality of articles accepted into their journal’s workflow. This use of AI is designed to make their editorial process more efficient by identifying and classifying journal submissions.
. . . .
StoryFit: Uses machine learning and data analysis to predict content marketability, improve discovery and drive sales for publishers and movie studios.
. . . .
Ross Intelligence: Using a combination of IBM Watson and proprietary algorithms, ROSS is the AI-driven successor to tools like LexisNexis and supports both legal discovery and legal research findings.
Link to the rest at Personanondata
From National Public Radio:
In 1984, two men were thinking a lot about the Internet. One of them invented it. The other is an artist who would see its impact on society with uncanny prescience.
First is the man often called “the father of the Internet,” Vint Cerf. Between the early 1970s and early ’80s, he led a team of scientists supported by research from the Defense Department.
Initially, Cerf was trying to create an Internet through which scientists and academics from all over the world could share data and research.
Then, one day in 1988, Cerf says he went to a conference for commercial vendors where they were selling products for the Internet.
“I just stood there thinking, ‘My God! Somebody thinks they’re going to make money out of the Internet.’ ” Cerf was surprised and happy. “I was a big proponent of that. My friends in the community thought I was nuts. ‘Why would you let the unwashed masses get access to the Internet?’ And I said, ‘Because I want everybody to take advantage of its capability.’ ”
. . . .
Cerf admits all that dark stuff never crossed his mind. “And we have to cope with that — I mean, welcome to the real world,” he says.
. . . .
While Cerf and his colleagues were busy inventing, the young aspiring science fiction writer William Gibson was looking for a place to set his first novel. Gibson was living in Seattle, and he had friends who worked in the budding tech industry. They told him about computers and the Internet, “and I was sitting with a yellow legal pad trying to come up with trippy names for a new arena in which science fiction could be staged.”
The name Gibson came up with: cyberspace. And for a guy who had never seen it, he did a great job describing it in that 1984 book, Neuromancer: “A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.”
. . . .
But, it isn’t just the Internet that Gibson saw coming. In Neuromancer, the Internet has become dominated by huge multinational corporations fighting off hackers. The main character is a washed-up criminal hacker who goes to work for an ex-military officer to regain his glory. And get this: The ex-military guy is deeply involved in cyber-espionage between the U.S. and Russia.
Gibson says he didn’t need to try a computer or see the Internet to imagine this future. “The first people to embrace a technology are the first to lose the ability to see it objectively,” he says.
Link to the rest at National Public Radio
From The Register:
A team within Google Brain – the web giant’s crack machine-learning research lab – has taught software to generate Wikipedia-style articles by summarizing information on web pages… to varying degrees of success.
As we all know, the internet is a never-ending pile of articles, social media posts, memes, joy, hate, and blogs. It’s impossible to read and keep up with everything. Using AI to tell pictures of dogs and cats apart is cute and all, but if such computers could condense information down into useful snippets, that would really be handy. It’s not easy, though.
A paper, out last month and just accepted for this year’s International Conference on Learning Representations (ICLR) in April, describes just how difficult text summarization really is.
A few companies have had a crack at it. Salesforce trained a recurrent neural network with reinforcement learning to take information and retell it in a nutshell, and the results weren’t bad.
However, the computer-generated sentences are simple and short; they lack the creative flair and rhythm of text written by humans. Google Brain’s latest effort is slightly better: the sentences are longer and seem more natural.
. . . .
The model works by taking the top ten web pages of a given subject – excluding the Wikipedia entry – or scraping information from the links in the references section of a Wikipedia article. Most of the selected pages are used for training, and a few are kept back to develop and test the system.
The paragraphs from each page are ranked, and the text from all the pages is combined to create one long document. The text is then encoded and shortened to the first 32,000 individual words, which are used as input.
This is then fed into an abstractive model, where the long sentences in the input are cut shorter. It’s a clever trick used to both create and summarize text. The generated sentences are taken from the earlier extraction phase and aren’t built from scratch, which explains why the structure is pretty repetitive and stiff.
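The pipeline described above — rank the source paragraphs, merge them into one long document, truncate to a token budget, then shorten — can be sketched roughly as follows. The real system uses a neural abstractive model and a more sophisticated ranking; the word-overlap scoring and sentence-clipping here are only stand-ins for those stages:

```python
# Extractive stage: rank paragraphs by word overlap with the topic,
# concatenate them, and keep at most `budget` tokens as model input.
def extract(paragraphs, topic, budget=32000):
    topic_words = set(topic.lower().split())
    ranked = sorted(paragraphs,
                    key=lambda p: len(topic_words & set(p.lower().split())),
                    reverse=True)
    merged = " ".join(ranked)
    return merged.split()[:budget]

# Stand-in for the abstractive stage: clip each sentence to a fixed
# length (the real model rewrites rather than merely truncates).
def abstract(tokens, max_sentence_words=12):
    text = " ".join(tokens)
    short = [" ".join(s.split()[:max_sentence_words]).rstrip(",;")
             for s in text.split(". ") if s]
    return ". ".join(short)

paras = ["alpine skiing history",
         "the internet summarization of web pages",
         "cooking with butter"]
tokens = extract(paras, "web summarization", budget=5)
summary = abstract(tokens)
```

Because the output is stitched together from extracted text rather than generated from scratch, the repetitive, stiff structure the article notes falls directly out of this design.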
Link to the rest at The Register
PG thinks an easier job would be to create an algorithm that would produce interview quotes from European publishing executives.
He suggests seeding the algorithm with words like stupid, price, protect, booksellers, kill, enriched, Amazon, obscene, amuck, insane, predatory, greedy, la fréquence, répugnant, sale américain, dégradé, goulu, aliéné and vorace.
PG spent a few minutes re-familiarizing himself with websites that generate random words, sentences, etc., to see if he could locate one to inspire him with potential quotes from European publishing executives.
He did not find exactly the right tool for that task, but he did discover InspiroBot, a lovely site to help you create beauteous and profound social media posts. (Yes, it can be addictive.)
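For what it's worth, PG's proposed quote generator is only a few lines of Python. The templates are invented here, and the seed list is abbreviated:

```python
import random

# A template-filling toy that emits randomized "European publishing
# executive" quotes from PG's seed words.

SEEDS = ["stupid", "predatory", "greedy", "insane", "obscene", "vorace"]
TEMPLATES = [
    "Amazon's {a} pricing will kill booksellers.",
    "We must protect our authors from this {a}, {b} behavior.",
    "Their {a} discounting has run amuck.",
]

def executive_quote(rng=random):
    a, b = rng.sample(SEEDS, 2)          # two distinct seed words
    return rng.choice(TEMPLATES).format(a=a, b=b)

print(executive_quote())
```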
The newspaper printing presses may have another decade of life in them, New York Times CEO Mark Thompson told CNBC on Monday.
“I believe at least 10 years is what we can see in the U.S. for our print products,” Thompson said on “Power Lunch.” He said he’d like to have the print edition “survive and thrive as long as it can,” but admitted it might face an expiration date.
“We’ll decide that simply on the economics,” he said. “There may come a point when the economics of [the print paper] no longer make sense for us.”
“The key thing for us is that we’re pivoting,” Thompson said. “Our plan is to go on serving our loyal print subscribers as long as we can. But meanwhile to build up the digital business, so that we have a successful growing company and a successful news operation long after print is gone.”
. . . .
Revenue from digital subscriptions increased more than 51 percent in the quarter compared with a year earlier. Overall subscription revenue increased 19.2 percent.
Link to the rest at CNBC and thanks to Delores for the tip.
From Good EReader:
Augmented reality is the hot new trend, and many developers are focusing on the gaming aspect. I think that with the advent of the new Intel Vaunt and similar products, AR will be the future not only of shopping but of bookselling.
. . . .
They’re just regular old prescription or non-prescription glasses you would wear during the day and charge at night. There’s not a computer attached to your head or some weird bulky attachment to your existing glasses. They pair with your smartphone, which will keep costs low. Intel has stated that they are not going to market the glasses directly to consumers, but will partner with a firm to bring them to market, likely Amazon.
. . . .
I think the future of shopping in bookstores will be with AR glasses. When you walk into your average bookshop such as Amazon Books, Barnes and Noble or Indigo Books and Music, there are precious few books that are front facing and the rest are stacked side by side. How do you know if a book is new, old, or highly rated? I can imagine a day when your glasses will offer customized features for bookstore shopping, such as reviews that float over a book via the GoodReads API framework, and clicking on a virtual review will give you everything you need to know.
Link to the rest at Good EReader
PG wonders whether anyone will be physically walking into a structure at a fixed location with AR glasses. He bets Amazon (or a future Amazon competitor) will have incredible virtual bookstores you can walk through wherever you are with your AR glasses (or AR helmet or AR bodysuit). Alexa will probably be walking around Amazon’s AR bookstore, too, reminding people that their timers have expired.
From the Sierra Nevada College Eagle’s Eye:
Nick Visconti reminisces about his first two-page spread in Transworld Snowboarding. Months after the trick was shot, he held a physical copy of his scrapes, bruises, and triumph. The photo of him, upside down, one hand holding him against the wall and the other grabbing his board, 20 feet above the ground, lives in print. Now, Visconti, a former pro, X Games medalist and social media influencer, navigates an industry where progression and style are measured through the immediacy of likes and followers.
“In one viral video or photo, anyone or any brand can far surpass traditional publications’ ability to connect to active buyers, potential customers or interested audiences,” Visconti said.
The media system and the way we gather information is in a drastic state of change, and in snowboarding the culture has moved from the pages of print magazines to the less tangible virtual world of digital media. The evolution of trends, tricks, and brand marketing strategies that was once tracked in the pages of Transworld, SNOWBOARDER, and Snowboard is now a relic of snowboarding’s nascent past. Now, Snowboard Magazine exists only digitally, and other mainstream publications have slashed their print runs to as infrequently as once a year.
. . . .
“The majority of content consumed is in a digital platform, be it on a computer or a phone,” Tom Monterosso, senior editor and photographer at SNOWBOARDER Magazine, said. “Fewer kids are buying magazines and more kids have access to a phone or a computer than ever, so the content that comes out is consumed, digested and passed through much more quickly.”
Content is also conforming to readers’ dwindling attention spans. Marketing specialists for Boreal mountain resort and Woodward Tahoe who track analytics have discovered the average watch time for any video or photo across all social networks is 22-23 seconds.
“For web, it’s short, sweet and digestible,” Monterosso said. “Read it and keep browsing. For social, it’s ‘viral,’ which I personally believe is a terrible term.”
. . . .
As a brand that deals with marketing in this new world, Boreal has pulled out of most billboards, newspapers, and magazines and shifted its advertising to social media.
“I see much more value in getting 10 Instagram posts on SNOWBOARDER Magazine’s Instagram, than running a full-page ad,” Tucker Norred, Boreal marketing and communications manager, said. SNOWBOARDER Magazine has a following of more than 1 million on Instagram. Brands want access to that far-reaching platform.
Link to the rest at Eagle’s Eye
From veteran publishing consultant Mike Shatzkin:
When Amazon showed a willingness to sell ebooks for Kindle at prices below what publishers charged them, the big legacy publishers became alarmed. They could see no end to the switch to ebooks, and it seemed logical to figure out a way to encourage competition across ebook ecosystems.
Their solution, aided and abetted by the new Apple iBooks ecosystem that debuted in April of 2010, was to move from “wholesale” pricing, where the retailer controlled the ultimate price to the consumer, to “agency”, where the publisher was the seller to the consumer and controlled the price. The intermediary — the retailer — was just an “agent” without pricing power.
This led to antitrust action by the US government, after which agency pricing was allowed, but only under newly negotiated agreements between each of the major publishers and their vendors, including Amazon. And the DOJ made sure that those agreements entitled the retail “agent” to discount from the publisher’s agency price, as long as the aggregated discounts to consumers didn’t exceed the retailer’s aggregate margin on those ebooks.
They needn’t have bothered. Amazon was essentially done with the strategy of discounting big publishers’ ebooks. And big publishers are left wondering whether they should be glad they got what they wished for. Let’s remember that those discounts from Amazon came from their share of the price; now with agency protocols, publishers can only discount ebooks by reducing their own take!
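The discount cap reduces to a simple aggregate inequality. A sketch, assuming a 30 percent agency commission (the commonly reported figure, not stated in the OP):

```python
# DOJ constraint sketch: a retailer may discount individual agency-priced
# ebooks, provided total discounts given do not exceed its aggregate
# commission margin across those titles.

COMMISSION = 0.30  # assumed agency commission rate

def discounts_within_cap(titles):
    """titles: list of (agency_price, sale_price, units) tuples."""
    total_discount = sum((p - s) * u for p, s, u in titles)
    total_margin = sum(p * COMMISSION * u for p, s, u in titles)
    return total_discount <= total_margin

# one title sold at full price can fund a discount on another
ok = discounts_within_cap([(12.99, 9.99, 100), (12.99, 12.99, 100)])
```

Since the discount pool is the retailer's own commission, every dollar of discount comes out of the retailer's take, not the publisher's, which is exactly Shatzkin's point about who bears the cost now.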
. . . .
Author-driven publishing just continued to grow as the installed base of Kindle and other ebook platforms grew faster and faster, once smartphones and tablets spread like wildfire and removed the need for a dedicated ebook device. With Amazon establishing a royalty rate for its own self-published authors of 70 percent of the selling price, equivalent to what agency publishers collected, successful self-publishers could make substantial money with very low-priced ebooks and zero or near-zero revenues from print.
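The per-copy arithmetic behind that claim is worth seeing. The traditional-contract figures below (25 percent of the publisher's roughly 70 percent share of the agency price) are common industry assumptions, not numbers from Shatzkin's piece:

```python
# Per-copy author earnings: a $2.99 self-published ebook at a 70% royalty
# nets about as much as a typical big-publisher royalty on a $12.99
# agency-priced ebook.

indie_per_copy = 0.70 * 2.99         # self-published: about $2.09 per copy
trad_per_copy = 0.25 * 0.70 * 12.99  # traditional: about $2.27 per copy
```

At those rates an indie author selling at a quarter of the traditional price gives up almost nothing per copy, which is why very low-priced ebooks can still pay substantially.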
. . . .
[E]ach week now, a handful of those genre Amazon Publishing ebook titles are handily selling more units than most of the titles on the NYT and USA Today’s best seller lists. Amazon found it relatively easy to grow market share in those areas where the bookstore sale, and even the online print sale, was diminishing in favor of the ebook.
. . . .
That has produced the world where big publishers with their agency-priced ebooks tell us that ebook sales have flattened or declined and that print book sales are holding their own, but Amazon says ebook sales are continuing to grow. And it is also a world where the big publishers are working feverishly, and largely futilely, to make their non-Amazon sales grow.
. . . .
Data Guy, first encouraged by indie author star Hugh Howey (one of the early beneficiaries of the changed marketplace), is now one of the principals behind Bookstat.com, an online-sales database built by scraping Amazon and other major online retailers. Bookstat’s real-time dashboard presents a consolidated, title-level view of the online US market, current through yesterday. It includes Amazon sales. It separates out Amazon Publishing from the indie authors Amazon enables. And, when used alongside data from Bookscan, Bookstat now lets us back out how brick-and-mortar sales alone are faring in relation to online.
. . . .
1. Amazon continues to grow its share of print and digital sales. It appears to be approaching half of all print sales and more than 90% of ebook sales.
Data Guy says:
On the print front, Amazon is indeed very close to half the US market: Our own Bookstat-derived total of 312 million print units sold by Amazon in 2017 is 45.5% of Bookscan’s total reported 2017 print sales of 687 million, which means Amazon sales now comprise the majority of Bookscan’s “Club & Retail” share. Even allowing for the other 15%-20% of US print sales that remain untracked by Bookscan, that puts Amazon’s US print share at no less than 40%. And that’s ignoring another 10-15 million unreported Amazon print sales a year from CreateSpace titles that aren’t trackable through Ingram “expanded distribution.”
Amazon’s share of US print sales is still growing rapidly. In the prior year, 2016, the 280 million Amazon online print sales Bookstat reports were only 41.7% of 674 million total units, and in 2015 Bookstat’s 246 million print-unit total for Amazon was only 37.7% of Bookscan’s 653 million reported units. So Amazon’s online print sales continue to grow by a double-digit percentage each year.
Barnes & Noble, the next largest retailer of print books, was by our math (based on their public financial reporting) contributing 23% of Bookscan’s total in 2017, which means that B&N has shrunk to the point where it now moves only half as many print books a year as Amazon, and B&N’s own financials show those remaining sales shrinking by 4% a year.
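PG notes that Data Guy’s share arithmetic can be checked directly from the unit counts he quotes. A minimal sketch in Python, using only the figures from the quote above (small differences from the quoted percentages come from rounding in the quoted unit counts):

```python
# Back-of-envelope check of Data Guy's market-share arithmetic.
# (Amazon print units, Bookscan total units) in millions, by year,
# as quoted above.
data = {
    2015: (246, 653),
    2016: (280, 674),
    2017: (312, 687),
}

def amazon_share(year: int) -> float:
    """Amazon's fraction of Bookscan-tracked print units for a year."""
    amazon, total = data[year]
    return amazon / total

for year in sorted(data):
    print(f"{year}: {amazon_share(year):.1%}")
```

Run as-is, this reproduces the quoted trajectory (roughly 37.7%, 41.5%, and 45.4%), close to the percentages Data Guy cites and consistent with his claim of double-digit annual growth in Amazon’s share.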
. . . .
2. The overall market is growing, but Amazon Publishing and indies are the growing segments. All of legacy publishing, including the Big Five, is sharing a diminishing pool of “what’s left” after that growth.
. . . .
3. Legacy publishing below the Big Five is suffering even more, seeing its sales shift to Amazon even faster than the major houses are.
. . . .
In ebook sales, both Big Five and non-Big Five legacy publishers have ceded a huge chunk of market share to non-traditional players over the last several years; roughly half of the ebook market in unit terms, and nearly a third of it in dollar terms.
Link to the rest at The Shatzkin Files
PG has witnessed disruptive change driven by new technologies in several different industries (and participated in more than one of those changes).
From that viewpoint (admittedly personal), PG suggests that no industry has reacted to disruptive innovations – ecommerce and ebooks – in a more pathetic and self-defeating manner than Big Publishing and its bricks-and-mortar sales and distribution infrastructure.
At every major juncture, when faced with a decision, Big Publishing has chosen the wrong path.
Antitrust violations, failing to understand ebooks in the marketplace, allying itself with Apple and against Amazon, failing to hire people who might have had a chance to revive a moribund industry (it’s too late now), engaging in the sleaziest tactics with authors (hello, Harlequin!), etc., etc., etc.
PR spinmeisters (“digital fatigue,” “back to print”), trying to get the Justice Department to bring an antitrust case against Amazon for selling ebooks at low prices, looking at opportunity and seeing dystopia, and so forth.
From The New Yorker:
If you wanted to hear the future in late May, 1968, you might have gone to Abbey Road to hear the Beatles record a new song of John Lennon’s—something called “Revolution.” Or you could have gone to the decidedly less fab midtown Hilton in Manhattan, where a thousand “leaders and future leaders,” ranging from the economist John Kenneth Galbraith to the peace activist Arthur Waskow, were invited to a conference by the Foreign Policy Association. For its fiftieth anniversary, the F.P.A. scheduled a three-day gathering of experts, asking them to gaze fifty years ahead. An accompanying book shared the conference’s far-off title: “Toward the Year 2018.”
The timing was not auspicious. In America, cities were still cleaning up from riots after Martin Luther King, Jr.,’s assassination, in April, and protests were brewing for that summer’s Democratic National Convention. But perhaps the future was the only place left to escape from the present: more than eight hundred attendees arrived at the Hilton.
. . . .
Invitees were carefully split by the F.P.A. between over-thirty-fives and under-thirty-fives—but, less carefully, they didn’t pick any principal speakers from the under-thirty-fives. As their elders mused on a future of plastics and plasma jets, without mention of Vietnam and violence in the streets, there was muttering among the younger attendees. Representatives from Students for a Democratic Society demanded time at the mike and circulated a letter questioning whether the conference was for “discussion or brain washing.”
. . . .
If participants on either side emerged from the Hilton ballroom confident of what 2018 would look like, they soon found themselves disabused about predicting 1968. A week later, Bobby Kennedy was shot dead, and the prospect of grasping the present, let alone the future, seemed further away than ever. And that was about when “Toward the Year 2018” arrived in bookstores.
“more amazing than science fiction,” proclaims the cover, with jacket copy envisioning how “on a summer day in the year 2018, the three-dimensional television screen in your living room” flashes news of “anti-gravity belts,” “a man-made hurricane, launched at an enemy fleet, [that] devastates a neutral country,” and a “citizen’s pocket computer” that averts an air crash. “Will our children in 2018 still be wrestling,” it asks, “with racial problems, economic depressions, other Vietnams?”
. . . .
The Stanford wonk Charles Scarlott predicts, exactly incorrectly, that nuclear breeder reactors will move to the fore of U.S. energy production while natural gas fades. (He concedes that natural gas might make a comeback—through atom-bomb-powered fracking.) The M.I.T. professor Ithiel de Sola Pool foresees an era of outright control of economies by nations—“They will select their levels of employment, of industrialization, of increase in GNP”—and then, for good measure, predicts “a massive loosening of inhibitions on all human impulses save that toward violence.” From the influential meteorologist Thomas F. Malone, we get the intriguing forecast of “the suppression of lightning”—most likely, he figures, “by the late 1980s.”
. . . .
Oettinger is now the sole surviving “Toward the Year 2018” essayist to witness the era he predicted so well fifty years ago. For some attending the accompanying conference, the changes didn’t need nearly that long to unfold: Edwin Yoder saw the news profession digitize with a speed that rendered the most ambitious predictions quaint. “The huge press room at the G.O.P. convention in Kansas City in 1976 was a noisy din of typewriters,” he recalls now. “Four years later, it was eerily quiet, as if cushioned.”
Just as conspicuous, though, is what was missing altogether from the book. Not a single writer predicts the end of the Soviet Union—who in their right mind would have?
. . . .
“We can see it now,” one newspaper mused. “Sane people in the year 2018 will be yearning for a return to simpler times and the ‘good old days’ of the 1970s.”
Link to the rest at The New Yorker
PG suggests that items like the OP should be a source of humility for those living in today’s world.
In every era, large numbers of people believe that, unlike people of previous generations, their day’s consensus has finally understood what’s important and not important, true and untrue, what will work and not work.
Fifty years ago, in 1968 (part of the era included in the good old days of the 1970s), one of the driving forces for craziness on college campuses was the Vietnam War. It was certainly not the only craziness, but the war and the likelihood of students having to serve in the military amplified the intensity of the craziness.
For male college students, the military draft could be avoided so long as they were full-time students. Those who graduated from or dropped out of college in 1968 lost their student deferments shortly after graduation. More than a few, regardless of their opinion about the war, were required to become soldiers and obey the orders they received to shoot other people, accept the likelihood they would be shot at in return, blow up other people, burn other people, etc., etc.
About 10% of those who served in the military in Vietnam became casualties, either killed or (more likely) wounded so severely they could no longer perform military duties. In part because of the nature of the war, mental illness, either short-term or long-term, was also common. For these and other reasons, avoiding the draft was a frequent topic of male student conversations as graduation neared.
In 1968, PG suspects most men who would be subject to military service knew someone who had been severely wounded, permanently disabled or killed in Vietnam.
Some college students entered the military voluntarily to have some control over which branch of the service they joined and how they would serve. A man who enlisted in the ranks was at the bottom of the military hierarchy but could finish his mandatory service sooner than an officer who enlisted would. However, serving as a draftee generally involved a shorter period of service than enlisting did.
The Army and Marines included most foot soldiers directly involved in ground combat. Most draftees served in the Army, almost always as enlisted men. Junior officers in the Army and Marines were more likely than others to be shot at by enemy soldiers because they were directly involved in leading soldiers in battle.
The Navy was a mixture of enlisted personnel serving on ships or in shore installations generally isolated from the fighting and pilot officers who could describe exactly what a surface-to-air or air-to-air missile looked like when it was fired at them and how they tried to avoid being killed by the missile. Plus, the “Brown-Water Navy” consisted of officers and seamen who were assigned to small boats that patrolled marshy areas with heavy vegetation. The men of the Brown-Water Navy were subject to frequent ambushes and intense firefights with enemy soldiers who were hiding only a few yards away.
The Air Force was generally regarded as the safest service in which to serve. Anyone but a pilot could be pretty certain that he would not come under direct enemy fire. The Air Force was also the most difficult service to get into.
Returning from service was also very strange. In prior wars, a group of men would train together, serve together for the duration of the war and return together when the war was over. During the Vietnam war, when a draftee or enlistee finished his term of service, unless he elected to continue his military duties, he would often go home by himself. A few days earlier, he might have been on a dangerous patrol in the jungle and today, he was walking through the airport in San Francisco, trying to resolve the contradictions, still jumpy with his battle reflexes warning him of danger lurking around every corner.
After the Vietnam War was over, the military draft was eliminated, and all members of the military, male and female, are volunteers today. The Afghan and other Middle Eastern wars have been fought solely by volunteer soldiers and, unlike earlier eras, the demographics of those who serve in the military and those who do not are no longer similar.
Male Privilege would not be a thing on college campuses until many years after 1968. Male students during that era would have found rich irony in the concept.
The OP makes no mention of the likelihood of large-scale wars in 2018, fifty years after 1968. PG suggests this omission is unrealistic.
Here is a timeline of wars that have involved a significant percentage of the US population in the fighting.
American Civil War – 1861-1865 – More US casualties than any other war – 646,000 US casualties (2.385% of US population)
World War One – 1914-1918 – WWI began 49 years after the end of the Civil War – 320,518 US casualties
World War Two – 1939-1945 – WWII began 21 years after the end of WWI – 416,800 US casualties
Korean War – 1950-1953 – Korean War began 5 years after the end of WWII – 128,650 US casualties
Vietnam War – 1964-1975 – (11 year duration) Vietnam War began 11 years after the end of the Korean War – 211,454 US casualties
War in Afghanistan – 2001-present – (17 years and counting) Afghan war began 26 years after the end of the Vietnam War – 20,904 US casualties (to date)
Iraq War – 2003-present – (15 years and counting) Began two years after the beginning of the Afghan war – 36,710 US casualties (to date)
PG suggests the history of wars in which the US has been and is involved does not suggest a future in which all major problems will be solved and the world can focus its efforts and resources on the improvement of humanity.
While PG is generally optimistic about the future of the human race, he finds it interesting that most major predictions for the future, of the type exemplified in the OP, do not mention the armed conflicts of the future, as if such conflicts will have no impact on future society.
From Seeking Alpha:
The tremendous ructions occurring in the retail industry continue and are gaining momentum at a rapid pace as Amazon and the growth of e-commerce progress. Already the number of bankruptcies in the retail industry for 2017 thus far has exceeded all of 2016, and there are signs of more to come.
Indeed, even retailers typically perceived to be resistant to the disruptive influence of Amazon and the rapid growth in popularity of e-commerce have proven vulnerable.
The Oracle of Omaha, Warren Buffett, considered by many to be the world’s greatest investor, also chose to weigh in on the debate earlier this year, stating at the Berkshire Hathaway annual meeting:
The department store is online now, . . .
There is a range of signals indicating that it is only going to get worse for traditional bricks-and-mortar retailing, which makes it foolish for investors to consider investing in the industry.
. . . .
North American retailers are filing for bankruptcy at a record rate this year. According to industry data, over 35 retailers in the U.S. alone have filed for bankruptcy this year, with some of the standout names being Toys R Us, Payless ShoeSource and Radio Shack. It isn’t the first time for Radio Shack; it filed for bankruptcy protection just a little over two years ago because of similar problems, including a challenging operating environment, rising competition and dwindling sales.
. . . .
The bad news doesn’t stop there: many major department store chains have focused on cutting costs by reducing their operational footprint through store closures, because the unprecedented competition created by e-commerce and Amazon has left very few other options.
One-time industry leader Sears is aggressively closing stores in a desperate bid to survive. The embattled retailer closed 180 stores during the fiscal year 2017 and plans to close another 150 by the end of its fiscal third quarter, which amounts to roughly 10% of its remaining Sears and Kmart locations. For the second quarter, revenue fell by a deeply worrying 23% year over year while comparable store sales declined 11.5%.
. . . .
Department store chain J.C. Penney, which saw second-quarter comparable store sales slip by 1.3% year over year, doesn’t appear to be much healthier. It has also embarked on an ambitious restructuring strategy which involves closing 138 stores over the coming months.
. . . .
Long-time industry stalwart Macy’s is also planning to close 88 stores and lay off thousands of employees.
. . . .
According to the report, grocery shopping’s transition to online will occur at a far more rapid rate than in other industries that have already made the shift, such as banking or media, because of a greater acceptance of e-commerce among consumers.
Younger, newer and more engaged digital shoppers adopt digital technologies more quickly, and will hasten the expansion of digital grocery shopping further.
. . . .
In a stunning revelation of just how fast e-commerce sales will grow, the National Retail Federation has forecast that they will expand by 8% to 12% annually, around three times the growth rate of total retail sales, indicating that e-commerce’s share of total retail sales will grow at a rapid clip.
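To see why a growth rate roughly three times that of total retail implies a rapidly growing share, here is a small compounding sketch. Every number in it is an illustrative assumption, not a figure from the article: a 9% starting e-commerce share, 10% annual e-commerce growth, and 3.3% annual total-retail growth.

```python
def project_share(start_share: float, ecom_growth: float,
                  total_growth: float, years: int) -> float:
    """Project e-commerce's share of total retail sales after `years`,
    assuming constant annual growth rates for each."""
    ecom = start_share * (1 + ecom_growth) ** years
    total = 1.0 * (1 + total_growth) ** years
    return ecom / total

# Illustrative assumptions only: 9% starting share, 10%/yr e-commerce
# growth, 3.3%/yr total retail growth.
for years in (5, 10, 15):
    share = project_share(0.09, 0.10, 0.033, years)
    print(f"after {years} years: {share:.1%}")
```

Under these assumptions the share nearly doubles within a decade, which is the mechanism behind the "rapid clip" claim: a persistent growth-rate gap compounds into a large share shift.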
. . . .
For the reasons discussed investing in bricks-and-mortar retailers is becoming increasingly unappealing and risky. The depth and breadth of the industry’s transformation coupled with rapidly changing technology as well as an increasing appetite among consumers to accept technological changes places almost every bricks-and-mortar retailer under threat.
Link to the rest at Seeking Alpha
PG notes that some of the recent discussions about Barnes & Noble on TPV have tended to focus on physical bookstores vs. Amazon as an isolated battle.
As indicated by the OP here and in many other posts on TPV, the movement from bricks & mortar to online sales is a megatrend affecting all sorts of different retailers. If people buy children’s clothing online instead of going to Target and small appliances online instead of going to Sears and office supplies online instead of going to Staples, why would books be any different?
There is one additional factor that does make books special, but not in a way that benefits Barnes & Noble and other physical bookstores.
PG is not aware of eclothing or eappliances, but he and many others are regular consumers of ebooks.
Due to a combination of disastrous management decisions and incredible ignorance of ecommerce and all other things internet, Barnes & Noble squandered the opportunity to leverage its brand and its relationship with millions of longtime customers into a dominant online store for ebooks (which carry very high profit margins once properly designed infrastructure is in place) and physical books.
Competent management of any b&m bookstore chain should have looked at ebooks as a wonderful source of increased revenues and profits. Instead of supporting a business structure to deal with thousands of poorly-paid store employees managed by hundreds of not much better paid store managers, a relative handful of well-compensated technical, design and marketing employees located in one place could have generated expanding revenues with consistently higher profit margins.
PG appears to be suffering from an attack of run-on sentences today, so he will stop. The blindness of the entire traditional book business to the opportunities for online sales, particularly of ebooks, is prime fodder for dozens of business school lectures, case studies and discussions for decades.
From The Wall Street Journal:
Google is rolling out a package of new policies and services to help news publishers increase subscriptions, a move likely to warm its icy relationship with some of the biggest critics of its power over the internet.
Google said it will end this week its decade-old “first click free” policy that required news websites to give readers free access to articles from Google’s search results. The policy upset publishers that require subscriptions, believing it undercut their efforts to get readers to pay for news.
Google, a unit of Alphabet Inc., said it also plans tools to help increase subscriptions, including enabling users to log in with their Google passwords to simplify the subscription process and sharing user data with news organizations to better target potential subscribers.
With billions of people using its search, YouTube and other web properties, Google has an outsize influence on a wealth of industries and modern society.
. . . .
The new publisher rules are good news for the print industry, which has largely struggled to convert its business model to the internet as print advertising sales have plummeted in the digital age. Google and Facebook dominate the internet ad industry, and news organizations are increasingly reliant on those two tech giants for web traffic. Google says it drives 10 billion clicks a month to publishers’ sites.
Some newspapers even asked Congress this year to exempt them from antitrust laws so they could negotiate collectively with the tech giants.
. . . .
“We really recognize the transition to digital for publishers hasn’t been easy,” Google Chief Business Officer Philipp Schindler said in an interview. He said a strong news industry boosts the utility of Google search and helps Google’s ad business, which sells ads on news sites. “The economics are pretty clear: If publishers aren’t successful, we can’t be successful.”
. . . .
Kinsey Wilson, the former executive editor of USA Today who now advises New York Times Co., said publishers must be careful about letting Google be the middleman to their readers. “Google can remove some friction,” he said, “but publishers have to stay vigilant.”
Link to the rest at The Wall Street Journal
From Publishers Weekly:
During its annual meeting, held Tuesday morning at its flagship store in New York City, Barnes & Noble chairman Len Riggio voiced support for the company’s new CEO, Demos Parneros, who was named to his current role in April.
During the meeting, Riggio called Parneros “the perfect fit” to help the company grow its top line and improve profits. Observing that Parneros “has brought lots of energy to the company,” Riggio said he is looking forward to watching the executive over the next few years, noting that Parneros shares his vision and will revive B&N “store by store.”
. . . .
Riggio also assured shareholders that B&N is no longer in the tech business. While the Nook e-reader and e-books will remain a part of the company’s offerings to customers, bricks and mortar stores will be its focus. Riggio explained that when e-book sales began exploding several years ago, B&N felt it had no choice but to enter the digital market. In retrospect, Riggio said, B&N didn’t have the culture or financing to compete with the likes of Amazon and Google.
Instead, according to Riggio, B&N will focus on its physical stores and will partner with technology companies to keep a presence in the digital space. “There is no business model in technology” for B&N, Riggio acknowledged.
Link to the rest at Publishers Weekly and thanks to Nate at The Digital Reader for the tip.
PG says the Nook business was doomed from its earliest days. The big reasons are:
Riggio didn’t want to pay for top online talent.
This was evident from the first time PG visited the Nook Store. Poorly designed and poorly executed. And it never really changed.
Real tech talent is rare and in great demand. In the beginning, for the right money, skilled tech people would have gone to work at Nook, but Barnes & Noble wanted to pay bookstore salaries.
PG has no idea if Nook tried to hire really good talent at the right price after it became clear that the Nook Store was a disaster. Unfortunately, by that time, serious tech talent wouldn’t have come regardless of salary because nobody wants to clean up someone else’s mess and a line mentioning the Nook Store would have been deadly on the résumé.
Besides, nobody would have believed Barnes & Noble stock options would ever make them rich at that point.
The Nook Store set ebook prices at a level designed to support the print book prices in its stores.
One of PG’s least favorite things to hear during a product planning meeting is, “We don’t want to cannibalize our existing business.”
The problem is that, if your business is cannibalizable by you, it’s cannibalizable by somebody else. Jeff Bezos has always been a happy cannibal.
Low ebook prices combined with instant availability fueled Amazon’s early dominance. Over time, by cultivating successful indie authors, in part by using Kindle Unlimited, Amazon has added tens of thousands of high quality titles that Riggio couldn’t sell if he wanted to.
Amazon vs. Big Bookstores and Big Publishing is going to be a classic business case used in MBA programs around the world for decades to come. Brains and speed beat money and size once again.
From The Digital Reader:
I have been writing about industry trends in bits and pieces in each news story, but it has been a long while since I last pulled everything together, took a step back, and told you what I see.
I can sum it up in a single sentence: The major publishers are dead because they bet against digital, which is the future.
The thing about the major publishers is that they thought they could make the market go where they wanted.
They didn’t want ebooks to cannibalize print sales, so they conspired with Apple in early 2010 to bring about the Agency model. Then they doubled down on their bet with Agency 2.0, and hedged that bet by sabotaging subscription ebook services like Scribd and Oyster by saddling them with nonviable business models.
It is now 2017, and book publishing is in the later stages of a transition to digital.
. . . .
The major publishers bet against digital, and they continue to do so, and it is going to kill them in the long run. In fact, we can see them die bit by bit. First they dropped mid-list authors, then they started dropping best-selling authors.
Link to the rest at The Digital Reader
PG thinks the illegal collusion between the big publishers to force Amazon to set higher prices for ebooks was an important milestone on their path to suicide. They got together in various New York restaurants to engage in face-to-face groupthink.
Here’s a summary from Wikipedia:
The Publisher Defendants sold over 48% of all e-books in the U.S. in the first quarter of 2010. The Publisher Defendants along with Random House Publishing are the six largest publishers in the United States (collectively the Publishers) and are often referred to as the “Big Six” in the publishing industry. In 2009 Amazon.com Inc. had nearly 90% of the e-books industry. Amazon charged $9.99 for certain new releases and bestselling e-books which helped make it the market leader in the sale of e-books and e-readers with its Kindle.
Amazon’s price point caused discontent among the Publishers. The Publishers believed that the low price point was a problem for their sales of more profitable hardcover books. Approximately every three months, the CEOs of the Big Six would meet in private dining rooms in New York restaurants “without counsel or assistant present, in order to discuss the common challenges they faced, including most prominently Amazon’s pricing policies.” The Publishers used several different strategies to fight against Amazon’s pricing point, including selling e-books for the same price as their printed version through a continued wholesale model and “windowing” new releases. Windowing is a tactic that would delay the release of books to their e-book form for a certain window of time.
. . . .
Amazon sent a letter to the Federal Trade Commission complaining about the simultaneous nature of the demands for agency model agreements from the Publishers who had signed with Apple. By March, Amazon had completed agency agreements with four of the five publishers. During the negotiations over the agreements, the publishers would talk with each other and share information about what Amazon would concede to each. Apple was closely following all of this progress, and Cue was in contact with the publishers. Amazon’s move to agency amounted to “an average per unit e-book retail price increase of 14.2% for their new releases, 42.7% for their NYT Bestsellers, and 18.6% across all of the Publisher Defendants’ e-books.” The Publishers also raised the price of some of their new-release hardcover books so as to move the e-book versions into a correspondingly higher price tier. Amazon saw e-book sales from Random House (which for the moment had not joined Apple) increase by 41%. Two studies showed that the Publishers who moved to the agency model sold over 10% fewer units at major retailers. In contrast, other publishers’ sales increased 5.4% in the same period. In January 2011, Random House also moved to the agency model and raised the prices of its e-books, and then experienced a decline in its e-book sales. This allowed Random House to join the iBookstore.
. . . .
Beginning on December 8, 2009, Apple’s senior VP of Internet Software and Services, Eddy Cue, contacted the Publishers to set up meetings for the following week. During the meetings, Cue suggested that Apple would sell the majority of e-books between $9.99 and $14.99, with new releases being $12.99 to $14.99. Apple also adopted the agency model, which it used in its App Store, for distribution of e-books. This let Publishers control the price of the e-books, with Apple receiving a 30% commission. Apple set up price tiers for different books and included an MFN clause in its contract with the Publishers, which allowed Apple to sell e-books at its competitors’ lowest price.
. . . .
On the day of the launch, Jobs was asked by a reporter why people would pay $14.99 for a book in the iBookstore when they could purchase it for $9.99 from Amazon. In response Jobs stated that “The price will be the same… Publishers are actually withholding their books from Amazon because they are not happy.” By stating this, Jobs acknowledged his understanding that the Publishers would raise e-book prices and that Apple would not have to face any competition from Amazon on price.
This collusion between the top executives of five out of the (then) six major US publishers to destroy Amazon’s pricing model for ebooks helped accelerate the development of anti-Amazon/anti-ebooks groupthink throughout Big Publishing.
Later, when the Justice Department charged these publishers with illegal anti-competitive behavior and publicly humiliated their management by requiring an admission of guilt and forcing monetary settlements, the anti-Amazon/anti-ebook sentiment blossomed into something of an industry-wide psychosis.
Publishing couldn’t live without Amazon and hated the company even more for their dependence upon it.
When Borders, the second largest bookstore chain in the US, went bankrupt in 2011, that shocking event should have set alarm bells ringing in CEO offices of every publisher.
The second-largest bricks-and-mortar customer for every major US publisher had just imploded. Perhaps it was time for some new thinking? Would the future be a lot different than the past? What a silly thought.
Borders would have been happy to sell its assets to virtually any willing purchaser, but smart money was not interested. Neither was dumb money, and about 650 retail bookstores in the US simply disappeared.
At the time of the Borders bankruptcy, reporters and business writers (often relying on traditional publishing sources) concluded that Borders had made a big mistake by working with Amazon to sell ebooks. On the other hand, Barnes & Noble was brilliant because it had spent lots of money to build up its Nook business as a viable competitor to Amazon’s Kindle.
Amazon Derangement Syndrome was running rampant through the publishing business and that, combined with widespread ignorance of technology among management, blinded them to a simple fact that was evident to anyone with an ounce of internet savvy: Amazon was much, much better at selling books (and a lot of other things) online than Barnes & Noble and the gap between the two organizations was growing at a rapid pace.
The traditional book industry and its convoy of pet pundits have not gotten any part of selling online right for well over ten years and show no indication that anything is going to change in the next ten years (to be clear, PG is not predicting that Big Publishing has ten more years ahead of it).
Barnes & Noble is running on fumes. Whether it continues to sink into the sunset or suddenly implodes won’t impact the overall trajectory of the retail book business. It’s dying. At this point, even if Barnes & Noble were able to hire talented management, PG thinks it’s too late for that to make a difference.
When Barnes & Noble is gone, what’s left for legacy publishing? A bunch of mom and pop bookstores. There may be some fancier moms and pops in Manhattan and Washington DC, but they’re all small businesses with tiny profit margins.
PG ran out of time before he could bloviate about traditionally-published authors heading for the exits and hedge funds taking over management of the gazillion legacy publishing contracts which represent the only value of Big Publishing.
I was struck by Jeff Jarvis’s recent polemic, ‘If I ran a newspaper…’ published on Medium.
In it, he quoted an unnamed editor’s description of the predicament he — and many of us — find ourselves in:
“We have two houses. One is on fire and the other isn’t built yet. So our problem is that we have to fight the flames in the old house at the same time we’re trying to figure out how to build the new one.”
He was, of course, describing the rock-and-a-hard place dilemma that’s beset legacy media brands for more than a decade now: We know print is declining fast, and the future’s digital, but the problem is most of our revenues are still in the former, and the latter will never generate the money we made back in the day.
I’ve lived in this cleft stick for most of my career. The legendary ‘tipping point’ is still talked about hypothetically years after it should have become a reality for more of this country’s legacy media — particularly in the regions. The tipping point comes when your digital revenue growth offsets your print revenue decline. Rather than waiting reluctantly for it to happen — or indeed trying to postpone it — we should have been doing everything to make it happen on our terms. Unfortunately, I think the industry dragged its feet for too long.
. . . .
We announced this week that we are creating a new, standalone and sustainable digital business that could be a model for similar enterprises across the UK and beyond.
At the heart of the new operation is a digital-only newsroom forged from the team that has made BirminghamMail.co.uk the fastest-growing regional news website in the UK for much of the past year. Thanks to my team’s efforts, we reach more than 50% of Brummies every week, and now we want to reach even more with our new approach.
At the same time, we want the new model to be completely self-sustainable, achieving a profit driven by programmatic and solus digital advertising, and not over-dependent on print upsell from legacy clients. There’ll be whole new revenue streams, too.
The new newsroom will be more than digital-first; it will be digital only.
. . . .
When you lose pounds in print, you only ever get pennies back online / we’ll never make enough money to have a newsroom as big as it was ten years ago.
True(ish), and true. Sadly, we know the future requires the business to be leaner and more flexible than we are now, and despite years of seemingly endless restructures and job losses, we will have to make further reductions. We are building the new model by asking the question: “What size newsroom can we afford, given what we know about our current and future digital scale, how much programmatic revenue we get, and how much new digital revenue we think is out there in the market?”
Link to the rest at Medium
PG says the OP describes a constructive way to deal with disruptive innovation. “What would my business look like if I were starting it from scratch today?” It’s a much better management strategy than, “How can I preserve my existing business when the economics of the market it serves have completely changed?”
The more common strategy of downsizing, then downsizing some more, then further downsizing is self-defeating in the extreme.
- Employee morale tanks and stays tanked with deleterious effects on the enterprise.
- Talented new people who might like the idea of working in a particular business stay away because of justified skepticism about the long-term future of the business and a rational desire to avoid a sinking ship.
- The talent level of new hires is lower than that of veteran employees.
- Existing employees who can leave do leave, taking their experience and abilities with them.
- The percentage of staff who stay with the business because they can’t get a job elsewhere skyrockets.
PG considers himself typical of many traditional readers of printed newspapers.
Growing up, he pretty much read every newspaper that arrived in his home cover to cover every day. When he commuted to work by train, he bought and read one newspaper in the morning and another in the evening.
(Yes, my young friends, in large cities, some newspapers published every morning and others published every afternoon. Chicago had four major daily papers, two morning and two afternoon, plus at least a half-dozen other dailies devoted to particular audiences, African-Americans, for one example.)
When he didn’t commute via mass transit, PG had at least one daily newspaper delivered to his home, usually two.
A couple of years ago, PG observed that many issues of the two daily papers he received were piling up, largely or completely unread. He stopped The Wall Street Journal, but still pays for access to the entire digital edition.
At first, he kept his subscription to the local daily paper because he has always wanted to support local news organizations. However, when a week or two would pass without him reading any physical papers, he quit renewing that subscription as well.
Perhaps based upon years of habit, PG’s delivery person has continued to drop the local daily on his driveway, despite PG not having paid for a subscription for several months. PG has wondered if the paper’s management is trying to artificially pump up its subscription numbers.
A number of years ago, PG remembers reading a science fiction story about a future in which a large percentage of the population wore body cameras.
InfoTrends says people will take 1.2 trillion digital photos this year. That’s 100 billion more than last year and nearly double the number taken as recently as 2013.
The rate at which photo taking grows is currently clocked at a whopping 100 billion per year – that means each year humanity takes 100 billion more photos than it did last year.
I think that rate is about to accelerate. And the reason is wearable cameras.
As the cost goes down, quality goes up and ease of use improves (through miniaturization, better software and better batteries), wearable cameras will become more compelling.
These will arrive in the form of clip-on cameras, smartwatch cameras and cameras attached permanently or temporarily to glasses, including smart glasses.
. . . .
Just a few years ago, nobody could have predicted or imagined what’s now acceptable public behavior with a smartphone camera. People shamelessly pose and posture in public for selfies without embarrassment. They take pictures of their food and drinks in restaurants. They take selfies in the bathroom mirror.
. . . .
It turns out that the location of a wearable camera makes all the difference for how it’s used.
Badge-style clip-on cameras are acceptable for “lifelogging” applications – jogging your personal memory about places you go and people you meet. But they’re horrible for “photography.” Because the physical cameras move around, sit at odd angles and aren’t directly controlled by the user (they tend to shoot photos at intervals, or take video), the pictures are universally bad, save that one odd lucky shot.
Wrist-worn cameras are best used as expedient replacements for smartphone cameras – group shots, vacation snapshots and selfies.
As Google Glass wearers learned, eyeglasses-based cameras can take amazing photos. They point the camera where the user is looking, and show a first-person, this-is-what-I-saw picture, which can be photographically compelling.
. . . .
Smartglasses will use camera electronics and lenses as much for data gathering as photography. Images and video will be processed for object and face recognition and this data will be fed back into the AR application. Looking at a table with a goldfish bowl on it, an AR app will know that a virtual kitten can stand on the table but not the bowl, and a virtual shark can swim in the bowl but not the table. In AR, cameras aren’t for photography.
Other applications will capture photos or video all day, and process it through artificial intelligence systems to provide extremely good data on activity, behavior and environment.
Best of all, photography can be retroactive, either as photography or as data.
For example, instead of taking pictures of their food while they’re eating it, consumers can just tell their virtual assistant at the end of the day: “Post a picture of that pie I ate.” A.I. will reach into the recorded video, grab the best still shot of the pie and post it online. From a data perspective, we’ll ask that same assistant: “How many slices of pie did I eat last year?”
. . . .
The co-founder and CEO of Shonin, Sameer Hasan, told me wearable cameras will be initially focused on quality control and documentation, medical applications and security. They’ll be immediately usable for “instruction and demonstration, live entertainment and news reporting.”
Wearable cameras will enable AR to “process video information in real time and instantly provide the wearer with analysis and recommendations based on what the camera is seeing,” according to Hasan.
Link to the rest at TechConnect
As PG remembers the scifi story, the ubiquity of video recording devices was a great assistance to the totalitarian government that collected all the video.
Minutes after the epic finale of the seventh season of Game of Thrones, fans of the show were already dismayed to hear that the final, six-episode season of the series isn’t set to air until spring 2019.
For readers of the A Song of Ice and Fire novel series on which the TV show is based, disappointment stemming from that estimated wait time is laughable. The fifth novel in the planned seven-novel series, A Dance with Dragons, was published in 2011, and author George R.R. Martin has been laboring over The Winds of Winter since, with no release date in sight. With no new source material, producers of the TV series have been forced to move the story forward themselves since late season 6.
Tired of the wait and armed with technology far beyond the grand maesters of Oldtown, full-stack software engineer Zack Thoutt is training a recurrent neural network (RNN) to predict the events of the unfinished sixth novel.
“I’m a huge fan of Game of Thrones, the books and the show,” said Thoutt, who had just completed a Udacity course on artificial intelligence and deep learning and used what he learned to do the project. “I had worked with RNNs a bit in that class and thought I’d give working with the books a shot.”
. . . .
“It is trying to write a new book. A perfect model would take everything that has happened in the books into account and not write about characters being alive when they died two books ago,” Thoutt said. “The reality, though, is that the model isn’t good enough to do that. If the model were that good authors might be in trouble. The model is striving to be a new book and to take everything into account, but it makes a lot of mistakes because the technology to train a perfect text generator that can remember complex plots over millions of words doesn’t exist yet.”
. . . .
“I start each chapter by giving it a prime word, which I always used as a character name, and tell it how many words after that to generate,” Thoutt said. “I wanted to do chapters for specific characters like in the books, so I always used one of the character names as the prime word … there is no editing other than supplying the network that first prime word.”
George R.R. Martin isn’t going to be calling for writing tips anytime soon, but Thoutt’s network is able to write mostly readable sentences and is packed with some serious twists.
Link to the rest at Motherboard
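The prime-word generation loop Thoutt describes can be sketched with a toy stand-in. The `train_bigram` and `generate` helpers below are hypothetical names chosen for illustration, and a bigram lookup table stands in for the trained RNN; the sampling loop itself — one character name in, a fixed number of sampled words out, no editing afterward — has the same shape.

```python
import random

def train_bigram(text):
    """Count word-to-next-word transitions (a toy stand-in for a trained RNN)."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, prime, n_words, seed=0):
    """Start from a prime word (e.g. a character name) and sample n_words more."""
    rng = random.Random(seed)
    out = [prime]
    for _ in range(n_words):
        choices = model.get(out[-1])
        if not choices:  # dead end: the word never appeared mid-corpus
            break
        out.append(rng.choice(choices))
    return " ".join(out)
```

With a real corpus the lookup would be replaced by an RNN sampling from a learned distribution over the vocabulary, which is what lets it string together longer (if error-prone) passages than a bigram table can.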
PG predicts AI-written books will be common within two years.
He doesn’t know if they will be very good, but it will be interesting to watch the technology develop.
PG also predicts that AI-written books won’t put good human authors out of business.
The Kindle was Amazon’s first standalone product, marking a departure from its online bookstore strategy and into the world of consumer electronics. While the company has never officially announced sales figures, its success paved the way for the release of the Fire tablet, Fire TV Stick, ill-fated Fire Phone, and wildly successful series of smart speakers: the Dot, Echo, Tap and Show, and smart camera Look, all of which Limp watches over as senior vice president of Amazon devices and services.
. . . .
As the Kindle became the go-to term for an e-reader, the Echo, according to Limp, who is in London to visit Amazon’s vast new Shoreditch offices, “kind of invented a new category”. The voice-powered smart speaker, controlled by artificially intelligent digital assistant Alexa, went on sale in the UK last year and is capable of setting timers, browsing the internet and even telling jokes through the cloud, meaning it gets smarter over time. Such is Alexa’s ubiquity that people have started calling Echos ‘Alexas’.
. . . .
What sets Amazon apart from other consumer electronics companies is its desire to “do more than just make a gadget, and by gadget, I mean things you would buy and most likely put in a drawer a few months later,” Limp says, explaining that for people to keep using an Echo, it must constantly evolve to stay relevant and useful.
. . . .
“Our view is that these AIs need to work together. If we get to a world where there’s just one of them, that’s not good for customers,” Limp says. “You want to have the ability for AIs to collaborate, and certain AIs will be good at some things, and others will be better at others. When you use Alexa to order a pizza from Dominos, that’s invoking another AI that [is] an expert in pizza.”
. . . .
AI is “foundational” across all of Amazon’s departments, not just consumer electronics, he says, adding that children born today “will never remember a world when they weren’t able to talk to their house and the things around them – and that’s the way it should be.”
Link to the rest at iNews and thanks to Suzie for the tip.
From The Vulture:
After more than 60 years, the Village Voice is shutting down its weekly print edition. Founded in 1955 and converted into a free weekly in 1996, the Voice built its name as one of the country’s first alt-weeklies by covering and critiquing New York politics, culture, and more with its distinctive downtown sensibility. The progressive alt-weekly plans to continue on in digital form, according to an announcement from Peter Barbey, who purchased it in October 2015 amid financial struggles, and will also continue to sponsor events like the Obie Awards and Pride Awards. “[The Voice] has been a beacon for progress and a literal voice for thousands of people whose identities, opinions, and ideas might otherwise have been unheard. I expect it to continue to be that and much, much more,” Barbey said. “The business has moved online — and so has the Voice’s audience, which expects to do what we do not just once a week, but every day.”
. . . .
The no-longer-weekly alt-weekly has a hallowed history of defining and exemplifying New York counterculture, having been founded by Norman Mailer, Ed Fancher, Dan Wolf, and John Wilcock. It launched the careers of numerous authors and journalists.
Link to the rest at The Vulture
A question popped into PG’s mind as he read this item. Because there was not much else popping in his mind, he noticed this question.
Is there any traditional print publication that has transitioned to a pure digital form (and binned the print side) that has prospered?
By prospering, PG doesn’t mean surviving or claiming a zillion website visits. Rather, he means the publication demonstrates at least some of the traditional outward manifestations of prosperity – hiring more people on a consistent basis without laying them off later, moving to larger offices, etc.
Does any such transitioned publication make more money than it did before the digital deluge?
According to Statista, The New York Times had 5,363 employees in 2012 and 3,710 employees in 2016.
In June of this year, hundreds of staff members of the New York Times staged a walkout to protest more firings. The laid-off copy editors wrote a letter:
“We only ask that you not treat us like a diseased population that must be rounded up en masse, inspected and expelled,” they wrote. “After all, we are, as one senior reporter put it, the immune system of this newspaper, the group that protects the institution from profoundly embarrassing errors, not to mention potentially actionable ones.”
From CBC News:
The Globe and Mail will stop delivering its print edition to the Maritimes, the newspaper said Monday.
Phillip Crawley, the publisher and CEO, said it followed the decision made in 2013 to stop printing in Newfoundland and Labrador.
“In keeping with the same policy, we have watched print subscriber numbers declining in the Maritimes over the last few years as we’ve seen digital subscriptions increase,” he told CBC News in a phone interview.
“It gets to the point where it makes no sense to keep on subsidizing print delivery to that degree, where it’s costing us $1 million a year to do that, and that’s where it’s now at with the Maritimes.”
. . . .
The newspaper recently hired a new Atlantic Canada correspondent, filling a post that had been vacant for more than a year. Crawley said Halifax-based Jessica Leeder will start reporting in September.
“We’re very much interested in the stories coming out of the Maritime provinces. We have a national audience that would expect us to do that.”
People anywhere can still get the digital version of the newspaper.
Subscribers were informed of the change via email this week. “Our core mission is to invest in journalism that matters, so the money now being spent on subsidizing uneconomic delivery routes will be redirected to creating content for all of our customers across the country,” the email reads in part.
Crawley said the Globe and Mail will still provide national coverage.
“We never said we’d deliver to every town, village, hamlet or whatever. We haven’t done that. We make a decision on where it makes sense based on the number of people who want to read it,” Crawley said.
Link to the rest at CBC News and tip-thanks to Tudor, who says this would be like a major American newspaper cutting off delivery of its print edition to New England.
PG hopes people who live in the Maritime provinces have good internet connections but suspects they may not.
From The Digital Reader:
I live in Vancouver, WA, with Powell’s Bookstore in Portland about 40 miles away. I was thrilled when the B&N stores opened in the area, they were a lot smaller than Powell’s, but were closer, and had the coffee bar.
B&N has since operated like it is run by people two years from retirement who want to keep everything static to protect that retirement. For the most part, they fight the Internet rather than embrace it.
. . . .
My memory says that B&N had the first cheap Android tablet, but it was locked down to work only as a book reader, until hackers made it useful. Then, as a hacked, useable tablet, its sales exploded. Then, instead of giving away the razor and making money on the blades (this is exactly what Amazon does with tablets), B&N made it a profit center and failed to compete with Amazon.
B&N has operated from fear, where Amazon grabs the bull by the horns and goes, understanding that they will make errors.
For example, Amazon works with Overdrive to provide ebooks for libraries. A lot of people don’t like library ebooks because they expire in 1-3 weeks, but people like me read library ebooks all the time. I don’t think Amazon loses many, if any, sales servicing library customers, but it does get them to its website.
Link to the rest at The Digital Reader
PG says Barnes & Noble was perfectly situated to dominate the ebook market, but management didn’t understand digital and wanted to protect its legacy business instead of following its customers.
I’ve spent the last eight years building the case for using audience data in newsrooms and the tools and culture required to make it a force for good. So reading Franklin Foer’s piece When Silicon Valley Took Over Journalism was a deeply bizarre experience. Over the first four years of my work at the Guardian, I encountered almost every possible objection to what I was doing and I thought long and hard about each one. At that point a great deal of my job was about making sure that my instincts, processes and arguments were genuinely robust. Foer’s piece is a collection of the most ill-considered objections I saw, blended into one long, unappealing cocktail.
To be clear, there’s the odd thing here that I agree with. It’s obviously true, for example, that publishers need to be more robust in dealing with technology companies (although ironically one of the things we should be asking of them is more data). It’s also true that homogeneity can be dangerous. But the intelligent application of data can also help us spot where this is doing damage.
But most of these arguments, prejudices masquerading as arguments, childish hopes that everything can just go back to ‘normal’ and windy emotional appeals are zombies. They’re dangerous, stupid and they have no business climbing out of their grave and causing damage in 2017.
Technology isn’t an amorphous lump of stuff
The single biggest problem with the piece is the tendency to take anything that went wrong around the New Republic and the wider industry, gather it up and shove it in a big bucket labelled Silicon Valley or Technology. At various points Foer covers the use of data, data itself, platforms, algorithms, selling advertising, advertising itself, revenue, virality and search engine optimisation. Considering the experience he went through, the absolute conviction dripping from each line and his expressed commitment to good journalism, it’s surprising that he seems to have given so little thought to any of these specific issues.
. . . .
Audience data isn’t just page views…
We would resist the impulse to chase traffic, to clutter our home page with an endless stream of clicky content. Our digital pages would prize beauty and finitude; they would brashly announce the import of our project — which he described as nothing less than the preservation of long-form journalism and cultural seriousness.
Considering this stated aim, why the hell is the only metric mentioned in this epic piece page views?
. . . .
Growing audience doesn’t have to be a con trick
People clicked so quickly, they didn’t always fully understand why. These decisions were made in a semiconscious state, influenced by cognitive biases. Enticing a reader entailed a little manipulation, a little hidden persuasion.
Here’s another failure of imagination and thought (not to mention a pretty contemptuous view of the capabilities of readers in the 21st century). Buzzfeed’s viral strategy is basically inapplicable to serious journalism, as partly evidenced by Buzzfeed News’s difficulties in replicating the audience of the broader company despite great journalism. Equally, Upworthy’s aggregation and headline testing approach just doesn’t relate. In both cases the nature of the content is as important as the method of delivery. But using audience data to spot when a story you care about isn’t connecting with readers is a hugely positive thing. There’s a world of difference between writing misleading headlines and making a headline work well in a digital environment. Part of resisting that first impulse is to do with monitoring time spent on page, a metric Foer never mentions.
Link to the rest at Medium
When PG read the initial piece referenced in The Atlantic, he dismissed it as a rant by someone who didn’t understand modern communications technology (the author, a former editor of The New Republic, was fired from that job because he didn’t adapt well to new technology).
This Medium article does a good job (in PG’s disruptive opinion) of disassembling the original Atlantic essay to more fully demonstrate that author’s lack of understanding of modern communications technology.
PG also suggests that Big Publishing is dominated by people who don’t understand modern communications technology.
When PG wrote the preceding paragraph, he had a sudden vision of the executive offices of a major New York publisher with little orange Amazon warehouse robots scooting around the hallways.
From The Digital Reader:
Have you ever had one of those moments where you kinda sorta agree with someone’s conclusion and yet still disagree with many of the assumptions that lead to the conclusion?
That’s how I feel towards a piece published in The Bookseller earlier today.
Simon Rowberry argues that ebooks aren’t dead, but his arguments betray legacy industry biases. For example, he cavalierly tosses off the assumption that $10 ebook prices are unsustainable.
The fall in revenue from ebooks is a direct consequence of legacy publishers’ prioritization of print sales at the expense of digital books. The Kindle’s North American launch in 2007 marketed new ebook titles at $9.99, a discount of at least $10 on the hardback equivalent. This approach was unsustainable, but it set readers’ expectations for the cost of ebooks.
What’s funny about this assumption are the many indie authors who would disagree, or the publishers like Baen Books that price all of their ebooks under $10.
Baen Books has been selling its ebooks at what Rowberry would describe as an unsustainable price for close to twenty years, and yet they have somehow managed to pull it off.
And that’s not the only data that Rowberry didn’t include. A little earlier in the piece he cites stats from the UK PA and then vaguely hand waves at reasons why the data is incomplete:
But despite the early promise of the ebook, many are questioning whether it has lived up to these expectations. In recent years, the ebook has faced significant backlash amid reports of declining sales in trade publishing. The Publishing Association Yearbook 2016 noted a 17% slump in the sale of consumer ebooks while physical book revenue increased by 8%. Over the last couple of years, audiobooks have replaced ebooks as digital publishing’s critical darling on the back of a rapid increase in revenue. In this climate, several commentators have asked “how ebooks lost their shine.”
The ‘ebook plateau’ argument also ignores emergent sectors of digital-only sales, including self-publishing, where new genres drive a vibrant and divergent market. Amazon facilitates most self-publishing sales, and the company steadfastly refuses to provide sales data for books published exclusively on the Kindle. So a potential increase in sales for emergent digital-only genres is hidden by the headlines about traditional publishers.
Yeah, the data is only obscured if you refuse to go looking for it.
I am referring of course to the Author Earnings Report and the pseudonymous Data Guy (who does answer press queries about the latest data).
We know exactly the limits of the stats from the PA (it misses 38% of the UK ebook market), and that is the point that Rowberry should have made.
. . . .
He actually thinks the legacy industry could kill off ebooks:
For the moment, reports of the ebook’s death are exaggerated. If the disinterest of Amazon and resistance from the book trade continue, however, there is a chance that the ebook is killed off – in my view, prematurely.
While an industry can refuse to supply the market with what the market wants, that industry cannot kill that want.
And in the case of digital goods, it cannot prevent consumers from adopting the digital goods – they’ll just turn to someone else to supply the content.
Link to the rest at The Digital Reader
For a thought experiment, consider an alternative history in which printed books did not exist and ebooks are the only way books have been distributed/sold/read. As with today’s conditions, tablets, ereaders and computers are in wide use.
In that history, if someone invented the printed book and was promoting it as an alternative to ebooks, what would the reaction of the publishing industry be?
Printed books are far too expensive to produce, distribute and sell. Readers will never accept the size and weight of a printed book compared to their ereading devices.
Where will printed books be stored? Thousands of ebooks take up a bit of disk space while the same number of printed books will consume vast physical spaces that could otherwise be used for far more productive purposes.
And the forests! Think of the devastating impact on thousands of acres of beautiful trees!
PG was impressed by this in that many of the activities shown describe communications that individuals create themselves, at least in part.
On the other hand, 103,447,520 spam emails are sent every minute.
From The Wall Street Journal:
Robot developers say they are close to a breakthrough—getting a machine to pick up a toy and put it in a box.
It is a simple task for a child, but for retailers it has been a big hurdle to automating one of the most labor-intensive aspects of e-commerce: grabbing items off shelves and packing them for shipping.
Several companies, including Saks Fifth Avenue owner Hudson’s Bay Co. and Chinese online-retail giant JD.com Inc., have recently begun testing robotic “pickers” in their distribution centers. Some robotics companies say their machines can move gadgets, toys and consumer products 50% faster than human workers.
Retailers and logistics companies are counting on the new advances to help them keep pace with explosive growth in online sales and pressure to ship faster. U.S. e-commerce revenues hit $390 billion last year, nearly twice as much as in 2011, according to the U.S. Census Bureau. Sales are rising even faster in China, India and other developing countries.
That is propelling a global hiring spree to find people to process those orders. U.S. warehouses added 262,000 jobs over the past five years, with nearly 950,000 people working in the sector, according to the Labor Department. Labor shortages are becoming more common, particularly during the holiday rush, and wages are climbing.
. . . .
Picking is the biggest labor cost in most e-commerce distribution centers, and among the least automated. Swapping in robots could cut the labor cost of fulfilling online orders by a fifth, said Marc Wulfraat, president of consulting firm MWPVL International Inc.
“When you’re talking about hundreds of millions of units, those numbers can be very significant,” he said. “It’s going to be a significant edge for whoever gets there first.”
. . . .
In RightHand Robotics’ Somerville, Mass., test facility, mechanical arms hunt around the clock through bins containing packages of baby wipes, jars of peanut butter and other products. Each attempt—successful or not—feeds into a database. The bigger that data set, the faster and more reliably the machines can pick, said Yaro Tenzer, the startup’s co-founder.
Hudson’s Bay is testing RightHand’s robots in a distribution center in Scarborough, Ontario.
“This thing could run 24 hours a day,” said Erik Caldwell, the retailer’s senior vice president of supply chain and digital operations, at a conference in May. “They don’t get sick; they don’t smoke.”
Link to the rest at The Wall Street Journal (Link may expire)
From the Bookseller:
What happens when we run out of time?
This might sound like a philosophical question, but with the explosion in content and entertainment offerings such as social media and freemium games, we are rapidly approaching a state of peak attention. I define peak attention as the moment when the competition for our attention reaches saturation – when there is no more time to spare and something else must miss out.
As the old saying goes, time is the ultimate finite resource. Increasingly, ours is being spent online.
Herbert Simon first coined the term ‘attention economy’ back in 1971. His simple conclusion was that an explosion of information must lead to a scarcity of what it consumes: our attention. It is as if he foresaw the entire rise of social media, with its endless content feeds. We now collectively spend more than 10bn hours a week on the main social platforms, and that figure is rising fast. The total attention equation is different still. Between online and offline media platforms, the average American spends one more hour per day than they did just two years ago – almost 11 hours a day in total.
Simultaneously, from 2005 to 2015, the average amount of time Americans spent reading for personal interest on weekend days and holidays fell by six minutes, to 21 minutes per day; on normal workdays it fell to 17 minutes – a 22% decrease in a decade.
. . . .
I believe the advent of the data feedback loop from users, now a reality with all digital media, will prove the game changer. Software can now learn on its own, powered by unprecedented computational power and vast data sets of real human behaviour. Imagine a book that gets better and better suited to its audience every time it is read, gradually personalising to fit each person’s preferred narrative direction.
These new self-learning systems will inevitably get very good at hooking us in – and keeping us there.
. . . .
Rather than simply living side by side in harmony, there is a compounding effect on the competition for attention across all the media we consume. Every new entertainment offering and attention-consuming activity essentially raises the bar for all the incumbent things people used to spend time on. We have entered a state of hyper-competition. If everyone increasingly fights for the same attention pool, something must inevitably lose out. And that’s going to be books, if Facebook, Instagram, YouTube and Netflix keep winning.
The true structural issue here is that all services and products compete for the same 24 hours.
Link to the rest at the Bookseller
From Publishers Weekly:
Interactive multimedia storytelling is probably older than recorded human history itself. The famous cave paintings of Lascaux, for example, date from about 17,000 years ago. While we do not know their exact purpose, one can easily imagine a narrator or shaman using them to describe a successful hunt or enact a ritual. Holding a torch, the narrator walks along the walls, recounting a sequence of events, in a kind of early form of cinema.
. . . .
Today, we have interactive digital narratives, also known as video games. This relatively new form of interactive media has evolved into a mature form for the presentation of narrative, and may well represent a possible future for storytelling.
Why should this be interesting or relevant to book publishers? Because it is worth knowing what readers are into these days. According to a 2015 Pew internet study, about half of all American adults play video games: 50% of men and 48% of women play them, and about 10% consider themselves to be gamers. Mary Meeker’s highly regarded “Internet Trends 2017” report describes video games as more engaging than popular forms of social media such as Facebook and Instagram, driving an increase in deep engagement in “an era of perceived disengagement.”
. . . .
The first thing to know is that digital interactive storytelling has matured in recent years. The depth and quality of the writing and emotional experience in some games rivals the best literary narratives—and some are even drawn from them. The international hit The Witcher 3: Wild Hunt, for example, is based on a series of novels by Polish novelist Andrzej Sapkowski, adapted for the game medium by developer CD Projekt Red’s Jakub Szamalek.
Second, despite book publishers’ fears that mobile apps are a form of digital distraction, taking readers away from books, interactive digital media can actually drive readers toward text-based storytelling. Twine, for example, bridges the gap between interactive fiction and gaming; it’s an open-source software tool that allows users without programming expertise to create and publish interactive stories. Twine has become so popular that it has begun to be noticed by book publishers. In many ways, it is the digital offspring of the popular Choose Your Own Adventure book series.
Because Twine is free and does not require coding skills, it has become a platform for writers who want to try their hands at interactive fiction. Many Twine games are composed entirely of text. Some are also visual, but in many cases, a branching narrative composed of text is the final published product. As this shows, gamers are open to and interested in text stories.
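Twine publishes stories in its own passage-and-link format rather than in a general-purpose programming language, but the structure it encodes is easy to see in a short sketch. The passages and choices below are invented for illustration, not taken from any published Twine story:

```python
# Minimal sketch of a branching narrative: named passages of text,
# each offering labeled links to other passages.
story = {
    "start": ("You stand at a fork in the road.",
              {"go left": "cave", "go right": "village"}),
    "cave": ("The cave is dark and silent.", {}),
    "village": ("The village is warm and welcoming.", {}),
}

def step(passage, choice=None):
    """Return a passage's text, or the name of the passage a choice leads to."""
    text, links = story[passage]
    if choice is None:
        return text
    return links[choice]

print(step("start"))              # the opening passage text
print(step("start", "go left"))   # -> cave
```

Twine hides this bookkeeping behind a visual editor, which is why writers without programming experience can publish branching stories composed entirely of text.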
Link to the rest at Publishers Weekly
From Fast Company:
Like a lot of musicians now, Scott Hansen was pretty skeptical of Spotify. To Hansen, the mastermind of chilled-out electronic music outfit Tycho, the streaming service and the new era it seemed to herald posed troubling questions: Am I getting paid enough to support myself? Is my craft doomed? Is this whole streaming model even sustainable to begin with?
Anxious questions like these remain unresolved for many artists as the music industry reshapes itself and streaming becomes its biggest source of income. Thanks to streaming, record labels are finally seeing their revenue grow after years of decline, but the gains don’t always trickle down to musicians and songwriters. Spotify recently settled a $43 million class action lawsuit over royalty payments that went unpaid to certain artists, likely due to a metadata error. In other cases, the royalties flow, but not always in large sums. One of Spotify’s own executives recently conceded publicly that streaming doesn’t pay artists “enough.”
But for an increasing number of artists, including Tycho’s Hansen, the advantages of the streaming era are beginning to come into sharper focus.
“I definitely think we’re in a better place than we were three years ago, that’s for sure,” says Hansen, by phone from somewhere outside Cologne, a stop on the group’s recent European tour. The tour’s been a big success, he says, something he attributes, in part, to an experimental marketing project at Spotify. (The group’s recent Grammy nomination likely didn’t hurt either.) By aiming email and web ads at listeners who seemed likely to care the most about their music, the program helped the group sell about 1,600 tickets in Europe, Hansen says, and a total of more than 4,000 tickets this year. “I see it getting better as more people adopt that way of consuming music,” says Hansen.
. . . .
As Spotify seeks to build better relationships with record companies and music publishers—and as artists seek new sources of cash—the company’s executives and artists say the efforts are already helping, providing precise data to help artists make smarter career decisions, and leading to bigger ticket and merchandise sales. Collectively, the company says, the relatively young effort has generated millions in additional revenue for musicians – results that, after years of industry upheaval, are hard to ignore.
. . . .
Of course, not every artist can realistically expect this kind of free marketing push from Spotify. In an attempt to build stronger personal ties to the recording industry, promote and develop artists on Spotify, and bolster the company’s artist-facing tools, Spotify hired investor and former Lady Gaga manager Troy Carter last year as its global head of creator services. Carter’s job, interfacing with artists, managers, and labels, is about being “better partners,” he says, “whether it’s helping you co-market a product on our platform, helping you understand how the international market works, or how our playlisting works.”
Link to the rest at Fast Company
Conversational interfaces have reduced the user experience to a few lines of text. With bots, UX becomes conversational, products talk back, and personas now go both ways. Every bot has a voice — which means every bot needs a personality.
If conversational computing means personality is the new user experience, how do we approach the design of these nuanced digital entities?
. . . .
Chatbots and voice assistants are for humans. Conversational interfaces exist for better interactions between humans and computers. So then, how can we personalise these conversations to be more life-like, intimate, and representative of human interaction? Through personality. Building a rich and detailed personality makes your chatbot more relatable, believable, and relevant to your users.
Investing in personality informs every touch point of a chatbot. Personality creates a deeper understanding of the bot’s end goal, and how it will communicate through choice of language, mood, tone, and style. Seeing a bot as a lifeless piece of technology is a mistake. People project human traits onto everything — but now these objects talk back. Whether you like it or not, your users will still assign a personality to your bot if one hasn’t been explicitly designed.
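One minimal way to make that design explicit, sketched here with an invented persona rather than any particular bot framework, is to treat personality as a designed data object that shapes every reply instead of leaving tone to chance:

```python
# Hypothetical sketch: a persona as explicit design data. The name,
# phrases, and the single style rule here are invented for illustration.
PERSONA = {
    "name": "Maple",
    "greeting": "Hey there!",
    "sign_off": "Happy to help anytime.",
    "exclamation": True,   # style choice: upbeat, energetic tone
}

def reply(core_message, persona=PERSONA):
    """Wrap a plain status message in the persona's voice."""
    text = core_message
    if persona["exclamation"] and text.endswith("."):
        text = text[:-1] + "!"
    return f'{persona["greeting"]} {text} {persona["sign_off"]}'

print(reply("Your order has shipped."))
# -> Hey there! Your order has shipped! Happy to help anytime.
```

A real bot’s personality runs far deeper than string formatting, but the design point stands: if the choices of language, mood, and tone are not written down somewhere, users will fill them in themselves.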
Link to the rest at Prototypr.io
PG says Alexa definitely has a personality. However, he’s not certain whether she has moods or not.
From Nikkei Asian Review:
Japanese electronic book distributor Media Do will develop an artificial intelligence-based automatic translation system to make its e-books available for English-speaking readers.
The company hopes to reach a broader market and promote digitization at a time when Japan’s book market is shrinking. Media Do has teamed up with two Tokyo-based AI startups for the project — Internet Research Institute and A.I. Squared. Media Do will invest about 1.1 billion yen ($9.82 million) to acquire about 20% of each company through a third-party share allotment at the end of this month.
Both startups have developed unique technology for summarizing and translating text. The summarization technology analyzes the relationships between words and sentences in a given piece of text and extracts key sentences to create a summary.
The translation technology “learns” set phrases in both Japanese and English — on top of vocabulary and grammar — to enhance the quality of its translations, a process known as deep learning.
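Media Do has not published its algorithms, but the extractive approach the article describes (scoring sentences by the words they share with the rest of the text and keeping the top ones) can be sketched in a few lines. Everything below is a generic illustration, not the company’s system:

```python
# Generic extractive-summarization sketch: score each sentence by the
# average corpus frequency of its words, then keep the top sentences
# in their original order.
import re
from collections import Counter

def summarize(text, n_sentences=1):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

print(summarize("Cats are great. Cats are great pets. Dogs bark loudly sometimes."))
# -> Cats are great.
```

Production systems layer far more on top (sentence-relationship graphs, and neural models for the translation side), but this is the basic shape of extracting key sentences rather than generating new ones.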
Link to the rest at Nikkei Asian Review
From The Washington Examiner:
Circulation of daily newspapers has dropped to a 77-year low, signaling an end to print and a shift to all-digital delivery, according to a new industry review.
The Pew Research Center said that circulation has reached a new low of 34.6 million, six million fewer than the number of papers sold in 1940.
. . . .
The estimated total U.S. daily newspaper circulation (print and digital combined) in 2016 was 35 million for weekday and 38 million for Sunday, both of which fell 8% over the previous year. Declines were highest in print circulation: Weekday print circulation decreased 10% and Sunday circulation decreased 9%.
Link to the rest at The Washington Examiner
But readers will always prefer their books on paper.
From The Wall Street Journal:
Millard “Mickey” Drexler, the fashion genius whose ability to spot trends reshaped how Americans dress, has a humbling admission. He missed what might be the biggest trend of all—how quickly technology would change the retail industry.
“I’ve never seen the speed of change as it is today,” the 72-year-old chairman and chief executive of J.Crew Group Inc. said in an interview at his New York office. “If I could go back 10 years, I might have done some things earlier.”
The retail veteran, who redefined Gap Inc. in the 1990s and then transformed J.Crew into a household name, is now scrambling to keep the company he took private in a leveraged buyout from ending up in bankruptcy.
Sales at J.Crew Group stores open at least a year have fallen for the past 10 quarters—the kind of slump that got Mr. Drexler ousted from the Gap in 2002. This time he owns 10% of the company, and it is lenders, not the founding family, that are putting on the pressure.
For decades, fashion was essentially a hit or miss business. Merchants like Mr. Drexler would make bets on what people would be wearing a year in advance, since that’s how long it took to design and produce items. Hits guaranteed handsome returns until the next season.
Now, competitors with high-tech, data-driven supply chains can copy styles faster and move them into stores in a matter of weeks. Online marketplaces drive down prices, and design details such as nicer buttons and richer colors are less apparent on the internet. Social media adds fuel to the style churn—consumers want a new outfit for every Instagram post.
“The rules of the game have changed,” said Janet Kloppenburg, president of JJK Research, a retail-focused research firm. “It’s not just about product anymore. It’s also about speed and pricing.”
Mr. Drexler’s plan is to emphasize lower prices, pivot toward more digital marketing and adopt a more accessible image. “We became a little too elitist in our attitude,” he said.
Many visionaries focus on doing what they do best, even when the ground shifts beneath them. From newspapers to television, successful companies have been upended by disruptive technologies. Facebook Inc. is now the world’s largest publisher; Netflix Inc. is worth twice as much as CBS Corp.
“The incumbent leaders never see it coming,” said Clayton Christensen, the Harvard Business School professor who introduced the theory of disruptive innovation 20 years ago. “They focus on their best customers and try to provide what they need, but the customers who first defect [to new technology] are usually the least profitable.”
. . . .
The New York City native, who doesn’t have his own Instagram, Facebook or Twitter account, was sharing thoughts with employees on a loudspeaker—hooked up through his phone—before Twitter was launched. He paid attention to firsthand shopper feedback on frequent visits to stores long before Amazon.com Inc. was collecting user data. He was also selling clothes online before many other specialty retailers. Nearly half of J.Crew’s sales now come from the web.
But Mr. Drexler didn’t appreciate how the quality of garments could easily get lost in a sea of options online, where prices drive decisions, or how social media would give rise to disposable fashion. Online, price has more impact than the sensory qualities of clothing. “You go into a store—I love this, I love this, I love this,” he said. “You go online and you just don’t get the same sense and feel of the goods because you’re looking at a picture.”
. . . .
“The days of people wearing head-to-toe J.Crew are over,” said Carla Casella, an analyst at J.P. Morgan Chase & Co.
Zinniah Munoz, a 20-year-old makeup artist in New York City, said she would rather buy a style at a lower price than pay extra money for a brand name. “I’m always vigilant of not posting the same style twice” on Instagram or Facebook, she added.
. . . .
TPG co-founder David Bonderman recently acknowledged J.Crew and its peers are struggling with declining mall traffic and the shift to online shopping. “The internet has proven much more resilient and much more important than most of us thought a decade ago,” he said at a conference earlier this month.
Link to the rest at The Wall Street Journal (Link may expire)
From Bloomberg Technology:
As Coca-Cola Co. Chief Executive Officer James Quincey settles into his new job, he’s facing a challenge that most of his predecessors never worried about: digital disruption.
Consumers are increasingly shopping online, spending more time on mobile apps, and getting groceries delivered to their homes. And that’s hitting Coca-Cola in ways you might not expect, Quincey said in an interview from his office in Atlanta.
When shoppers skip trips to the local mall and get their clothes at Amazon, they also forgo buying Coke at a vending machine or food court. So while the decline of retailers has mostly focused on bankrupt apparel chains and shuttered storefronts, a brand like Coca-Cola is suffering as well.
“Digital is changing the way you behave,” he said. “It affects other categories that are not the primary reason you thought about making the shopping trip.”
Turning Coke into a winner of the digital age — rather than another brick-and-mortar victim — is a key priority for Quincey.
. . . .
The disruptive power of tech has been especially pronounced in some overseas markets, including China. When Quincey was chief operating officer in early 2016, he saw sales in that country slump — hurt by a decline in sales to noodle shops and other restaurants.
The shops themselves weren’t the problem — they were still selling large quantities of food — but more customers were ordering online and having their meals delivered. The problem for Coca-Cola: The restaurants offered glass bottles and sizes that weren’t suited to being transported via scooter.
Link to the rest at Bloomberg Technology