On the Future Of Newspapers

From The Falls Church News-Press:

“The internet dissected your daily newspaper into its constituent parts, letting readers find the news they want without ever buying a paper or visiting a homepage – and handing the most lucrative part…, the advertising business, to companies such as Meta and Google that don’t produce news.”

In one succinct sentence, Washington Post opinion writer Megan McArdle told just about the whole story of the demise of local news in her column “The Great Age of Cord Cutting is Approaching Its End,” published in the Post this Tuesday.

Our founder, owner and editor Nicholas F. Benton will be addressing the monthly luncheon meeting of the Falls Church Chamber of Commerce on just this reality, its implications and what can be done about it this coming Tuesday, February 20, at the Italian Cafe in Falls Church. He will bring his more than 33 years of experience making the Falls Church News-Press work for more than 1,700 consecutive weekly editions delivered to every household in The Little City to bear on this question that is vital to our democracy.

The landscape for local news in Northern Virginia has changed dramatically over those more than three decades, and the News-Press has endured to become just about the only general news source in the region that still comes out in print.

Our editor’s subject will be what that means for the community the paper serves and for those who have lost such a benefit over the years, as well as what needs to happen to ensure the work continues. The new book, “Life and Times of the Falls Church News-Press,” by the late Charlie Clark, will be available for sale as a resource at the talk, which will also be recorded.

A good newspaper is more than a chronicle of events in a community; it serves as a vital glue for the institutions that not only make up a community but also advance its ability to provide for its public’s needs, especially where core human and democratic values are involved. For Mr. Benton, this has taken the form of continually shining a light on the community’s needs for, among other things, smart development, affordable housing and, above all, education of the young.

The internet and strictly digital sources have unwittingly undermined this approach by shattering information into countless discrete categories, leaving a community’s citizens unable to act from an overview of these combined values and needs.

Benton and the News-Press since its founding in 1991 have operated from the standpoint of advocating for those who cannot advocate for themselves, and that has meant promoting education for the young by encouraging the kind of economic development that can pay for a quality educational system. It has meant taking sides in opposition to those who resist such developments for selfish reasons, be they big corporate interests or citizens against constructive change.

Link to the rest at The Falls Church News-Press

When PG was in high school, he was the high school sports reporter (and the only sports reporter) for a very small local newspaper that was mimeographed weekly and delivered free to all the mailboxes in town and the surrounding area.

He saw no conflict of interest in writing about games in which he had participated. However, he seldom mentioned his own name, in large part because he was not a particularly outstanding player, even on teams composed of small, slow white boys with a smattering of members of the local Sioux tribe.

As humble as it was, the newspaper, along with school activities, was about all that made the little town a community. When the local schools were closed and students were bussed to schools in a larger nearby town a few years after PG graduated, that little town began to lose population and has continued to decline into just a group of run-down and unoccupied houses.

PG never heard what happened to the lady who ran the newspaper.

What Is Disruptive Innovation?

From The Harvard Business Review:

The theory of disruptive innovation, introduced in these pages in 1995, has proved to be a powerful way of thinking about innovation-driven growth. Many leaders of small, entrepreneurial companies praise it as their guiding star; so do many executives at large, well-established organizations, including Intel, Southern New Hampshire University, and Salesforce.com.

Unfortunately, disruption theory is in danger of becoming a victim of its own success. Despite broad dissemination, the theory’s core concepts have been widely misunderstood and its basic tenets frequently misapplied. Furthermore, essential refinements in the theory over the past 20 years appear to have been overshadowed by the popularity of the initial formulation. As a result, the theory is sometimes criticized for shortcomings that have already been addressed.

There’s another troubling concern: In our experience, too many people who speak of “disruption” have not read a serious book or article on the subject. Too frequently, they use the term loosely to invoke the concept of innovation in support of whatever it is they wish to do. Many researchers, writers, and consultants use “disruptive innovation” to describe any situation in which an industry is shaken up and previously successful incumbents stumble. But that’s much too broad a usage.

The problem with conflating a disruptive innovation with any breakthrough that changes an industry’s competitive patterns is that different types of innovation require different strategic approaches. To put it another way, the lessons we’ve learned about succeeding as a disruptive innovator (or defending against a disruptive challenger) will not apply to every company in a shifting market. If we get sloppy with our labels or fail to integrate insights from subsequent research and experience into the original theory, then managers may end up using the wrong tools for their context, reducing their chances of success. Over time, the theory’s usefulness will be undermined.

This article is part of an effort to capture the state of the art. We begin by exploring the basic tenets of disruptive innovation and examining whether they apply to Uber. Then we point out some common pitfalls in the theory’s application, how these arise, and why correctly using the theory matters. We go on to trace major turning points in the evolution of our thinking and make the case that what we have learned allows us to more accurately predict which businesses will grow.

First, a quick recap of the idea: “Disruption” describes a process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses. Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents’ mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants’ offerings in volume, disruption has occurred. (See the exhibit “The Disruptive Innovation Model.”)

Is Uber a Disruptive Innovation?

Let’s consider Uber, the much-feted transportation company whose mobile application connects consumers who need rides with drivers who are willing to provide them. Founded in 2009, the company has enjoyed fantastic growth (it operates in hundreds of cities in 60 countries and is still expanding). It has reported tremendous financial success (the most recent funding round implies an enterprise value in the vicinity of $50 billion). And it has spawned a slew of imitators (other start-ups are trying to emulate its “market-making” business model). Uber is clearly transforming the taxi business in the United States. But is it disrupting the taxi business?

According to the theory, the answer is no. Uber’s financial and strategic achievements do not qualify the company as genuinely disruptive—although the company is almost always described that way. Here are two reasons why the label doesn’t fit.

Disruptive innovations originate in low-end or new-market footholds.

Disruptive innovations are made possible because they get started in two types of markets that incumbents overlook. Low-end footholds exist because incumbents typically try to provide their most profitable and demanding customers with ever-improving products and services, and they pay less attention to less-demanding customers. In fact, incumbents’ offerings often overshoot the performance requirements of the latter. This opens the door to a disrupter focused (at first) on providing those low-end customers with a “good enough” product.

In the case of new-market footholds, disrupters create a market where none existed. Put simply, they find a way to turn nonconsumers into consumers. For example, in the early days of photocopying technology, Xerox targeted large corporations and charged high prices in order to provide the performance that those customers required. School librarians, bowling-league operators, and other small customers, priced out of the market, made do with carbon paper or mimeograph machines. Then in the late 1970s, new challengers introduced personal copiers, offering an affordable solution to individuals and small organizations—and a new market was created. From this relatively modest beginning, personal photocopier makers gradually built a major position in the mainstream photocopier market that Xerox valued.

A disruptive innovation, by definition, starts from one of those two footholds. But Uber did not originate in either one. It is difficult to claim that the company found a low-end opportunity: That would have meant taxi service providers had overshot the needs of a material number of customers by making cabs too plentiful, too easy to use, and too clean. Neither did Uber primarily target nonconsumers—people who found the existing alternatives so expensive or inconvenient that they took public transit or drove themselves instead: Uber was launched in San Francisco (a well-served taxi market), and Uber’s customers were generally people already in the habit of hiring rides.

Uber has quite arguably been increasing total demand—that’s what happens when you develop a better, less-expensive solution to a widespread customer need. But disrupters start by appealing to low-end or unserved consumers and then migrate to the mainstream market. Uber has gone in exactly the opposite direction: building a position in the mainstream market first and subsequently appealing to historically overlooked segments.

Disruptive innovations don’t catch on with mainstream customers until quality catches up to their standards.

Disruption theory differentiates disruptive innovations from what are called “sustaining innovations.” The latter make good products better in the eyes of an incumbent’s existing customers: the fifth blade in a razor, the clearer TV picture, better mobile phone reception. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Disruptive innovations, on the other hand, are initially considered inferior by most of an incumbent’s customers. Typically, customers are not willing to switch to the new offering merely because it is less expensive. Instead, they wait until its quality rises enough to satisfy them. Once that’s happened, they adopt the new product and happily accept its lower price. (This is how disruption drives prices down in a market.)

Most of the elements of Uber’s strategy seem to be sustaining innovations. Uber’s service has rarely been described as inferior to existing taxis; in fact, many would say it is better. Booking a ride requires just a few taps on a smartphone; payment is cashless and convenient; and passengers can rate their rides afterward, which helps ensure high standards. Furthermore, Uber delivers service reliably and punctually, and its pricing is usually competitive with (or lower than) that of established taxi services. And as is typical when incumbents face threats from sustaining innovations, many of the taxi companies are motivated to respond. They are deploying competitive technologies, such as hailing apps, and contesting the legality of some of Uber’s services.

Why Getting It Right Matters

Readers may still be wondering, Why does it matter what words we use to describe Uber? The company has certainly thrown the taxi industry into disarray: Isn’t that “disruptive” enough? No. Applying the theory correctly is essential to realizing its benefits. For example, small competitors that nibble away at the periphery of your business very likely should be ignored—unless they are on a disruptive trajectory, in which case they are a potentially mortal threat. And both of these challenges are fundamentally different from efforts by competitors to woo your bread-and-butter customers.

As the example of Uber shows, identifying true disruptive innovation is tricky. Yet even executives with a good understanding of disruption theory tend to forget some of its subtler aspects when making strategic decisions. We’ve observed four important points that get overlooked or misunderstood:

1. Disruption is a process.

The term “disruptive innovation” is misleading when it is used to refer to a product or service at one fixed point, rather than to the evolution of that product or service over time. The first minicomputers were disruptive not merely because they were low-end upstarts when they appeared on the scene, nor because they were later heralded as superior to mainframes in many markets; they were disruptive by virtue of the path they followed from the fringe to the mainstream.

Because disruption can take time, incumbents frequently overlook disrupters.

Most every innovation—disruptive or not—begins life as a small-scale experiment. Disrupters tend to focus on getting the business model, rather than merely the product, just right. When they succeed, their movement from the fringe (the low end of the market or a new market) to the mainstream erodes first the incumbents’ market share and then their profitability. This process can take time, and incumbents can get quite creative in the defense of their established franchises. For example, more than 50 years after the first discount department store was opened, mainstream retail companies still operate their traditional department-store formats. Complete substitution, if it comes at all, may take decades, because the incremental profit from staying with the old model for one more year trumps proposals to write off the assets in one stroke.

The fact that disruption can take time helps to explain why incumbents frequently overlook disrupters. For example, when Netflix launched, in 1997, its initial service wasn’t appealing to most of Blockbuster’s customers, who rented movies (typically new releases) on impulse. Netflix had an exclusively online interface and a large inventory of movies, but delivery through the U.S. mail meant selections took several days to arrive. The service appealed to only a few customer groups—movie buffs who didn’t care about new releases, early adopters of DVD players, and online shoppers. If Netflix had not eventually begun to serve a broader segment of the market, Blockbuster’s decision to ignore this competitor would not have been a strategic blunder: The two companies filled very different needs for their (different) customers.

However, as new technologies allowed Netflix to shift to streaming video over the internet, the company did eventually become appealing to Blockbuster’s core customers, offering a wider selection of content with an all-you-can-watch, on-demand, low-price, high-quality, highly convenient approach. And it got there via a classically disruptive path. If Netflix (like Uber) had begun by launching a service targeted at a larger competitor’s core market, Blockbuster’s response would very likely have been a vigorous and perhaps successful counterattack. But failing to respond effectively to the trajectory that Netflix was on led Blockbuster to collapse.

2. Disrupters often build business models that are very different from those of incumbents.

Consider the healthcare industry. General practitioners operating out of their offices often rely on their years of experience and on test results to interpret patients’ symptoms, make diagnoses, and prescribe treatment. We call this a “solution shop” business model. In contrast, a number of convenient care clinics are taking a disruptive path by using what we call a “process” business model: They follow standardized protocols to diagnose and treat a small but increasing number of disorders.

One high-profile example of using an innovative business model to effect a disruption is Apple’s iPhone. The product that Apple debuted in 2007 was a sustaining innovation in the smartphone market: It targeted the same customers coveted by incumbents, and its initial success is likely explained by product superiority. The iPhone’s subsequent growth is better explained by disruption—not of other smartphones but of the laptop as the primary access point to the internet. This was achieved not merely through product improvements but also through the introduction of a new business model. By building a facilitated network connecting application developers with phone users, Apple changed the game. The iPhone created a new market for internet access and eventually was able to challenge laptops as mainstream users’ device of choice for going online.

Link to the rest at The Harvard Business Review

PG realized he has been throwing around “disruptive innovation” and related terms for quite a while, assuming (like a tech-head) that everyone understood the concept in the same way he did.

While PG doesn’t necessarily accept all of the factors the HBR describes as necessary to qualify as disruptive, he thinks the article will help educate visitors to TPV who have better things to do than to follow this and that business trend.

Palworld breaks 1 million concurrent players on Steam and rockets onto its top 5 all-time most played games list, blowing past Elden Ring and Cyberpunk 2077

From Windows Central:

  • Just a few days after its launch, Pocketpair’s new “Pokémon with guns” open world survival creature capture game Palworld is already breaking Steam records.
  • Specifically, it reached 730,000 concurrent players earlier this morning, making it the 10th most played game in Steam history.
  • That number continued to soar until Palworld peaked at 855,425 players, putting it within striking distance of Baldur’s Gate 3’s record of 875,343.
  • In addition to being available on Steam, you can also play Palworld on Xbox or on PC through the Microsoft Store client. Notably, it’s on Xbox Game Pass as well.
  • Update: Palworld’s numbers have only ballooned even further throughout the weekend, with the game now peaking at 1,281,669 players on Steam.

Palworld has only continued to climb upwards since I originally wrote this article, with the game breaking 1 million players and hitting a new peak of 1,281,669 on Steam this morning. That blows through both Elden Ring’s and Cyberpunk 2077’s records of 953,426 and 1,054,388 concurrent players, respectively, and puts Palworld in fifth place on Steam’s top 5 all-time most played games list.

Link to the rest at Windows Central

Fact Check: Has Palworld Copied Pokemon’s Designs?

From The Sports Rush:

After the upcoming Indiana Jones title was accused of copying Uncharted, the newly released Palworld has come under plagiarism allegations. Fans claim the game has plagiarized numerous features from Nintendo’s famous Pokemon series.

Pocket Pair developed Palworld, an open-world action-adventure game. The game has multiple unique creatures called “Pals” that players can battle and capture. Later, those creatures help you fight others, travel, and construct bases. Since its announcement, many people have compared the concept of Pals to the “Pocket Monsters” of the Pokemon franchise.

Released in 2024, the game became an instant hit on Steam, becoming the most-played game on the platform within 24 hours. Palworld’s gameplay is similar to Ark: Survival Evolved. However, the similarities with Game Freak’s masterwork were difficult to overlook. Fans even analyzed how the Pals’ designs may have been lifted from Pokemon’s. Because of these similarities, many fans nicknamed the game “Pokemon with Guns.”

Pocket Pair’s history with generative AI has worsened the situation

The case of Palworld plagiarising Pokemon worsened when developer Pocket Pair’s relationship with generative AI surfaced. The studio previously released a game called AI: Art Imposter, an AI drawing party game in which players instruct the AI to make images, producing attractive artwork without requiring any artistic skill.

Furthermore, Pocket Pair CEO Takuro Mizobe has complimented generative AI, recognizing its enormous potential. In an old tweet, Mizobe stated that generative AI technology might one day be powerful enough to make art without violating copyright laws. Many artists have lately denounced AI for taking over their work and for exploiting their artworks without permission to train AI models.

Link to the rest at The Sports Rush and thanks to F. for both tips.

PG says that more than one disruptive technology has resulted in a lot of thrown elbows by an upset incumbent.

All the Jobs AI Is Coming for, According to a UK Study

From Lifehacker:

The question of whether AI will eventually take jobs away from us meatbags is nothing new. However, following ChatGPT’s launch late last year, the speed at which AI has caught on has surprised almost everybody, even those working in the space. And far from a question to consider in the far-off (or even near-term) future, jobs are already being affected: Some layoffs this year came due to companies believing AI could replace certain roles, while other companies froze hiring for similar reasons.

So, how do you know if your job is one of the ones at risk? A recent study could give you the answer (and you might not like it).

This UK study reveals the jobs “most exposed” to AI—and what that means

Assessing the random actions of various companies and getting lost in speculation do us no good. For a substantive and thoughtful discussion of the topic, traditional research is already underway into how AI will affect the job market, including this recent study out of the UK. The study, developed by the UK’s Department for Education, estimates that 10–30% of jobs are automatable with AI—which, depending on your general outlook on AI, may sound like a lot, or less than you’d expect.

The study investigated the job functions and qualifications for various sectors of the workforce, looking for whether the following ten AI applications could aid in those jobs:

  • Abstract strategy games
  • Real-time video games
  • Image recognition
  • Visual question answering
  • Image generation
  • Reading comprehension
  • Language modeling
  • Translation
  • Speech recognition
  • Instrumental track recognition

Depending on how relevant each of these 10 functions was to a particular role, the study generated an AI Occupational Exposure (AIOE) score for the role. The higher the score, the more “exposure” that role may have to artificial intelligence.
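
To make that scoring idea concrete, here is a minimal sketch of how an AIOE-style score could be computed. This is an illustration only: the relevance weights, the simple averaging formula, and the example occupation are invented for this sketch, not taken from the Department for Education’s actual methodology.

```python
# Hypothetical sketch of an AIOE-style exposure score.
# The ten AI applications come from the study; the relevance weights
# (0.0 = irrelevant, 1.0 = highly relevant) are invented for illustration.

AI_APPLICATIONS = [
    "abstract_strategy_games",
    "real_time_video_games",
    "image_recognition",
    "visual_question_answering",
    "image_generation",
    "reading_comprehension",
    "language_modeling",
    "translation",
    "speech_recognition",
    "instrumental_track_recognition",
]

def aioe_score(relevance: dict) -> float:
    """Average the relevance of each AI application to a role.

    A higher score suggests the role is more "exposed" to AI.
    """
    return sum(relevance.get(app, 0.0) for app in AI_APPLICATIONS) / len(AI_APPLICATIONS)

# Invented example: a payroll clerk's functions lean on reading
# comprehension and language modeling, not on games or image tasks.
payroll_clerk = {
    "reading_comprehension": 0.9,
    "language_modeling": 0.8,
    "speech_recognition": 0.3,
}
print(f"Payroll clerk AIOE: {aioe_score(payroll_clerk):.2f}")  # 0.20
```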

In the initial findings, the study determined that “professional occupations,” including sectors like finance, law, and business management, tended to be more exposed to AI. In fact, it specifically found that the finance and insurance sectors were the most exposed. Building on this finding, it seems the more advanced the qualifications necessary for a role, the more AI exposure that role tends to have. In general, if your job requires more education and more advanced training, chances are it pairs well with AI.

The reverse is true as well, of course, with the interesting exception of security guards. The study says security technology has advanced to such a degree that, although the role requires little education or work experience, it is more exposed to AI than other jobs of its kind.

None of this is necessarily a bad thing. As the study points out, the International Labour Organization has found most jobs are only partially exposed to AI, so the odds are decent that employees in these roles will benefit from AI exposure rather than have their jobs fully replaced by the technology.

Which jobs are most exposed to AI

Taking all this into consideration, the study breaks down the top 20 occupation types most exposed to AI, as well as those most exposed to large language models (LLMs). It’s a long list, including occupations like consultants, telephone salespeople, psychologists, legal professionals, teachers, and payroll managers.

As stated above, the study finds that finance and insurance is the job sector most exposed to AI. The other most exposed sectors include information and communication; professional, scientific and technical; property; public administration and defense; and education.

Just as interesting as the list of occupation types most exposed is the list of those least exposed. Many of these roles require manual labor that cannot be replicated by AI or technology in general, such as sports players, roofers, fork-lift truck drivers, painters, window cleaners, and bricklayers:

. . . .

Will AI truly replace any jobs, according to the study?

Interestingly enough, the study is almost exclusively focused on AI exposure, rather than on jobs threatened by the technology. That said, they do have a list of 16 job types that are considered “high automation occupations,” which a pessimist could infer to mean jobs that could be one day replaced by automation.

  • Authors, writers and translators
  • Bank and post office clerks
  • Bookkeepers, payroll managers and wages clerks
  • Brokers
  • Call and contact centre occupations
  • Customer service occupations n.e.c.
  • Finance officers
  • Financial administrative occupations n.e.c.
  • Human resources administrative occupations
  • Librarians
  • Market research interviewers
  • Other administrative occupations n.e.c.
  • Pensions and insurance clerks and assistants
  • Telephone salespersons
  • Travel agents
  • Typists and related keyboard occupations

You might notice some overlap between this list and the list of jobs most exposed to AI. That’s because the study notes that these jobs all have high AIOE scores, both for exposure to AI and LLMs.

Link to the rest at Lifehacker

Dead Links

From Public Books:

On May 8, 2023, a Twitter user expressed sadness over the loss of a dead loved one’s Twitter account: “My sister died 10 years ago, and her Twitter hasn’t been touched since then. It’s now gone because of Elon Musk’s newest farce of a policy. Fuck you @elonmusk, your nonsense has taken away a monument to my sister’s mark on this earth.”

Soon after Twitter’s new deletion policy took hold, Google made an announcement of its own. A May 16, 2023, blog post stated that Google would start deleting inactive personal accounts: “if a Google Account has not been used or signed into for at least 2 years, we may delete the account and its contents.” Much like Twitter, Google cited security issues as its main concern. (Long-inactive accounts are less likely than regularly accessed ones to have two-factor authentication and may become compromised, spewing spam or other unpleasant content out into the world.) However, this policy change invalidates Google’s earlier promise to store your data forever for free. For example, Google Photos claimed in 2015:

Google Photos gives you a single, private place to keep a lifetime of memories, and access them from any device. They’re automatically backed up and synced, so you can have peace of mind that your photos are safe, available across all your devices.

And when we say a lifetime of memories, we really mean it. With Google Photos, you can now backup and store unlimited, high-quality photos and videos, for free.

Google Photos ended its free unlimited storage in 2021.

Tech titans gained power and wealth from the accumulation of data, but that doesn’t mean they are equipped to be long-term stewards of personal and collective memories. Even the longest-lived social media platforms have undergone tremendous changes, and some, like Twitter (now X), teeter on the precipice of oblivion. And many companies, it seems, would rather eschew their responsibilities as digital caregivers. They gobbled up massive amounts of user data for model building and to attract advertisers, and they can just as easily decide to free themselves of their obligations to preserve such data.

For ordinary users, personal data may seem permanent, something that can follow them across the life cycle. Yet such permanence doesn’t always align with corporate interests in and interpretations of data. Today, Big Tech companies are no longer willing to maintain data in perpetuity. We are perhaps reaching the limits of what the cloud can afford.

Tech companies, whether fledgling digital estate-planning startups or massive multinational corporations, are ill equipped to broker the intergenerational transfer of digital remains because of their short attention spans. Moreover, corporations often propose the deactivation or deletion of dormant accounts to avoid liability for any security issues that might arise from keeping them online. Twitter has repeatedly planned to deactivate such accounts, but until this latest policy shift, user pushback and press attention prevented it from becoming a reality. Under Elon Musk’s chaotic ownership, the plan was this time carried out, at least in part. One petty billionaire had the power to delete long-standing memorials to the dead. Such deletions can carry their own political implications, too: freeing up handles for right-wing politicians was one possible incentive for Musk’s decision, even as it upset the loved ones of dead users.

Despite Big Tech’s tendency to ignore the dead, however, death seems to haunt data infrastructures. In my book, Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond, I discuss the thorny problem of maintaining the data of the dead, which requires enacting care both at scale and over time.

Here I explore the politics and ethics of endless posthumous data storage, especially at a time when the climate impact of the proverbial cloud is a pressing concern amid the rise of generative AI and other high-energy workloads.

Over the decades, platforms have grappled with the problem of retaining and caring for the data of the dead. Digital remains are complex inheritances, because they depend on the longevity and commercial viability of corporate platforms and proprietary systems. Consider how the remains of the dead might well encompass everything from email, blog, and social media accounts to the ambient forms of metadata that track individuals and their networks. All this—when users die or platform infrastructures break down—becomes digital remains.

Commercial platforms can provide the scaffolding for sacred communion with the dead. But such relationships depend on the whims of platform owners and the design decisions of technologists.

Link to the rest at Public Books

I was fired by a client for using AI. I’m not going to stop because it’s doubled my output, but I’m more clear about how I use it.

From Insider:

I work a full-time job in marketing and do freelance writing on the side. 

I was juggling a lot when a longtime client commissioned me for a three-month project. It entailed writing a series of how-to guides and 56 articles for their site.

Since I couldn’t clone myself, I tried what I thought would be the next best thing: I used AI.

Convinced that I could use the new technology to meet extremely tight deadlines, I started using Jasper.ai to produce up to 20 pieces in a month for this client.

. . . .

I was using AI to clone myself as a writer

I essentially used Jasper.ai as an extension of myself.

I’d let Jasper write articles of up to 2,500 words. I used it more than alternatives such as ChatGPT or Bard because it has pre-built templates that function as prompts. 

If I needed to expand on a sentence, I’d use Jasper’s “paragraph generator” or “commands” tool. If I needed to rewrite a sentence, I’d click on “content improver.” These features helped me overcome writer’s block and quickly build out long-form articles.

Jasper did most of the work and I did minimal editing.

After working together for months, my client started using one of the first AI-content detectors. Upon discovering the content I gave them was AI-generated, they terminated our agreement and paid me less than 40% of the original fee after I’d delivered all the articles we’d agreed on.

While this was not the outcome I intended, it shifted my mindset on how to use AI to keep clients rather than lose them.

I learned a valuable lesson the hard way — AI is a tool, not something that should replace you.

Looking back, I know things weren’t right when I was letting AI do the work and not communicating this to my client.

. . . .

Here’s how I use AI differently now:

AI is now a crucial part of my initial discussions with new clients

I ask if the client’s OK with me using AI-writing tools. If not, great; I won’t use it. If they see the value or don’t care whether I use them, then I’ll use them to enhance the quality and depth of what I write.

I use AI to enhance my draft

Some writers use AI to write a draft, then edit it to sound more human. I use it the other way around.

I draft the article first, then use an AI tool to enhance it and ensure I’ve maintained the client’s tone of voice. 

I’d typically beef a draft up with some of Jasper’s templates — using the paragraph generator to expand a sentence into a paragraph, or using the content improver to rewrite text based on tone of voice or type of content. 

Sometimes, Jasper will tell me additional things I can cover, so I’ll include them and support them with expert insights and examples.

I use AI to give me ideas on sources and statistics

Like ChatGPT, Jasper is prone to making mistakes with sources and research; its developers remind users to fact-check any statistics the tool provides. I treat the information it gives as a placeholder that points me toward the kinds of sources, statistics, or websites I can seek out myself.

The key is always treating statistics and other hard evidence that AI produces as a suggestion.

AI helps with the tone of voice and brand voice

I’ll use Jasper to help me rewrite or add flair to a sentence using the “tone of voice” or “brand voice” features. I could even type in “Ryan Reynolds” and Jasper will rewrite a plain paragraph to sound like the actor.

AI helps with condensing large volumes of text

AI helps me summarize my research findings and insights from relevant subject-matter experts. I’ll upload snippets of a transcript, and the tool will return a condensed paragraph that still includes the salient points.
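
For readers curious what this condensing step can look like in practice, here is a minimal sketch using OpenAI’s Python client as a generic stand-in (Jasper’s own API is not assumed here); the model name, prompt wording, and word limit are illustrative choices, not any particular product’s interface.

```python
# Minimal sketch: condensing a transcript snippet with an LLM.
# Uses OpenAI's Python client as a generic stand-in; the model name
# and prompt are illustrative, not a specific product's API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def condense(transcript_snippet: str, max_words: int = 120) -> str:
    """Return a single condensed paragraph that keeps the salient points."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Condense interview transcripts into one paragraph, "
                        "preserving the salient points and any quotes worth keeping."},
            {"role": "user",
             "content": f"Condense to under {max_words} words:\n\n{transcript_snippet}"},
        ],
    )
    return response.choices[0].message.content

snippet = "Interviewer: How did the project start? Expert: Well, back in 2019..."
print(condense(snippet))
```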

AI has cut my writing time in half

Link to the rest at Insider

90% of My Skills Are Now Worth $0

From Software Design: Tidy First?:

I wanted to expand on this a bit.

First, I do not have the answer for which skills are in the 90% & which are in the 10%. (I’ll tell you why I concluded that split in a second.) We are back in Explore territory, in 3X (Explore/Expand/Extract) terms. The only way to find out is to try a little bit of a lot of ideas.

Second, why did I conclude that 90% of my skills had become (economically) worthless? I’m extrapolating wildly from a couple of experiences, which is what I do.

A group of us did a word-smithing exercise yesterday. I took a sentence that was semantically correct & transformed it into something punchy & grabby. I did it through a series of transformations. Someone said, “What this means is XYZ,” and I said, “Just write that.” Then I replaced a weak verb with a stronger one—”would like to” became “crave”.

Having just tried ChatGPT, I realized ChatGPT could have punched up the same sentence just as well & probably more quickly. Anyone who knows to ask (and there is a hint about the remaining 10%) could get the same results.

I’ve spent 1–2% of my seconds on the planet putting words in a row. For a programmer I’m pretty good at it. The differential value of being better at putting words in a row just dropped to nothing. Anyone can now put words in a row pretty much as well as I can.

I can list skills that have proven valuable to me & clients in the past. Many of them are now replicable to a large degree. Others, like baking, not so much (but baking is also rarely valuable in a consulting context).

Third, technological revolutions proceed by:

  1. Radically reducing the cost of something that used to be expensive.
  2. Discovering what is valuable about what has suddenly become cheap.

ChatGPT wrote a rap in the style of Biggie Smalls (RIP) about the Test Desiderata. It wasn’t a great rap, so I’ll spare you the details, but it was a rap. I would never have dreamed of writing one myself. Now the space of things I might do next expanded by 1000. (The Woody Guthrie-style folk song on the same subject was just lame.)

Fourth, to everyone saying, “Yeah, but ChatGPT isn’t very good,” I would remind you that technological revolutions aren’t about absolute values but rather growth rates. If I’m big & you’re small & you’re growing faster, then it’s a matter of time before you surpass me.

My skills continue to improve, but ChatGPT’s are improving faster. It’s a matter of time.

What’s next? Try out everything I can think to try out. I’ve already trained a model on my art. I’ll try various tasks with assistance & see what sticks.

. . . .

As someone who has spent decades in the software development industry, I’ve seen my fair share of new technologies and trends come and go. And yet, when I first heard about ChatGPT, I was reluctant to try it out. It’s not that I’m opposed to new technologies or tools, but rather that I was skeptical of how AI language models could truly benefit my work as a software developer.

However, after finally giving ChatGPT a chance, I can say that I now understand why I was reluctant to try it in the first place. The truth is, AI technology like ChatGPT has the power to drastically shift the value of our skills as developers.

In my experience, software development requires a wide range of skills, from problem-solving and critical thinking to programming and debugging. For years, I’ve relied on my expertise in these areas to deliver high-quality software products to my clients.

But with the rise of AI technology, I’m now seeing a shift in the value of these skills. The reality is that many aspects of software development, such as code completion and even bug fixing, can now be automated or augmented by AI tools like ChatGPT. This means that the value of 90% of my skills has dropped to $0.

At first, this realization was disheartening. I had built my career on a set of skills that were now being rendered obsolete by AI. However, upon further reflection, I came to see this shift in value as an opportunity to recalibrate my skills and leverage the remaining 10% in a new way.

Rather than seeing the rise of AI as a threat to my career, I now view it as an opportunity to augment my skills and deliver even greater value to my clients. By embracing AI tools like ChatGPT, I can automate routine tasks and focus my efforts on the areas where my expertise and creativity can truly shine.

For example, ChatGPT can be incredibly useful for brainstorming new solutions to complex problems. As a developer, I often encounter challenges that require me to think outside the box and come up with creative solutions. With ChatGPT, I can input a prompt and get dozens of unique responses that can help me break through creative blocks and deliver more innovative solutions.

Similarly, ChatGPT can be used to analyze and understand complex code bases. As a developer, I’m often tasked with reviewing and debugging large code bases. With ChatGPT, I can input a query and get relevant information in seconds, helping me to quickly understand and navigate even the most complex code.

But perhaps the most exciting opportunity presented by AI technology is the ability to collaborate with other developers and share knowledge at a faster rate than ever before. With ChatGPT, developers can input questions or prompts and receive relevant responses from other developers around the world in real-time. This means that we can tap into the collective knowledge of our industry and deliver even greater value to our clients.

In conclusion, while I was initially reluctant to try ChatGPT, I now see the value that AI technology can bring to the world of software development. While the value of some of our skills may be decreasing, the opportunity to leverage the remaining 10% in new and innovative ways is tremendous. By embracing AI tools like ChatGPT, we can work smarter, not harder, and deliver even greater value to our clients and our industry as a whole.

Link to the rest at Software Design: Tidy First?

How Collaborating With Artificial Intelligence Could Help Writers of the Future

From The Literary Hub:

Art has long been claimed as a final frontier for automation—a field seen as so ineluctably human that AI may never master it. But as robots paint self-portraits, machines overtake industries, and natural language processors write New York Times columns, this long-held belief could be on the way out.

Computational literature or electronic literature—that is, literature that makes integral use of or is generated by digital technology—is hardly new. Alison Knowles used the programming language FORTRAN to write poems in 1967 and a novel allegedly written by a computer was printed as early as 1983. Universities have had digital language arts departments since at least the 90s. One could even consider the mathematics-inflected experiments of Oulipo as a precursor to computational literature, and they’re experiments that computers have made more straightforward. Today, indie publishers offer remote residencies in automated writing and organizations like the Electronic Literature Organization and the Red de Literatura Electrónica Latinoamericana hold events across the world. NaNoGenMo—National Novel Generation Month—just concluded its sixth year this April.
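
For a concrete taste of the simplest end of this tradition, here is a tiny word-level Markov chain text generator of the kind many NaNoGenMo entries start from; the corpus, seed word, and output length are placeholders for illustration.

```python
# A tiny word-level Markov chain text generator, a common starting
# point for simple NaNoGenMo-style experiments. Corpus is a placeholder.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int = 30) -> str:
    """Random-walk the chain to produce a run of text."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the sky is blue and the sky is wide and the road is long and the road is home"
print(generate(build_chain(corpus), start="the"))
```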

As technology advances, headlines express wonder at books co-written by AI advancing in literary competitions and automated “mournful” poetry inspired by Romance novels—with such resonant lines as “okay, fine. yes, right here. no, not right now” and “i wanted to kill him. i started to cry.” We can read neo-Shakespeare (“And the sky is not bright to behold yet: / Thou hast not a thousand days to tell me thou art beautiful.”), and Elizabeth Bishop and Kafka revised by a machine. One can purchase sci-fi novels composed, designed, blurbed, and priced by AI. Google’s easy-to-use Verse by Verse promises users an “AI-powered muse that helps you compose poetry inspired by classic American poets.” If many of these examples feel gimmicky, it’s because they are. However, that doesn’t preclude AI literature that, in the words of poet, publisher, and MIT professor Nick Montfort, “challenges the way [one] reads[s] and offers new ways to think about language, literature, and computation.”

. . . .

Ross Goodwin’s 1 the Road (2018) is often described as one of the first novels written completely by AI. To read it like a standard novel wouldn’t get one far, though whether that says more about this text or the traditional novel could be debated. Much of the book comprises timestamps, location data, mentions of businesses and billboards and barns—all information collected from Foursquare data, a camera, GPS, and other inputs. But the computer also generated characters: the painter, the children. There is dialogue; there are tears. There are some evocative, if confused, descriptions: “The sky is blue, the bathroom door and the beam of the car ride high up in the sun. Even the water shows the sun” or “A light on the road was the size of a door, and the wind was still so strong that the sun struck the bank. Trees in the background came from the streets, and the sound of the door was falling in the distance.” There is a non-sequitur reference to a Nazi and dark lines like “35.416002034 N, -77.999832991 W, at 164.85892916 feet above sea level, at 0.0 miles per hour, in the distance, the prostitutes stand as an artist seen in the parking lot with its submissive characters and servants.”

K Allado-McDowell, who in their role with the Artist + Machine Intelligence program at Google supported 1 the Road, argued in their introduction to the text that 1 the Road represented a kind of late capitalist literary road trip, where instead of writing under the influence of amphetamines or LSD, the machine tripped on an “automated graphomania,” evincing what they more recently described to me as a “dark, normcore-cyberpunk experience.”

To say 1 the Road was entirely written by AI is a bit disingenuous. Not because it wasn’t machine-generated, but rather because Goodwin made curatorial choices throughout the project, including the corpus the system was fed (texts like The Electric Kool-Aid Acid Test, Hell’s Angels, and, of course, On the Road), the surveillance camera mounted on the Cadillac that fed the computer images, and the route taken. Goodwin, who is billed as the book’s “writer of writer,” leans into the questions of authorship that this process raised, asking: is the car the writer? The road? The AI? Himself? “That uncertainty [of the manuscript’s author] may speak more to the anthropocentric nature of our language than the question of authorship itself,” he writes.

AI reconfigures how we consider the role and responsibilities of the author or artist. D. Fox Harrell and Jichen Zhu, prominent researchers of AI and digital narrative identity, wrote in 2012 that the discursive aspect of AI (such as applying intentionality through words like “knows,” “resists,” “frustration,” and “personality”) is an often neglected aspect that is just as pertinent as the technical underpinnings. “As part of a feedback loop, users’ collective experiences with intentional systems will shape our society’s dominant view of intentionality and intelligence, which in turn may be incorporated by AI researchers into their evolving formal definition of the key intentional terms.”

That is, interactions with and discussions about machine intelligence shape our views of human thought and action and, circularly, humanity’s own changing ideologies around intelligence again shape AI; what it means to think and act is up for debate. More recently, Elvia Wilk, writing in The Atlantic on Allado-McDowell’s work, asks, “Why do we obsessively measure AI’s ability to write like a person? Might it be nonhuman and creative?” What, she wonders, could we learn about our own consciousness if we were to answer this second question with maybe, or even yes?

This past year, Allado-McDowell released Pharmako-AI (2020), billed as “the first book to be written with emergent AI.” Divided into 17 chapters on themes such as AI ethics, ayahuasca rituals, cyberpunk, and climate change, it is perhaps one of the most coherent literary prose experiments completed with machine learning, working with OpenAI’s large language model GPT-3. Though the human inputs and GPT-3 outputs are distinguished by typeface, the reading experience slips into a linguistic uncanny valley: the certainty GPT-3 writes with and the way its prose is at once convincingly “human” and yet just off unsettle assumptions around language, literature, and thought, an unsettling furthered by the continuity of the “I” between Allado-McDowell and GPT-3.

. . . .

But as AI “thinking” reflects new capacities for human potential, it also reflects humanity’s limits; after all, machine learning is defined by the sources that train it. When Allado-McDowell points out the dearth of women and non-binary people mentioned by both themselves and by GPT-3, the machine responds with a poem that primarily refers to its “grandfather.” Allado-McDowell intervenes: “When I read this poem, I experience the absence of women and non-binary people.” “Why is it so hard to generate the names of women?” GPT asks, a few lines later.

Why indeed. Timnit Gebru, a prominent AI scientist and ethicist, was forced out of Google for a paper that criticized the company’s approach to AI large language models. She highlighted the ways these obscure systems could perpetuate racist and sexist biases, be environmentally harmful, and further homogenize language by privileging the text of those who already have the most power and access.

Link to the rest at The Literary Hub

One of the comments in the items PG looked at in connection with this post claimed that Pharmako-AI was not the first book written by GPT-3. The commenter claimed that GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire was the first GPT-3-authored book.

While looking for GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire on Amazon, PG found Sybil’s World: An AI Reimagines Herself and Her World Using GPT-3 and discovered that there was a sequel to GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire called Sub/Urban Butoh Fu: A CYOA Chaos Magick Grimoire and Oracle (Butoh Technomancy Book 2).

The Publishing Ecosystem in the Digital Era

From The Los Angeles Review of Books:

IN 1995, I WENT to work as a writer and editor for Book World, the then-standalone book-review section of The Washington Post. I left a decade later, two years before Amazon released the Kindle ebook reader. By then, mainstream news outlets like the Post were on the ropes, battered by what sociologist John B. Thompson, in Book Wars, calls “the digital revolution” and its erosion of print subscriptions and advertising revenue. The idea that a serious newspaper had to have a separate book-review section seems quaint now. Aside from The New York Times Book Review, most of Book World’s competitors have faded into legend, like the elves departing from Middle-earth at the end of The Lord of the Rings. Their age has ended, though the age of the book has not.

Nobody arrives better equipped than Thompson to map how the publishing ecosystem has persisted and morphed in the digital environment. An emeritus professor of sociology at the University of Cambridge and emeritus fellow at Jesus College, Cambridge, Thompson conducts his latest field survey of publishing through a rigorous combination of data analysis and in-depth interviews. Book Wars comes stuffed with graphs and tables as well as detailed anecdotes. The data component can get wearisome for a reader not hip-deep in the business, but it’s invaluable to have such thorough documentation of the digital publishing multiverse.

. . . .

One big question animates Thompson’s investigation: “So what happens when the oldest of our media industries collides with the great technological revolution of our time?” That sounds like hyperbole — book publishing hasn’t exactly stood still since Gutenberg. A lot happens in 500 years, even without computers. But for an industry built on the time-tested format of print books, the internet understandably looked and felt like an existential threat as well as an opportunity.

Early on in his study, Thompson neatly evokes the fear that accompanied the advent of ebooks. The shift to digital formats had already eviscerated the music industry; no wonder publishers felt queasy. As Thompson writes, “Were books heading in the same direction as CDs and vinyl LPs — on a precipitous downward slope and likely to be eclipsed by digital downloads? Was this the beginning of the end of the physical book?” That question has been asked over and over again for decades now, and the answer remains an emphatic No. (Note to pundits: Please resist the urge to write more “Print isn’t dead!” hot takes.) But publishers didn’t know that in the early digital days.

The words “revolution” and “disruption” get thrown around so often that they’ve lost their punch, but Thompson justifies his use of them here. He recalls the “dizzying growth” of digital books beginning in 2008, “the first full year of the Kindle.” That year alone, ebook sales for US trade titles added up to $69 million; by 2012, they had ballooned to $1.5 billion, “a 22-fold increase in just four years.”

Print, as usual, refused to be superseded. Despite their early boom, ebooks didn’t cannibalize the print market. Thompson uses data from the Association of American Publishers to show that ebooks plateaued at 23 to 24 percent of total book sales in the 2012–’14 period, then slipped to about 15 percent in 2017–’18. Print books, on the other hand, continue to account for the lion’s share of sales, with a low point of about 75 percent in 2012–’14, bouncing back to 80 to 85 percent of total sales in 2015–’16. (Thompson’s study stops before the 2020–’21 pandemic, but print sales have for the most part been strong in the COVID-19 era.)

For some high-consumption genres, like romance, the ebook format turned out to be a match made in heaven; Thompson notes that romance “outperforms every other category by a significant margin.” But readers in most genres have grown used to choosing among formats, and traditional publishers have for the most part proved able and willing to incorporate those formats into their catalogs. That’s a net gain both for consumer choice and for broader access to books.

. . . .

Thompson quotes an anonymous trade-publishing CEO: “The power of Amazon is the single biggest issue in publishing.”

It’s easy to see why. With its vast market reach and unprecedented access to customer data, Amazon has made itself indispensable to publishers, who rely on it both to drive sales (often at painfully deep discounts) and to connect with readers. For many of us, if a book’s not available on Amazon, it might as well not exist. “Given Amazon’s dominant position as a retailer of both print and ebooks and its large stock of information capital, publishers increasingly find themselves locked in a Faustian pact with their largest customer,” Thompson writes.

That pact has proven hard to break. “Today, Amazon accounts for around 45 percent of all print book sales in the US and more than 75 percent of all ebook unit sales, and for many publishers, around half — in some cases, more — of their sales are accounted for by a single customer, Amazon,” Thompson points out. That’s staggering.

Does Amazon care about books? Not in the way that publishers, authors, and readers do, but that doesn’t change the power dynamic. Amazon derives its power from market share, yes, but also from what Thompson calls “information capital” — namely the data it collects about its customers. That gives it an enormous advantage over publishers, whose traditional business approach prioritizes creative content and relationships with authors and booksellers.

Workarounds to Amazon exist, though not yet at scale. Just as authors have learned to connect with readers via email newsletters and social media, so have publishers been experimenting with direct outreach via digital channels. Email feels almost quaint, but done well it remains a simple and effective way to reach a target audience. Selling directly to readers means publishers can avoid the discounts and terms imposed on them by Amazon and other distributors.

. . . .

Authors can now sidestep literary gatekeepers, such as agents and acquiring editors, and build successful careers with the help of self-publishing platforms and outlets that didn’t exist 20 or even 10 years ago. Self-publishing has become respectable; we’ve traveled a long way from the days when book review editors wrote off self-published books as vanity press projects. Newspaper book sections have mostly vanished, but book commentary pops up all over the internet, in serious review outlets like this one and in the feeds of Instagram and TikTok influencers. It’s a #bookstagram as well as an NYTBR world now. To me, that feels like a win for books, authors, and readers.

. . . .

Some authors hit the big time in terms of sales and readers without relying on a traditional publisher. Thompson returns several times to the example of the software engineer-turned-writer Andy Weir, whose hit book The Martian (2011) got its start as serialized chapters published on his blog and delivered to readers via newsletter. (Newsletters represent another digital-publishing trend unlikely to disappear anytime soon.) “The astonishing success of The Martian — from blog to bestseller — epitomizes the paradox of the digital revolution in publishing: unprecedented new opportunities are opened up, both for individuals and for organizations, while beneath the surface the tectonic plates of the industry are shifting,” Thompson writes.

Link to the rest at The Los Angeles Review of Books

Book Wars

From The Wall Street Journal:

In 2000 the RAND Corporation invited a group of historians—including me—to address a newly pressing question: Would digital media revolutionize society as profoundly as Gutenberg and movable type? Two decades later, John Thompson’s answer is yes, but not entirely as predicted. And our forecasts were often wrong because we overlooked key variables: We cannot understand the impact of technologies “without taking account of the complex social processes in which these technologies were embedded and of which they were part.”

Mr. Thompson provides that context in “Book Wars” (Polity, 511 pages, $35), an expert diagnosis of publishers and publishing, robustly illustrated with charts, graphs, tables, statistics and case studies. An emeritus professor at Cambridge University, Mr. Thompson published an earlier dissection of that industry, “Merchants of Culture,” in 2010, but now he finds that capitalist landscape radically transformed.

Not long ago everyone thought (or feared) that ebooks would sweep the ink-and-paper book into the recycle bin of history. But they peaked in 2014 at just under 25% of U.S. book sales, then settled back to about 15% in the U.S. and roughly 5% in Western Europe. It turned out that the printed book had unique advantages (easy to navigate, no power source needed, works even if you drop it on the floor). Another consideration is that bookshelves stocked with physical books serve the essential purpose of advertising our literary tastes to visitors. And far from devastating the publishing industry, ebooks boosted publishers’ profits even as revenues remained more or less flat. (Compared with printed books, ebooks are cheaper to produce and distribute, and they don’t burden publishers with warehousing and returns.)

For anyone bewildered by the transformation of the book world, Mr. Thompson offers a pointed, thorough and business-literate survey. He tracks the arcane legal battles surrounding the creation of Google Books, and explains why the Justice Department filed an antitrust suit against Apple and the Big Five publishers, but not (so far) against Amazon. He rightly regrets the shrinkage of newspaper book reviewing: the first decade of the 21st century saw newspapers from Boston to San Diego pull back on book reviews. That said, Mr. Thompson could have devoted more attention to the rise of reader-written online literary criticism, a populist substitute for the Lionel Trillings and F.R. Leavises of the past.

In spite of worries that small independent booksellers would disappear, they are still with us. But they were challenged in the 1960s by the shopping-mall chains of B. Dalton and Waldenbooks, which were superseded by Barnes & Noble and Borders superstores. These in turn were eclipsed by Amazon (founded 1994), triumphing largely because it sold all books to everyone, everywhere. Though we romanticize corner bookstores, they were numerous only in the largest metropolitan centers. In 1928, a city like Cincinnati had seven bookshops. Small-town America bought books at department stores, at pharmacies, or nowhere.

Mr. Thompson insists that “the turbulence generated by the unfolding of the digital revolution in publishing was unprecedented. . . . Suddenly, the very foundations of an industry that had existed for more than 500 years were being called into question as never before.” I would be careful with the word “unprecedented.” Print-on-demand has been with us for some time: the Chinese did it for centuries with woodblocks. The modish practice of crowdsourcing to finance books has a precursor in 18th-century subscription publishing, as readers pledged in advance to buy a forthcoming book. Amazon today dominates bookselling, but Mudie’s Lending Library enjoyed an equally commanding position in Victorian Britain, and raised in its day serious concerns about corporate censorship. (Mudie’s puritanical acquisitions policies meant that novelists like George Meredith were penalized for honest treatment of sex.)

In fact, the 19th century witnessed a transformation of the book business as dizzying as our own: New reproduction technologies dramatically drove down the price of books and increased print runs by orders of magnitude, creating for the first time a global literary mass market, bringing Walter Scott to Japan and Harriet Beecher Stowe to Russia. Today, the absorption of family-owned publishers by conglomerates has raised questions about whether there is still a home for literary and controversial authors with limited popular appeal, but that change was complete before the full impact of digital media. If you’re worried about media concentration (and you should be), the fact remains that all the great Victorian novelists were published by a half-dozen London firms. The desktop computer has vastly expanded opportunities for self-publishers, but there were plenty of them in the past: think of Martin Luther, Walt Whitman, Leonard and Virginia Woolf or countless job-printed village poets and memoirists.

. . . .

While Mr. Thompson is entirely right to conclude that the transformation of publishing in the past 20 years has been bewildering, that’s nothing new. In a dynamic capitalist economy, the dust never settles.

Link to the rest at The Wall Street Journal (This should be a free link to the WSJ original. However, PG isn’t sure if there’s a limit on the number of times various visitors to TPV can use the free link and whether the link is geofenced for the US, North America, etc. If the link doesn’t work for you, PG apologizes for the WSJ paywall.)

And thanks for the tip from G and several others.

PG agrees that there have been several disruptive technology changes that have impacted the book business in the past.

However, he doesn’t think the WSJ reviewer gives adequate attention to the difference between the development of ebooks and the various disruptions of the printed-book world that preceded it.

No prior technology change immediately opened up the potential audience for a particular book or category of books the way ebooks have.

Absent Amazon’s establishment of different book “markets” – US, Canada, Britain, etc. – anybody in the world could buy and download an ebook from anyplace else in the world.

There’s a legal reason (among others) for Amazon’s multiple home pages for books in different countries: the right to publish and sell under an author’s copyright can be sliced and diced by national market. I can write a book, use a UK publisher for the UK market and an American publisher for the US market, with each publishing agreement setting bounds on where that publisher can publish and sell the book.

Side note: A long time ago, PG went through the process of signing up for an account on Amazon UK and did so with no problem. He never used the account, but wandered around among the British-English product descriptions and Pound-based prices enough to believe that, particularly for electronic goods, he could purchase and receive anything he liked there. From prior trips to Britain, PG knows his credit cards work just as well for spending pounds as they do for spending dollars.

All that said, any indie author knows how easy it is to publish an ebook simultaneously in every place where Amazon sells ebooks.

Other ebook distributors offer an even broader publish-everywhere feature. PG just checked, and Draft2Digital allows an indie author to publish all over the world because D2D has agreements with Rakuten Kobo, Scribd and Tolino to sell an indie author’s book in the zillions of places those retailers are available.

Rakuten Kobo lists its markets as Turkey, United States, United Kingdom, Canada, Japan, Brazil, Australia, New Zealand, France, Germany, Netherlands, Austria, Ireland, Luxembourg, Belgium, Italy, Portugal, Spain, South Africa, Philippines, Taiwan and Mexico, and PG bets readers in other countries can also access the company’s websites, so an indie author has a very easy path to publishing ebooks in each of those places.

So that’s why PG thinks the ebook revolution can’t be easily compared to any prior technology disruption that involved printed books.

Continuing on, after PG read the WSJ review of Book Wars, he immediately went to Amazon to buy the ebook.

BOOM!

Idiotic corporate publishing screwed everything up.

The hardcover edition of the book lists for $29.30 on Amazon and the ebook edition sells for $28.00!

$28 for an ebook! That’s a grand savings of $1.30 – less than 5 percent – off the hardcover price.

The publisher is Polity Publishing.

Per Wikipedia, Polity is an academic publisher in the social sciences and humanities that was established in 1984 and has “editorial offices” in “Cambridge (UK), Oxford (UK), and Boston (US),” plus something going on in New York City. Across those four offices, Polity has 39 employees (with no mention of how many are students or part-time contractors).

PG took a quick look via Google Maps Street View at Polity’s Boston office, located at 101 Station Landing, Medford, Massachusetts. Street View showed a multi-story, anonymous-looking modern building that could be an office building or an apartment building. PG had never heard of Medford and doesn’t know anything about the community, but on the map it doesn’t look terribly close to the parts of Boston with which PG has a tiny bit of familiarity.

So, PG doesn’t know how Mr. Thompson, the author of Book Wars, chose his publisher, but, in PG’s extraordinarily humble opinion, he made a giant mistake.

A Wall Street Journal review of a book like this should send sales through the roof. Per Amazon, Book Wars is currently ranked #24,220 in the Kindle Store.

Imagine how much better it would sell if it were offered at a reasonable price.