Understanding humanoid robots

From TechCrunch:

Robots made their stage debut the day after New Year’s 1921. More than half-a-century before the world caught its first glimpse of George Lucas’ droids, a small army of silvery humanoids took to the stages of the First Czechoslovak Republic. They were, for all intents and purposes, humanoids: two arms, two legs, a head — the whole shebang.

Karel Čapek’s play, R.U.R. (Rossumovi Univerzální Roboti), was a hit. It was translated into dozens of languages and played across Europe and North America. The work’s lasting legacy, however, was its introduction of the word “robot.” The meaning of the term has evolved a good bit in the intervening century, as Čapek’s robots were more organic than machine.

Decades of science fiction have, however, ensured that the public image of robots hasn’t strayed too far from its origins. For many, the humanoid form is still the platonic robot ideal — it’s just that the state of technology hasn’t caught up to that vision. Earlier this week, Nvidia held its own on-stage robot parade at its GTC developer conference, as CEO Jensen Huang was flanked by images of a half-dozen humanoids.

While the concept of the general-purpose humanoid has, in essence, been around longer than the word “robot,” until recently its realization has seemed wholly out of reach. We’re very much not there yet, but for the first time, the concept has appeared over the horizon.

What is a “general-purpose humanoid?”

Before we dive any deeper, let’s get two key definitions out of the way. When we talk about “general-purpose humanoids,” both terms mean different things to different people. In conversation, most people take a Justice Potter Stewart “I know it when I see it” approach to both.

For the sake of this article, I’m going to define a general-purpose robot as one that can quickly pick up skills and essentially do any task a human can do. One of the big sticking points here is that multi-purpose robots don’t suddenly go general-purpose overnight.

Because it’s a gradual process, it’s difficult to say precisely when a system has crossed that threshold. There’s a temptation to go down a bit of a philosophical rabbit hole with that latter bit, but for the sake of keeping this article under book length, I’m going to go ahead and move on to the other term.

I received a bit of (largely good-natured) flak when I referred to Reflex Robotics’ system as a humanoid. People pointed out the plainly obvious fact that the robot doesn’t have legs. Putting aside for a moment that not all humans have legs, I’m fine calling the system a “humanoid” or, more specifically, a “wheeled humanoid.” In my estimation, it resembles the human form closely enough to fit the bill.

A while back, someone at Agility took issue when I called Digit “arguably a humanoid,” suggesting that there was nothing arguable about it. What’s clear is that robot isn’t as faithful an attempt to recreate the human form as some of the competition. I will admit, however, that I may be somewhat biased having tracked the robot’s evolution from its precursor Cassie, which more closely resembled a headless ostrich (listen, we all went through an awkward period).

Another element I tend to consider is the degree to which the humanlike form is used to perform humanlike tasks. This element isn’t absolutely necessary, but it’s an important part of the spirit of humanoid robots. After all, proponents of the form factor will quickly point out the fact that we’ve built our worlds around humans, so it makes sense to build humanlike robots to work in that world.

Adaptability is another key point used to defend the deployment of bipedal humanoids. Robots have had factory jobs for decades now, and the vast majority of them are single-purpose. That is to say, they were built to do a single thing very well a lot of times. This is why automation has been so well-suited for manufacturing — there’s a lot of uniformity and repetition, particularly in the world of assembly lines.

Brownfield vs. Greenfield

The terms “greenfield” and “brownfield” have been in common usage for several decades across various disciplines. The former is the older of the two, describing undeveloped land (quite literally, a green field). Coined in contrast to the earlier term, “brownfield” refers to development on existing sites. In the world of warehouses, it’s the difference between building something from scratch and working with something that’s already there.

There are pros and cons to both. Brownfields are generally more time- and cost-effective, as they don’t require starting from scratch, while greenfields afford the opportunity to build a site entirely to spec. Given infinite resources, most corporations will opt for a greenfield. Imagine the performance of a space built ground-up with automated systems in mind. That’s a pipe dream for most organizations, so when it comes time to automate, a majority of companies seek out brownfield solutions — doubly so when they’re first dipping their toes into the robotic waters.

Given that most warehouses are brownfield, it ought to come as no surprise that the same can be said for the robots designed for these spaces. Humanoids fit neatly into this category — in fact, in a number of respects, they are among the brownest of brownfield solutions. This gets back to the earlier point about building humanoid robots for their environments. You can safely assume that most brownfield factories were designed with human workers in mind. That often comes with elements like stairs, which present an obstacle for wheeled robots. How large that obstacle ultimately is depends on a lot of factors, including layout and workflow.

Baby Steps

Call me a wet blanket, but I’m a big fan of setting realistic expectations. I’ve been doing this job for a long time and have survived my share of hype cycles. There’s an extent to which they can be useful, in terms of building investor and customer interest, but it’s entirely too easy to fall prey to overpromises. This includes both stated promises around future functionality and demo videos.

I wrote about the latter last month in a post cheekily titled, “How to fake a robotics demo for fun and profit.” There are a number of ways to do this, including hidden teleoperation and creative editing. I’ve heard whispers that some firms are speeding up videos without disclosing it. In fact, that’s the origin of humanoid firm 1X’s name — all of their demos run at 1X speed.

Most in the space agree that disclosure is important — even necessary — on such products, but there aren’t strict standards in place. One could argue that you’re wading into a legal gray area if such videos play a role in convincing investors to plunk down large sums of money. At the very least, they set wildly unrealistic expectations among the public — particularly those who are inclined to take truth-stretching executives’ words as gospel.

That can only serve to harm those who are putting in the hard work while operating in reality with the rest of us. It’s easy to see how hope quickly diminishes when systems fail to live up to those expectations.

The timeline to real-world deployment contains two primary constraints. The first is mechatronic: i.e., what the hardware is capable of. The second is software and artificial intelligence. Without getting into a philosophical debate around what qualifies as artificial general intelligence (AGI) in robots, one thing we can certainly say is that progress has been — and will continue to be — gradual.

As Huang noted at GTC the other week, “If we specified AGI to be something very specific, a set of tests where a software program can do very well — or maybe 8% better than most people — I believe we will get there within five years.” That’s on the optimistic end of the timeline I’ve heard from most experts in the field. A range of five to 10 years seems common.

Before hitting anything resembling AGI, humanoids will start as single-purpose systems, much like their more traditional counterparts. Pilots are designed to prove out that these systems can do one thing well at scale before moving onto the next. Most people are looking at tote moving for that lowest-hanging fruit. Of course, your average Kiva/Locus AMR can move totes around all day, but those systems lack the mobile manipulators required to move payloads on and off themselves. That’s where robot arms and end effectors come in, whether or not they happen to be attached to something that looks human.

. . . .

Two legs to stand on

At this point, the clearest path to AGI should look familiar to anyone with a smartphone. Boston Dynamics’ Spot deployment provides a clear real-world example of how the app store model can work with industrial robots. While there’s a lot of compelling work being done in the world of robot learning, we’re a ways off from systems that can figure out new tasks and correct mistakes on the fly at scale. If only robotics manufacturers could leverage third-party developers in a manner similar to phonemakers.

Interest in the category has increased substantially in recent months, but speaking personally, the needle hasn’t moved too much in either direction for me since late last year. We’ve seen some absolutely killer demos, and generative AI presents a promising future. OpenAI is certainly hedging its bets, first investing in 1X and — more recently — Figure.

A lot of smart people have faith in the form factor, and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.

Link to the rest at TechCrunch

PG notes that the robot below is using an OpenAI model, a general-purpose AI system.

https://youtube.com/shorts/nmHzvQr3kYE?si=Q2KjwJXLINEtyiVe

On the Future Of Newspapers

From The Falls Church News-Press:

“The internet dissected your daily newspaper into its constituent parts, letting readers find the news they want without ever buying a paper or visiting a homepage – and handing the most lucrative part…, the advertising business, to companies such as Meta and Google that don’t produce news.”

In one succinct sentence, Washington Post opinion writer Megan McArdle told just about the whole story of the demise of local news in her column entitled, “The Great Age of Cord Cutting is Approaching Its End” published in the Post this Tuesday.

Our founder, owner and editor Nicholas F. Benton will be addressing the monthly luncheon meeting of the Falls Church Chamber of Commerce on just this reality, its implications and what can be done about it this coming Tuesday, February 20, at the Italian Cafe in Falls Church. He will bring his more than 33 years of experience making the Falls Church News-Press work for more than 1,700 consecutive weekly editions delivered to every household in The Little City to bear on this question that is vital to our democracy.

The landscape for local news in Northern Virginia has changed dramatically over those more than three decades, and the News-Press has endured to become just about the only general news source in the region that still comes out in print.

How to understand what that means for the community it serves, and for those who have lost such a benefit over the years, as well as how and what needs to happen to ensure it continues to get done will be our editor’s subject. The new book, “Life and Times of the Falls Church News-Press,” by the late Charlie Clark, will be available for sale as a resource at the talk, which will also be recorded.

A good newspaper is more than just a chronicle of events in a community; it also serves as a vital glue for the components that not only make up, but also seek to advance, a community’s ability to provide for its public’s needs, especially as they involve core human and democratic values. For Mr. Benton, this has taken the form of continually shining a light on the community’s needs for, among other things, smart development, affordable housing and, above all, education of the young.

The internet and strictly digital sources have unwittingly contributed to the undermining of this approach by shattering information into countless discrete categories, thus disabling the ability of a community’s citizens to function from the standpoint of an overview of these combined values and needs.

Benton and the News-Press since its founding in 1991 have operated from the standpoint of advocating for those who cannot advocate for themselves, and that has meant promoting education for the young by encouraging the kind of economic development that can pay for a quality educational system. It has meant taking sides in opposition to those who resist such developments for selfish reasons, be they big corporate interests or citizens against constructive change.

Link to the rest at The Falls Church News-Press

When PG was in high school, he was the high school sports reporter (and the only sports reporter) for a very small local newspaper that was mimeographed weekly and delivered free to all the mailboxes in town and the surrounding area.

He saw no conflict of interest in writing about games in which he had participated. However, he seldom mentioned his name, in large part because he was not a particularly outstanding player, even on teams comprised of small, slow white boys with a smattering of members of the local Sioux tribe.

As humble as it was, the newspaper, along with school activities, was about all that made the little town a community. When the local schools were closed and students were bussed to schools in a larger nearby town a few years after PG graduated, that little town began to lose population and has continued to decline into just a group of run-down and unoccupied houses.

PG never heard what happened to the lady who ran the newspaper.

What Is Disruptive Innovation?

From The Harvard Business Review:

The theory of disruptive innovation, introduced in these pages in 1995, has proved to be a powerful way of thinking about innovation-driven growth. Many leaders of small, entrepreneurial companies praise it as their guiding star; so do many executives at large, well-established organizations, including Intel, Southern New Hampshire University, and Salesforce.com.

Unfortunately, disruption theory is in danger of becoming a victim of its own success. Despite broad dissemination, the theory’s core concepts have been widely misunderstood and its basic tenets frequently misapplied. Furthermore, essential refinements in the theory over the past 20 years appear to have been overshadowed by the popularity of the initial formulation. As a result, the theory is sometimes criticized for shortcomings that have already been addressed.

There’s another troubling concern: In our experience, too many people who speak of “disruption” have not read a serious book or article on the subject. Too frequently, they use the term loosely to invoke the concept of innovation in support of whatever it is they wish to do. Many researchers, writers, and consultants use “disruptive innovation” to describe any situation in which an industry is shaken up and previously successful incumbents stumble. But that’s much too broad a usage.

The problem with conflating a disruptive innovation with any breakthrough that changes an industry’s competitive patterns is that different types of innovation require different strategic approaches. To put it another way, the lessons we’ve learned about succeeding as a disruptive innovator (or defending against a disruptive challenger) will not apply to every company in a shifting market. If we get sloppy with our labels or fail to integrate insights from subsequent research and experience into the original theory, then managers may end up using the wrong tools for their context, reducing their chances of success. Over time, the theory’s usefulness will be undermined.

This article is part of an effort to capture the state of the art. We begin by exploring the basic tenets of disruptive innovation and examining whether they apply to Uber. Then we point out some common pitfalls in the theory’s application, how these arise, and why correctly using the theory matters. We go on to trace major turning points in the evolution of our thinking and make the case that what we have learned allows us to more accurately predict which businesses will grow.

First, a quick recap of the idea: “Disruption” describes a process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses. Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents’ mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants’ offerings in volume, disruption has occurred. (See the exhibit “The Disruptive Innovation Model.”)

Is Uber a Disruptive Innovation?

Let’s consider Uber, the much-feted transportation company whose mobile application connects consumers who need rides with drivers who are willing to provide them. Founded in 2009, the company has enjoyed fantastic growth (it operates in hundreds of cities in 60 countries and is still expanding). It has reported tremendous financial success (the most recent funding round implies an enterprise value in the vicinity of $50 billion). And it has spawned a slew of imitators (other start-ups are trying to emulate its “market-making” business model). Uber is clearly transforming the taxi business in the United States. But is it disrupting the taxi business?

According to the theory, the answer is no. Uber’s financial and strategic achievements do not qualify the company as genuinely disruptive—although the company is almost always described that way. Here are two reasons why the label doesn’t fit.

Disruptive innovations originate in low-end or new-market footholds.

Disruptive innovations are made possible because they get started in two types of markets that incumbents overlook. Low-end footholds exist because incumbents typically try to provide their most profitable and demanding customers with ever-improving products and services, and they pay less attention to less-demanding customers. In fact, incumbents’ offerings often overshoot the performance requirements of the latter. This opens the door to a disrupter focused (at first) on providing those low-end customers with a “good enough” product.

In the case of new-market footholds, disrupters create a market where none existed. Put simply, they find a way to turn nonconsumers into consumers. For example, in the early days of photocopying technology, Xerox targeted large corporations and charged high prices in order to provide the performance that those customers required. School librarians, bowling-league operators, and other small customers, priced out of the market, made do with carbon paper or mimeograph machines. Then in the late 1970s, new challengers introduced personal copiers, offering an affordable solution to individuals and small organizations—and a new market was created. From this relatively modest beginning, personal photocopier makers gradually built a major position in the mainstream photocopier market that Xerox valued.

A disruptive innovation, by definition, starts from one of those two footholds. But Uber did not originate in either one. It is difficult to claim that the company found a low-end opportunity: That would have meant taxi service providers had overshot the needs of a material number of customers by making cabs too plentiful, too easy to use, and too clean. Neither did Uber primarily target nonconsumers—people who found the existing alternatives so expensive or inconvenient that they took public transit or drove themselves instead: Uber was launched in San Francisco (a well-served taxi market), and Uber’s customers were generally people already in the habit of hiring rides.

Uber has quite arguably been increasing total demand—that’s what happens when you develop a better, less-expensive solution to a widespread customer need. But disrupters start by appealing to low-end or unserved consumers and then migrate to the mainstream market. Uber has gone in exactly the opposite direction: building a position in the mainstream market first and subsequently appealing to historically overlooked segments.

Disruptive innovations don’t catch on with mainstream customers until quality catches up to their standards.

Disruption theory differentiates disruptive innovations from what are called “sustaining innovations.” The latter make good products better in the eyes of an incumbent’s existing customers: the fifth blade in a razor, the clearer TV picture, better mobile phone reception. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Disruptive innovations, on the other hand, are initially considered inferior by most of an incumbent’s customers. Typically, customers are not willing to switch to the new offering merely because it is less expensive. Instead, they wait until its quality rises enough to satisfy them. Once that’s happened, they adopt the new product and happily accept its lower price. (This is how disruption drives prices down in a market.)

Most of the elements of Uber’s strategy seem to be sustaining innovations. Uber’s service has rarely been described as inferior to existing taxis; in fact, many would say it is better. Booking a ride requires just a few taps on a smartphone; payment is cashless and convenient; and passengers can rate their rides afterward, which helps ensure high standards. Furthermore, Uber delivers service reliably and punctually, and its pricing is usually competitive with (or lower than) that of established taxi services. And as is typical when incumbents face threats from sustaining innovations, many of the taxi companies are motivated to respond. They are deploying competitive technologies, such as hailing apps, and contesting the legality of some of Uber’s services.

Why Getting It Right Matters

Readers may still be wondering, Why does it matter what words we use to describe Uber? The company has certainly thrown the taxi industry into disarray: Isn’t that “disruptive” enough? No. Applying the theory correctly is essential to realizing its benefits. For example, small competitors that nibble away at the periphery of your business very likely should be ignored—unless they are on a disruptive trajectory, in which case they are a potentially mortal threat. And both of these challenges are fundamentally different from efforts by competitors to woo your bread-and-butter customers.

As the example of Uber shows, identifying true disruptive innovation is tricky. Yet even executives with a good understanding of disruption theory tend to forget some of its subtler aspects when making strategic decisions. We’ve observed four important points that get overlooked or misunderstood:

1. Disruption is a process.

The term “disruptive innovation” is misleading when it is used to refer to a product or service at one fixed point, rather than to the evolution of that product or service over time. The first minicomputers were disruptive not merely because they were low-end upstarts when they appeared on the scene, nor because they were later heralded as superior to mainframes in many markets; they were disruptive by virtue of the path they followed from the fringe to the mainstream.

Because disruption can take time, incumbents frequently overlook disrupters.

Most every innovation—disruptive or not—begins life as a small-scale experiment. Disrupters tend to focus on getting the business model, rather than merely the product, just right. When they succeed, their movement from the fringe (the low end of the market or a new market) to the mainstream erodes first the incumbents’ market share and then their profitability. This process can take time, and incumbents can get quite creative in the defense of their established franchises. For example, more than 50 years after the first discount department store was opened, mainstream retail companies still operate their traditional department-store formats. Complete substitution, if it comes at all, may take decades, because the incremental profit from staying with the old model for one more year trumps proposals to write off the assets in one stroke.

The fact that disruption can take time helps to explain why incumbents frequently overlook disrupters. For example, when Netflix launched, in 1997, its initial service wasn’t appealing to most of Blockbuster’s customers, who rented movies (typically new releases) on impulse. Netflix had an exclusively online interface and a large inventory of movies, but delivery through the U.S. mail meant selections took several days to arrive. The service appealed to only a few customer groups—movie buffs who didn’t care about new releases, early adopters of DVD players, and online shoppers. If Netflix had not eventually begun to serve a broader segment of the market, Blockbuster’s decision to ignore this competitor would not have been a strategic blunder: The two companies filled very different needs for their (different) customers.

However, as new technologies allowed Netflix to shift to streaming video over the internet, the company did eventually become appealing to Blockbuster’s core customers, offering a wider selection of content with an all-you-can-watch, on-demand, low-price, high-quality, highly convenient approach. And it got there via a classically disruptive path. If Netflix (like Uber) had begun by launching a service targeted at a larger competitor’s core market, Blockbuster’s response would very likely have been a vigorous and perhaps successful counterattack. But failing to respond effectively to the trajectory that Netflix was on led Blockbuster to collapse.

2. Disrupters often build business models that are very different from those of incumbents.

Consider the healthcare industry. General practitioners operating out of their offices often rely on their years of experience and on test results to interpret patients’ symptoms, make diagnoses, and prescribe treatment. We call this a “solution shop” business model. In contrast, a number of convenient care clinics are taking a disruptive path by using what we call a “process” business model: They follow standardized protocols to diagnose and treat a small but increasing number of disorders.

One high-profile example of using an innovative business model to effect a disruption is Apple’s iPhone. The product that Apple debuted in 2007 was a sustaining innovation in the smartphone market: It targeted the same customers coveted by incumbents, and its initial success is likely explained by product superiority. The iPhone’s subsequent growth is better explained by disruption—not of other smartphones but of the laptop as the primary access point to the internet. This was achieved not merely through product improvements but also through the introduction of a new business model. By building a facilitated network connecting application developers with phone users, Apple changed the game. The iPhone created a new market for internet access and eventually was able to challenge laptops as mainstream users’ device of choice for going online.

Link to the rest at The Harvard Business Review

PG realized he has been throwing around disruptive innovation and related terms for quite a while, assuming (like a tech-head) that everyone understood the concept in the same way he did.

While PG doesn’t necessarily accept all of the factors the HBR describes as necessary to qualify as disruptive, he thinks the article will help educate visitors to TPV who have better things to do than to follow this and that business trend.

Palworld breaks 1 million concurrent players on Steam and rockets onto its top 5 all-time most played games list, blowing past Elden Ring and Cyberpunk 2077

From Windows Central:

  • Just a few days after its launch, Pocketpair’s new “Pokémon with guns” open world survival creature capture game Palworld is already breaking Steam records.
  • Specifically, it reached 730,000 concurrent players earlier this morning, making it the 10th most played game in Steam history.
  • That number continued to soar until Palworld peaked at 855,425 players, putting it within striking distance of Baldur’s Gate 3’s record of 875,343.
  • In addition to being available on Steam, you can also play Palworld on Xbox or on PC through the Microsoft Store client. Notably, it’s on Xbox Game Pass as well.
  • Update: Palworld’s numbers have only ballooned even further throughout the weekend, with the game now peaking at 1,281,669 players on Steam.

Palworld has only continued to climb upwards since I originally wrote this article, with the game breaking 1 million players and hitting a new peak of 1,281,669 on Steam this morning. That blows through both Elden Ring’s and Cyberpunk 2077’s records of 953,426 and 1,054,388 concurrent players, respectively, and puts Palworld in fifth place on Steam’s top 5 all-time most played games list.

Link to the rest at Windows Central

Fact Check: Has Palworld Copied Pokemon’s Designs?

From The Sports Rush:

After the upcoming Indiana Jones title was accused of copying Uncharted, the newly released Palworld has come under plagiarism allegations. Fans claim the game has plagiarized numerous features from Nintendo’s famous Pokemon series.

Pocket Pair developed Palworld, an open-world action-adventure game. The game has multiple unique creatures called “Pals” that players can battle and capture. Later, those creatures help you fight others, travel, and construct bases. Since its announcement, many people have compared the concept of Pals to the “Pocket Monsters” from the Pokemon franchise.

This 2024-released game became an instant hit on Steam, becoming the most-played game on the platform within 24 hours. Palworld’s gameplay is similar to Ark: Survival Evolved. However, it was difficult to overlook the similarities with Game Freak’s masterwork. Fans even analyzed how the Pals possibly stole Pokemon’s designs. Because of these issues, many fans nicknamed the game “Pokemon with Guns.”

Pocket Pair’s history with generative AI has worsened the situation

The case of Palworld plagiarising Pokemon worsened when the developer Pocket Pair’s relationship with generative AI surfaced. The studio previously released a game called AI: Art Imposter, an AI drawing party game. Players can instruct the AI to make images, creating beautiful artwork without requiring any artistic skill.

Furthermore, Pocket Pair CEO Takuro Mizobe has complimented generative AI, recognizing its enormous potential. In an old tweet, Mizobe stated that generative AI technology might one day be powerful enough to make art without violating copyright laws. Many artists have lately been denouncing AI for taking over their work and for exploiting their artworks without permission to train AI models.

Link to the rest at The Sports Rush and thanks to F. for both tips.

PG says that more than one disruptive technology has resulted in a lot of thrown elbows by an upset incumbent.

All the Jobs AI Is Coming for, According to a UK Study

From Lifehacker:

The question of whether AI will eventually take jobs away from us meatbags is nothing new. However, following ChatGPT’s launch late last year, the speed at which AI has caught on has surprised almost everybody, even those working in the space. And far from a question to consider in the far-off (or even near-term) future, jobs are already being affected: Some layoffs this year came due to companies believing AI could replace certain roles, while other companies froze hiring for similar reasons.

So, how do you know if your job is one of the ones at risk? A recent study could give you the answer (and you might not like it).

This UK study reveals the jobs “most exposed” to AI—and what that means

Assessing the random actions of various companies and getting lost in speculation do us no good. For a substantive and thoughtful discussion of the topic, there is already traditional research underway into how AI will affect the job market, including this recent study out of the U.K. The study, developed by the UK’s Department for Education, estimates that 10–30% of jobs are automatable with AI—which, depending on your general outlook on AI, may sound like a lot, or less than you’d expect.

The study investigated the job functions and qualifications for various sectors of the workforce, looking at whether the following ten AI applications could aid in those jobs:

  • Abstract strategy games
  • Real-time video games
  • Image recognition
  • Visual question answering
  • Image generation
  • Reading comprehension
  • Language modeling
  • Translation
  • Speech recognition
  • Instrumental track recognition

Depending on how relevant each of these ten functions was to a particular role, the study generated an AI Occupational Exposure (AIOE) score for the role. The higher the score, the more “exposure” that role may have to artificial intelligence.
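
To make the scoring idea concrete, here is a minimal sketch of how a relevance-weighted exposure score along these lines might be computed. The ten applications come from the study, but the 0-to-1 relevance ratings, the example role, and the simple averaging are illustrative assumptions, not the Department for Education's actual methodology.

```python
# A toy "AI exposure" score: average how relevant each of the study's ten
# AI applications is to a given role. The scale and ratings below are
# invented for illustration; the real AIOE methodology is more involved.

AI_APPLICATIONS = [
    "abstract strategy games", "real-time video games", "image recognition",
    "visual question answering", "image generation", "reading comprehension",
    "language modeling", "translation", "speech recognition",
    "instrumental track recognition",
]

def exposure_score(relevance: dict) -> float:
    """Average relevance across all ten applications, where each rating runs
    from 0.0 (not relevant to the role) to 1.0 (highly relevant).
    Applications that are not rated count as 0.0."""
    total = sum(relevance.get(app, 0.0) for app in AI_APPLICATIONS)
    return total / len(AI_APPLICATIONS)

# Hypothetical ratings for a payroll-clerk-style role.
payroll_clerk = {
    "reading comprehension": 0.9,
    "language modeling": 0.8,
    "speech recognition": 0.4,
    "translation": 0.2,
}

print(f"Toy exposure score: {exposure_score(payroll_clerk):.2f}")  # prints 0.23
```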

In the initial findings, the study determined that “professional occupations,” including sectors like finance, law, and business management, tended to be more exposed to AI. In fact, it specifically found that the finance and insurance sectors were the most exposed. Building off this discovery, it seems the more advanced the qualifications necessary for a role, the more AI exposure that role tends to have. In general, if your job requires more education and more advanced training, chances are it pairs well with AI.

The reverse is true, of course—except for security guards, interestingly enough. The study says security technology has emerged so rapidly that, although the role requires little education and work experience, it is more exposed to AI than other jobs of its kind.

None of this is necessarily a bad thing. As the study points out, the International Labor Organization has found most jobs are only partially exposed to AI, so the odds are decent that employees in these roles will benefit from AI exposure rather than have their jobs fully replaced by the technology.

Which jobs are most exposed to AI

Taking all this into consideration, the study breaks down the top 20 occupation types most exposed to AI, as well as those most exposed to large language models (LLMs). It’s a long list, including consultants, telephone salespersons, psychologists, legal professionals, teachers, and payroll managers.

As stated above, the study finds that finance and insurance are the most exposed to AI of any job sector. The other most exposed sectors include information and communication; professional, scientific and technical; property; public administration and defense; and education.

Just as interesting as the list of occupation types most exposed is the list of those least exposed. Many of these roles require manual labor that cannot be replicated by AI or technology in general, such as sports players, roofers, fork-lift truck drivers, painters, window cleaners, and bricklayers:

. . . .

Will AI truly replace any jobs, according to the study?

Interestingly enough, the study is almost exclusively focused on AI exposure, rather than on jobs threatened by the technology. That said, it does have a list of 16 job types that are considered “high automation occupations,” which a pessimist could infer to mean jobs that could one day be replaced by automation.

  • Authors, writers and translators
  • Bank and post office clerks
  • Bookkeepers, payroll managers and wages clerks
  • Brokers
  • Call and contact centre occupations
  • Customer service occupations n.e.c.
  • Finance officers
  • Financial administrative occupations n.e.c
  • Human resources administrative occupations
  • Librarians
  • Market research interviewers
  • Other administrative occupations n.e.c.
  • Pensions and insurance clerks and assistants
  • Telephone salespersons
  • Travel agents
  • Typists and related keyboard occupations

You might notice some overlap between this list and the list of jobs most exposed to AI. That’s because the study notes that these jobs all have high AIOE scores, both for exposure to AI and LLMs.

Link to the rest at Lifehacker

How Collaborating With Artificial Intelligence Could Help Writers of the Future

From The Literary Hub:

Art has long been claimed as a final frontier for automation—a field seen as so ineluctably human that AI may never master it. But as robots paint self-portraits, machines overtake industries, and natural language processors write New York Times columns, this long-held belief could be on the way out.

Computational literature or electronic literature—that is, literature that makes integral use of or is generated by digital technology—is hardly new. Alison Knowles used the programming language FORTRAN to write poems in 1967 and a novel allegedly written by a computer was printed as early as 1983. Universities have had digital language arts departments since at least the 90s. One could even consider the mathematics-inflected experiments of Oulipo as a precursor to computational literature, and they’re experiments that computers have made more straightforward. Today, indie publishers offer remote residencies in automated writing and organizations like the Electronic Literature Organization and the Red de Literatura Electrónica Latinoamericana hold events across the world. NaNoGenMo—National Novel Generation Month—just concluded its sixth year this April.

As technology advances, headlines express wonder at books co-written by AI advancing in literary competitions and automated “mournful” poetry inspired by Romance novels—with such resonant lines as “okay, fine. yes, right here. no, not right now” and “i wanted to kill him. i started to cry.” We can read neo-Shakespeare (“And the sky is not bright to behold yet: / Thou hast not a thousand days to tell me thou art beautiful.”), and Elizabeth Bishop and Kafka revised by a machine. One can purchase sci-fi novels composed, designed, blurbed, and priced by AI. Google’s easy-to-use Verse by Verse promises users an “AI-powered muse that helps you compose poetry inspired by classic American poets.” If many of these examples feel gimmicky, it’s because they are. However, that doesn’t preclude AI literature that, in the words of poet, publisher, and MIT professor Nick Montfort, “challenges the way [one] read[s] and offers new ways to think about language, literature, and computation.”

. . . .

Ross Goodwin’s 1 the Road (2018) is often described as one of the first novels written completely by AI. To read it like a standard novel wouldn’t get one far, though whether that says more about this text or the traditional novel could be debated. Much of the book comprises timestamps, location data, mentions of businesses and billboards and barns—all information collected from Four Square data, a camera, GPS, and other inputs. But the computer also generated characters: the painter, the children. There is dialogue; there are tears. There are some evocative, if confused, descriptions: “The sky is blue, the bathroom door and the beam of the car ride high up in the sun. Even the water shows the sun” or “A light on the road was the size of a door, and the wind was still so strong that the sun struck the bank. Trees in the background came from the streets, and the sound of the door was falling in the distance.” There is a non-sequitur reference to a Nazi and dark lines like “35.416002034 N, -77.999832991 W, at 164.85892916 feet above sea level, at 0.0 miles per hour, in the distance, the prostitutes stand as an artist seen in the parking lot with its submissive characters and servants.”

K Allado-McDowell, who in their role with the Artist + Machine Intelligence program at Google supported 1 the Road, argued in their introduction to the text that 1 the Road represented a kind of late capitalist literary road trip, where instead of writing under the influence of amphetamines or LSD, the machine tripped on an “automated graphomania,” evincing what they more recently described to me as a “dark, normcore-cyberpunk experience.”

To say 1 the Road was entirely written by AI is a bit disingenuous. Not because it wasn’t machine-generated, but rather because Goodwin made curatorial choices throughout the project, including the corpus the system was fed (texts like The Electric Kool-Aid Acid Test, Hell’s Angels, and, of course, On the Road), the surveillance camera mounted on the Cadillac that fed the computer images, and the route taken. Goodwin, who is billed as the book’s “writer of writer,” leans into the questions of authorship that this process raised, asking: is the car the writer? The road? The AI? Himself? “That uncertainty [of the manuscript’s author] may speak more to the anthropocentric nature of our language than the question of authorship itself,” he writes.

AI reconfigures how we consider the role and responsibilities of the author or artist. Prominent researchers of AI and digital narrative identity D. Fox Harrell and Jichen Zhu wrote in 2012 that the discursive aspect of AI (such as applying intentionality through words like “knows,” “resists,” “frustration,” and “personality”) is an often neglected but equally pertinent aspect as the technical underpinnings. “As part of a feedback loop, users’ collective experiences with intentional systems will shape our society’s dominant view of intentionality and intelligence, which in turn may be incorporated by AI researchers into their evolving formal definition of the key intentional terms.”

That is, interactions with and discussions about machine intelligence shape our views of human thought and action and, circularly, humanity’s own changing ideologies around intelligence again shape AI; what it means to think and act is up for debate. More recently, Elvia Wilk, writing in The Atlantic on Allado-McDowell’s work, asks, “Why do we obsessively measure AI’s ability to write like a person? Might it be nonhuman and creative?” What, she wonders, could we learn about our own consciousness if we were to answer this second question with maybe, or even yes?

This past year, Allado-McDowell released Pharmako-AI (2020), billed as “the first book to be written with emergent AI.” Divided into 17 chapters on themes such as AI ethics, ayahuasca rituals, cyberpunk, and climate change, it is perhaps one of the most coherent literary prose experiments completed with machine learning, working with OpenAI’s large language model GPT-3. Though the human inputs and GPT-3 outputs are distinguished by typeface, the reading experience slips into a linguistic uncanny valley: the certainty GPT-3 writes with and the way its prose is at once convincingly “human” yet just off unsettle assumptions around language, literature, and thought, an unsettling furthered by the continuity of the “I” between Allado-McDowell and GPT-3.

. . . .

But as AI “thinking” reflects new capacities for human potential, it also reflects humanity’s limits; after all, machine learning is defined by the sources that train it. When Allado-McDowell points out the dearth of women and non-binary people mentioned by both themselves and by GPT-3, the machine responds with a poem that primarily refers to its “grandfather.” Allado-McDowell intervenes: “When I read this poem, I experience the absence of women and non-binary people.” “Why is it so hard to generate the names of women?” GPT asks, a few lines later.

Why indeed. Timnit Gebru, a prominent AI scientist and ethicist, was forced out of Google for a paper that criticized the company’s approach to AI large language models. She highlighted the ways these obscure systems could perpetuate racist and sexist biases, be environmentally harmful, and further homogenize language by privileging the text of those who already have the most power and access.

Link to the rest at The Literary Hub

One of the comments in the items PG looked at in connection with this post claimed that Pharmako-AI was not the first book written by GPT-3. The commenter claimed that GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire was the first GPT-3-authored book.

While looking for GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire on Amazon, PG found Sybil’s World: An AI Reimagines Herself and Her World Using GPT-3 and discovered that there was a sequel to GPT-3 Techgnosis; A Chaos Magick Butoh Grimoire called Sub/Urban Butoh Fu: A CYOA Chaos Magick Grimoire and Oracle (Butoh Technomancy Book 2).

The Publishing Ecosystem in the Digital Era

From The Los Angeles Review of Books:

IN 1995, I WENT to work as a writer and editor for Book World, the then-standalone book-review section of The Washington Post. I left a decade later, two years before Amazon released the Kindle ebook reader. By then, mainstream news outlets like the Post were on the ropes, battered by what sociologist John B. Thompson, in Book Wars, calls “the digital revolution” and its erosion of print subscriptions and advertising revenue. The idea that a serious newspaper had to have a separate book-review section seems quaint now. Aside from The New York Times Book Review, most of Book World’s competitors have faded into legend, like the elves departing from Middle-earth at the end of The Lord of the Rings. Their age has ended, though the age of the book has not.

Nobody arrives better equipped than Thompson to map how the publishing ecosystem has persisted and morphed in the digital environment. An emeritus professor of sociology at the University of Cambridge and emeritus fellow at Jesus College, Cambridge, Thompson conducts his latest field survey of publishing through a rigorous combination of data analysis and in-depth interviews. Book Wars comes stuffed with graphs and tables as well as detailed anecdotes. The data component can get wearisome for a reader not hip-deep in the business, but it’s invaluable to have such thorough documentation of the digital publishing multiverse.

. . . .

One big question animates Thompson’s investigation: “So what happens when the oldest of our media industries collides with the great technological revolution of our time?” That sounds like hyperbole — book publishing hasn’t exactly stood still since Gutenberg. A lot happens in 500 years, even without computers. But for an industry built on the time-tested format of print books, the internet understandably looked and felt like an existential threat as well as an opportunity.

Early on in his study, Thompson neatly evokes the fear that accompanied the advent of ebooks. The shift to digital formats had already eviscerated the music industry; no wonder publishers felt queasy. As Thompson writes, “Were books heading in the same direction as CDs and vinyl LPs — on a precipitous downward slope and likely to be eclipsed by digital downloads? Was this the beginning of the end of the physical book?” That question has been asked over and over again for decades now, and the answer remains an emphatic No. (Note to pundits: Please resist the urge to write more “Print isn’t dead!” hot takes.) But publishers didn’t know that in the early digital days.

The words “revolution” and “disruption” get thrown around so often that they’ve lost their punch, but Thompson justifies his use of them here. He recalls the “dizzying growth” of digital books beginning in 2008, “the first full year of the Kindle.” That year alone, ebook sales for US trade titles added up to $69 million; by 2012, they had ballooned to $1.5 billion, “a 22-fold increase in just four years.”

Print, as usual, refused to be superseded. Despite their early boom, ebooks didn’t cannibalize the print market. Thompson uses data from the Association of American Publishers to show that ebooks plateaued at 23 to 24 percent of total book sales in the 2012–’14 period, then slipped to about 15 percent in 2017–’18. Print books, on the other hand, continue to account for the lion’s share of sales, with a low point of about 75 percent in 2012–’14, bouncing back to 80 to 85 percent of total sales in 2015–’16. (Thompson’s study stops before the 2020–’21 pandemic, but print sales have for the most part been strong in the COVID-19 era.)

For some high-consumption genres, like romance, the ebook format turned out to be a match made in heaven; Thompson notes that romance “outperforms every other category by a significant margin.” But readers in most genres have grown used to choosing among formats, and traditional publishers have for the most part proved able and willing to incorporate those formats into their catalogs. That’s a net gain both for consumer choice and for broader access to books.

. . . .

Thompson quotes an anonymous trade-publishing CEO: “The power of Amazon is the single biggest issue in publishing.”

It’s easy to see why. With its vast market reach and unprecedented access to customer data, Amazon has made itself indispensable to publishers, who rely on it both to drive sales (often at painfully deep discounts) and to connect with readers. For many of us, if a book’s not available on Amazon, it might as well not exist. “Given Amazon’s dominant position as a retailer of both print and ebooks and its large stock of information capital, publishers increasingly find themselves locked in a Faustian pact with their largest customer,” Thompson writes.

That pact has proven hard to break. “Today, Amazon accounts for around 45 percent of all print book sales in the US and more than 75 percent of all ebook unit sales, and for many publishers, around half — in some cases, more — of their sales are accounted for by a single customer, Amazon,” Thompson points out. That’s staggering.

Does Amazon care about books? Not in the way that publishers, authors, and readers do, but that doesn’t change the power dynamic. Amazon derives its power from market share, yes, but also from what Thompson calls “information capital” — namely the data it collects about its customers. That gives it an enormous advantage over publishers, whose traditional business approach prioritizes creative content and relationships with authors and booksellers.

Workarounds to Amazon exist, though not yet at scale. Just as authors have learned to connect with readers via email newsletters and social media, so have publishers been experimenting with direct outreach via digital channels. Email feels almost quaint, but done well it remains a simple and effective way to reach a target audience. Selling directly to readers means publishers can avoid the discounts and terms imposed on them by Amazon and other distributors.

. . . .

Authors can now sidestep literary gatekeepers, such as agents and acquiring editors, and build successful careers with the help of self-publishing platforms and outlets that didn’t exist 20 or even 10 years ago. Self-publishing has become respectable; we’ve traveled a long way from the days when book review editors wrote off self-published books as vanity press projects. Newspaper book sections have mostly vanished, but book commentary pops up all over the internet, in serious review outlets like this one and in the feeds of Instagram and TikTok influencers. It’s a #bookstagram as well as an NYTBR world now. To me, that feels like a win for books, authors, and readers.

. . . .

Some authors hit the big time in terms of sales and readers without relying on a traditional publisher. Thompson returns several times to the example of the software engineer-turned-writer Andy Weir, whose hit book The Martian (2011) got its start as serialized chapters published on his blog and delivered to readers via newsletter. (Newsletters represent another digital-publishing trend unlikely to disappear anytime soon.) “The astonishing success of The Martian — from blog to bestseller — epitomizes the paradox of the digital revolution in publishing: unprecedented new opportunities are opened up, both for individuals and for organizations, while beneath the surface the tectonic plates of the industry are shifting,” Thompson writes.

Link to the rest at The Los Angeles Review of Books

Book Wars

From The Wall Street Journal:

In 2000 the RAND Corporation invited a group of historians—including me—to address a newly pressing question: Would digital media revolutionize society as profoundly as Gutenberg and movable type? Two decades later, John Thompson’s answer is yes, but not entirely as predicted. And our forecasts were often wrong because we overlooked key variables: We cannot understand the impact of technologies “without taking account of the complex social processes in which these technologies were embedded and of which they were part.”

Mr. Thompson provides that context in “Book Wars” (Polity, 511 pages, $35), an expert diagnosis of publishers and publishing, robustly illustrated with charts, graphs, tables, statistics and case studies. An emeritus professor at Cambridge University, Mr. Thompson published an earlier dissection of that industry, “Merchants of Culture,” in 2010, but now he finds that capitalist landscape radically transformed.

Not long ago everyone thought (or feared) that ebooks would sweep the ink-and-paper book into the recycle bin of history. But they peaked in 2014 at just under 25% of U.S. book sales, then settled back to about 15% in the U.S. and roughly 5% in Western Europe. It turned out that the printed book had unique advantages (easy to navigate, no power source needed, works even if you drop it on the floor). Another consideration is that bookshelves stocked with physical books serve the essential purpose of advertising our literary tastes to visitors. And far from devastating the publishing industry, ebooks boosted their profits even as their revenues remained more or less flat. (Compared to printed books, they are cheaper to produce and distribute, and they don’t burden publishers with warehousing and returns.)

For anyone bewildered by the transformation of the book world, Mr. Thompson offers a pointed, thorough and business-literate survey. He tracks the arcane legal battles surrounding the creation of Google Books, and explains why the Justice Department filed an antitrust suit against Apple and the Big Five publishers, but not (so far) against Amazon. He rightly regrets the shrinkage of newspaper book reviewing: the first decade of the 21st century saw newspapers from Boston to San Diego pull back on book reviews. That said, Mr. Thompson could have devoted more attention to the rise of reader-written online literary criticism, a populist substitute for the Lionel Trillings and F.R. Leavises of the past.

In spite of worries that small independent booksellers would disappear, they are still with us. But they were challenged in the 1960s by the shopping-mall chains of B. Dalton and Waldenbooks, which were superseded by Barnes & Noble and Borders superstores. These in turn were eclipsed by Amazon (founded 1994), triumphing largely because it sold all books to everyone, everywhere. Though we romanticize corner bookstores, they were numerous only in the largest metropolitan centers. In 1928, a city like Cincinnati had seven bookshops. Small-town America bought books at department stores, at pharmacies, or nowhere.

Mr. Thompson insists that “the turbulence generated by the unfolding of the digital revolution in publishing was unprecedented. . . . Suddenly, the very foundations of an industry that had existed for more than 500 years were being called into question as never before.” I would be careful with the word “unprecedented.” Print-on-demand has been with us for some time: the Chinese did it for centuries with woodblocks. The modish practice of crowdsourcing to finance books has a precursor in 18th-century subscription publishing, as readers pledged in advance to buy a forthcoming book. Amazon today dominates bookselling, but Mudie’s Lending Library enjoyed an equally commanding position in Victorian Britain, and raised in its day serious concerns about corporate censorship. (Mudie’s puritanical acquisitions policies meant that novelists like George Meredith were penalized for honest treatment of sex.)

In fact, the 19th century witnessed a transformation of the book business as dizzying as our own: New reproduction technologies dramatically drove down the price of books and increased print runs by orders of magnitude, creating for the first time a global literary mass market, bringing Walter Scott to Japan and Harriet Beecher Stowe to Russia. Today, the absorption of family-owned publishers by conglomerates has raised questions about whether there is still a home for literary and controversial authors with limited popular appeal, but that change was complete before the full impact of digital media. If you’re worried about media concentration (and you should be), the fact remains that all the great Victorian novelists were published by a half-dozen London firms. The desktop computer has vastly expanded opportunities for self-publishers, but there were plenty of them in the past: think of Martin Luther, Walt Whitman, Leonard and Virginia Woolf or countless job-printed village poets and memoirists.

. . . .

While Mr. Thompson is entirely right to conclude that the transformation of publishing in the past 20 years has been bewildering, that’s nothing new. In a dynamic capitalist economy, the dust never settles.

Link to the rest at The Wall Street Journal (This should be a free link to the WSJ original. However, PG isn’t sure if there’s a limit on the number of times various visitors to TPV can use the free link and whether the link is geofenced for the US, North America, etc. If the link doesn’t work for you, PG apologizes for the WSJ paywall.)

And thanks for the tip from G and several others.

PG agrees that there have been several disruptive technology changes that have impacted the book business in the past.

However, he doesn’t think that the WSJ reviewer gives adequate attention to the difference between the development of ebooks vs. the various disruptions of the printed book world that preceded it.

No prior technology change immediately opened up the potential audience for a particular book or a particular category of books like ebooks has.

Absent Amazon’s establishment of different book “markets” – US, Canada, Britain, etc., etc., anybody in the world can buy and download an ebook from anyplace else in the world.

There’s a legal reason (among others) for Amazon’s multiple home pages for books in different countries – the right to publish and sell under an author’s copyright can be sliced and diced by national market. I can write a book and use a UK publisher to publish to the UK market and an American publisher to publish to the US market with each publishing agreement setting bounds on where the publisher can publish and sell the book.

Side note: A long time ago, PG went through the process of signing up for an account on Amazon UK and did so with no problem. He never used the account, but wandered around among the British-English product descriptions and Pound-based prices enough to believe that, particularly for electronic goods, he could purchase and receive anything he liked there. From prior trips to Britain, PG knows his credit cards work just as well for spending pounds as they do for spending dollars.

All that said, any indie author knows how easy it is to simultaneously publish an ebook every place where Amazon sells ebooks.

Other ebook distributors also offer an even broader publish-everywhere feature. PG just checked, and Draft2Digital allows an indie author to publish all over the world through D2D, because D2D has agreements with Rakuten Kobo, Scribd, and Tolino to sell an indie author’s books in the zillions of places where they’re available.

Rakuten Kobo lists its markets as Turkey, United States, United Kingdom, Canada, Japan, Brazil, Australia, New Zealand, France, Germany, Netherlands, Austria, Ireland, Luxembourg, Belgium, Italy, Portugal, Spain, South Africa, Philippines, Taiwan and Mexico, and PG bets readers in other countries can also access the company’s websites, so an indie author has a very easy path to publishing ebooks in each of those places.

So that’s why PG thinks the ebook revolution can’t be easily compared to any prior technology disruption that involved printed books.

Continuing on, after PG read the WSJ review of Book Wars, he immediately went to Amazon to buy the ebook.

BOOM!

Idiotic corporate publishing screwed everything up.

The hardcover edition of the book lists for $29.30 on Amazon and the ebook edition sells for $28.00!

$28 for an ebook!

The publisher is Polity Publishing.

Per Wikipedia, Polity is an academic publisher in the social sciences and humanities that was established in 1984 and has “editorial offices” in “Cambridge (UK), Oxford (UK), and Boston (US)” plus it also has something going in New York City. In four offices, Polity has 39 employees (no mention of how many are student employees or part-time contractors).

PG took a quick look via Google Maps Streetview at Polity’s Boston office, located at 101 Station Landing, Medford, Massachusetts. Streetview showed a photo of a multi-story anonymous-looking modern building that could be an office building or an apartment building. PG had never heard of Medford and doesn’t know anything about the community, but on the map, it doesn’t look terribly close to the parts of Boston with which PG has a tiny bit of familiarity.

So, PG doesn’t know how Mr. Thompson, the author of Book Wars chose his publisher, but, in PG’s extraordinarily humble opinion, he made a giant mistake.

A Wall Street Journal review of a book like this should send sales through the roof. Per Amazon, Book Wars is currently ranked #24,220 in the Kindle Store.

Imagine how much better it would sell if it was offered at a reasonable price.