Where Is My Thinking Machine?

From The Wall Street Journal:

Amazement and alarm have been persistent features, the yin and yang, of discourse about the power of computers since the earliest days of electronic “thinking machines.”

In 1950, a year before the world’s first commercial computer was delivered, the pioneering computer scientist Alan Turing published an essay asking “Can machines think?” A few years later another pioneer, John McCarthy, coined the term “artificial intelligence”—AI. That’s when the trouble began.

The sloppy but clever term “artificial intelligence” has fueled misdirection and misunderstanding: Imagine, in the 1920s, calling the car an artificial horse, or the airplane an artificial bird, or the electric motor an artificial waterwheel.

“Computing” and even “supercomputing” are clear, even familiar terms. But the arrival of today’s machines that do more than calculate at blazing speeds—that instead engage in inference to recognize a face or answer a natural language question—constitutes a distinction with a difference, and raises practical and theoretical questions about the future uses of AI.

The amount of money chasing AI alone suggests something big is happening: annual global venture investing in AI startups has soared from $3 billion a decade ago to $80 billion in 2021. Over half of that went to U.S. firms, and about one-fourth to Chinese ones. More than six dozen AI startups have valuations north of $1 billion, chasing applications in every corner of the economy, from the Holy Grail of self-driving cars to healthcare and shopping, and from shipping to security.

Consider another metric of AI’s ascendance: An Amazon search for “artificial intelligence” yields 50,000 book titles. Joining that legion comes “The Age of AI and Our Human Future.” Its multiple authors—Henry Kissinger, the former secretary of state; Eric Schmidt, the former CEO of Google; and Daniel Huttenlocher, dean of MIT’s Schwarzman College of Computing—say their book is a distillation of group discussions about how AI will “soon affect nearly every field of human endeavor.”

“The Age of AI” doesn’t tackle how society has come to such a turning point. It poses more questions than it answers, and focuses, in the main, on news and disinformation, healthcare and national security, with a smattering about “human identity.” That most AI applications are still nascent is evident in the authors’ frequent retreat to “what if,” echoing similar discussions nearly everywhere.

The authors are animated by the sense many share that we’re on the cusp of a broad technological tipping point of deep economic and geopolitical consequence. “The Age of AI” makes the indisputable case that AI will add features to products and services that were heretofore impossible.

In an important nod to reality, the authors highlight “AI’s limits”: the problem of assembling adequate and useful datasets, and the “brittleness,” especially the “narrowness,” of AI—an AI that can drive a car can’t read a pathology slide, and vice versa.

However, AI’s narrowness is a feature, not a bug. It explains why hundreds of AI startups target narrow applications within a specific business domain, and why apps have been so astonishingly successful (consumers have access to more than 200 million apps, most of which are not games). Many are powered by hyper-narrow AI and enabled by a network of unprecedented reach: the Cloud.

The book’s discursive nature illuminates, if unintentionally, a common challenge in forecasting AI: It’s far easier today than in Turing’s time to drift into the hypothetical.

Link to the rest at The Wall Street Journal (PG apologizes for the paywall, but hasn’t figured out a way around it.)