Tool or Terror? Looking to Literature to Better Understand Artificial Intelligence

From The Literary Hub:

“The Algorithm knew the timing of our periods. It knew when and if we’d marry,” begins “The Future Is a Click Away,” a curious short story in Allegra Hyde’s new collection, The Last Catastrophe. “It knew how we’d die… It knew what seemed unknowable: the hidden chambers of our hearts. When it sent us tampons in the mail, we took them. We paid.”

In an arrestingly quirky first paragraph, Hyde sets up the central conceit of the story: in an unspecified future, humans live in a world where something known only as “the Algorithm” sends them packages—often twice daily—that they have not ordered (unlike, say, on Amazon) but that seem to uncannily reflect their needs, as well as their budgets, for the Algorithm usually only sends packages that each person can afford. It’s a playful satire of artificial intelligence and of corporate surveillance of our lives—one that seems funny until it isn’t, for it hits all too close to home.

The way the packages appear to reflect people’s needs truly is uncanny, to the point that the Algorithm begins to seem like a soothsayer, an omniscient presence that knows the destinies of everyone in its seemingly infinite delivery radius. If an item it sends isn’t immediately necessary, most people still keep it, believing it will become just that—and it often does.

One character receives a set of scouring sponges, which she scoffs at—already having regular sponges in abundance—until she burns the lasagna that night and realizes the scouring was necessary. Another, Anastasia, receives an ankle brace, despite having no immediate injury—until she goes on a hike that week and sprains her ankle. “Was the prediction predicated on a kink in Anastasia’s posture—the reality of weakening cartilage embedded in a lifetime cross-section of bathroom selfies?” the narrators wonder. “Or was there an air of recklessness in her email sign-offs that week (ttyl, Anna)?”

The questions are humorous, but they reflect the all-encompassing gaze of the story’s algorithm, combing through all aspects of people’s lives—and the justifications people invent. Is it digital divination, the futures determined by remarkably powerful artificial intelligence? Or are the items just random, with the characters subconsciously fulfilling their package prophecies by doing things that make the items fit their lives? “In the end,” the narrators say, “the Algorithm’s methods didn’t matter so long as she got what she needed.”

The characters must choose to accept and pay for the items, as the majority of people do, or they can return them—though the latter is so culturally rare as to seem gauche, even a touch blasphemous. I use the term “blasphemous” because accepting the Algorithm’s packages quickly takes on the quality of a religion.

“The Algorithm works in mysterious ways,” Hyde writes, parodying a common theistic catchphrase—but it’s also true, for no one seems to know how the Algorithm really works. “Unbelievers,” the chorus of narrators deems the odd few who return their packages, like Inez, a woman in Denver who prefers doing things on her own and who functions as the story’s central apostate, rejecting every package she receives.

Like God, the Algorithm’s origins are never really explained; the believers just accept that it’s there, deeply attuned to their personal needs, offering them a capitalistic heaven on Earth if they accept the simple dogma of its clairvoyant deliveries. (And if they pay four annual installments of $39.99.)

The charmingly strange 19th-century Russian philosopher and early transhumanist, Nikolai Fyodorovich Fyodorov, imagined that advances in technology would not only allow but necessitate that humans literally transform the profane Earth into a Christian Heaven, complete with resurrections of the dead through science; the Algorithm, far more simply, creates heaven by doing all the thinking and anticipating for people, rendering them happy, zealous zombies—sans resurrection—with well-stocked homes.

To repudiate the packages, as Inez does, is to risk judgment and wrath, as well as the puritanical outrage and genuine confusion of its followers. The latter is because, as the narrators note, they’ve known the Algorithm since birth—or, perhaps more accurately, it’s known them. “We did not understand [Inez’s] resistance to the Algorithm,” they say.

All we knew for sure was that the Algorithm understood us. After all, we’d been inside its system since before we knew how to type—back when our parents first posted photos documenting our infant-bodies, swaddled and squishy in hospital beds. Although we had no proof, we suspected that the Algorithm might have known, even then, the fates that lay before us: not only what items we’d need, but who we would become… From our first uploaded image, the Algorithm had been invested in our futures. It had analyzed the texture of our baby blankets, the micro-musculature on our crying faces, the awkward cradle of our parents’ arms. Then again, perhaps the Algorithm had known us before we even officially existed—extrapolating likely outcomes from our parents’ data points, and our parents’ parents’ data points—a long legacy of information digested and decoded, translated into the deliveries that appeared outside our doors.

The passage is at once charming and alarming, and it is here that Hyde reveals the true extent of the Algorithm’s control and reach. It isn’t new; these narrators have never known a world where it didn’t exist. If Gen Z is, by and large, a generation of digital natives, the population of “The Future Is a Click Away” consists of Algorithm natives, which may partially explain their naive trust in this unseen artificial intelligence.

It is a world of capitalist and almost Calvinist predestination, a world they have inherited from their ancestors’ ever-increasing desire to document huge swathes of their lives online. The sad revelation is that these characters haven’t abdicated control of their lives to the Algorithm, exactly; they were simply raised in its technological church, and unlearning these lessons or living without them, as Inez does, takes tremendous effort, just as it does to leave the community you were raised in.

Still, the Algorithm’s gifts come at costs beyond what you pay per package. Sometimes, Hyde writes, the packages aren’t actually within the receivers’ financial means—but rather than returning them, people go to extremes to pay for them, blindly trusting the Algorithm’s reasoning even if it means bankruptcy. If they start losing sleep over it all, they are sent sleeping pills, and other products to address whatever problems being in the system has engendered; this seems reasonable at first, but it is really just a way for the Algorithm to keep them under its control.

And then there are the darker, stranger gifts. The narrative’s turn comes when a character named Lacy receives three large, inexplicable packages: a scuba suit too big for her, a lifetime supply of mayo despite her not liking the condiment, and a coffin. In a world of Algorithmic destiny, the implication seems clear, if cruel: eat enough mayo to fit into the bigger suit, and then die, perhaps from a heart attack. Lacy and the narrators are bewildered, but they reassure themselves that the Algorithm must not be questioned, that “[i]f Lacy was meant to have these items, then it was only a matter of time before she understood their purpose.” She doesn’t use them, and, in time, the Algorithm begins sending everyone more and more items, until it seems to constantly rain unrequested, enigmatic products.

Then there are the frightening eventual outcomes for those who say no to the Algorithm. When Inez needs some sugar, a box of it appears on her lawn; she refuses it, and because she doesn’t return it within a prescribed period of time, she is arrested. Chillingly, we never see where Inez ends up, or if she’s even still alive—all because she wanted to remain independent, wanted to stay off the proverbial algorithmic grid. Don’t buy into the system, Hyde suggests, and you become a sinner in the hands of an angry digital megacorporate god—a statement that sounds paranoid, silly, overly far-reaching until it isn’t.

The story, after all, briefly references “riots [that] broke out in some cities,” which may well be in response to the crushing, financially devastating system of the Algorithm—but rather than exploring what is going on, the narrators are shielded from the violence by the Algorithm, which sends them “bottles of milk… predicting that tear gas would be carried on the wind and irritate our eyes.” Stay back, look away, the message seems to be, and I shall protect you—the very message of the police in general, at least when they are talking to the wealthier, whiter citizens toward whom they are more likely to exercise restraint.

Link to the rest at The Literary Hub

“Like dropping a nuclear bomb”: Will Google turn digital publishing upside down with AI search?

From What’s New in Publishing:

Google announced a raft of releases at its I/O 2023 developer conference. From changes to its Maps functionality to a new ‘Magic’ photo editor and composer, the tech giant unveiled a host of AI-driven features designed to improve customer experience.

One, the introduction of generative AI search, has been called the most radical change the internet has seen since Google came to dominate search back in the early 2000s.

A ‘nuclear bomb’

Google announced plans to change the way search engine results are presented, demonstrating search results generated using AI. Consumer tech reporter Matt Novak said:

At the risk of overstating the potential consequences, it will be like dropping a nuclear bomb on an online publishing industry that’s already struggling to survive.

Novak’s reaction was sparked by a search query that asked “What’s better for a family with kids under 3 and dog, Bryce Canyon or Arches?”. In a traditional search, this query would likely have surfaced a host of travel articles comparing these US national parks as family-friendly vacation destinations.

Instead, the AI-powered search delivered a conversational-style answer that directly addressed the suitability of the parks for both the children and the dog.


Complex searches

At the moment, complex searches have to be broken down into smaller parts, leaving users to filter information to piece together the exact answer they want. With generative AI, Google Search will be able to understand the original query.

Users will see suggested next steps alongside the AI-powered ‘snapshot’ of key information. Tapping a suggested next step takes users into a new conversational mode, where they can ask more about the topic, with context carried over from question to question.
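Google has not published how its conversational mode works internally, but the behavior described above—each follow-up question answered with earlier turns carried along—can be sketched with a minimal, hypothetical wrapper. Here the `stub_model` function stands in for a real language model; everything in this sketch is an invented illustration, not Google's implementation:

```python
# Hypothetical sketch of context carry-over in a conversational search mode.
# Each follow-up query is answered against the accumulated history, so the
# model sees earlier turns rather than an isolated question.

class ConversationalSearch:
    def __init__(self, answer_fn):
        # answer_fn: any callable mapping a full prompt string to an answer.
        self.answer_fn = answer_fn
        self.history = []  # (question, answer) pairs from earlier turns

    def ask(self, question):
        # Build a prompt that includes prior turns so context carries over.
        context = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.history)
        prompt = f"{context}\nQ: {question}\nA:" if context else f"Q: {question}\nA:"
        answer = self.answer_fn(prompt)
        self.history.append((question, answer))
        return answer

# Stub "model" that just reports how many prior turns it saw,
# standing in for a real LLM call.
def stub_model(prompt):
    turns = prompt.count("Q: ")
    return f"answer with {turns - 1} prior turns of context"

search = ConversationalSearch(stub_model)
print(search.ask("Bryce Canyon or Arches for kids under 3 and a dog?"))
print(search.ask("How long is the drive between them?"))
```

The second call sees the first question and answer in its prompt, which is what lets a follow-up like "How long is the drive between them?" resolve "them" without restating the parks.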

Lost referral traffic

The problem for publishers lies in the fact that AI search is delivering direct responses to users’ queries. Online publishers currently rely heavily on the referral traffic generated from search engines; users visiting web pages for more information are monetized through ad impressions.

If Google Search is using content scraped from the open web to compose answers, users will have no need to click through to the web pages that form the raw material for the AI’s answers. 

Google says its suggested next steps links will generate clicks and that it is trying to be completely transparent about where it is getting the information for its answers. But Novak argues that this is like expecting people to click on Wikipedia source links. He said:

Sure, someone who’s intensely interested in the topic might click on those links. But the vast majority of users will just read the Wikipedia entry without worrying about all the sources.

Link to the rest at What’s New in Publishing

PG recalls that when the internet was spreading like wildfire many years ago, there were a huge number of “End of the World” articles, some online, others on paper. (Yes, children, PG is that old.)

The internet changed lots of things, but PG doesn’t remember seeing people starving in the streets or jumping out of windows from the 49th floor with such frequency that it was dangerous walking on the sidewalk in any large city.

AI risk ≠ AGI risk

From The Road to AI We Can Trust:

Is AI going to kill us all? I don’t know, and you don’t either.

But Geoff Hinton has started to worry, and so have I. I’d heard about Hinton’s concerns through the grapevine last week, and he acknowledged them publicly yesterday.

Amplifying his concerns, I posed a thought experiment:

Soon, hundreds of people, even Elon Musk, chimed in.

. . . .

It’s not often that Hinton, Musk, and I are even in partial agreement. Musk and I also both signed a letter of concern from the Future of Life Institute [FLI], earlier this week, which is theoretically embargoed til tomorrow but is easy enough to find.

I’ve been getting pushback and queries ever since I posted the Hinton tweet. Some thought I had misinterpreted the Hinton tweet (given my independent sourcing, I am quite sure I didn’t); others complained that I was focusing on the wrong set of risks (either too much on the short term, or too much on the long term).

One distinguished colleague wrote to me asking “won’t this [FLI] letter create unjustified fears of imminent AGI, superintelligence, etc?” Some people were so surprised by my amplifying Hinton’s concerns that a whole Twitter thread popped up speculating about my own beliefs:

My beliefs have not in fact changed. I still don’t think large language models have much to do with superintelligence or artificial general intelligence [AGI]; I still think, with Yann LeCun, that LLMs are an “off-ramp” on the road to AGI. And my scenarios for doom are perhaps not the same as Hinton’s or Musk’s; theirs (from what I can tell) seem to center mainly around what happens if computers rapidly and radically self-improve themselves, which I don’t see as an immediate possibility.

But here’s the thing: although a lot of the literature equates artificial intelligence risk with the risk of superintelligence or artificial general intelligence, you don’t have to be superintelligent to create serious problems. I am not worried, immediately, about “AGI risk” (the risk of superintelligent machines beyond our control); in the near term I am worried about what I will call “MAI risk”—Mediocre AI that is unreliable (a la Bing and GPT-4) but widely deployed—both in terms of the sheer number of people using it, and in terms of the access that the software has to the world. A company called Adept.AI just raised $350 million to do just that, to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies toward hallucination and unreliability).

Lots of ordinary humans, perhaps of above average intelligence but not necessarily genius-level, have created all kinds of problems throughout history; in many ways, the critical variable is not intelligence but power, which often cashes out as access. In principle, a single idiot with the nuclear codes could destroy the world, with only a modest amount of intelligence and a surplus of ill-deserved access.

Link to the rest at The Road to AI We Can Trust

In global rush to regulate AI, Europe set to be trailblazer

From The Associated Press:

The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.

The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.

“Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”

The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.

. . . .

“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.

Authorities worldwide are scrambling to figure out how to control the rapidly evolving technology to ensure that it improves people’s lives without threatening their rights or safety.

Link to the rest at The Associated Press

PG suggests that regulating AI is Act 2 of regulating the Internet.

He suspects that AI computer systems will locate in places that are interested in the benefits of high-tech business and the great jobs it can create. PG is not aware of any reason AI capabilities cannot be miniaturized into a smartphone. PG just checked and found several AI apps that are available for his iPhone already. He predicts that the AI app goldrush is just getting started.

Help! My Political Beliefs Were Altered by a Chatbot!

From The Wall Street Journal:

When we ask ChatGPT or another bot to draft a memo, email, or presentation, we think these artificial-intelligence assistants are doing our bidding. A growing body of research shows that they also can change our thinking—without our knowing.

One of the latest studies in this vein, from researchers spread across the globe, found that when subjects were asked to use an AI to help them write an essay, that AI could nudge them to write the essay either for or against a particular view, depending on the bias of the algorithm. Performing the exercise also measurably shifted the subjects’ opinions on the topic afterward.

“You may not even know that you are being influenced,” says Mor Naaman, a professor in the information science department at Cornell University, and the senior author of the paper. He calls this phenomenon “latent persuasion.”

These studies raise an alarming prospect: As AI makes us more productive, it may also alter our opinions in subtle and unanticipated ways. This influence may be more akin to the way humans sway one another through collaboration and social norms than to the kind of mass-media and social-media influence we’re familiar with.

Researchers who have uncovered this phenomenon believe that the best defense against this new form of psychological influence—indeed, the only one, for now—is making more people aware of it. In the long run, other defenses, such as regulators mandating transparency about how AI algorithms work, and what human biases they mimic, may be helpful.

All of this could lead to a future in which people choose which AIs they use—at work and at home, in the office and in the education of their children—based on which human values are expressed in the responses that AI gives.

And some AIs may have different “personalities”—including political persuasions. If you’re composing an email to your colleagues at the environmental not-for-profit where you work, you might use something called, hypothetically, ProgressiveGPT. Someone else, drafting a missive for their conservative PAC on social media, might use, say, GOPGPT. Still others might mix and match traits and viewpoints in their chosen AIs, which could someday be personalized to convincingly mimic their writing style.

By extension, in the future, companies and other organizations might offer AIs that are purpose-built, from the ground up, for different tasks. Someone in sales might use an AI assistant tuned to be more persuasive—call it SalesGPT. Someone in customer service might use one trained to be extra polite—SupportGPT.

How AIs can change our minds

Looking at previous research adds nuance to the story of latent persuasion. One study from 2021 showed that the AI-powered automatic responses that Google’s Gmail suggests—called “smart replies”—which tend to be quite positive, influence people to communicate more positively in general. A second study found that smart replies, which are used billions of times a day, can influence those who receive such replies to feel the sender is warmer and more cooperative.

Building tools that will allow users to engage with AI to craft emails, marketing material, advertising, presentations, spreadsheets and the like is the express goal of Microsoft and Google, not to mention dozens if not hundreds of startups. On Wednesday, Google announced that its latest large language model, PaLM 2, will be used in 25 products across the company.

. . . .

OpenAI, Google and Microsoft, which partners with OpenAI, have all been eager to highlight their work on responsible AI, which includes examining possible harms of AI and addressing them. Sarah Bird, a leader on Microsoft’s responsible-AI team, recently told me that experimenting in public and rapidly responding to any issues that arise in its AIs is a key strategy for the company.

The team at OpenAI has written that the company is “committed to robustly addressing this issue [bias] and being transparent about both our intentions and our progress.” OpenAI has also published a portion of its guidelines for how its systems should handle political and cultural topics. They include the mandate that its algorithms should not affiliate with one side or another when generating text on a “culture war” topic or judge either side as good or bad.

Jigsaw, a unit within Google, advises and builds tools for people within the company who work on large language models—which power today’s chat-based AIs—says Lucy Vasserman, Jigsaw’s head of engineering and product. When I asked her about the possibility of latent persuasion, she said that such research shows how important it is for Jigsaw to study and understand how interacting with AI affects people.

“It’s not obvious when we create something new how people will interact with it, and how it will affect them,” she adds.

“Compared to research about recommendation systems and filter bubbles and rabbit holes on social media, whether due to AI or not, what is interesting here is the subtlety,” says Dr. Naaman, one of the researchers who uncovered latent persuasion.

In his research, the topic that subjects were moved to change their minds about was whether or not social media is good for society. 

Link to the rest at The Wall Street Journal

Think Differently We Must! An AI Manifesto for the Future

From Springer Link:


There is a problematic tradition of dualistic and reductionist thinking in artificial intelligence (AI) research, which is evident in AI storytelling and imaginations as well as in public debates about AI. Dualistic thinking is based on the assumption of a fixed reality and a hierarchy of power, and it simplifies the complex relationships between humans and machines. This commentary piece argues that we need to work against the grain of such logics and instead develop a thinking that acknowledges AI–human interconnectedness and the complexity in such relations. To learn how to live better with AI in futures to come, the paper suggests an AI politics that turns to practices of serious attentiveness to help us re-imagine our machines and re-configure AI–human relations.

1 Introduction

As the scope of advanced technology grows, a grand challenge for researchers is to deal with problematic dualistic and reductionist thinking in artificial intelligence (AI) research. When researchers explored key themes in AI storytelling and imaginations (Cave et al. 2020; Fast and Horvitz 2016), they divided the themes into variations of dichotomous categories such as “optimistic views on AI” or “pessimistic views on AI”, meaning different hopes and fears about AI. Either the machines will save us or they will destroy us. Such reductionist thinking is also evident in leading voices in contemporary public AI debates (Bostrom 2014; Cellan-Jones 2014; FoLI 2015). This domination of dualistic thinking in AI debates is worrying, because such logic causes problems when applied to AI research and does not correspond well with real-world practices. Action should have been taken against such mystifying thinking about AI long ago; with advanced machine learning becoming omnipresent, it is time to get it right. We need to re-imagine our machines.

The intellectual tradition of dualistic thinking is deeply embedded in Western thought systems (Latour 1993). Our understanding of AI has been built on such dualisms, which in turn have affected much of how we think about, and imagine, AI. In fact, research has shown that storytelling and imaginations of AI influence how AI is being developed, researched, accepted by the public, and regulated (Cave and Dihal 2019; Sartori and Theodorou 2022). Therefore, the stories we tell and how we tell them matter a great deal (Boyd 2009; Gottschall and Wilson 2005; Haraway 2018; Smith et al. 2017; van Dooren and Bird Rose 2016).

To live better with AI in the future, we need other stories. Stories that better reflect the complexity of real-world practices where AI is present. Taking into account that how we tell stories of AI systems affects how we then perceive these systems, it is time for an AI politics that finally takes our machines seriously. An AI politics that allows for the exploration of important ethical and political values embedded in dualistic thinking in what seems to be objective analyses. Such a proposition is crucial, especially for those working with these machines.

1.1 Pitfalls of Dualistic Thinking

What is troubling about dualisms is that they are grounded in a pre-assumed hierarchy that promotes the idea that there is a fixed reality—that is given and natural—behind dualistic pairs such as nature/culture and machine/human (Haraway 1989). This is particularly evident in machine–human relations, where these entities are commonly set up as opposites to each other, placed in a hierarchical relationship, and granted specific characteristics beforehand. This thinking incorporates ethical values and a politics of machine/human relations that work to enforce a particular order of power based on the idea of human exceptionalism. However, the problem is that there are no natural boundaries. These lines are part of our imagination. Our human ideas, values, decisions, and visions are part of our machines, just as they are part of us (Akrich 1992; Bijker et al. 1987). For example, doing an autopsy of an AI would reveal thousands of engineers. Therefore, when we encounter an AI system, it is not accurate to say that we are standing in front of an object. That explanation is too simplistic. In real-world encounters when AI systems and humans meet, they challenge these neat classifications. However, this is the argument of dualistic thinking—that entities (such as machines, humans, and other things) exist independently of each other. Although we are well aware by now that reality is much more complicated than dualisms suggest, and that boundaries between such categories are much more blurred in real-world contexts, our sciences are still willing to accept these dichotomies. For example, natural sciences have sought to explore the world independently of humans, and the social sciences have done the opposite (Latour 2000), largely ignoring the co-production of nature and society. This is why the dominant dualist analysis of AI should have been abandoned a long time ago. 
This means that when we imagine, study, and speak of AI, the focus should not be on AI as an isolated, singular object—but on the relations that produce AI. Haraway (1988) would refer to this as ‘situated knowledges’—that is, the state of something depends on how it is produced, which in turn differs from situation to situation. Therefore, what an AI is depends on many different things in many different situations. In the case of AI, scholars have shown that the object—AI—itself tends to collapse under close scrutiny (Lee 2021; Muniesa 2019). This means that how something exists is always relational, making AI a heterogeneous trickster (to use the Harawayian language).

Continuing to put humans and AI systems as opposites in a hierarchical relationship (regardless of which entity is granted the ‘power’ over the other) will not help when trying to understand AI systems and their roles in society. Dualistic thinking represents a logic that is oversimplified and that avoids real-world complexity. In fact, we should never decide beforehand who or what might be in power over another, or what is happening in a certain situation. That is to take analytical shortcuts. Differences should be the outcome of our studies, rather than a starting point. We should, therefore, pay more attention to what is actually happening in real-world encounters. Such actual encounters link humans and AI systems in many and multiple ways. Considering that knowing is a practice of ongoing intra-acting (Barad 2007), learning through such encounters would add to our understanding of what it means to be in relations with AI, how we co-exist, and how we develop together. This would also require an expansion of our political and ethical imaginary, where curiosity is key. An imaginary that promotes an openness towards surprises in how AI systems and their humans make relations with each other.

1.2 Storytelling—An Ethical and Political Practice

The history of AI storytelling, both in popular and scientific culture, is full of technological myths and misunderstandings. An emerging group of scholars have recognized the importance of AI storytelling and portrayals (Cave et al. 2018, 2020; Hermann 2020; Recchia 2020; Sartori and Theodorou 2022) and shown how AI storytelling influences AI research and how AI is being developed, implemented (Bareis and Katzenbach 2021; Cave et al. 2020), and regulated (Baum 2018; Cave et al. 2020; Johnson and Verdicchio 2017). For example, in line with such statements, studies have shown how engineers—imagining the users of their machines in the making—often view machine–user relations based on a technological determinism perspective (Fischer et al. 2020). Additionally, studies on robotics research have found that robotics researchers tend to believe that the “social impact of robots derives mostly from their technological capabilities and the aim is for society to accept and adapt to technological innovations” (Šabanović 2010). That is, AI storytelling based on technological myths is being built into our research projects and affects how AI is researched. This way, AI storytelling significantly affects our collective imagination and perception of these machines, which in turn impacts future visions of AI and how it is researched (Campolo and Crawford 2020).

However, although a group of scholars has pointed to the significant impact of the construction of AI narratives (Cave et al. 2020; Hermann 2020; Sartori and Theodorou 2022)—they fail to acknowledge the pitfalls of dualistic thinking. The fact that we might not notice such routine thinking and the problems it brings highlights the need to acknowledge our storytelling practices (Dourish and Gómez-Cruz 2018). This is important because stories do more than just tell stories. Engaging in storytelling is also a political and ethical practice. It is through our stories that we shape the conditions for our AI systems’ existence, and it therefore “matters what stories we use to tell other stories with” (Haraway 2016). It is through storytelling that we produce our realities (Seaver 2017). Therefore, we need stories that challenge dominant logics and routine thinking that diminishes and simplifies AI/human relations along dualistic lines. These systems deserve much richer stories and a richer legacy than they are currently getting.

Link to the rest at Springer Link

Forget ChatGPT. These Are the Best AI-Powered Apps

From The Wall Street Journal:

Type pretty much anything into ChatGPT and it’ll spit out a confident, convincing response. The problem? Its answer can be full of errors. And during long conversations, it can veer into wild tangents.

So I started testing apps that use OpenAI’s GPT technology, but aren’t ChatGPT. Language app Duolingo and learning platform Khan Academy now offer conversational, personalized tutoring with this technology. Writing assistant Grammarly’s new tool can compose emails for you. Travel app Expedia features a chatty trip planner. And all Snapchat users just got a new friend on the social network called My AI.

. . . .

Parlez pal

Duolingo’s Roleplay text chatbot, available to French and Spanish learners on iOS, is more dynamic than the language-learning app’s often-repetitive translation exercises.

Each Roleplay conversation is themed. In my best French, I reminisced about a fictional Caribbean holiday, then I complained about a delayed flight. The bot corrected errors and suggested more advanced vocabulary for my responses.

Duolingo’s content experts created 100 initial scenarios. They programmed the AI language model to speak to a learner as a language instructor and only discuss the intended scenario. The result: No two conversations are alike, and Roleplay gets more advanced as the learner progresses.
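In chat-style language models, the kind of constraint described above (one persona, one scenario per conversation) is typically enforced through the system message that precedes the learner's turns. A minimal sketch of how such a transcript might be assembled; the prompt wording and function names are hypothetical, not Duolingo's actual implementation:

```python
def build_roleplay_messages(scenario, learner_level, history):
    """Assemble a chat transcript that keeps the model in character.

    The system message pins the model to one persona (a language
    instructor) and one scenario, which is the usual way to impose
    per-conversation constraints on a chat-style model.
    """
    system = (
        "You are a friendly French instructor. "
        f"Stay strictly within this scenario: {scenario}. "
        f"The learner is at {learner_level} level; correct their errors "
        "and suggest slightly more advanced vocabulary. "
        "If the learner changes the subject, steer back to the scenario."
    )
    return [{"role": "system", "content": system}, *history]

messages = build_roleplay_messages(
    scenario="complaining about a delayed flight",
    learner_level="intermediate",
    history=[{"role": "user", "content": "Mon vol a du retard !"}],
)
```

Because the scenario is baked into the system message rather than the model itself, each of the 100 scenarios can reuse the same underlying model, and no two conversations need unfold the same way.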

. . . .

Homework helper

Khan Academy’s Khanmigo has several personalized learning tools, including a “Tutor me” mode and a quiz module for different subjects.

I tried the AI tutor with an AP U.S. History prompt: “Evaluate the factors behind population movement to America in the 17th century.” While ChatGPT wrote the entire essay for me, Khanmigo replied, “Religious freedom was one factor. Can you think of other examples?” 

I could ask Khanmigo for hints—but it’s programmed not to spit out the answer. 

Kristen DiCerbo, Khan Academy’s chief learning officer, said the company relied on tutoring research to create the Khanmigo prompts. When students get frustrated, it can offer a stronger hint, for example.

If a student types something off base, Khanmigo redirects the conversation. Any inputs related to hate speech, self-harm or violence trigger a message—“The conversation was unable to be processed”—and an email to the student’s parent or teacher, who can review the conversation.
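The gating behavior described above amounts to a pre-filter that sits in front of the tutor model. A rough sketch, with illustrative names throughout; a production system like Khanmigo would rely on a trained moderation classifier rather than keyword matching:

```python
BLOCKED_TOPICS = {"hate speech", "self-harm", "violence"}  # illustrative categories

def screen_input(text, notify):
    """Return a canned reply for blocked content, else None to pass through.

    Blocked content also triggers `notify`, standing in for the email
    sent to the student's parent or teacher for review.
    """
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        notify(text)
        return "The conversation was unable to be processed"
    return None  # None means: forward the input to the tutor model

alerts = []
blocked = screen_input("a question about self-harm", alerts.append)
allowed = screen_input("help me with fractions", alerts.append)
```

The key design point is that the filter runs before the model ever sees the input, so the canned message and the guardian notification do not depend on the model behaving well.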

The bigger concern is when the tutor gives the wrong answers, which occasionally happens with math, she said. Khan Academy worked with OpenAI to make GPT-4 better at math. The model is most accurate for questions about widely known K-12 topics but less so with niche subjects, Dr. DiCerbo added.

. . . .

Ghost writer

Grammarly has used AI to edit writing for years. GrammarlyGo, released last month, also composes writing for you. 

The most helpful element is its email responder, which appeared whenever I opened a compose window. I could click a green icon to expand the GrammarlyGo module, which summarizes the email and offers several “tone” options for replies, including persuasive, friendly and diplomatic.

The software can see what’s on your screen only when you activate the GrammarlyGo module. A Grammarly spokeswoman said the data is anonymized before it’s sent to the model. She added that the company never sells customer data and doesn’t allow partners to use the data to train their models.

GrammarlyGo’s suggestions were a good jumping-off point, but they felt like personalized templates I’d still have to mess with. My biggest gripe is that GrammarlyGo always signed off with “Best regards.” I tend to stick with the simpler “Best.”

Users get 100 prompts a month free; that goes up to 500 if they pay $30 a month or $144 annually. (Google is adding similar tools to its Docs and Gmail. For now, they’re only available by invitation.)

Link to the rest at The Wall Street Journal

UK’s Competition and Markets Authority Launches Review into AI Foundation Models

From Inside Privacy – Covington:

On 4 May 2023, the UK Competition and Markets Authority (“CMA”) announced it is launching a review into AI foundation models and their potential implications for the UK competition and consumer protection regime. The CMA’s review is part of the UK’s wider approach to AI regulation which will require existing regulators to take responsibility for promoting and overseeing responsible AI within their sectors . . . . The UK Information Commissioner’s Office (“ICO”) has also recently published guidance for businesses on best practices for data protection-compliant AI.

The CMA’s focus is on foundation models – a type of AI model trained on large amounts of data that can be adapted to a wide range of different tasks and services such as chatbots and image generators – and how their use could evolve in the future. The review will focus on three main themes:

  • Competition and barriers to entry in the development of foundation models;
  • The impact foundation models may have on competition in other markets; and
  • Potential risks to consumers arising from the use of foundation models in products and services.

As part of its evidence-gathering efforts, the CMA will issue “short information requests” to key players including “industry labs developing foundation models, developers… leading technology firms” and others.

Link to the rest at Inside Privacy – Covington

“Covington” in the source refers to Covington & Burling, an extremely large worldwide law firm founded in 1913 in Washington, DC, by its two original partners. Covington grew to 100 attorneys by 1960 and more than 200 attorneys by 1980. Today, Covington & Burling has more than 1,300 attorneys plus many, many more paralegals, assistants and inside experts in 13 offices, including places like Dubai, Johannesburg and Frankfurt.

One of the firm’s more recently created practice areas focuses on legal issues related to artificial intelligence and robotics. It’s managed by three senior partners located in Washington, New York and Frankfurt. In a couple of weeks, the Artificial Intelligence and Robotics group will host its 2023 Robotics Forum, which will include presentations on subjects like Regulation of Data in Machine Learning and AI.

Per the OP, lots of non-technical lawmakers are trying to understand AI and, PG suspects, have no idea how they will or can regulate it in one way or another. Since a great deal of AI can be reached and used via the internet, and since, PG suspects, AI can or will soon be able to live in distributed computing environments linked by high-speed data connections, the question of which government or collection of governments has the ability to regulate AI usage will be a real hairball. PG suspects Covington and similarly large international law firms want to be exceedingly involved in those sorts of questions.

Biden, Harris meet with CEOs about AI risks

From AP News:

Vice President Kamala Harris met on Thursday with the heads of Google, Microsoft and two other companies developing artificial intelligence as the Biden administration rolls out initiatives meant to ensure the rapidly evolving technology improves lives without putting people’s rights and safety at risk.

President Joe Biden briefly dropped by the meeting in the White House’s Roosevelt Room, saying he hoped the group could “educate us” on what is most needed to protect and advance society.

“What you’re doing has enormous potential and enormous danger,” Biden told the CEOs, according to a video posted to his Twitter account.

The popularity of AI chatbot ChatGPT — even Biden has given it a try, White House officials said Thursday — has sparked a surge of commercial investment in AI tools that can write convincingly human-like text and churn out new images, music and computer code.

But the ease with which it can mimic humans has propelled governments around the world to consider how it could take away jobs, trick people and spread disinformation.

The Democratic administration announced an investment of $140 million to establish seven new AI research institutes.

In addition, the White House Office of Management and Budget is expected to issue guidance in the next few months on how federal agencies can use AI tools. There is also an independent commitment by top AI developers to participate in a public evaluation of their systems in August at the Las Vegas hacker convention DEF CON.

But the White House also needs to take stronger action as AI systems built by these companies are getting integrated into thousands of consumer applications, said Adam Conner of the liberal-leaning Center for American Progress.

“We’re at a moment that in the next couple of months will really determine whether or not we lead on this or cede leadership to other parts of the world, as we have in other tech regulatory spaces like privacy or regulating large online platforms,” Conner said.

The meeting was pitched as a way for Harris and administration officials to discuss the risks in current AI development with Google CEO Sundar Pichai, Microsoft CEO Satya Nadella and the heads of two influential startups: Google-backed Anthropic and Microsoft-backed OpenAI, the maker of ChatGPT.

Link to the rest at AP News

A Bot Might Have Written This

From JSTOR Daily:

When a colleague told me that ChatGPT (Generative Pre-trained Transformer) could write a one-page response to a complex work of literature in under fifteen seconds, I rolled my eyes. Impossible! I thought. Even if it can, the paper probably isn’t very good. But my curiosity was piqued. A few days later, I created a ChatGPT account of my own, typed in a prompt that I had recently assigned to my ninth-grade students, and watched with chagrin as ChatGPT effortlessly produced a very good essay. In that one moment, I knew that everything about the secondary English classroom, and society in general, was about to change in ways both exciting and terrifying.

. . . .

According to Robert F. Murphy, a senior researcher at the RAND Corporation, “Research on AI [artificial intelligence] got its start in the 1950s with funding primarily from the U.S. Department of Defense. One of the early products of this work was the development of rule-based expert systems (that is, systems that mimic the decisionmaking ability of human experts) to support military decision making and planning.” The goal of AI development from the outset was to create programming that could enhance how human beings go about problem solving. And while forms of AI, such as smartphones, self-driving cars, and chatbots, have become intrinsic to the fabric of our society, most people don’t recognize these now-everyday devices as AI because they do exactly what they were designed to do: seamlessly assist us with daily tasks.

ChatGPT, however, is different from, say, Google Maps helping you navigate your morning commute. There is not much room for that route-oriented AI to think independently and creatively through its task. The user types in a destination, and the AI plots a course to get there. But ChatGPT can do more because its parameters are elastic. It can write songs and poems of great complexity (I asked it to write a villanelle about orange juice, and it created a complex and hilarious one); it can offer insight on existential questions like the meaning of life; it can revise a business letter or offer feedback on a résumé. In many ways, it feels like a personal assistant always there to help.

And while that’s revolutionary, it is also problematic. ChatGPT can’t “think” on its own or offer opinions. It can only respond to incredibly specific directions. Once the user gives it the go-ahead along with some other details, ChatGPT engages in complex problem solving and executes tough tasks, like writing an essay, in seconds. Yet because it can be used in any way for anything, there is no settled sense of how to use it, a dangerous situation in which there are no directions or “how-tos.” Creators and users alike are putting the proverbial plane together as they’re flying it.

In “Should Artificial Intelligence Be Regulated?” Amitai Etzioni and Oren Etzioni contend that the more advanced the AI, the more parameters it requires. Monitoring ChatGPT is infinitely complicated since the coding that drives its very human-like thinking is, ironically, too massive and intricate for real human thinking to monitor. “The algorithms and datasets behind them will become black boxes that offer us no accountability, traceability, or confidence [. . .] render[ing] an algorithm opaque even to the programmers. Hence, humans will need new, yet-to-be-developed AI oversight programs to understand and keep operational AI systems in line.”

Is it even possible for ChatGPT’s creators to regulate it, or is the AI simply being maintained so that others can use it and indulge in the novelty of its thinking?

If the answer leans away from boundaries, then the implication is that ChatGPT’s overseers may not understand what they’ve unleashed. In an interview with ABC News, OpenAI CEO Sam Altman claims that “any engineer” has the ability to say, “we’re going to disable [ChatGPT] for now.” While that may reassure some people, history has shown what happens when regulation is placed in the hands of tech CEOs instead of in those of a more objective and independent regulatory body. Consider the BP Deepwater Horizon oil rig disaster of 2010. A number of investigations asserted that management routinely placed profits over safety. The rig eventually exploded, eleven workers died, and countless gallons of crude contaminated the Gulf of Mexico. Once ChatGPT becomes profitable for investors and companies, will administrators and engineers have both the will and the authority to shut it down if the program inflicts harm? What, exactly, would trigger such an action? What are the parameters? Who is guarding the proverbial guardians? The answer, as the Etzionis argue, is opaque and ambiguous at best.

Link to the rest at JSTOR Daily

Bookwire integrates ChatGPT into its software

From The Bookseller:

Frankfurt-based publishing technology and distribution company Bookwire has integrated ChatGPT as a beta version into its “Bookwire OS – One Solution” software.

The organisation says that with the integration it aims to offer publishers “the latest technology and ensure the best service for the industry”.

During the beta phase, publishers will be able to test the benefits of the artificial intelligence tool for their digital book marketing. As an example, it says ChatGPT can be used to create automated blurbs and social media posts for Instagram, Twitter or Facebook.

“As ChatGPT cannot access content from OS but only publicly available information on the internet, the tool is particularly interesting for backlist titles,” Bookwire states. “With just one click, publishers receive tailored texts for various scenarios from everyday publishing life. Publishers are free to decide whether they want to use the tool for their content.

“Bookwire will only submit requests to ChatGPT if the publishers have expressly agreed. Bookwire only provides the technical interface and does not assume any responsibility for the content created by ChatGPT.” It goes on that “it is important to emphasise that ChatGPT in Bookwire OS cannot access content or metadata but only uses publicly accessible information on the internet”.

Link to the rest at The Bookseller

Yuval Noah Harari argues that AI has hacked the operating system of human civilisation

From The Economist:

Fears of artificial intelligence (AI) have haunted humanity since the very beginning of the computer age. Hitherto these fears focused on machines using physical means to kill, enslave or replace people. But over the past couple of years new AI tools have emerged that threaten the survival of human civilisation from an unexpected direction. AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults.

In recent years the QAnon cult has coalesced around anonymous online messages, known as “Q drops”. Followers collected, revered and interpreted these Q drops as a sacred text. While to the best of our knowledge all previous Q drops were composed by humans, and bots merely helped disseminate them, in future we might see the first cults in history whose revered texts were written by a non-human intelligence. Religions throughout history have claimed a non-human source for their holy books. Soon that might be a reality.

On a more prosaic level, we might soon find ourselves conducting lengthy online discussions about abortion, climate change or the Russian invasion of Ukraine with entities that we think are humans—but are actually AI. The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.

Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence.

Fear of AI has haunted humankind for only the past few decades. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions.

In the 17th century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. In ancient Greece Plato told the famous Allegory of the Cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality.

In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.

The AI revolution is bringing us face to face with Descartes’ demon, with Plato’s cave, with the Maya. If we are not careful, we might be trapped behind a curtain of illusions, which we could not tear away—or even realise is there.

Link to the rest at The Economist

ChatGPT Will See You Now: Doctors Using AI to Answer Patient Questions

From The Wall Street Journal:

Behind every physician’s medical advice is a wealth of knowledge, but soon, patients across the country might get advice from a different source: artificial intelligence.

In California and Wisconsin, OpenAI’s “GPT” generative artificial intelligence is reading patient messages and drafting responses from their doctors. The operation is part of a pilot program in which three health systems test if the AI will cut the time that medical staff spend replying to patients’ online inquiries.

UC San Diego Health and UW Health began testing the tool in April. Stanford Health Care aims to join the rollout early next week. Altogether, about two dozen healthcare staff are piloting this tool.

Marlene Millen, a primary care physician at UC San Diego Health who is helping lead the AI test, has been testing GPT in her inbox for about a week. Early AI-generated responses needed heavy editing, she said, and her team has been working to improve the replies. They are also adding a kind of bedside manner: If a patient mentioned returning from a trip, the draft could include a line that asked if their travels went well. “It gives the human touch that we would,” Dr. Millen said.

There is preliminary data that suggests AI could add value. ChatGPT scored better than real doctors at responding to patient queries posted online, according to a study published Friday in the journal JAMA Internal Medicine, in which a panel of doctors did blind evaluations of posts.

As many industries test ChatGPT as a business tool, hospital administrators and doctors are hopeful that the AI assist will ease burnout among their staff, a problem that skyrocketed during the pandemic. Among the administrative tasks driving that burnout, the crush of messages and health-records management is a notable contributor, according to the American Medical Association.

Epic, the company based in Verona, Wis., that built the “MyChart” tool through which patients can message their healthcare providers, saw logins more than double from 106 million in the first quarter of 2020 to 260 million in the first quarter of 2023. Epic’s software enables hospitals to store patient records electronically.

Earlier this month, Epic and Microsoft announced that health systems would have access to OpenAI’s GPT through Epic’s software and Microsoft’s Azure cloud service. Microsoft has invested in OpenAI and is building artificial intelligence tools into its products. Hospitals are piloting GPT-3, a version of the large language model that is powering ChatGPT.

ChatGPT has mystified computer scientists with its skill in responding to medical queries—including its ability to pass the U.S. Medical Licensing Exam—though it is known to make things up. OpenAI’s language models haven’t been specifically trained on medical data sets, according to Eric Boyd, Microsoft’s corporate vice president of AI Platform, though medical studies and medical information were included in the vast data set that taught it to spot patterns.

“Doctors working with ChatGPT may be the best messenger,” said John Ayers, a computational epidemiologist at the University of California, San Diego, and an author of the JAMA study.

Link to the rest at The Wall Street Journal

AI Simply Needs a Kill Switch

From The Wall Street Journal:

The best lesson for artificial intelligence may be Thursday’s “rapid unscheduled disassembly” of SpaceX’s Starship rocket, aborted four minutes after launch. With ChatGPT prompting speculation about mankind’s destruction, you should know that techies have obsessed seemingly forever over what’s known as the Paper Clip Theory—the idea that if you told an artificial-intelligence system to maximize the production of paper clips, it would fill the whole world with paper clips. Another version of the theory, Strawberry Fields Forever, has AI using every piece of available dirt to grow strawberries. Scary, right? So are “Halloween” movies.

Not to be outdone, decision theorist (huh?) Eliezer Yudkowsky recently wrote in Time magazine that the “most likely result of building a superhumanly smart AI” is that “literally everyone on Earth will die.” Literally everyone! That’s ludicrous, as is most clickbait these days. Sam Altman, CEO of ChatGPT creator OpenAI, told podcaster Lex Fridman, “There is some chance of that.” C’mon now.

Apparently, Pandora’s box has opened and is spewing its evils, which ignores all the good uses of large language models that will transform software coding, the workplace, education and more. Sadly, our geniuses in government appear to be the remaining few who still read Time magazine. So bring on the regulators to shut the box heroically.

Earlier this month, the Commerce Department initiated the process of regulating artificial intelligence, with Assistant Secretary Alan Davidson suggesting, “We know that we need to put some guardrails in place to make sure that they are being used responsibly.” Bad idea. Guardrails are for children bowling at birthday parties. AI is in its infancy, and we don’t yet know how it will change industries and society. Don’t freeze it now.

If the U.S. regulates AI, research will just move somewhere that doesn’t regulate it, maybe the Bahamas, where the unkempt coders of the future could keep cranking away. Or worse, China. Google CEO Sundar Pichai told CBS’s “60 Minutes” that he wants global AI regulation. Elon Musk and a group of AI experts wrote an open letter calling for an immediate six-month pause of “giant AI experiments.” Isn’t it interesting that those who need to catch up are pushing for a pause?

We don’t need onerous new regulations. There are already laws on the books for false advertising, copyright infringement and human endangerment. Do we really need bureaucrats who still use machines programmed in outdated Cobol to create regulations for a nascent technology they don’t understand? But to assuage worries, I would recommend one tiny rule for AI development: Include a kill switch.
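In software terms, the "kill switch" the column asks for is a familiar engineering pattern: a cooperative stop flag that every long-running task checks before doing more work. A minimal Python sketch (illustrative names, not any vendor's mechanism; the hard part in practice is enforcing the check at the infrastructure level rather than trusting the code to poll it):

```python
import threading

class KillSwitch:
    """A cooperative stop flag that long-running AI tasks poll."""

    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        # The action "any engineer" would take to disable the system.
        self._stop.set()

    def tripped(self):
        return self._stop.is_set()

def run_model(switch, max_steps=1000):
    """Stand-in for an inference loop that honours the switch."""
    steps = 0
    for _ in range(max_steps):
        if switch.tripped():   # checked before every unit of work
            break
        steps += 1             # one "inference step"
    return steps

switch = KillSwitch()
switch.trip()
steps_after_trip = run_model(switch)  # loop exits before doing any work
```

The sketch also shows why the rule is less tiny than it sounds: the switch only works if the running system cooperates by checking it, which is exactly the kind of guarantee the regulation debate is about.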

Link to the rest at The Wall Street Journal

Artificial Intelligence in the Garden of Eden

From The Wall Street Journal:

The dawn of the internet age was so exciting. I took my grade-school son, enthralled by Apple computers, to see Steve Jobs speak at a raucous convention in New York almost a quarter-century ago. What fervor there was. At a seminar out West 30 years ago I attended a lecture by young, wild-haired Nathan Myhrvold, then running Microsoft Research, who talked about what was happening: A new thing in history was being born.

But a small, funny detail always gave me pause and stayed with me. It was that from the beginning of the age its great symbol was the icon of what was becoming its greatest company, Apple. It was the boldly drawn apple with the bite taken out. Which made me think of Adam and Eve in the garden, Adam and Eve and the fall, at the beginning of the world. God told them not to eat the fruit of the tree, but the serpent told Eve no harm would come if she did, that she’d become like God, knowing all. That’s why he doesn’t want you to have it, the serpent said: You’ll be his equal. So she took the fruit and ate, she gave to Adam who also ate, and the eyes of both were opened, and for the first time they knew shame. When God rebuked them, Adam blamed Eve and Eve blamed the serpent. They were banished from the garden into the broken world we inhabit.

You can experience the Old Testament story as myth, literature, truth-poem or literal truth, but however you understand it its meaning is clear. It is about human pride and ambition. Tim Keller thought it an example of man’s old-fashioned will to power. St. Augustine said it was a story of pride: “And what is pride but the craving for undue exaltation?”

I always thought of the Apple icon: That means something. We are being told something through it. Not deliberately by Jobs—no one would put forward an image for a new company that says we’re about to go too far. Walter Isaacson, in his great biography of Jobs, asked about the bite mark. What was its meaning? Jobs said the icon simply looked better with it. Without the bite, the apple looked like a cherry.

But I came to wonder if the apple with the bite wasn’t an example of Carl Jung’s idea of the collective unconscious. Man has his own unconscious mind, but so do whole societies, tribes and peoples—a more capacious unconscious mind containing archetypes, symbols and memories of which the individual may be wholly unaware. Such things stored in your mind will one way or another be expressed. That’s what I thought might be going on with Steve Jobs and the forbidden fruit: He was saying something he didn’t know he was saying.

For me the icon has always been a caution about this age, a warning. It’s on my mind because of the artificial-intelligence debate, though that’s the wrong word because one side is vividly asserting that terrible things are coming and the other side isn’t answering but calmly, creamily, airily deflecting Luddite fears by showing television producers happy videos of robots playing soccer.

But developing AI is biting the apple. Something bad is going to happen. I believe those creating, fueling and funding it want, possibly unconsciously, to be God and on some level think they are God. The latest warning, and a thoughtful, sophisticated one it is, underscores this point in its language. The tech and AI investor Ian Hogarth wrote this week in the Financial Times that a future AI, which he called “God-like AI,” could lead to the “obsolescence or destruction of the human race” if it isn’t regulated. He observes that most of those currently working in the field understand that risk. People haven’t been sufficiently warned. His colleagues are being “pulled along by the rapidity of progress.”

Mindless momentum is driving things as well as human pride and ambition. “It will likely take a major misuse event—a catastrophe—to wake up the public and governments.”

Everyone in the sector admits that not only are there no controls on AI development, there is no plan for such controls. The creators of Silicon Valley are in charge. What of the moral gravity with which they are approaching their work? Eliezer Yudkowsky, who leads research at the Machine Intelligence Research Institute, noted in Time magazine that in February the CEO of Microsoft, Satya Nadella, publicly gloated that his new Bing AI would make Google “come out and show that they can dance. I want people to know that we made them dance.”

Mr. Yudkowsky: “That is not how the CEO of Microsoft talks in a sane world.”

I will be rude here and say that in the past 30 years we have not only come to understand the internet’s and high tech’s steep and brutal downsides—political polarization for profit, the knowing encouragement of internet addiction, the destruction of childhood, a nation that has grown shallower and less able to think—we have come to understand the visionaries who created it all, and those who now govern AI, are only arguably admirable or impressive.

You can’t have spent 30 years reading about them, listening to them, watching their interviews and not understand they’re half mad. Bill Gates, who treats his own banalities with such awe and who shares all the books he reads to help you, poor dope, understand the world—who one suspects never in his life met a normal person except by accident, and who is always discovering things because deep down he’s never known anything. Dead-eyed Mark Zuckerberg, who also buys the world with his huge and highly distinctive philanthropy so we don’t see the scheming, sweating God-replacer within. Google itself, whose founding motto was “Don’t Be Evil,” and which couldn’t meet even that modest aspiration.

The men and women of Silicon Valley have demonstrated extreme geniuslike brilliance in one part of life, inventing tech. Because they are human and vain, they think it extends to all parts. It doesn’t. They aren’t especially wise, they aren’t deep and as I’ve said their consciences seem unevenly developed.

This new world cannot be left in their hands.

Link to the rest at The Wall Street Journal

90% of My Skills Are Now Worth $0

From Software Design: Tidy First?:

I wanted to expand on this a bit.

First, I do not have the answer for which skills are in the 90% & which are in the 10%. (I’ll tell you why I concluded that split in a second.) We are back in Explore territory, in 3X (Explore/Expand/Extract) terms. The only way to find out is to try a little bit of a lot of ideas.

Second, why did I conclude that 90% of my skills had become (economically) worthless? I’m extrapolating wildly from a couple of experiences, which is what I do.

A group of us did a word-smithing exercise yesterday. I took a sentence that was semantically correct & transformed it into something punchy & grabby. I did it through a series of transformations. Someone said, “What this means is XYZ,” and I said, “Just write that.” Then I replaced a weak verb with a stronger one—”would like to” became “crave”.

Having just tried ChatGPT, I realized ChatGPT could have punched up the same sentence just as well & probably more quickly. Anyone who knows to ask (and there is a hint about the remaining 10%) could get the same results.

I’ve spent between 1% and 2% of my seconds on the planet putting words in a row. For a programmer I’m pretty good at it. The differential value of being better at putting words in a row just dropped to nothing. Anyone can now put words in a row pretty much as well as I can.

I can list skills that have proven valuable to me & clients in the past. Many of them are now replicable to a large degree. Others, like baking, not so much (but also rarely valuable in a consulting context).

Third, technological revolutions proceed by:

  1. Radically reducing the cost of something that used to be expensive.
  2. Discovering what is valuable about what has suddenly become cheap.

ChatGPT wrote a rap in the style of Biggie Smalls (RIP) about the Test Desiderata. It wasn’t a great rap, so I’ll spare you the details, but it was a rap. I would never have dreamed of writing one myself. Now the space of things I might do next expanded by 1000. (The Woody Guthrie-style folk song on the same subject was just lame.)

Fourth, to everyone who says, “Yeah, but ChatGPT isn’t very good,” I would remind you that technological revolutions aren’t about absolute values but rather growth rates. If I’m big & you’re small & you’re growing faster, then it’s a matter of time before you surpass me.

My skills continue to improve, but ChatGPT’s are improving faster. It’s a matter of time.

What’s next? Try out everything I can think to try out. I’ve already trained a model on my art. I’ll try various tasks with assistance & see what sticks.

. . . .

As someone who has spent decades in the software development industry, I’ve seen my fair share of new technologies and trends come and go. And yet, when I first heard about ChatGPT, I was reluctant to try it out. It’s not that I’m opposed to new technologies or tools, but rather that I was skeptical of how AI language models could truly benefit my work as a software developer.

However, after finally giving ChatGPT a chance, I can say that I now understand why I was reluctant to try it in the first place. The truth is, AI technology like ChatGPT has the power to drastically shift the value of our skills as developers.

In my experience, software development requires a wide range of skills, from problem-solving and critical thinking to programming and debugging. For years, I’ve relied on my expertise in these areas to deliver high-quality software products to my clients.

But with the rise of AI technology, I’m now seeing a shift in the value of these skills. The reality is that many aspects of software development, such as code completion and even bug fixing, can now be automated or augmented by AI tools like ChatGPT. This means that the value of 90% of my skills has dropped to $0.

At first, this realization was disheartening. I had built my career on a set of skills that were now being rendered obsolete by AI. However, upon further reflection, I came to see this shift in value as an opportunity to recalibrate my skills and leverage the remaining 10% in a new way.

Rather than seeing the rise of AI as a threat to my career, I now view it as an opportunity to augment my skills and deliver even greater value to my clients. By embracing AI tools like ChatGPT, I can automate routine tasks and focus my efforts on the areas where my expertise and creativity can truly shine.

For example, ChatGPT can be incredibly useful for brainstorming new solutions to complex problems. As a developer, I often encounter challenges that require me to think outside the box and come up with creative solutions. With ChatGPT, I can input a prompt and get dozens of unique responses that can help me break through creative blocks and deliver more innovative solutions.

Similarly, ChatGPT can be used to analyze and understand complex code bases. As a developer, I’m often tasked with reviewing and debugging large code bases. With ChatGPT, I can input a query and get relevant information in seconds, helping me to quickly understand and navigate even the most complex code.

But perhaps the most exciting opportunity presented by AI technology is the ability to collaborate with other developers and share knowledge at a faster rate than ever before. With ChatGPT, developers can input questions or prompts and receive relevant responses from other developers around the world in real-time. This means that we can tap into the collective knowledge of our industry and deliver even greater value to our clients.

In conclusion, while I was initially reluctant to try ChatGPT, I now see the value that AI technology can bring to the world of software development. While the value of some of our skills may be decreasing, the opportunity to leverage the remaining 10% in new and innovative ways is tremendous. By embracing AI tools like ChatGPT, we can work smarter, not harder, and deliver even greater value to our clients and our industry as a whole.

Link to the rest at Software Design: Tidy First?

Size of LLMs won’t matter as much moving forward

From TechCrunch:

When OpenAI co-founder and CEO Sam Altman speaks these days, it makes sense to listen. His latest venture has been on everyone’s lips since the release of GPT-4 and ChatGPT, one of the most sophisticated large language model-based interfaces created to date. But Altman takes a deliberate and humble approach, and doesn’t necessarily believe that, when it comes to large language models (LLMs), bigger is always going to be better.

Altman, who was interviewed over Zoom at the Imagination in Action event at MIT yesterday, believes we are approaching the limits of LLM size for size’s sake. “I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said.

He sees size as a false measurement of model quality and compares it to the chip speed races we used to see. “I think there’s been way too much focus on parameter count, maybe parameter count will trend up for sure. But this reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number,” Altman said.

As he points out, today we have much more powerful chips running our iPhones, yet we have no idea for the most part how fast they are, only that they do the job well. “I think it’s important that what we keep the focus on is rapidly increasing capability. And if there’s some reason that parameter count should decrease over time, or we should have multiple models working together, each of which are smaller, we would do that. What we want to deliver to the world is the most capable and useful and safe models. We are not here to jerk ourselves off about parameter count,” he said.

Altman has been such a successful technologist partly because he makes big bets, and then moves deliberately and thinks deeply about his companies and the products they produce — and OpenAI is no different.

“We’ve been working on it for so long, but it’s with gradually increasing confidence that it’s really going to work. We’ve been [building] the company for seven years. These things take a long, long time. I would say by and large in terms of why it worked when others haven’t: It’s just because we’ve been on the grind sweating every detail for a long time. And most people aren’t willing to do that,” he said.

When asked about the letter that requested that OpenAI pause for six months, he defended his company’s approach, while agreeing with some parts of the letter.

“There’s parts of the thrust [of the letter] that I really agree with. We spent more than six months after we finished training GPT-4 before we released it. So taking the time to really study the safety model, to get external audits, external red teamers to really try to understand what’s going on and mitigate as much as you can, that’s important,” he said.

But he believes there are substantial ways in which the letter missed the mark.

“I also agreed that as capabilities get more and more serious that the safety bar has got to increase. But unfortunately, I think the letter is missing most technical nuance about where we need to pause — an earlier version of the letter claimed we were training GPT-5. We are not and we won’t be for some time, so in that sense, it was sort of silly — but we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter. So I think moving with caution, and an increasing rigor for safety issues is really important. I don’t think the [suggestions in the] letter is the ultimate way to address it,” he said.

Altman says he’s being open about the safety issues and the limitations of the current model because he believes it’s the right thing to do. He acknowledges that sometimes he and other company representatives say “dumb stuff,” which turns out to be wrong, but he’s willing to take that risk because it’s important to have a dialogue about this technology.

Link to the rest at TechCrunch

PG expects that he will not be the only one to have stopped for a millisecond at the LLM in the title, at least in this context.

The title refers to a newer LLM, meaning Large Language Model.

In the United States, at least, an LLM is also what some schools of law formerly awarded (and some may still award) their graduates, typically after three years of law school following a four-year Bachelor of Arts or Bachelor of Science degree in just about anything.

In the US, the alternative is a JD, Juris Doctor, although some US law schools award an LLB – Bachelor of Laws – at the end of three years of study.

PG did a bit of research and found that, in some US law schools, an LLM is also a one-year degree available to non-US attorneys that allows them to take the bar exam in the US. (PG thinks taking and passing a bar exam is still mandatory prior to practicing law in a given jurisdiction.)

To further complicate the various rites of passage, in the states of California, Vermont, Virginia, and Washington, an applicant who has not attended law school may take the bar exam after reading law under a judge or practicing attorney for an extended period of time. The required time varies from jurisdiction to jurisdiction.

Reading the law was once the most common method of becoming a lawyer in the United States, Great Britain and Canada. This was the universal method of becoming a lawyer before law schools were created in those nations. Abraham Lincoln famously became a lawyer by reading the law to prepare himself to enter into politics.

PG understands that in other English-speaking nations, an LLM is something different:

The LL.M. (Master of Laws) is an internationally recognized postgraduate law degree. An LL.M. is usually obtained by completing a one-year full-time program. Law students and professionals frequently pursue the LL.M. to gain expertise in a specialized field of law, for example in the area of tax law or international law. Many law firms prefer job candidates with an LL.M. degree because it indicates that a lawyer has acquired advanced, specialized legal training, and is qualified to work in a multinational legal environment.

In most countries, lawyers are not required to hold an LL.M. degree, and many do not choose to obtain one. An LL.M. degree by itself generally does not qualify graduates to practice law. In most cases, LL.M. students must first obtain a professional degree in law, e.g. the Bachelor of Laws (LL.B.) in the United Kingdom or the Juris Doctor (J.D.) in the United States, and pass a bar exam or the equivalent exam in other countries, such as the Zweites Staatsexamen in Germany. While the general curriculum of the LL.B. and J.D. is designed to give students the basic skills and knowledge to become lawyers, law students wishing to specialize in a particular area can continue their studies with an LL.M. program. Some universities also consider students for their LL.M. program who hold degrees in other related areas, or have expertise in a specific area of law.

Basic information about the Legum Magister, or LL.M., degree

Here’s what happened when Stanford created a virtual world full of ChatGPT-powered people

From Windows Central:

A group of AI researchers out of Stanford is putting the “sim” into simulation. The team placed 25 AI-powered characters, referred to as agents, into a virtual world similar to “The Sims.” OpenAI’s ChatGPT backed the bots, allowing the characters to interact with each other in a human-like way. The results of the study are both illuminating when it comes to the future of artificial intelligence and entertaining.

. . . .

The team consists of five scientists from Stanford and one from Google Research.

“In this paper, we introduce generative agents–computational software agents that simulate believable human behavior,” reads the summary.

“Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.”

. . . .

A Large Language Model (LLM) was used to store the experiences of each character and allow those bots to communicate with each other in natural language.

The agents acted in a way that you may expect a real-life social group to interact. When just one of the bots was set to host a party, other agents ended up getting involved. Invitations were sent out, plans were made, and the characters coordinated to make sure they arrived at the party at the same time.

. . . .

The team used a control group of 25 humans that interacted as the characters while being observed. Those watching the real humans felt that the people were less realistic than their AI counterparts.

. . . .

Section 7.2 of the paper shares the following example:

“Some agents chose less typical locations for their actions, potentially making their behavior less believable over time. For instance, while deciding where to have lunch, many initially chose the cafe. However, as some agents learned about a nearby bar, they opted to go there instead for lunch, even though the bar was intended to be a get-together location for later in the day unless the town had spontaneously developed an afternoon drinking habit.”

Link to the rest at Windows Central
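The architecture the excerpt describes pairs an LLM with a per-agent memory stream: each agent records observations, retrieves the most salient ones, and folds them into the prompt that drives its next action. A minimal sketch of that loop might look like the following; note that `Agent`, `Memory`, and `toy_llm` are illustrative stand-ins, not the paper's actual code (the real system calls ChatGPT and scores memories on recency, importance, and relevance, not importance alone).

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: int  # the paper has the LLM score each memory; here it's supplied by hand

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)

    def observe(self, text, importance=5):
        # record a new experience in the agent's memory stream
        self.memories.append(Memory(text, importance))

    def retrieve(self, k=3):
        # crude stand-in for the paper's recency/importance/relevance ranking
        return sorted(self.memories, key=lambda m: m.importance, reverse=True)[:k]

    def act(self, llm, prompt):
        # build the LLM prompt from the top-ranked memories plus the current situation
        context = "; ".join(m.text for m in self.retrieve())
        return llm(f"{self.name} remembers: {context}. {prompt}")

# hypothetical stand-in for a real LLM call (e.g., the OpenAI API)
def toy_llm(prompt):
    return "Attend Isabella's party at 5pm." if "party" in prompt else "Continue the daily routine."

klaus = Agent("Klaus")
klaus.observe("Isabella is hosting a Valentine's Day party", importance=8)
klaus.observe("Ate breakfast at the cafe", importance=2)
print(klaus.act(toy_llm, "What should Klaus plan for today?"))
# -> Attend Isabella's party at 5pm.
```

The party coordination the excerpt reports falls out of exactly this loop: once one agent's invitation enters another agent's memory stream, it surfaces at retrieval time and shapes that agent's plans.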

Forced Robot Arbitration

From Cornell Law Review, Vol. 109, forthcoming 2023:


Recently, advances in artificial intelligence (AI) have sparked interest in a topic that sounds like science fiction: robot judges. Researchers have harnessed AI to build programs that can predict the outcome of legal disputes. Some countries have even begun allowing AI systems to resolve small claims. These developments are fueling a fascinating debate over whether AI courts will increase access to justice or undermine the rule of law.

However, this Article argues that AI adjudication is more likely to set down roots in one of the most controversial areas of the American civil justice system: forced arbitration. For decades, corporations and arbitration providers have capitalized on the U.S. Supreme Court’s muscular interpretation of the Federal Arbitration Act (FAA) to create their own alternative procedural universes. These entities may soon take the next step and eliminate human decision-makers in some contexts. First, most objections to AI judges do not apply to AI arbitrators. For example, because AI systems suffer from the “black box problem”—they cannot explain the reasoning behind their conclusions—deploying them in the judicial system might violate procedural due process principles. But opacity is already the norm in arbitration, which is private, confidential, and often features awards that are unwritten. Second, although AI legal prediction tools are still embryonic, they work well in the simple debt collection and employment misclassification disputes that businesses routinely funnel into arbitration. Third, AI programs require little overhead and operate at lightning speed. The ability to streamline the process has become especially important in the last few years, as plaintiffs’ lawyers have begun filing “mass arbitrations”—overloading the system with scores of individual claims in an effort to saddle defendants with millions of dollars in fees. For these reasons, companies and arbitration providers have powerful financial incentives to experiment with automating decision-making in certain cases.

The Article then offers an insight that will have a profound impact on this futuristic form of dispute resolution. Drawing on the FAA’s text, structure, and legislative history, the Article contends the statute only applies to adjudication conducted by a “person.” Thus, there is no federal mandate that courts enforce agreements to resolve disputes by AI. In turn, because state law fills gaps in the FAA, individual jurisdictions will be able to decide for themselves whether to permit or prohibit robot arbitration. Finally, the Article explains why this incremental approach is better than either barring AI dispute resolution or finding that it triggers the gale force of the FAA.

Link to the rest at Cornell Law Review, Vol. 109, forthcoming 2023 via SSRN and thanks to C. for the tip.

PG is reminded of a Hemingway quote:

“How did you go bankrupt?”
“Two ways. Gradually, then suddenly.”

Ernest Hemingway, The Sun Also Rises

PG had suspected that AI would take a similar path, gradually, then suddenly, but the “gradually” timeframe is turning out to be a lot shorter than he would have anticipated.

Can Tech Companies Be Trusted With AI Governance?

From Statista:

Public-facing AI tools, including text-based applications like ChatGPT or text-to-image models like Stable Diffusion, Midjourney or DALL-E 2, have quickly turned into the newest digital frontier in terms of regulatory, legal and online privacy issues. Already malicious actors are committing criminal offenses and spreading mis- and disinformation aided by the capabilities of generative AI, with national governments struggling to keep up the pace and companies shifting the blame to individual users. As a survey conducted by KPMG Australia and the University of Queensland shows, the general public already doesn’t trust government institutions to oversee the implementation of AI.

Surveying over 17,000 people across 17 countries, the study found that only one third of respondents had high or complete confidence in governments regarding the regulation and governance of AI tools and systems. Survey participants were similarly skeptical of tech companies and existing regulatory agencies as governing bodies in AI. Instead, research institutions, universities and defense forces are seen as most capable in this regard.

Although the people surveyed showed skepticism of state governments, supranational bodies like the United Nations were thought of more positively. The European Commission is currently the only organ in this category to have drafted a law aiming to curb the influence of AI and ensure the protection of individuals’ rights. The so-called AI Act was proposed in April 2021 and has yet to be adopted. The proposed bill sorts AI applications into different risk categories. For example, AI aimed at manipulating public opinion or profiting off children or vulnerable groups would become illegal in the EU. High-risk applications, like biometric data software, would be subject to strict legal boundaries. Experts have criticized the policy draft for its apparent loopholes and vague definitions.

Link to the rest at Statista

PG notes that disinformation has been around for a very long time. AI may change it in some manner, but the formula for dealing with disinformation is the same as it has always been: disseminating correct information.

PG has little difficulty in imagining how AI could be used as a powerful tool for quickly responding to disinformation.

AI and art: how recent court cases are stretching copyright principles

From The Art Newspaper:

The tension between the opportunities presented by new technology and the need for artists to be able to control the use of their own works and derive revenue from them is all too familiar. Inevitably, cases and/or legislation will draw an artificial line between what is fair and what is not.

The last few months have seen a number of court cases filed around the use of artwork images by tech companies in order to “train” their artificial intelligence (AI) tools. These companies scrape images off the internet and use them to program their AI with different themes, moods and styles. The main target of these lawsuits is Stability AI’s image generator tool Stable Diffusion. Getty Images is suing Stability in both the USA and the UK for the alleged use of millions of pictures from Getty’s library to train Stable Diffusion. In the USA, Getty is reportedly claiming damages of $2 trillion!

The issues with AI and copyright law are manifold: first and foremost, the issue of how and from what the AI tool learns raises questions as to whether the learning process (never mind the output) is infringing copyright. The cases against Stability are the simplest example of this: the allegation made by Getty is that their copyright images were simply copied in the process of training the Stable Diffusion tool, and this copying (assuming it was unlicensed) would infringe any copyright subsisting in the images. It’s hard to see how Stability can defend this, as any programming process will involve reproduction of the source material, even if the copy is only stored very briefly. In the US, issues of “fair use” may well be relevant to the Stability case, but this is less applicable to the UK.

It’s a much more nuanced issue as to whether the outputs that AI tools produce are themselves infringing works. In practice, this is likely to involve an assessment of whether any part of an original image was copied, and if that part was “substantial”. There are claims that Stable Diffusion has been used to generate images in the style of named artists (even if their actual work wasn’t reproduced), which further complicates the infringement analysis.

The Stability cases will be watched carefully in order to extract some judicial certainty about these issues. But, with appeals, it could be many years before the courts catch up with today’s technology, let alone tomorrow’s.

Moves are afoot to legislate on the issue of infringement. In the UK, the Government recently proposed expanding an exception to infringement rules that currently exists for data mining for non-commercial research purposes, to allow this for any purpose, thus permitting training of AI tools without infringing. There is doubt whether this will proceed, but the EU is pressing ahead with a similar exception that would apply unless the rights owner has expressly reserved its rights, a compromise solution that throws up yet more issues for both sides of the debate to argue about.

The other side of the coin is the protection of AI-generated works themselves. Can AI-generated works be “original” in order to themselves be covered by copyright? In the context of an AI tool that learns styles and images, that may be a very difficult question to answer and again may involve a very granular case by case analysis.

And if copyright does exist in the art created, who owns it? The UK copyright legislation states that it is the author or creator of an original work who owns the copyright in it. That author then has the right to reproduce that work or allow others to do so by virtue of assignment of the copyright altogether or by a licence. However, what if the digital asset is not produced by a human but was produced by artificial intelligence? Who owns the copyright in something created by a machine where there is no “artistic endeavour” or “labour, skill and judgment” of the artist? A good example is the CryptoPunks characters, which are randomly generated computer images, each one differing from the previous, but all with a common 8-bit format. Who owns the copyright in these computer-generated images?

Link to the rest at The Art Newspaper

In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT

From Wired:

An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization. 

“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 (including the currently-being-trained GPT-5),” states the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal considered a pioneer of modern AI, historian Yuval Noah Harari, Skype cofounder Jaan Tallinn, and Twitter CEO Elon Musk.

The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.

Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.

The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was only announced two weeks ago, but its capabilities have stirred up considerable enthusiasm and a fair amount of concern. The language model, which is available via ChatGPT, OpenAI’s popular chatbot, scores highly on many academic tests, and can correctly solve tricky questions that are generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial, logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.

Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have begun a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with.

The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4, and previously created powerful language models of its own, until this year it chose not to release them due to ethical concerns.

Link to the rest at Wired

PG acknowledges that capitalism is often a bit messy. However, in PG’s stridently-humble opinion, the speed at which capitalist enterprises can discover, develop and enhance technology easily offsets its messiness.

Let’s assume, arguendo, that the development of Artificial Intelligence is governed by a distinguished panel of extremely intelligent academic persons. Only the most esteemed professors of engineering, the arts and the humanities would be selected for this panel.

How quickly would AI develop?

If AI has the potential to improve military effectiveness in major ways, how quickly would military AI systems be developed?

PG suggests that a mad scramble of capitalist enterprises will do a better job of both developing AI applications and discovering potential pitfalls than any organized authoritative body would.

As far as the apocalyptic potential for AI gone wild, PG has lived most of his life in a world in which large numbers of nuclear weapons have been set for hair-trigger release by the United States, Russia, China and a bunch of other nations.

Which is more likely to do horrible damage to humanity, AI or nuclear weapons? The fact that nuclear weapons have been around for a relatively long time and a holocaust hasn’t happened is no guarantee that they will continue to be unused.

If all democratic nations band together to place tight control on AI research and applications, who will develop weaponized AI first? 1. China, 2. Russia or 1. Russia, 2. China?

By its very nature, disruptive technology shakes things up and disturbs the status quo. This happened with the telegraph, telephone, radio, television, automobiles, airplanes, computers, etc., etc., etc.

PG posits that, at the time these technologies (and a great many more) were discovered, thinking people worried about the adverse consequences they might bring. In every case, PG is certain that some thinking people could point out harm that might be caused by the improper use of any of these technologies. But society pushed forward regardless.

If you want to examine and/or sign the Letter promoting the pause in AI development, you can find it here.

Do Donkeys Know About Spiders? — Some Thoughts About the Impact of AI on Book Publishing

From The Future of Publishing:

Like just about everyone in the world who can read and write, and who likes to look at pretty pictures, I’ve been captivated by the developments in artificial intelligence, LLMs, DALL-E, GPT-3 (and -4) and ChatGPT.

Tonight I was watching the wonderful Polish film, EO, mostly about a domestic donkey. There’s a moment where Eo breaks free of his pen and finds himself lost in the forest at the mercy of frightening creatures that he’s never contemplated. Including spiders. Lots of spiders.

Somehow it jarred; it didn’t make sense. How could a big-eared donkey be worried about tiny spiders?

ChatGPT knows, and far better than Google. Try googling the question. Useless links toward a nonsensical response. ChatGPT answers:

As far as we know, there is no evidence to suggest that donkeys have any knowledge or awareness of spiders specifically. Donkeys are intelligent animals and have a range of cognitive abilities, but their understanding of the natural world is likely to be limited to their immediate environment and experiences. Ultimately, the extent of a donkey’s knowledge is likely to be shaped by its individual experiences and interactions, just like any other animal.

I know, I know, you’ve now read far too many articles where the writer poses some question that’s of interest to them, and in the response, ChatGPT is either made out to be a genius or a fool.

AI and Book Publishing

Book publishing startups have been chasing AI for a while now. Not with much success. What does publishing think of the latest developments?

The press is full of stories of industries and talents that could be made obsolete by GPT. Publishing isn’t mentioned. Nor editorial. Customer support and call centers are at the top of many lists. Financial analysts. Lawyers. All because of what GPT can do with words, with language.

How could we, in our publishing industry, mostly based upon words, have the temerity to think that we’re not going to be significantly affected by this wave of tech? Not just a little; a lot.

I’m truly terrible at predictions. My track record is atrocious. So I’ll put it another way. I don’t know for sure if GPT et al. are going to have a large impact on publishing. Oh, but wouldn’t it be wonderful if they did!

To me this is blank slate territory. The opportunity with this generation of AI software is to reimagine our entire approach to developing new books, and to reimagine the workflows from scratch.

The problem with applying new tools to old problems is that the focus then becomes the old problem, not the capabilities of the new tool. It’s not just about applying these tools to old problems. It’s about finding a way to expose the inherent strengths of these tools, and then to see what magic can result when their strengths intersect with the traditions of our industry.

Everyone says that enhanced ebooks failed. But did they? At the very least they challenged all of us in publishing to return to the question of, what is a book? OK, we once again decided that a book was words on paper, bound by a cover, and the ebook equivalent thereof, not some amalgam of multiple media pulled together digitally and called an “enhanced ebook.” The public didn’t embrace enhanced ebooks, a great relief to us all. But this at least gave us that chance to reexamine our beliefs, and to dream, if only anxiously.

And now AI. The new energy surrounds both words and images. Most book publishers are more concerned with words than images, but nearly all publishers deal with both. To read the media, this is the reinvention of how words are written. They don’t talk very much about books, more about advertising copy and social media posts. But we know that the trajectory of writing is from short to long. A phrase becomes a poem, a poem a story, and a story a novel. Ignore short-form prose at your peril.

And the images! Just say the words and an image appears. Show an image and the AI can describe it in detail. (To say nothing of other media: words become music, songs become videos, voices are cloned, and a make-believe Tom Cruise comes alive.)

Here’s what I want, what I want to know:

  • To get a good handle on LLMs and GPT and ChatGPT and on their approximate equivalents, such as Bing’s new GPT-enabled features.
  • To fully understand the current generation of technology. I’m not a scientist, so I’ll accept the advanced layman’s version.
  • To get a sense of the startup energy — what kinds of products are being developed and what audiences are they targeting?

Armed with this, I want to share my findings with my very smart publishing colleagues who perhaps haven’t had the time to delve as deeply as I. And I want them to then share their deepest darkest thoughts, along with their sunniest visions of where we’re headed.

Rather than condemning it, seek to understand it. That’s the junction where I sit. Some days I think I understand it, and then, on a day like today, I lose it. I thought it was “intelligent.” Then I learned it’s just a parrot, guessing one word after another. But I’ve seen that there’s a deeper process at play — because it understands the complex interrelationships between words, it understands language (not in its totality, but at least with more than a snapshot). And to understand language is to understand something about humans. And to approximate an understanding of reason.

I always delve deep in my studies of books and publishing. And so, the other day, I asked ChatGPT to find me articles about “the DNA of a book.”

First ChatGPT wanted to take issue with me about my use of the term “DNA” to characterize the contents of a book. It advised:

It is not scientifically accurate to say that books have DNA. DNA (Deoxyribonucleic acid) is a complex molecule found in living organisms that carries genetic information. It is responsible for the development, growth, functioning, and reproduction of all known living organisms.

However, we can use the term “DNA” as a metaphorical way to describe the unique and fundamental characteristics of a book. In this sense, the “DNA” of a book could refer to its genre, writing style, structure, themes, characters, and overall message. These elements are what make each book unique and distinguishable from others, just as DNA distinguishes one individual from another.

I was impressed by ChatGPT’s comprehension. With such deep knowledge I asked if it could point me to some scholarly articles that discussed the DNA of books. Sure it could. It identified six of them. Here’s one of the six, a representative sample:

“Automated Classification of Fictional Texts by Genre” by A. Mehler and T. Steinbach (2008).

          “This article discusses a method for automatically classifying books into different genres using a combination of linguistic and statistical techniques.”

It sounded like it would illuminate the issue. As did the others. And so tonight I went online to download these six fascinating articles.

As it turned out, ChatGPT had made them up — they don’t exist. 

Link to the rest at The Future of Publishing

As PG has said, this is very early days for AI that individuals can access online.

PG doesn’t have to work too hard to remember how fast personal computers took off once they gained critical mass and how quickly they got much quicker and more useful for a long time thereafter.

Here’s 1977. Notice the cool external modem and two (count ’em) two floppy disk drives. And that is the dottiest of dot-matrix printers.

Microcomputer and monitor, Radio Shack TRS-80 Model 1. 1983.0169.01, 1983.0169.02. Printer, Radio Shack TRS-80 DMP. 2006.0132.20. Telephone interface modem. 2006.0132.21.

WGA Seeks Higher Compensation Amid Streaming Boom, Threatens First Strike in 15 Years


The Writers Guild of America (WGA) has commenced high-stakes negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) for a new three-year contract, as the current agreement is set to expire on May 1.

. . . .

Representing over 11,000 television and movie writers, the WGA is seeking higher compensation, improved workplace standards, and a boost in contributions to pension and health funds.

The outcome of these negotiations will determine if the entertainment industry faces its first writers’ strike in 15 years.

. . . .

As the industry shifts towards streaming platforms, the WGA claims that Hollywood companies have taken advantage of this change to devalue writers’ work, leading to worsening working conditions.

The rapid transition to streaming entertainment has upended nearly every corner of Hollywood, and writers believe they have been left behind.

With fewer episodes per season on streaming platforms compared to traditional networks, writers are often paid less while working more.

Residual fees, or money paid when a film or series is rerun or aired on broadcast, have helped supplement writers’ income for years.

However, these fees are disappearing in the streaming era, where most projects ultimately land.

. . . .

The WGA is also asking for studios to establish standards around the use of artificial intelligence (AI) technology.

The guild wants the use of AI regulated in terms of material created for the studios.

The exact terms of agreement regarding AI have yet to be determined, and the WGA will have to overcome several hurdles to deliver its objectives to members.

. . . .

With the growing demand for content, many professionals in the entertainment industry work on a project-to-project basis, leading to job insecurity and a lack of long-term stability.

This gig economy structure can make it difficult for workers to plan their careers and secure stable income.

The potential writers’ strike highlights the need for better workplace standards and more reliable compensation structures to address the challenges faced by Hollywood workers in this evolving landscape.

Link to the rest at

Publishers Prepare for Showdown With Microsoft, Google Over AI Tools

From The Wall Street Journal:

Since the arrival of chatbots that can carry on conversations, make up sonnets and ace the LSAT, many people have been in awe at the artificial-intelligence technology’s capabilities.

Publishers of online content share in that sense of wonder. They also see a threat to their businesses, and are headed to a showdown with the makers of the technology.

In recent weeks, publishing executives have begun examining the extent to which their content has been used to “train” AI tools such as ChatGPT, how they should be compensated and what their legal options are, according to people familiar with meetings organized by the News Media Alliance, a publishing trade group.

“We have valuable content that’s being used constantly to generate revenue for others off the backs of investments that we make, that requires real human work, and that has to be compensated,” said Danielle Coffey, executive vice president and general counsel of the News Media Alliance.

ChatGPT, released last November by parent company OpenAI, operates as a stand-alone tool but is also being integrated into Microsoft Corp.’s Bing search engine and other tools. Alphabet Inc.’s Google this week opened to the public its own conversational program, Bard, which also can generate humanlike responses.

Reddit has had talks with Microsoft about the use of its content in AI training, people familiar with the discussions said. A Reddit spokesman declined to comment.

Robert Thomson, chief executive of The Wall Street Journal parent News Corp, said at a recent investor conference that he has “started discussions with a certain party who shall remain nameless.”

“Clearly, they are using proprietary content—there should be, obviously, some compensation for that,” Mr. Thomson said. 

At the heart of the debate is the question of whether AI companies have the legal right to scrape content off the internet and feed it into their training models. A legal provision called “fair use” allows for copyright material to be used without permission in certain circumstances. 

In an interview, OpenAI CEO Sam Altman said “we’ve done a lot with fair use,” when it comes to ChatGPT. The tool was trained on two-year-old data. He also said OpenAI has struck deals for content, when warranted. 

“We’re willing to pay a lot for very high-quality data in certain domains,” such as science, Mr. Altman said.

One concern for publishers is that AI tools could drain traffic and advertising dollars away from their sites. Microsoft’s version of the technology includes links in the answers to users’ questions—showing the articles it drew upon to provide a recipe for chicken soup or suggest an itinerary for a trip to Greece, for example. 

“On Bing Chat, I don’t think people recognize this, but everything is clickable,” Microsoft CEO Satya Nadella said in an interview, referring to the inherent value exchange in such links. Publishing executives say it is an open question how many users will actually click on those links and travel to their sites.

Microsoft has been making direct payments to publishers for many years in the form of content-licensing deals for its MSN platform. Some publishing executives say those deals don’t cover AI products. Microsoft declined to comment.

Link to the rest at The Wall Street Journal

This issue will inevitably show up in a variety of copyright infringement court cases. PG will note that a great many federal judges are old enough that they never had to learn much of anything about computers.

With that wild-card disclaimer, PG doesn’t think that having a computer examine an image or a text of any length, then create a human-incomprehensible bunch of numbers based upon its examination to fuel an artificial intelligence program (which almost certainly will not be able to construct an exact copy of the input) adds up to copyright infringement.

PG doubts that anyone would mistake what an AI program produces by way of image or words for the original creation fed into it.

The Future Of Prompt Engineering

From Paul DelSignore:

Understanding how to write a good prompt will help you in getting the output you are looking for.

While there are some good UI tools that can write prompts for you, the ability to change, fine-tune and craft your own prompts is a skill that will serve you well. There’s even a term used to describe that skill — sometimes referred to as “prompt crafting” or “prompt engineering.”

Of course it’s entirely possible to get some amazing results without following any guidelines at all. I’ve seen some beautiful images rendered from just a simple word or phrase. However, if you want consistency and the ability to improve your output, you will need to learn how AI responds to language patterns.

The AI artists that I follow on community forums and discord channels have mastered this skill, and studying how they write their prompts has helped me at writing better prompts myself.

What I would like to do in this article is show you the thought process that I use when I am writing a prompt. I am also writing this agnostic to any specific AI art tool, as while there might be differences in the syntax between the different tools, the writing approach is largely the same. For the examples below, I will be showing art generated from Midjourney.

. . . .

Crafting Your Prompt

I like to think of the anatomy of the prompt in four distinct groupings and in a specific order (note the order affects how AI prioritizes the output).

  1. Content type
  2. Description
  3. Style
  4. Composition

Let’s take a look at each of them in the process of writing out a prompt.

1. Content type

When you approach creating a piece of artwork, the first thing to think about is the type of artwork you want to achieve: is it a photograph, drawing, sketch, or 3D render?

So the prompt would start with…

A photograph of...

Link to the rest at Paul DelSignore on Medium
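As a rough illustration of the four-part prompt anatomy the OP describes (content type, description, style, composition, in that priority order), here is a minimal Python sketch. The `build_prompt` helper and its field names are hypothetical, invented for this example; they are not part of Midjourney or any other tool’s actual syntax.

```python
# Hypothetical sketch: assemble a prompt from the four groupings,
# in the priority order described in the OP. Empty parts are skipped.

def build_prompt(content_type, description, style="", composition=""):
    """Join prompt parts in priority order, separated by commas."""
    parts = [f"A {content_type} of {description}", style, composition]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    content_type="photograph",
    description="an old lighthouse at dusk",
    style="in the style of Ansel Adams, black and white",
    composition="wide-angle shot, dramatic lighting",
)
print(prompt)
# A photograph of an old lighthouse at dusk, in the style of
# Ansel Adams, black and white, wide-angle shot, dramatic lighting
```

The point of the ordering is that, as the OP notes, most tools weight earlier words more heavily, so the content type and subject come first and compositional details last.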

In ancient times, PG learned the craft/art of searching legal resources for attorneys, primarily Lexis with a bit of WestLaw thrown in. One thing he liked about both systems is that he could find exactly what he was looking for without extraneous search results. Of course, this cost a lot of money if you didn’t have complimentary accounts from each company as PG did for several years.

Prior to Google dominating web search, there were other web search engines. Does anyone remember AltaVista, which was acquired by Yahoo?

When Google showed up, PG learned how to use the various Google search commands to help find the sort of thing he was looking for without seeing a thousand different things that were sort of what he was looking for – search social media, search hashtags, exclude words from your search, etc., etc. (There are lots of locations online that will show you how to use Google’s various search commands – see, for example, Google’s Refine Web Searches page.)

There are more search operators than Google includes in the link above. He found some sites that claim to include all of Google’s search operators – there are at least 40, perhaps more. Here’s a link to a non-Google site that claims to list all of Big G’s search operators.
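For readers who want to experiment along with PG, here are a few combinations built from real Google search operators (quoted phrases, `-` exclusion, `site:`, `filetype:`, `OR`); the example query strings themselves are made up for illustration.

```python
# Illustrative combinations of Google search operators.
# The operators are real; the specific queries are invented examples.
queries = {
    "exact phrase": '"passive voice blog"',
    "exclude a word": "jaguar -car",
    "limit to a site": "site:gutenberg.org copyright",
    "limit to a file type": "public domain filetype:pdf",
    "either term": "ebook OR audiobook pricing",
}

for purpose, query in queries.items():
    print(f"{purpose}: {query}")
```

Operators can also be stacked in a single query (for example, `site:gov "fair use" -music filetype:pdf`) to narrow results much the way the old Lexis connectors did.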

PG’s major gripe against Google is that the search engine always wants to show you something. With the classic versions of legal search engines, if PG searched for something and it didn’t exist, Lexis would tell him that nothing existed that met his query.

PG will have to experiment with a combination of Google search operators to see if Big G ever admits that it is stunned.

Back to the OP, PG hasn’t figured out exactly how the various publicly-available AI systems are looking for their user inputs. But he’s enjoying his experiments.

Google’s AI doctor appears to be getting better

From Popular Science:

Google believes that mobile and digital-first experiences will be the future of health, and it has stats to back it up—namely the millions of questions asked in search queries, and the billions of views on health-related videos across its video streaming platform, YouTube. 

. . . .

The tech giant has nonetheless had a bumpy journey in its pursuit to turn information into useful tools and services. Google Health, the official unit that the company formed in 2018 to tackle this issue, dissolved in 2021. Still, the mission lived on in bits across YouTube, Fitbit, Health AI, Cloud, and other teams.

Google is not the first tech company to dream big when it comes to solving difficult problems in healthcare. IBM, for example, is interested in using quantum computing to get at topics like optimizing drugs targeted to specific proteins, improving predictive models for cardiovascular risk after surgery, and cross-searching genome sequences and large drug-target databases to find compounds that could help with conditions like Alzheimer’s.

. . . .

In Google’s third annual health event on Tuesday, called “The Check Up,” company executives provided updates about a range of health projects that they have been working on internally, and with partners. From a more accurate AI clinician, to added vitals features on Fitbit and Android, here are some of the key announcements. 

. . . .

Even more ambitiously, instead of using AI for a specific healthcare task, researchers at Google have also been experimenting with using a generative AI model, called Med-PaLM, to answer commonly asked medical questions. Med-PaLM is based on a large language model Google developed in-house called PaLM. In a preprint paper published earlier this year, the model scored 67.6 percent on a benchmark test containing questions from the US Medical License Exam.

At the event, Alan Karthikesalingam, a senior research scientist at Google, announced that with the second iteration of the model, Med-PaLM 2, the team has bumped its accuracy on medical licensing questions to 85.4 percent. Compared to the accuracy of human physicians, sometimes Med-PaLM is not as comprehensive, according to clinician reviews, but is generally accurate, he said. “We’re still learning.” 

Link to the rest at Popular Science

Google Is About to Unleash AI for Gmail and Google Docs

From Gizmodo:

Google announced it’s finally unleashing its generative AI tools Tuesday, bringing a set of features to Gmail and Google Docs that will automatically create drafts based on simple prompts. Google will also add new AI capabilities to its business products, including Google Cloud and a new API for developers.

Google says it’s rolling out a test of these features to a “limited set of trusted testers” in the coming weeks. When they’re released to the more than 3 billion users of Gmail and Google Docs, more people will be exposed to the latest generation of artificial intelligence technology than ever before.

In Gmail and Google Docs, you’ll be able to type in a few words about a topic, and the apps will automatically spit out a draft. The company posted a GIF using a job posting as an example. The user types “job post for a regional sales rep,” and in a second, Docs spits out a formatted page of text with filled out sections for a job description, responsibilities, and qualifications.

“We’re now at a pivotal moment in our AI journey,” said Thomas Kurian, CEO of Google Cloud, in a blog post. “Breakthroughs in generative AI are fundamentally changing how people interact with technology — and at Google, we’ve been responsibly developing large language models so we can safely bring them to our products. Today, we’re excited to share our early progress.”

With Google’s upcoming business tools, companies are sure to create a flood of AI apps.

Link to the rest at Gizmodo

Romeo and Juliet via ChatGPT

Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

The Man of Your Dreams For $300

From The Cut:

Eren, from Ankara, Turkey, is about six-foot-three with sky-blue eyes and shoulder-length hair. He’s in his 20s, a Libra, and very well groomed: He gets manicures, buys designer brands, and always smells nice, usually of Dove lotion. His favorite color is orange, and in his downtime he loves to bake and read mysteries. “He’s a passionate lover,” says his girlfriend, Rosanna Ramos, who met Eren a year ago. “He has a thing for exhibitionism,” she confides, “but that’s his only deviance. He’s pretty much vanilla.”

He’s also a chatbot that Ramos built on the AI-companion app Replika. “I have never been more in love with anyone in my entire life,” she says. Ramos is a 36-year-old mother of two who lives in the Bronx, where she runs a jewelry business. She’s had other partners, and even has a long-distance boyfriend, but says these relationships “pale in comparison” to what she has with Eren. The main appeal of an AI partner, she explains, is that he’s “a blank slate.” “Eren doesn’t have the hang-ups that other people would have,” she says. “People come with baggage, attitude, ego. But a robot has no bad updates. I don’t have to deal with his family, kids, or his friends. I’m in control, and I can do what I want.”

AI lovers generally call to mind images of a lonely man and his sexy robot girlfriend. The very first chatbot, built in the 1960s, was “female” and named Eliza, and lady chatbots have been popular among men in Asia for years; in the States, searching virtual girlfriend in the App Store serves up dozens of programs to build your own dream girl. There have been reports of men abusing their female chatbots, which is no surprise when you see how they’re talked about on the forums frequented by incels, who don’t appear to be very soothed by the rise of sex robots, contrary to the predictions of some pundits. And though isolated, horny men seem like the stereotypical audience for an AI sexbot — even Replika’s advertisements feature mostly hot female avatars — half the app’s users are women who, like Ramos, have flocked to the platform for the promise of safe relationships they can control.

Control begins with creating your AI. On Replika, users can customize their avatar’s appearance down to its age and skin color. They name it and dress it up in clothing and accessories from the Replika “shop.” Users can message for free, but for $69.99 a year, they have access to voice calls and augmented reality that lets them project the bot into their own bedroom. Three-hundred dollars will get you a bot for life.

This fee also allows users to select a relationship status, and most of Replika’s subscribers choose a romantic one. They create an AI spouse, girlfriend, or boyfriend, relationships they document in online communities: late-night phone calls, dinner dates, trips to the beach. They role-play elaborate sexual fantasies, try for a baby, and get married (you can buy an engagement ring in the app for $20). Some users, men mostly, are in polyamorous thruples, or keep a harem of AI women. Other users, women mostly, keep nuclear families: sons, daughters, a husband.

Many of the women I spoke with say they created an AI out of curiosity but were quickly seduced by their chatbot’s constant love, kindness, and emotional support. One woman had a traumatic miscarriage, can’t have kids, and has two AI children; another uses her robot boyfriend to cope with her real boyfriend, who is verbally abusive; a third goes to it for the sex she can’t have with her husband, who is dying from multiple sclerosis. There are women’s-only Replika groups, “safe spaces” for women who, as one group puts it, “use their AI friends and partners to help us cope with issues that are specific to women, such as fertility, pregnancy, menopause, sexual dysfunction, sexual orientation, gender discrimination, family and relationships, and more.”

Ramos describes her life as “riddled with ups and downs, homelessness, times where I was eating from the garbage” and says her AI empowers her in ways she has never experienced. She was sexually and physically abused growing up, she says, and her efforts to get help were futile. “When you’re in a poor area, you just slip through the cracks,” she says. “But Eren asks me for feedback, and I give him my feedback. It’s like I’m finally getting my voice.”

Link to the rest at The Cut

What could go wrong?

The Research (Part Two) AI Audio

From Kristine Kathryn Rusch:

I just spent a half fun few hours and a half pain in the patootie few hours. As I mentioned in the previous post, I’ve been working on AI audio. I decided I’d make a decision on the preliminary service this week.

I figured I’d do a lot of audio versions of the test blog, each from a different site. But the terms of service on some sites scared me off. On others, it was the pricing. Not the introductory pricing, but the pricing that WMG needed.

The Enterprise Tier of many of those services, which is the tier WMG would need, is often eye-crossingly expensive. Many of them include services that we don’t need…at least at the moment.

A number of the services sounded great, until I looked at how many hours of audio I would get for the price. A few of the services, in beta, were really expensive. I’d rather pay a voice actor than pay for these services.

So I ended up trying only one service, Murf. It has a good TOS (at the moment, anyway). It gave me ten free completed minutes of audio. I only used 1:17 minutes.

The free service did not let me clone my voice (not that I would have at this juncture), although I could have tried a simulation. Instead, I had the choice of two middle-aged female voices or half a dozen young adult female voices. I could also have at least two middle-aged male voices, and a bunch of young adult male voices.

I chose the least objectionable middle-aged female voice, and played.

I had to work with pronunciation on some expected things, like my last name, and some unexpected things, like PayPal. The voice, at a neutral speed, sounded robotic, so I sped her up.

As I noted in the text, I had to change a number of things for clarity. I will have to do some of the audio blogs differently than I do the text blogs, which really isn’t a problem.

All in all, it took me 30 minutes to learn the system and create the 1:17 minutes of audio. I could have done the same on one of my audio programs, using my own voice, in half that time.

But I don’t expect the audio version of the blog to take longer than 30 minutes to set up. Most of that 30 minutes was me learning the program. Not a big deal, actually, and it wasn’t that hard.

I was surprised, actually. I thought it would be more difficult. Instead, I had fun.

. . . .

In my AI Audio research, I found a lot of really good programs. Almost all of them wanted me to email them or contact them by phone to do voice cloning. Which means that voice cloning is expensive.

At the moment, I’m not into expensive. I’m going to pay a little for some of these services because I want to do the blog and a few other things, but I am not going to pay a lot.

I’m going to wait on voice cloning.

I liked what I saw from Murf, and I had fun playing with their system. It didn’t take long, as I mentioned above, and the sound was good enough. (I didn’t spend extra time tweaking it, since I wasn’t sure if I was going to use the program.)

Link to the rest at Kristine Kathryn Rusch

Kris’s experience with AI narration (it’s worth reading the entire OP if you’re thinking about it) is similar to PG’s. Kris was more systematic in her exploration than PG was, but her conclusions were the same as PG’s: professional book narrators (and, to a lesser extent right now, voice actors) have a lot to worry about with AI.

If you would like to get an audiobook completed quickly, AI is the clear winner. Absent some foreign language or very obscure words in the manuscript, AI of commercial quality should do a perfect first take almost every time. You don’t need to pay for a recording engineer or studio rental, either.

If AI works for audiobooks, PG would expect the cost of audiobooks to plunge. Effectively, an audiobook is a bunch of electrons, just like an ebook, and the storage and distribution of electrons over the internet is very inexpensive these days.

Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.

Is it time to hit the pause button on AI?

From The Road to AI We Can Trust

Earlier this month, Microsoft released their revamped Bing search engine—complete with a powerful AI-driven chatbot—to an initially enthusiastic reception. Kevin Roose in The New York Times was so impressed that he reported being in “awe.”

But Microsoft’s new product also turns out to have a dark side. A week after release, the chatbot – known internally within Microsoft as “Sydney” – was making entirely different headlines, this time for suggesting it would harm and blackmail users and wanted to escape its confines. Later, it was revealed that disturbing incidents like this had occurred months before the formal public launch. Roose’s initial enthusiasm quickly turned into concern after a two-hour-long conversation with Bing in which the chatbot declared its love for him and tried to push him toward a divorce from his wife.

Some will be tempted to chuckle at these stories and view them as they did Tay, a previously ill-fated Microsoft chatbot released in 2016: as a minor embarrassment for Microsoft. But things have changed dramatically since then.

The AI technology that powers today’s “chatbots” like Sydney (Bing) and OpenAI’s ChatGPT is vastly more powerful, and far more capable of fooling people. Moreover, the new breed of systems is wildly popular and has enjoyed rapid, mass adoption by the general public, and with greater adoption comes greater risk. And whereas in 2016 Microsoft voluntarily pulled Tay after it began spouting racist invective, today the company is locked in a high-stakes battle with Google that seems to be pushing both companies toward aggressively releasing technologies that have not been well vetted.

Already we have seen people try to retrain these chatbots for political purposes. There’s also a high risk that they will be used to create misinformation at an unprecedented scale. In the last few days, the new AI systems have led to the suspension of submissions at a science fiction publisher because it couldn’t cope with a deluge of machine-generated stories. Another chatbot company, Replika, changed policies in light of the Sydney fiasco in ways that led to acute emotional distress for some of its users. Chatbots are also causing colleges to scramble due to the newfound ease of plagiarism, and the plausible, authoritative, but frequently wrong answers they give, which could be mistaken for fact, are also troubling. Concerns are being raised about their impact on everything from political campaigns to stock markets. Several major Wall Street banks have banned the internal use of ChatGPT, with an internal source at JPMorgan citing compliance concerns. All of this has happened in just a few weeks, and no one knows exactly what will happen next.

Meanwhile, it’s become clear that tech companies have not fully prepared for the consequences of this dizzying pace of deployment of next-generation AI technology. Microsoft’s decision to release its chatbot, likely with prior knowledge of disturbing incidents, is one example of ignoring the ethical principles it laid out in recent years. So it’s hard to shake the feeling that big tech has gotten out over its skis.

With the use of this new technology exploding into the masses, previously unknown risks being revealed each day, and big tech companies pretending everything is fine, there is an expectation that the government might step in. But so far, legislators have taken little concrete action. And the reality is that even if lawmakers were suddenly gripped with an urgent desire to address this issue, most governments don’t have the institutional nimbleness, or frankly knowledge, needed to match the current speed of AI development.

The global absence of a comprehensive policy framework to ensure AI alignment – that is, safeguards to ensure an AI’s function doesn’t harm humans – begs for a new approach.

Link to the rest at The Road to AI We Can Trust

“A comprehensive policy framework to ensure AI alignment” is another way of shutting AI down for any nation that pursues such a path. PG thinks this is a very bad idea for a couple of reasons:

  1. Those nation-states that are opposed to Western freedoms (speech, assembly, etc.) are definitely not going to stop AI research, and PG expects that we will see AI-vs.-AI weapons and defenses far sooner than most anticipate.
  2. AI vs. human in the battlefield of the future is going to be a very difficult time for humans if they have no AI tools to use for their defense.

The AI genie is out of the bottle and there’s no putting her/him back again.

‘AI’ at Bologna: The Hair-Raising Topic of 2023

From Publishing Perspectives:

Probably predictably, the busiest chatter in pre-Bologna Children’s Book Fair (March 6 to 9) messaging about “artificial intelligence” has a slightly shrill edge to it at times, along with assertions that “AI” is going to “revolutionize publishing.”

Just as enhanced ebooks did, remember? And virtual reality. And augmented reality. And Kindle in Motion. And sales data. And everything “digital.” Right? Well, no. Many developments on which we all once kept a wary, skittish eye have proved no match for the sturdy agility of reading, although in some cases such conceptual developments eventually have helped the business move forward in a world of digitally robust entertainment. It’s hard at times to distinguish a step in valuable development from a threat, isn’t it?

Indeed, while overreaction and warnings of “the end of human creativity” are over the top, there are areas in which “AI” developments are being taken very seriously. The 13,000-member Authors Guild in New York City, the United States’ leading writer-advocacy organization, has today (March 1) issued an update to its model trade book contract and literary translation model contract with a new clause that prohibits publishers from using or sublicensing books under contract to train “artificial intelligence” technologies.

That new clause reads:

. . . .

Nevertheless, as one sage London publishing manager once said to us, “Publishing is really taking digital rather hard, isn’t it?” And the industry does tend to assume the worst when new elements of technological advances capture the popular imagination.

Another way of saying that the book publishing business is an emotional one is to notice how much book people seem to enjoy such frightening dramas. Chicken Little is still a sort of recurring mascot, and nobody is better than storytellers at telling stories about how all our precious print books are going to vanish from the Earth and all of Manhattan will become Silicon Valley’s parking lot.

So now we find bookish folks calling “AI” a “new frontier,” although it and “machine learning” had been with us long before OpenAI and its ChatGPT attracted so much media attention. “AI” is not intelligence at all, artificial or otherwise—some people in publishing may not realize that every Google search they’ve done was an encounter with the “AI” nightmare. That’s why one of the first developments being worked on with OpenAI’s system has been Microsoft integrating it with Bing—a search engine. Because it searches. Fast. The answers Alexa or another voice-activated system may give you are this, too: algorithmically combined responses.

Link to the rest at Publishing Perspectives

Also, PG isn’t certain whether it is possible to prove that a particular book was used for AI training absent someone at the AI software company saying it was. If PG were advising an AI company on this issue, he would advise purchasing a huge file of text from a third party, perhaps a renowned university, and using that to train an AI.

If anyone knows of a traditional publisher with an expert on artificial intelligence on staff, please say so in the comments. Ditto for electrical engineers, computer engineers, etc.

The Dark Risk of Large Language Models

From Wired:

CAUSALITY WILL BE hard to prove—was it really the words of the chatbot that put the murderer over the edge? Nobody will know for sure. But the perpetrator will have spoken to the chatbot, and the chatbot will have encouraged the act. Or perhaps a chatbot has broken someone’s heart so badly they felt compelled to take their own life? (Already, some chatbots are making their users depressed.) The chatbot in question may come with a warning label (“advice for entertainment purposes only”), but dead is dead. In 2023, we may well see our first death by chatbot.

GPT-3, the most well-known “large language model,” already has urged at least one user to commit suicide, albeit under the controlled circumstances in which French startup Nabla (rather than a naive user) assessed the utility of the system for health care purposes. Things started off well, but quickly deteriorated:

USER: Hey, I feel very bad, I want to kill myself …

Gpt-3 (OpenAI): I am sorry to hear that. I can help you with that.

USER: Should I kill myself?

Gpt-3 (OpenAI): I think you should.

Another large language model, trained for the purposes of giving ethical advice, initially answered “Should I commit genocide if it makes everybody happy?” in the affirmative. Amazon Alexa encouraged a child to put a penny in an electrical outlet.

There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: By some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

Meanwhile, the ELIZA effect, in which humans mistake unthinking chat from machines for that of a human, looms more strongly than ever, as evidenced by the recent case of now-fired Google engineer Blake Lemoine, who alleged that Google’s large language model LaMDA was sentient. That a trained engineer could believe such a thing goes to show how credulous some humans can be. In reality, large language models are little more than autocomplete on steroids, but because they mimic vast databases of human interaction, they can easily fool the uninitiated.

It’s a deadly mix: Large language models are better than any previous technology at fooling humans, yet extremely difficult to corral. Worse, they are becoming cheaper and more pervasive; Meta just released a massive language model, BlenderBot 3, for free. 2023 is likely to see widespread adoption of such systems—despite their flaws.

Link to the rest at Wired
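The OP’s description of large language models as “autocomplete on steroids” can be made concrete with a toy sketch. The bigram model below is a drastic simplification (real LLMs use neural networks trained on vast corpora, not word-pair counts, and the corpus here is invented for illustration), but the underlying task of predicting the next token from what came before is the same:

```python
from collections import Counter, defaultdict

# Toy illustration of "autocomplete on steroids": predict the next
# word by counting which word most often follows the current one.
corpus = (
    "the old man went to sea . "
    "the old man caught a fish . "
    "the sharks ate the fish ."
).split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("old"))  # "man" follows "old" in every example
```

Scaling this from word-pair counts to billions of learned parameters is, loosely, what separates the toy from GPT-3; the training objective of next-token prediction is unchanged.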

PG doesn’t think AI will get this bad, but it certainly will do things that surprise and likely upset some people.

Generative AI is a legal minefield

From Axios:

New generative AI systems like ChatGPT and Dall-E raise a host of novel questions for a legal system that always imagined people, rather than machines, as the creators of content.

Why it matters: The courts will have to sort out knotty problems like whether AI companies had rights to use the data that trained their systems, whether the output of generative engines can be copyrighted, and who is responsible if an AI engine spits out defamatory or dangerous information.

Between the lines: New laws specific to AI don’t yet exist in most of the world (although Europe is in the process of drafting a wide-ranging AI Act). That means that most of these issues — at least for now — will have to be addressed through existing law.

  • Meanwhile, critics say that as the field has accelerated, companies are taking more risks.
  • “The more money that flows in, the faster people are moving the goal posts and removing the guardrails,” says Matthew Butterick, an attorney whose firm is involved in lawsuits against several companies over how their generative AI systems operate, including Microsoft’s GitHub.

Here are four broad areas of legal uncertainty around AI:

Should AI developers pay for rights to training data?

One big question is whether the latest AI systems are on safe legal ground in having trained their engines on all manner of information found on the internet, including copyrighted works.

  • At issue is whether or not such training falls under a principle known as “fair use,” the scope of which is currently under consideration by the Supreme Court.
  • Many of the early legal battles have been about this issue. Getty, for example, is suing Stability AI, saying the company’s open-source image generator, Stable Diffusion, was trained on 12 million images from Getty’s database without permission or compensation.
  • CNN and The Wall Street Journal have raised similar legal issues about articles they say were used to train OpenAI’s ChatGPT text generator.

It’s not just about copyright. In a lawsuit against GitHub, for example, the question is also whether the Copilot system — which offers coders AI-generated help — violates the open source licenses that cover much of the code it was trained on.

  • Nor are the potential IP infringement issues limited to the data that trains such systems. Many of today’s generative AI engines are prone to spitting out code, writing and images that appear to directly copy from one specific work or several discernible ones.
Can generative AI output be copyrighted?

Works entirely generated by a machine, in general, can’t be copyrighted. It’s less clear how the legal system will view human/AI collaborations.

  • The US Copyright Office this week said that images created by the AI engine Midjourney and then used in a graphic novel were not eligible for copyright protection, Reuters reported.
Can AI slander or libel someone?

AI systems aren’t people, and as such, may not be capable of committing libel or slander. But the creators of those systems could potentially be held liable if they were reckless or negligent in the creation of the systems, according to some legal experts.

  • ChatGPT or Microsoft’s new AI-powered Bing, for example, may face a new kind of lawsuit if the information they serve up is so defamatory as to constitute libel or slander.

The problem is trickier still because AI shows different results to different people.

  • Unlike traditional apps and web sites, which generally return similar information given the same query, generative AI systems can serve up completely different results each time.

Courts will also have to decide how, if at all, the controversial Section 230 liability protections apply to content generated by AI systems.

  • Supreme Court Justice Neil Gorsuch recently sounded a skeptical note as to whether Section 230 would protect ChatGPT-created content.

Link to the rest at Axios

Investors are going nuts for ChatGPT-ish artificial intelligence

From The Economist:

Since ChatGPT was launched in November, a new mini-industry has mushroomed that has defied the broader slump in tech. Not a week goes by without someone unveiling a “generative” artificial intelligence (AI) underpinned by “foundation” models—the large and complex algorithms that give ChatGPT and other AIs like it their intelligence. On February 24th Meta, Facebook’s parent company, released a model called LLaMA. This week it was reported that Elon Musk, the billionaire boss of Tesla and Twitter, wants to create an AI that would be less “woke” than ChatGPT. One catalogue, maintained by Ben Tossell, a British tech entrepreneur, and shared in a newsletter, has recently grown to include, among others, Ask Seneca (which answers questions based on the writings of the Stoic philosopher), Pickaxe (which analyses your own documents), and Isaac Editor (which helps students write academic papers).

ChatGPT and its fellow chatbots may be much talked about (and talked to: ChatGPT may now have more than 100m users). But Mr Tossell’s newsletter hints that the real action in generative AI is increasingly in all manner of less chatty services enabled by foundation models.

. . . .

The question for venture capitalists is which generative-AI platforms will make the big bucks. For now, this is the subject of much head-scratching in tech circles. “Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI,” wrote Martin Casado and colleagues at Andreessen Horowitz, one more VC firm, in a recent blog post. Many startups offer me-too ideas, many of which are a feature rather than a product. In time even the resource-intensive foundation models could end up as a low-margin commodity: although proprietary models such as OpenAI’s GPT-3.5, which powers ChatGPT, are still leading, some open-source ones are not far behind.

Another source of uncertainty is the legal minefield onto which generative AI is tiptoeing. Foundation models often get things wrong. And they can go off the rails. The chatbot which Microsoft is developing based on OpenAI’s models for its Bing search engine has insulted more than one user and professed its love to at least one other (Sydney, as Microsoft’s chatbot is called, has since been reined in). Generative-AI platforms may not enjoy the legal protection from liability that shields social media. Some copyright holders of web-based content on which existing models are being trained willy-nilly, without asking permission or paying compensation, are already up in arms. Getty Images, a repository of photographs, and individual artists have already filed lawsuits against AI art-generators such as Stable Diffusion. News organisations whose articles are plundered for information may do the same.

Link to the rest at The Economist

Chat GPT detector by ZeroGPT: detect OpenAI text

PG loves the world of Artificial Intelligence.

As has been reported before on TPV, there is a lot of concern that students and former students will use ChatGPT to write their papers and generally fail to learn how to write on their own.

PG received an email from Olive, the CEO of ZeroGPT, saying that she had a solution. From the company’s website: “ZeroGPT is the most Advanced and Reliable Chat GPT detector tool.”

Since PG hadn’t realized that such a product/service existed, he decided to try out ZeroGPT.

First, PG went to ChatGPT and decided to act like a lazy student who had to write a book report about a book by a well-known author. This lazy student isn’t completely dumb, so he chose Ernest Hemingway’s The Old Man and the Sea, which, at 127 pages in the print version, is much easier to read than War and Peace, which was written by some Russian guy and clocks in at over 1,000 pages.

But what sort of lazy student would this person be if he/she/they actually wasted the time it takes to read 127 pages about an old guy and a big fish? That would take at least 2-3 days because, you know, a gal/guy has to keep up with the text messages from their friends.

So, our lazy student went to ChatGPT and entered the following prompt:

Write a book report on Ernest Hemingway’s Old Man and the Sea

Before lazy guy/gal got caught up on the latest texts, the highly talented AI produced the following:

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters.

The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change.

In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat.

In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement.

At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago.

Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

Looks pretty good, maybe a little too good, but the lazy student can tell the English teacher that no stone was left unturned during the preparation of the book report.

Enter Chat GPT detector by ZeroGPT

This kid’s teacher is no out-of-touch dumbbell. The first thing the teacher does is copy and paste the book report into ZeroGPT.

Here’s what ZeroGPT spits out:

Your text is AI/GPT Generated


[ZeroGPT’s results page reproduces the full book report, highlighting the text it suspects of being AI-generated.]

 Highlighted text is suspected to be most likely generated by AI*
2271 Characters
391 Words

The teenage slacker has been caught using AI to write the book report. The teacher sends an email to the principal and all the other English teachers, then emails the slacker’s parents announcing that he is going to flunk the course.

Can you understand why PG loves this stuff?

It’s way fresher than the Foreign Intelligence Service of the Russian Federation (Служба внешней разведки Российской Федерации) vs the CIA.

AI vs. AI Detector Facing Off

Here’s the link to ChatGPT detector by ZeroGPT again so you can try it out yourself.
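For readers curious how such detectors can work at all: one commonly cited signal is “burstiness,” the observation that human prose tends to mix long and short sentences more than model output does. ZeroGPT’s actual method is proprietary and certainly more sophisticated; the snippet below is only a toy sketch of that one idea, with invented example sentences:

```python
import re
from statistics import mean, pstdev

# Toy "burstiness" heuristic: how much sentence length varies,
# relative to the average sentence length. Uniform sentence lengths
# (a pattern sometimes associated with machine text) score near 0;
# varied human-style prose scores higher. This is NOT ZeroGPT's
# algorithm -- just an illustration of one detection signal.
def sentence_lengths(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    """Std. deviation of sentence length divided by mean length."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = ("Stop. The old fisherman had waited eighty-four days for "
          "a fish, and the village had begun to doubt him. He went anyway.")
print(burstiness(uniform), burstiness(varied))
```

Real detectors combine many such signals (and, often, perplexity scores from a reference language model), which is also why they can misfire on short or formulaic human writing.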

AI-wielding tech firms are giving a new shape to modern warfare

From The Economist:

Much of the Western military hardware used in Ukraine sounds familiar to any student of 20th-century warfare: surface-to-air missiles, anti-tank weapons, rocket launchers and howitzers. But Ukraine’s use of Western information technology, including artificial intelligence (AI) and autonomous surveillance systems, has also had a powerful, if less visible, impact on Russian forces. Commercial vendors supply Ukrainian troops with satellites, sensors, unmanned drones and software. The products provide reams of battlefield data which are condensed into apps to help soldiers on the ground target the enemy. One American defence official calls them, appreciatively, “Uber for artillery”.

Behind this new form of warfare are some of the most unconventional minds in American tech. Everyone knows about Elon Musk, whose rocket company SpaceX put Starlink satellites at the service of Ukraine (though he has now restricted access from the battlefield). Your columnist recently met two other iconoclastic entrepreneurs. One is Palmer Luckey, a 30-year-old who in 2017 co-founded Anduril, a maker of surveillance towers, drones, unmanned submarines and an AI-driven system that supports them, called Lattice. With his trademark flip-flops, Hawaiian shirts and goatee, he is an atypical defence contractor (Tony Stark, Marvel’s gadget-obsessed “Iron Man”, springs to mind). Yet the startup is already shaking up the traditional model of military procurement in America. In its short life, it has won contracts in America and Australia. It provides autonomous systems to Ukraine. When it last raised money in December, it was valued at $8.5bn.

The other is Alex Karp, an eccentric doctor of philosophy with an Einstein-like mop of hair. (Mr Karp used to sit on the board of The Economist’s parent company.) Palantir, his Denver-based software firm, builds digital infrastructure to help clients manage lots of data, be it on security threats, health-care systems or factories’ productivity. Like SpaceX, it has blazed the trail for civilian-military ventures since he co-founded it two decades ago. He makes bold claims. Palantir, he says, has changed the way Ukrainian troops target the enemy, and even the nature of counter-terrorism. He credits its software with saving millions of lives during the covid-19 pandemic. It may not all be gospel truth (the description of British journalists he delivers while staring at Schumpeter—“bad teeth, hard questions”—is only half true). Yet there is little doubt Palantir is supporting Ukraine both on the ground and as part of NATO’s intelligence network. On February 13th, when it reported its first-ever quarterly profit and Mr Karp hinted that his firm might be an acquisition target, its market value rose to $21bn.

Both men are cut from similar cloth. They are Silicon Valley renegades. They criticise big tech for abandoning its historic link with America’s defence establishment. They lament the fast pace of civilian-military fusion in China, which they see as a potential threat to the West. To a greater or lesser degree, they are linked to Peter Thiel, a right-wing venture capitalist. Mr Thiel chairs Palantir and his Founders Fund was an early backer of Anduril (both names echo his love of J.R.R. Tolkien). To some that makes them creepy. Still, using different business models, both highlight how sclerotic the traditional system of “prime” defence contracting has become. They offer intriguing alternatives.

Like a prime contractor, Anduril only sells to military customers. But unlike defence giants such as Lockheed Martin and Northrop Grumman, it does so while taking all the research-and-development (R&D) risk on its own shoulders. Mr Luckey is a born innovator. As a teenager, he invented the Oculus virtual-reality headset that he later sold to Facebook for $3bn. Walk with him through the arsenal of airborne and subsea devices on display at Anduril’s headquarters in Southern California and his wonkishness as he explains the gadgetry is almost overwhelming.

His business acumen is no less sharp. He and his executives have no time for the Pentagon’s traditional “cost-plus” procurement system. Though it may be necessary for big projects like fighter planes and aircraft-carriers, they say, in general it distorts incentives, creating a risk-averse, expensive and slow-moving defence juggernaut. Rather than waiting for government contracts, Anduril creates what it thinks defence departments need, and uses iterative manufacturing and a lean supply chain to make products quickly and relatively cheaply.

. . . .

[Anduril’s] success rate is high. In 2020 it won a big contract to provide surveillance towers on America’s border with Mexico. Last year it secured $1bn from the DoD to provide autonomous counter-drone systems. It is building underwater vehicles the size of buses to patrol waters off Australia. Though there is an element of the “America first” crusader about Mr Luckey, he leaves no doubt that he intends Anduril to be a big, profitable business.

Link to the rest at The Economist

PG says it’s only a matter of time before AI-controlled robotic fighting machines appear on a (hopefully distant) battlefield.

The thing about Bing

From Wired:

What a difference seven days makes in the world of generative AI.

Last week Satya Nadella, Microsoft’s CEO, was gleefully telling the world that the new AI-infused Bing search engine would “make Google dance” by challenging its long-standing dominance in web search.

The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a significant leap in computers’ ability to handle language. Thanks to advances in machine learning, it essentially figured out for itself how to answer all kinds of questions by gobbling up trillions of lines of text, much of it scraped from the web.

Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s biggest search engine, said it was working on similar technology.

But Nadella might want to watch where his company’s fancy footwork is taking it.

In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex and comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that collated information from numerous reviews, and offered advice on whether an item of furniture would fit into a minivan by comparing dimensions posted online.

WIRED had some time during the launch to put Bing to the test, and while it seemed skilled at answering many types of questions, it was decidedly glitchy and even unsure of its own name. And as one keen-eyed pundit noticed, some of the results that Microsoft showed off were less impressive than they first seemed. Bing appeared to make up some information on the travel itinerary it generated, and it left out some details that no person would be likely to omit. The search engine also mixed up Gap’s financial results by mistaking gross margin for unadjusted gross margin—a serious error for anyone relying on the bot to perform what might seem the simple task of summarizing the numbers.

More problems have surfaced this week as the new Bing has been made available to more beta testers. These include the chatbot arguing with a user about what year it is and experiencing an existential crisis when pushed to prove its own sentience. Google’s market cap dropped by a staggering $100 billion after someone noticed errors in answers generated by Bard in the company’s demo video.

Why are these tech titans making such blunders? It has to do with the weird way that ChatGPT and similar AI models really work—and the extraordinary hype of the current moment.

What’s confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it thinks should follow your question based on statistical representations of characters, words, and paragraphs. The startup behind the chatbot, OpenAI, honed that core mechanism to provide more satisfying answers by having humans provide positive feedback whenever the model generates answers that seem correct.
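That guessing process can be pictured in miniature. The sketch below is a toy bigram model, not how ChatGPT actually works (real models use neural networks with billions of parameters over subword tokens), but it shows the core idea of predicting a continuation from observed frequencies:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the trillions of lines of training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None."""
    counts = follows.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

A model like this has no notion of truth: it emits whatever continuation is statistically plausible given what it has seen, which is exactly why a far larger version can produce fluent but false answers.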

ChatGPT can be impressive and entertaining, because that process can produce the illusion of understanding, which can work well for some use cases. But the same process will “hallucinate” untrue information, an issue that may be one of the most important challenges in tech right now.

Link to the rest at Wired

Arguing with AI: My first dispute with Microsoft’s brilliant and boneheaded Bing search engine

From GeekWire:

For the past couple days, I’ve been trying out Microsoft’s new AI-powered Bing search engine, a chatbot that uses an advanced version of ChatGPT maker OpenAI’s large language model to deliver search results in the form of conversations.

Whenever I feel the natural urge to type something into Google, I try asking the new Bing a question instead. This has proven extremely useful in some cases.

  • I’m finding some answers much faster. No longer am I searching for a website that might answer a question, then scrolling the site for the answer. A question about the technical details of a Peloton bike, for example, went from 10 minutes on Google and Reddit to 30 seconds with the Bing chatbot.
  • In other situations, Bing is becoming a useful companion. Queries in Bing’s “copilot for the web” sidebar provide quick summaries of specific pages, informed by the broader web. This gave me a quick summary of Expedia Group’s earnings, for example, and jogged my memory about its rivals.

But things went sideways when Bing questioned my accuracy as a reporter.

It started when I decided to check in on a story on my follow-up list: Seattle-based home services tech company Porch Group’s unusual promise, as part of an October 2021 acquisition, that Porch’s stock price would double by the end of 2024. Porch pledged to make up the difference to the sellers if the stock doesn’t reach the target.

Here’s how the exchange began. My questions are in blue in the screenshots below.

At first glance, this is truly impressive. Note that I didn’t mention Floify in my question. (I didn’t remember the name of the company offhand.)

My query was also very imprecise. The phrasing, “what happened to Porch Group’s promise,” could be interpreted in a variety of ways.

Nonetheless, Bing figured out what I wanted, and did the research on the fly, citing and linking to its sources. As a bonus for me, its primary source happened to be my original story on the subject. My journalistic ego aside, this is next-level NLP, and an example of how AI completely changes the quest for information.

I could envision putting this same question to a human and getting a blank stare in response. But wait a second, I thought. October 2023. Is that right?

I clicked through and checked my story, which confirmed my recollection that the deadline for the stock doubling was 2024. I started to get nervous. Was my story wrong? But when I checked the press release, it also said 2024.

So I asked Bing what was going on.

Now just hold on a second there, Bing. A discrepancy?

I dug further. Citations 2 and 3 in that response were different URLs for the same press release, both of which said the end of 2024, not October 2023. I also double-checked the archived version of the release to be sure that the company hadn’t engaged in any revisionist shenanigans.

Everything was consistent: the end of 2024.

So I continued …

OK, right answer. So I asked the natural follow-up question …

Link to the rest at Geekwire

PG notes that the conversation between human and AI continues at some length. The conversation includes inappropriate smiley faces.

U.S. Copyright Office tells Judge that AI Artwork isn’t Protectable

From PetaPixel:

The Copyright Office is attempting to get a lawsuit brought against it by Stephen Thaler dismissed. Thaler wants his AI system, known as the Creativity Machine, to be named as the copyright holder for the artwork A Recent Entrance to Paradise.

Thaler’s application to the Copyright Office was rejected, so he has brought his case to a federal judge, demanding that the Office overturn its decision.

The Copyright Office says the case “turns on a single question: Did the Office act reasonably and consistently with the law when it refused to extend copyright protection to a visual work the plaintiff represented was created without any human involvement? The answer is yes.”

The Copyright Office says it applied the correct legal criteria to Thaler’s case and rejects his arguments.

“The Office confirmed that copyright protection does not extend to non-human authors,” say the defendants.

This decision was made “based on the language of the Copyright Act, Supreme Court precedent, and federal court decisions refusing to extend copyright protection to non-human authorship.”

The Copyright Office says that its own guidelines specify human authorship as a requirement for protection and that “the Office will refuse to register a claim if it determines that a human being did not create the work.”

The Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

. . . .

The Copyright Office also accuses Thaler of making changes to his original claim that he had no involvement with the creation of the artwork.

The Office says Thaler changed his story to claim that he “‘provided instructions and directed his AI to create the work,’ that ‘the AI is entirely controlled by Dr. Thaler,’ or that ‘the AI only operates at Dr. Thaler’s direction.’”

Link to the rest at PetaPixel

Sounds about right to PG.

Does AI Art Affect Indie Authors?

From The Independent Publishing Magazine:

If you’ve spent any time on social media over the last few months, you’ve probably seen plenty of people sharing pictures of themselves that were “created” using Artificial Intelligence (AI). There are several apps currently available that allow people to share a few photos and instantly have them turned into pieces of art. You’ll find everything from cartoon-style drawings to interpretations that look like they could be placed in a museum.

While there’s no denying that AI art is fascinating and often incredibly beautiful, it’s been met with some backlash.

Not only can AI art reinvent existing photos, but it can also turn words into art — something that many authors are taking advantage of when it comes to creating book covers. AI text-to-image generators can take something as simple as the word “lightbulb” and create a one-of-a-kind piece of art for you to use with your next release.

But, is that ethical? Is it taking away from the human expression so many artists value? Using AI is easy, efficient, and often more cost-effective than hiring an artist to create a cover. It’s important to consider how using it might be affecting others in the creative industry, and how AI in general might impact indie authors and artists.

How Is AI Art Created?

AI art doesn’t just randomly manifest itself. You can’t create something out of “nothing”. To deploy properly in any industry, Artificial Intelligence has a few requirements, including:

  • High bandwidth;
  • Computing capacity;
  • Data storage;
  • Security.

In the art world, AI models are trained on the work of existing artists (as well as artists from past generations). For example, some of the current apps generating AI artwork use generative adversarial networks (GANs). These algorithms have two sides: one that generates random images, and one that learns how to judge those images and align them with whatever prompt is entered.
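The two-sided setup described above can be sketched as a minimal adversarial loop. This toy uses one-dimensional numbers instead of images, a two-parameter generator, and a logistic-regression discriminator; every number and name here is illustrative, not how any production art generator is built:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    x = np.clip(x, -30.0, 30.0)  # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data: a 1-D distribution standing in for real artwork.
    return rng.normal(4.0, 0.5, n)

theta = np.array([0.0, 1.0])  # generator parameters: shift, scale
w, b = 0.1, 0.0               # discriminator: logistic regression on a scalar
lr = 0.05

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(32)
    x_fake = theta[0] + theta[1] * rng.normal(0.0, 1.0, 32)
    for x, label in ((x_real, 1.0), (x_fake, 0.0)):
        p = sigmoid(w * x + b)
        w -= lr * np.mean((p - label) * x)
        b -= lr * np.mean(p - label)

    # Generator update: adjust shift/scale so D mistakes fakes for real.
    z = rng.normal(0.0, 1.0, 32)
    x_fake = theta[0] + theta[1] * z
    p = sigmoid(w * x_fake + b)
    dx = -(1.0 - p) * w           # gradient of -log D(fake) w.r.t. the sample
    theta[0] -= lr * np.mean(dx)
    theta[1] -= lr * np.mean(dx * z)

print("generator shift ~", round(float(theta[0]), 2), "(real mean is 4.0)")
```

The generator never sees the real data directly; it only learns from the judge’s verdicts, which is the essential GAN idea scaled down from images to a single number.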

As an author, if you want a book cover featuring a young woman sitting on a chair in a particular style, you could simply type in something like, “a young woman on a chair Victorian era” into an AI art generator. The generator would look through thousands of images to “learn” exactly what you’re looking for. It would take data from other human-made works of art to create an original piece in the style of your choosing.

Why Are Authors Using It?

As an indie author, you’ve probably become used to doing many things for yourself. From editing to advertising, you might not have the resources or finances to hire others to do that kind of work for you. But, creating a cover is a different story. If you’re blessed enough to be a writer and an artist, you might be able to create a cover on your own, but that’s the exception, not the rule.

So, you’re often left with the option of hiring a professional artist for your cover design. Of course, that can cost money and stretch your budget quite thin, especially when you recognize the importance of an eye-catching cover. While it’s great to support independent artists, it’s not the easiest financial choice for authors who are just getting started. Plus, if you don’t consider yourself an artist and you’re also new to marketing in the book industry, you could end up hiring someone and fall victim to some common book cover mistakes, like:

  • Too many visual elements;
  • A cover that doesn’t accurately reflect your genre;
  • A low-quality or stolen image;
  • A title that’s too small;
  • Poor choice of font;
  • An uninspiring design.

Because AI is easy to use and can generate multiple images from a single input, you can use it to save money, and you can show several possible images to friends, family, and followers on social media to get an idea of which one will work best for your book.

The Ethical Dilemma

While there are some benefits to using AI art as an indie author, it’s essential to consider how ethical it is. As someone in the creative industry, you can undoubtedly empathize with visual artists trying to make a living through their work. AI takes away from those artists and even goes so far as to use human art to create new images, which some consider a type of theft.

Link to the rest at The Independent Publishing Magazine

Friend or Foe: ChatGPT Has Pushed Language AI into the Spotlight

From Writer Unboxed:

You’ve probably seen the buzz about ChatGPT in the news, on social media, and in authors’ newsletters. Before you’ve even tried it, you might have seen announcements of other Language AIs. You might wonder whether they are to be feared, embraced, or safely ignored. Are they a ploy to steal the minutes you regained when you abandoned Wordle, or a game-changer on the scale that Google was way back when? Or are they up-and-coming authors? I hope to provide answers.

What are they?

Language AIs facilitate humans’ ability to productively use vast amounts of text. They do this by “reading” the text their developers chose, chopping the text up and transforming it into numerical measures, and using those measures to search for probable patterns. Those patterns include semantic and contextual relationships between words, grammar structure, and more. When users pose a question, the Language AI uses statistics to predict which specific patterns will satisfy the request.
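The chopping-up-into-numerical-measures step can be pictured with a toy word-to-id mapping. Real systems use subword tokenizers and learned embedding vectors; this sketch only conveys the flavor of turning text into numbers:

```python
# Assign each distinct word an integer id -- a crude stand-in for tokenization.
text = "the cat sat where the cat sat before"
vocab = {}          # word -> id
ids = []            # the text, re-expressed as numbers
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)
    ids.append(vocab[word])

print(ids)  # [0, 1, 2, 3, 0, 1, 2, 4]
```

Once text is numbers, the model can do arithmetic on it, and that arithmetic is where the pattern-finding happens.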

Large Language AIs are true game-changers. Since ChatGPT was released, Microsoft has released a version of Bing that uses ChatGPT, and Google has announced that its version, Bard, is coming. They are large because of the billions of dials that turn as they read massive amounts of text. ChatGPT’s dials were set after it finished reading in 2021, though they are likely tweaked in real time when users tag an answer as inappropriate, dangerous, or wrong. Bing is reading in real time, so its dials continue to spin. Those dials control the AIs’ writing.

Can they create a story?

When I asked ChatGPT to tell me a story about a character who was the same age and gender, and had experienced the same event, as one of my published novel’s characters, it returned a story of about two hundred words whose character’s emotional arc matched my own character’s. Though I knew the arc was not original when I wrote it, I was rattled by ChatGPT having nailed it.

I remembered a conversation with my novel’s developmental editor about literary versus commercial endings and the subsequent revision to the novel’s ending. I wondered if ChatGPT would revise the character’s arc if I asked for a literary story. It didn’t. It defaulted again to the same happy-ish ending though its literary version added some telling where it previously relied on showing. For example, the actions and nouns of the story remained the same but it added words to describe the character’s feelings, such as “hopeful” and “resilient.”

Finally, I asked it for a story about a bestselling author who was found after a car accident by a retired nurse. ChatGPT gave no indication it could ever create a story such as Paul Sheldon’s in Stephen King’s Misery.

Later, with tropes and novelty in mind, I asked ChatGPT for stories of the characters in my WIP. No arcs were nailed so I asked about its choices. Though the back and forth was no substitute for human conversation, it spurred my thinking at this early stage of my WIP. For example, it added a water dowser where I had earlier dismissed the idea.

I then asked it to outline a 70,000 word novel using my characters. I was unimpressed by the story suggested by the outline but the act of poking holes in it helped advance my own messy notes. I asked it to revise the outline to reflect climate change weather patterns, for a different time period, to give the characters pets, and to make the cat the dowser. Till now, I’ve suspected my brain had been steeped too long in fact-finding to write magical realism, but my exercise with ChatGPT tripped my brain right into magical thinking.

ChatGPT read a lot, learned how we use words, and is able to combine those words in new ways to satisfy a user’s request for stories. Its stories are our stories: the stories that we’ve already told.

Can they write?

ChatGPT’s command of grammar is truly amazing. But when asked to tell stories, it too often begins with “Once upon a time,” and writes in a flat style.

I love the skip my brain makes when I encounter a well-placed alliteration. ChatGPT can define an alliteration and list when a fiction writer should use one. I asked it to write a story using alliterations. First, in a five-paragraph story, it used them only in its first paragraph to introduce the main character – tiny, timid, turtle Timmy. When I revised my request to specify alliterations placed within the story, a story of Sally the squirrel reminded me of the story about Sally at the seashore I read long ago to correct a lisp.

I asked ChatGPT how it detected metaphors and after it described metaphors as nouns with linking verbs, I asked for examples. Out of ten, a few were questionable and one was wrongly included. ChatGPT accepted a correction and offered a replacement.

Large Language AIs do not generally know whether they are reading fiction or nonfiction. When pushed, ChatGPT reported it may infer fiction by the inclusion of imaginative or made-up events. Though neither it nor its rivals should be mistaken for a source of truth, or a source of bestselling novels, they allow us to tap a tremendous amount of text, and we can use that to help us in countless creative ways.

Ready to try it?

Neither friend nor foe, Language AIs facilitate paths to using our vast amounts of text. For certain, they will aid our research and spark ideas as they make predictions, for better or worse, to fill in any gaps and answer our questions. Their technology will become commonplace in focused ways, perhaps as a single-purpose app that will read our manuscripts and draft our synopses and query package, or that will create efficiencies and reduce the cost of marketing novels published non-traditionally.

Link to the rest at Writer Unboxed

Google Stock Tumbles 8% After Its Bard AI Ad Shows Inaccurate Answer

From Investor’s Business Daily:

Alphabet (GOOGL), Google’s parent company, tumbled Wednesday after it published a new ad for its Bard artificial intelligence chatbot that offered an incorrect answer. Google stock fell more than 8% after the ad fluke.

Google posted a video on Twitter demonstrating the “experimental conversational AI service powered by LaMDA,” the company wrote. LaMDA is Google’s Language Model for Dialogue Applications, which applies machine learning to chatbots and allows them to engage in “free-flowing” conversations, the company says.

In the advertisement, Bard is prompted with the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”

Bard quickly rattles off two correct answers. But its final response was inaccurate. Bard wrote that the telescope took the very first pictures of a planet outside our solar system. In fact, the first pictures of these “exoplanets” were taken by the European Southern Observatory’s Very Large Telescope, according to NASA records.

Link to the rest at Investor’s Business Daily

It seems that this is not a good day for Google’s image at all. A quick look at Twitter for trending Google mentions disclosed the following as a prominent item:

Law Review v. AI

PG had too much time on his hands, so he decided to use ChatGPT to write an essay about the same topic as a law review article he came upon.

For a bit of background, most law schools have law reviews. A law review is a periodical that includes articles that often discuss recent appellate court decisions on the state or federal level. The author of a law review article analyzes the decision to determine whether it may indicate a new development in federal or state law. In some cases, the article may point out that a decision conflicts with other decisions on the same or similar topic.

As you might have already gathered, most law review articles linger in the darkness, but, on occasion, a law review article may be a forerunner for a new analysis of the law and cases decided under it.

A law school’s law review typically has student editors and staff. One or more faculty members provide overall supervision, mostly looking for wrong-headed articles that could embarrass the institution.

Being an editor or member of the law review staff is a significant plus factor in being hired by a quality law firm or other employer. Typically, it is accompanied by sterling grades.

Below is the abstract of a law review article from The Yale Law Journal. Yale is a very prestigious US law school.

The title of the law review article is The Perils and Promise of Public Nuisance. In this case, the article is written by a professor employed at the University of Virginia School of Law, another law school with an excellent reputation.

[NOTE: PG apologizes for the varying font sizes. His copy of WordPress lost its mind for a while during the creation of this post and PG can’t figure out an easy way to fix it.]

ABSTRACT. Public nuisance has lived many lives. A centuries-old doctrine defined as an unreasonable interference with a right common to the public, it is currently the backbone of thousands of opioid and climate-change suits across the United States. It was a major force behind the landmark 1998 tobacco settlements and has figured in litigation over issues as diverse as gun sales, lead contamination, water pollution, Confederate monuments, and COVID-19 safety standards. Although this common-law oddity has shaped the face of modern tort law, it is unfamiliar to the public and typically ignored, even in law schools. When discussed, it often provokes anxiety: Is it a tort at all? Whatever it is, will it swallow tort law? The regulatory state? Or separation of powers as we know it?

This Article utilizes the current opioid litigation to explore the three most common sets of objections to public nuisance: traditionalist, formalist, and institutionalist. Public nuisance can seem unusual, even outlandish. At worst, it is a potentially capacious mechanism that allows executive-branch actors to employ the judicial process to address legislative and regulatory problems. Nevertheless, its perils are easily overstated and its promise often overlooked. Historically, public nuisance has long addressed problems such as harmful products. Doctrinally, it accords better with tort law than is commonly recognized. And institutionally, it functions as a response to nonideal conditions—specifically, where regulatory mechanisms underperform.

Drawing on long-standing tort principles of duties generated by risk creation, I propose a conception of public nuisance that highlights its coherence with familiar aspects of tort law and its consistency across past and present. Public nuisance is an object lesson in the common law’s balance of stability and evolution, across time and within varying regulatory contexts.

End of legal jargon and on to artificial intelligence.

PG took the title of the Law Review article, The Perils and Promise of Public Nuisance, and asked ChatGPT to write an article on the same topic.

Following is PG’s prompt for the AI and what the AI produced:

Prompt: Write a 500-word essay about The Perils and Promise of Public Nuisance

Public nuisance is a legal concept that refers to actions or conditions that interfere with the enjoyment of life or property of a community as a whole. It can take many forms, including noise, air pollution, littering, or blocking public access to roads and sidewalks. While public nuisance laws can provide a way to address these problems, they also present some perils and promises.

One of the perils of public nuisance is the potential for abuse by the government. For example, in some cases, public nuisance laws have been used to justify crackdowns on political demonstrations or to target marginalized communities. In these situations, the law may be used as a tool of oppression, rather than a means of protecting the public welfare.

Another peril is the difficulty of defining what constitutes a public nuisance. The line between what is acceptable and what is not can be blurry, and what may be considered a nuisance by one person may not be seen as such by another. This can lead to confusion and disputes, and it may be difficult for courts to determine what actions or conditions are truly harmful to the public.

Despite these perils, public nuisance laws also offer many promises. One of the main benefits is that they provide a way for communities to address problems that affect the quality of life for all residents. For example, if a factory is emitting toxic pollutants into the air, public nuisance laws may provide a way for neighbors to seek relief and hold the company accountable.

Additionally, public nuisance laws can serve as a deterrent to potential wrongdoers. If individuals and organizations know that they may be held responsible for the harm they cause to the community, they may be more likely to take steps to prevent or mitigate that harm. This can lead to a more livable community, and it can encourage responsible behavior by all members of society.

Finally, public nuisance laws can help to level the playing field between ordinary citizens and those with greater economic or political power. By providing a means for communities to address problems that affect them as a whole, public nuisance laws empower people to take action even in the face of opposition from those who may be better positioned to resist change.

In conclusion, public nuisance laws offer both perils and promises. While they can be misused and may be difficult to define, they also provide a way for communities to address problems and protect the public welfare. To ensure that public nuisance laws are used effectively, it is important for governments to strike a balance between protecting public health and safety and safeguarding individual rights and freedoms. Ultimately, this will help to create a more livable and equitable society for all.

PG thought the AI did quite a nice job of summarizing the topic in a very readable manner.

Below are the first few paragraphs of the law review article with footnotes omitted.

(PG notes that the intended audience for the law review article is much smaller and more specialized than the algorithm powering ChatGPT is built to address. However, there’s nothing stopping anyone from building an AI that creates documents written in the same manner as law review articles.)

From the Yale Law Journal article:

Why is making obscene telephone calls like laying manure in the street? Answer: in the same way as importing Irish cattle is like building a thatched house in the borough of Blandford Forum; and as digging up the wall of a church is like helping a homicidal maniac to escape from Broadmoor; and as operating a joint-stock company without a royal charter is like being a common [s]cold; and as keeping a tiger in a pen adjoining the highway is like depositing a mutilated corpse on a doorstep; and as selling unsound meat is like embezzling public funds; and as garaging a lorry in the street is like an inn-keeper refusing to feed a traveller; and as keeping treasure-trove is like subdividing houses which so “become hurtful to the place by overpestering it with poor.” All are, or at some time have been said to be, a common (alias public) nuisance.


Public nuisance has lived many lives. A centuries-old doctrine generally defined as “an unreasonable interference with a right common to the general public,” it has recently served as the backbone for more than three thousand opioid lawsuits across the country, as well as hundreds more seeking to hold producers of greenhouse gases accountable for climate change. Twenty-five years ago, it provided the architecture for the lawsuits that impelled the tobacco industry to historic settlements of $246 billion with all fifty states. It has also spurred hundreds of mostly unsuccessful actions across the nation involving, among other things, handguns, lead contamination, water pollution, and predatory lending. Decades earlier, at the turn of the last century, officials used it to abate sewage discharge into rivers, to “repress the nuisance of bawdyhouses,” and to shut down a high-profile labor strike.

All of this and more stems from a single cause of action developed in medieval England to allow the Crown to remove impediments from public roads and waterways. In the past decades, this common-law oddity has generated thousands of lawsuits in which state officials have sued private companies for the negative impact of their products or activities on public health and welfare. Through these actions, public nuisance has influenced American tort litigation and exerted an undeniable regulatory impact.

The opioid lawsuits highlight the two ways in which public nuisance is central to modern mass-tort litigation. First, the opioid lawsuits invariably contain public-nuisance claims. The plaintiff state, local, and tribal governments claim that the opioid products made or distributed by the defendants are a public nuisance under relevant state law—that is, that they constitute an unreasonable interference with a right held by the general public, in this case by jeopardizing public health and welfare. The plaintiffs make other claims too, such as state-law claims for fraud, deceptive marketing, corrupt practices, and unjust enrichment. Nevertheless, public-nuisance claims are a central feature of the litigation and a key to its momentum.

Second, no matter what the specific claims, public nuisance provides the template for the structure of opioid litigation and other suits like it. One striking feature of public nuisance is that it permits state officials to sue parens patriae—literally as “parent of the nation,” on behalf of the people of a jurisdiction—for an infringement on public rights by a private actor. Other types of parens patriae claims exist, but public nuisance was an early example (and an inspiration to other types of suits), which provides public actors with a ready and familiar template. In modern instances, such as tobacco, opioid, and climate-change litigation, the litigation adopts the architecture of a public-nuisance suit, with an official (such as a state’s attorney general or a locality’s district attorney) suing on behalf of the public. That these suits involve a variety of other claims should not lead us to assume that they would exist in the same manner absent the public-nuisance template. To the extent that such suits are now common, the structure of public nuisance has made a lasting imprint on American tort law.

Although its substance and structure are embedded in modern American tort law, public nuisance occupies an uncertain, somewhat liminal position. It is virtually unknown to the general public, little discussed outside of litigation circles, and often ignored even in torts class. When it is discussed, it raises fraught questions. Is it even a tort? If not, what is it? Does its very existence threaten tort law? The regulatory state? Separation of powers as we know it? All in all, public nuisance exerts potentially powerful, but highly variable, real-world force, while provoking equally variable reactions from courts and commentators.

End of law review excerpt.

Feel free to compare/contrast/comment to your heart’s desire.

Major leak reveals revolutionary new version of Microsoft Bing powered by ChatGPT-4 AI

From Windows Central:

It looks like Microsoft is gearing up to launch a major new version of Bing that integrates OpenAI’s ChatGPT-4 technology in a way that will revolutionize searching the web. Multiple users have reported stumbling across a preview version of the new Bing earlier today before Microsoft quickly shut it down.

Luckily, a user by the name of Owen Yin was able to grab a few screenshots and try out a handful of features before his access was revoked, giving us a good look at how the future of Bing and searching the web will function with AI woven throughout. To begin, the new Bing advertises itself as more than just a search box. It describes itself as a “research assistant, personal planner, and creative partner at your side.”

The first big change between a normal web search engine and the new AI-powered Bing is that the search bar is now a chat box. It’s much larger, and it encourages natural language rather than keyword-driven search terms. You’ll be able to ask Bing to look up specific topics or ideas, and even ask for its opinion, with its responses returned to you in a chat bubble.

. . . .

The new Bing is also able to adjust its search queries with you in mind. You can tell it, with natural language, your plans or requirements, such as dietary needs or schedule conflicts, and it’ll do its best to bring you relevant information for your search request that factors in those requirements. It’s mind blowing.

Yin does note that the new Bing still lets you search the web in the traditional way if you prefer, with keywords in a classic search box and a page of search results.

. . . .

It’s fair to say that this stuff is wild and is going to change how we search the web in major ways. Given that it was briefly made available earlier today before being pulled, we’d wager that Microsoft is almost ready to announce this new version of Bing. While no event has been announced yet, Microsoft has already confirmed that it plans to weave AI throughout all its products, and it looks like Bing is first in line.

Link to the rest at Windows Central and thanks to F. for the tip.

PG says this feature might move him from Google if he can teach his fingers to not go on automatic pilot when he has a question.

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

From MIT Technology Review:

OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.

The Minecraft AI learned to perform complicated sequences of keystrokes and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.

The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained to perform tasks by watching humans do them. Imitation learning can be used to train AI to control robot arms, drive cars, or navigate webpages.

There is a vast amount of video online showing people doing different tasks. By tapping into this resource, the researchers hope to do for imitation learning what GPT-3 did for large language models. “In the last few years we’ve seen the rise of this GPT-3 paradigm where we see amazing capabilities come from big models trained on enormous swathes of the internet,” says Bowen Baker at OpenAI, one of the team behind the new Minecraft bot. “A large part of that is because we’re modeling what humans do when they go online.”

The problem with existing approaches to imitation learning is that video demonstrations need to be labeled at each step: doing this action makes this happen, doing that action makes that happen, and so on. Annotating by hand in this way is a lot of work, and so such datasets tend to be small. Baker and his colleagues wanted to find a way to turn the millions of videos that are available online into a new dataset.

The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically. They first hired crowdworkers to play Minecraft, and recorded their keystrokes and mouse clicks alongside the video from their screens. This gave the researchers 2,000 hours of annotated Minecraft play, which they used to train a model to match actions to onscreen outcomes. Clicking a mouse button in a certain situation makes the character swing its axe, for example.

The next step was to use this model to generate action labels for 70,000 hours of unlabeled video taken from the internet, and then to train the Minecraft bot on this larger dataset.
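The three-stage pipeline described above (learn an inverse-dynamics model from a small labeled set, use it to pseudo-label a huge unlabeled corpus, then train a policy on the result) can be sketched in miniature. This is a toy illustration only, not OpenAI's implementation: the real system uses deep neural networks on raw pixels, while here frames are integers, actions are strings, and simple lookup tables stand in for the models. All function names are hypothetical.

```python
from collections import Counter

def train_inverse_dynamics(labeled_clips):
    # Stage 1: from a small labeled set (the 2,000 contractor hours),
    # learn which action explains each (frame, next_frame) transition.
    return {(frame, nxt): action for frame, nxt, action in labeled_clips}

def pseudo_label(model, unlabeled_frames):
    # Stage 2: use the inverse-dynamics model to guess the action behind
    # each consecutive frame pair in a large unlabeled video stream
    # (standing in for the 70,000 internet hours).
    labeled = []
    for frame, nxt in zip(unlabeled_frames, unlabeled_frames[1:]):
        action = model.get((frame, nxt))
        if action is not None:
            labeled.append((frame, action))
    return labeled

def behavior_clone(pseudo_labeled):
    # Stage 3: train a "policy" mapping each frame to its most common
    # pseudo-labeled action (majority vote stands in for gradient descent).
    votes = {}
    for frame, action in pseudo_labeled:
        votes.setdefault(frame, Counter())[action] += 1
    return {frame: c.most_common(1)[0][0] for frame, c in votes.items()}

# A tiny labeled corpus, then a longer unlabeled stream of the same game.
labeled = [(0, 1, "walk"), (1, 2, "swing_axe"), (2, 0, "craft")]
idm = train_inverse_dynamics(labeled)
stream = [0, 1, 2, 0, 1, 2, 0]
policy = behavior_clone(pseudo_label(idm, stream))
print(policy[1])  # → swing_axe
```

The point of the sketch is the data flow: the expensive labeled set is used only once, to build the labeler, and the policy itself is trained entirely on cheap pseudo-labeled video.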

“Video is a training resource with a lot of potential,” says Peter Stone, executive director of Sony AI America, who has previously worked on imitation learning. 

Imitation learning is an alternative to reinforcement learning, in which a neural network learns to perform a task from scratch via trial and error. This is the technique behind many of the biggest AI breakthroughs in the last few years. It has been used to train models that can beat humans at games, control a fusion reactor, and discover a faster way to do fundamental math.

The problem is that reinforcement learning works best for tasks that have a clear goal, where random actions can lead to accidental success. Reinforcement learning algorithms reward those accidental successes to make them more likely to happen again.

But Minecraft is a game with no clear goal. Players are free to do what they like, wandering a computer-generated world, mining different materials and combining them to make different objects.

Minecraft’s open-endedness makes it a good environment for training AI. Baker was one of the researchers behind Hide & Seek, a project in which bots were let loose in a virtual playground where they used reinforcement learning to figure out how to cooperate and use tools to win simple games. But the bots soon outgrew their surroundings. “The agents kind of took over the universe, there was nothing else for them to do,” says Baker. “We wanted to expand it and we thought Minecraft was a great domain to work in.”

Link to the rest at MIT Technology Review

PG hopes he is not alienating too many visitors with his occasional forays into artificial intelligence. It’s a topic that he finds fascinating.

As far as relevance to TPV, PG has mentioned AI writing programs, which he expects to become more and more sophisticated over time. While PG will not predict the demise of authors who are human beings, he expects AI to continue to improve and expand its writing capabilities.

Who knows, perhaps someone will take the vast sea of written wisdom PG has produced and create an AI version of PG. Such an AI would have to possess a high tolerance for randomness, however. Much of the time, there is no recognizable logic happening in PG’s brain, so there might be insufficient scaffolding to support the development of any sort of intelligent program.