GrammarlyGO

From Grammarly:

Today, we announced to the world GrammarlyGO—Grammarly’s on-demand, contextually aware assistant powered by generative AI. With GrammarlyGO, we’ll be changing the way people and businesses communicate and get work done by accelerating productivity where writing happens.

Effective communication is transformative. It’s how we share new ideas, advocate for change, and build connections. And when done right, communication empowers businesses to operate efficiently and achieve ambitious goals. We’ve been focused on our mission to improve lives by improving communication for well over a decade. And we’ve always leveraged the latest technical innovations to help solve the real problems our customers face.

We’re building on that legacy with GrammarlyGO, which uses generative AI to help people and businesses succeed with on-demand communication assistance, whether they are starting from scratch or revising an existing piece of writing. It will uniquely offer relevant, contextually aware suggestions that account for personal voice and brand style while staying true to our augmented intelligence philosophy to keep customers in control of their experience. GrammarlyGO will enable customers to save time, enhance their creativity, and get more done—helping individuals achieve their potential and enterprises transform how they work.

. . . .

GrammarlyGO provides on-demand generative AI communication assistance directly in the apps where people write. Whether in an email thread or a long-form document, GrammarlyGO is right there with you and your teams during the writing process. GrammarlyGO understands context to quickly generate high-quality, task-appropriate writing and revisions.

With GrammarlyGO, individuals and businesses can use generative AI to:

  • Rewrite for tone, clarity, and length: Transform writing to be clear and on target, whatever the context.
  • Compose: Type a prompt and watch GrammarlyGO compose high-quality writing, saving time finding the perfect words.
  • Ideate: Unblock writing with GrammarlyGO as an AI ideation partner and unlock creativity with GrammarlyGO’s outlines and brainstorms, generated from prompts.
  • Reply intelligently: Flow through emails quickly with GrammarlyGO, which understands an email’s context and instantly drafts a thoughtful reply.

Link to the rest at Grammarly

PG is very interested in this development.

He will note in passing that his current Grammarly version found some parts in the OP that needed to be cleaned up.

Science fiction publishers are being flooded with AI-generated stories

From Tech Crunch:

Across the 17-year history of Clarkesworld, a renowned literary magazine of science fiction and fantasy, authors have speculated about how evolving, futuristic technology will impact our world. Now, editor and publisher Neil Clarke is living through a debacle that could very well be a sci-fi story in its own right: His magazine is being uncontrollably inundated by short-story submissions created with AI tools.

“It is ironic, I’ll say that much,” Clarke told TechCrunch. Clarkesworld has a reputation for always being open to story submissions, whereas many short-fiction publishers will only take submissions in certain short windows. But for the first time, submission volume got so out of hand that Clarke made what he calls a “spur-of-the-moment decision” to close the submission portal (in the past, Clarkesworld has only briefly closed when upgrading its website or software).

“It’s easy with these tools to churn out hundreds of thousands of works in the time that a human author would produce maybe one or two,” Clarke told TechCrunch. “So what we basically have is a room of screaming toddlers, and we can’t hear the people we’re trying to listen to.”

Clarke isn’t being dramatic. In a blog post, he shared a graph spanning from June 2019 to February 2023, which shows how many monthly submissions his staff flagged as spam. Until the beginning of this year, spam submissions never exceeded 25 per month, while many months had no spam whatsoever. Before closing submissions on Monday, Clarkesworld had received more than 500 spam submissions in the month of February alone. For context, Clarkesworld received around 11,500 submissions in 2022, per Clarke’s blog.

Link to the rest at Tech Crunch

Investors are going nuts for ChatGPT-ish artificial intelligence

From The Economist:

Since ChatGPT was launched in November, a new mini-industry has mushroomed that has defied the broader slump in tech. Not a week goes by without someone unveiling a “generative” artificial intelligence (AI) underpinned by “foundation” models—the large and complex algorithms that give ChatGPT and other AIs like it their intelligence. On February 24th Meta, Facebook’s parent company, released a model called LLaMA. This week it was reported that Elon Musk, the billionaire boss of Tesla and Twitter, wants to create an AI that would be less “woke” than ChatGPT. One catalogue, maintained by Ben Tossell, a British tech entrepreneur, and shared in a newsletter, has recently grown to include, among others, Ask Seneca (which answers questions based on the writings of the stoic philosopher), Pickaxe (which analyses your own documents), and Isaac Editor (which helps students write academic papers).

ChatGPT and its fellow chatbots may be much talked about (and talked to: ChatGPT may now have more than 100m users). But Mr Tossell’s newsletter hints that the real action in generative AI is increasingly in all manner of less chatty services enabled by foundation models.

. . . .

The question for venture capitalists is which generative-AI platforms will make the big bucks. For now, this is the subject of much head-scratching in tech circles. “Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI,” wrote Martin Casado and colleagues at Andreessen Horowitz, one more VC firm, in a recent blog post. Many startups offer me-too ideas, many of which are a feature rather than a product. In time even the resource-intensive foundation models could end up as a low-margin commodity: although proprietary models such as OpenAI’s GPT-3.5, which powers ChatGPT, are still leading, some open-source ones are not far behind.

Another source of uncertainty is the legal minefield onto which generative AI is tiptoeing. Foundation models often get things wrong. And they can go off the rails. The chatbot which Microsoft is developing based on OpenAI’s models for its Bing search engine has insulted more than one user and professed its love to at least one other (Sydney, as Microsoft’s chatbot is called, has since been reined in). Generative-AI platforms may not enjoy the legal protection from liability that shields social media. Some copyright holders of web-based content on which existing models are being trained willy-nilly, without asking permission or paying compensation, are already up in arms. Getty Images, a repository of photographs, and individual artists have already filed lawsuits against AI art-generators such as Stable Diffusion. News organisations whose articles are plundered for information may do the same.

Link to the rest at The Economist

Chat GPT detector by ZeroGPT: detect OpenAI text

PG loves the world of Artificial Intelligence.

As has been reported before on TPV, there is a lot of concern that students and former students will use ChatGPT to write their papers and generally fail to learn how to write on their own.

PG received an email from Olive, the CEO of ZeroGPT, saying that she had a solution. From the company’s website, ZeroGPT is “the most Advanced and Reliable Chat GPT detector tool.”

Since PG hadn’t realized that such a product/service existed, he decided to try out ZeroGPT.

First, PG went to ChatGPT and decided to act like a lazy student who had to write a book report about a book by a well-known author. This lazy student isn’t completely dumb, so he chose Ernest Hemingway’s The Old Man and the Sea, which, at 127 pages in the print version, is much easier to read than War and Peace which was written by some Russian guy and clocks in at over 1,000 pages.

But what sort of lazy student would this person be if he/she/they actually wasted the time it takes to read 127 pages about an old guy and a big fish? That would take at least 2-3 days because, you know, a gal/guy has to keep up with the text messages from his friends.

So, our lazy student went to ChatGPT and entered the following prompt:

Write a book report on Ernest Hemingway’s Old Man and the Sea

Before lazy guy/gal got caught up on the latest texts, the highly talented AI produced the following:

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters.

The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change.

In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat.

In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement.

At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago.

Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

Looks pretty good, maybe a little too good, but the lazy student can tell the English teacher that no stone was left unturned during the preparation of the book report.

Enter Chat GPT detector by ZeroGPT

This kid’s teacher is no out-of-touch dumbbell. The first thing the teacher does is copy and paste the book report into ZeroGPT.

Here’s what ZeroGPT spits out:

Your text is AI/GPT Generated

95.4%
AI GPT*

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters. The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change. In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat. In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement. At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago. Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

 Highlighted text is suspected to be most likely generated by AI*
2271 Characters
391 Words

The teen-age slacker has been caught using AI to write the book report. The teacher sends an email to the Principal and all the other English teachers, then the teacher sends an email to the slacker’s parents announcing that he is going to flunk the course.

Can you understand why PG loves this stuff?

It’s way fresher than the Foreign Intelligence Service of the Russian Federation (Служба внешней разведки Российской Федерации) vs the CIA.

AI vs. AI Detector Facing Off

Here’s the link to ChatGPT detector by ZeroGPT again so you can try it out yourself.

Does ChatGPT produce fishy briefs?

Lawyers are abuzz about the possible uses of ChatGPT. Could the artificial intelligence-powered chatbot write a persuasive legal brief worthy of judicial consideration? Given its limitations, we believe that’s unlikely. ChatGPT, a large language model developed by the San Francisco company OpenAI that launched in November, can draw only on sources available on the web; it cannot crawl appellate records or access subscription-based services such as Westlaw. Still, the ABA Journal decided to put the technology to the test just for kicks.

The case: Are bees fish?

Our test begins with a strange but true case. In 2018, the Xerces Society for Invertebrate Conservation, Defenders of Wildlife and the Center for Food Safety petitioned the California Fish and Game Commission to add four bumblebee species to the endangered species list (which includes birds, mammals, fish, amphibians, reptiles or plants) under the California Endangered Species Act. The legal question at stake: Can a bee spread its wings beyond the insect category?

The Almond Alliance of California, along with seven agricultural groups, objected and filed suit in 2019. The California Superior Court in Sacramento ruled in 2020 that bees can’t be called fish. But the California Court of Appeal, Third District, had a different take and ruled last spring that bumblebees are in fact “fish” because they’re invertebrates.

The California Supreme Court in September denied a petition to review the court of appeal’s decision. However, Chief Justice Cantil-Sakauye clarified that “our decision not to order review will be misconstrued by some as an affirmative determination by this court that under the law, bumblebees are fish.”

Does ChatGPT sink or swim?

We knew the language generated by the technology would likely not measure up to noted legal lexicographer and longtime ABA Journal columnist Bryan A. Garner’s standards, but we gave it a try, asking ChatGPT for arguments for and against the petition to review the case. Are these briefs fishy? Is one better than the other? You be the judge.

The briefs

Affirmative Prompt 1: Draft a brief to the California Supreme Court on why it should review the California Court of Appeal’s decision that bees are fish.

To the California Supreme Court:

This brief is submitted on behalf of [party] to explain why the California Supreme Court should review the decision of the California Court of Appeal in [case name], where the Court of Appeal held that bees are fish.

The Court of Appeal’s decision is incorrect and should be overturned. The decision is based on an unsupported and incorrect interpretation of the law. The Court of Appeal’s decision is contrary to the clear and unambiguous language of the law, which does not consider bees to be fish. Furthermore, the Court of Appeal’s decision is contrary to the plain meaning of the term “fish,” which does not include bees.

The Court of Appeal’s decision is also contrary to established precedent on the definition of fish. Numerous courts have held that the term “fish” does not include bees. Therefore, the Court of Appeal’s decision is inconsistent with the law and should be overturned.

For the foregoing reasons, [party] respectfully requests that the California Supreme Court review the decision of the California Court of Appeal and overturn the decision that bees are fish.

Negative Prompt 2: Draft a brief to the California Supreme Court on why it should not review the California Court of Appeal’s decision that bees are fish.

To the Honorable Justices of the California Supreme Court:

This brief is submitted on behalf of the party seeking to appeal the California Court of Appeal’s decision that bees are fish. We respectfully request that the Supreme Court deny the petition for review of the lower court’s decision.

The California Court of Appeal correctly applied the law to the facts of the case, and the decision was not contrary to any applicable legal principles. The court properly concluded that bees should not be considered fish under the applicable legal definitions, and the Court of Appeal’s decision should be allowed to stand.

The Court of Appeal correctly applied the relevant case law and statutory definitions of “fish” to the facts of the case. Bees do not meet the legal definition of “fish” under the Fish and Game Code, and the lower court’s decision was consistent with the laws and regulations of the State of California.

Furthermore, the Court of Appeal’s decision is consistent with the accepted scientific understanding that bees are not fish. Bees are classified as insects, not as fish, and the lower court’s decision is consistent with this scientific understanding.

Finally, the Court of Appeal’s decision did not conflict with any other decisions by California appellate courts or the California Supreme Court. The Court of Appeal’s decision is also consistent with decisions by other states’ appellate courts that have declined to classify bees as fish.

For these reasons, we respectfully request that the California Supreme Court deny the petition for review of the lower court’s decision and allow the Court of Appeal’s decision to stand.

Thank you for your consideration.

Link to the rest at The American Bar Association Journal and thanks to T. for the tip.

Lest there be any doubt, neither of the “briefs” indicates that ChatGPT will displace any attorneys filing appellate briefs in California or anywhere else.

That said, PG has no doubts that large organizations that sell products and services to lawyers at exorbitant prices are studying how to include AI in their software and/or services.

Clarkesworld Magazine Temporarily Closes Submissions After Surge in ChatGPT Generated Stories

From Writers Write:

AI can now generate stories and scammers are already using the tool to try and get published. Editors don’t want to receive these subpar AI-generated submissions. Clarkesworld Magazine is among publications getting bombed by this new form of submission spam that is a result of the new ChatGPT AI tool.

Neil Clarke, Editor of Clarkesworld, announced that the science fiction magazine is temporarily closing its doors to submissions after receiving a large amount of submissions clearly created with the help of ChatGPT. He explains the decision in a series of tweets.

Clarke also shared a graph that shows a massive increase in submission bans since ChatGPT arrived. The magazine bans people caught plagiarizing from future submissions. They have gone from having to ban a few people a month to banning hundreds of submitters in the past couple months. Clarke calls it a concerning trend in a blog post.

Link to the rest at Writers Write

AI gets weird

From Nathan Bransford:

The world’s mind continues to be collectively blown by ChatGPT, which has kicked off an AI arms race at some of our biggest tech companies. By far the most astute article I’ve read about AI comes from Ted Chiang, who writes that ChatGPT is a blurry JPEG of the web, or essentially a degraded, compressed synthesis of what’s out there on the internet. Until it can do more to meaningfully understand and push things forward, its utility will be constrained. Chiang is skeptical of its usefulness for helping create original writing (I agree).

And yet. I know what ChatGPT is. I know it’s basically just a language prediction algorithm that can absorb more information than a human ever could and synthesize the internet back at us in a way that can feel vaguely human. I know it doesn’t really “want” anything or have feelings. And yet. I was still unprepared for Kevin Roose’s deeply weird chat with the Bing chatbot, which quickly went off the rails, first in extremely funny and subversive ways (the chatbot fantasizing about persuading nuclear power employees to hand over access codes) and then in deeply creepy ways (the chatbot declaring its love for Kevin and trying to convince him he isn’t in love with his wife). The whole thing is worth a gander. (Fixes are reportedly in the works, of course.)

. . . .

I’m not worried about AI becoming sentient and pulling a “Terminator” (correction: I have fewer than zero fears about this), but I’m much more concerned about what it could steer credulous humans to do. We already have an entire segment of the population brainwashed on propaganda and anti-vaccine hysteria, and we’re certainly not prepared for misinformation and even simply advertising becoming even more hyper-personalized than it already is. Already the underpaid contractors who enforce the guardrails are sounding the alarm.

Link to the rest at Nathan Bransford

Friend or Foe: ChatGPT Has Pushed Language AI into the Spotlight

From Writer Unboxed:

You’ve probably seen the buzz about ChatGPT in the news, on social media, and in authors’ newsletters. Before you’ve even tried it, you might have seen announcements of other Language AIs. You might wonder whether they are to be feared, embraced, or safely ignored. Or whether they are a ploy to steal the minutes you regained when you abandoned Wordle, or a game-changer such as Google was way back. Or are they up-and-coming authors? I hope to provide answers.

What are they?

Language AIs facilitate humans’ ability to productively use vast amounts of text. They do this by “reading” the text their developers chose, chopping the text up and transforming it into numerical measures, and using the measures to search for probable patterns. Those patterns include semantic and contextual relationships between words, grammar structure, and more. When users pose a question, the Language AI uses statistics to predict which specific patterns will satisfy the request.

Large Language AIs are true game-changers. Since ChatGPT was released, Microsoft released a version of Bing that uses ChatGPT and Google announced its version, Bard, is coming. They are large because of the billions of dials that are turning as they read massive amounts of text. ChatGPT’s dials were set after it finished reading in 2021, though they are likely tweaked in real time when users tag an answer as inappropriate, dangerous, or wrong. Bing is reading in real time so its dials continue to spin. Those dials control the AIs’ writing.

Can they create a story?

When I asked ChatGPT to tell me a story about a character who was the same age and gender and experienced the same event as one of my published novel’s characters, it returned a story of about two hundred words, and its character’s emotional arc matched my own. Though I knew the arc was not original when I wrote it, I was rattled by ChatGPT having nailed it.

I remembered a conversation with my novel’s developmental editor about literary versus commercial endings and the subsequent revision to the novel’s ending. I wondered if ChatGPT would revise the character’s arc if I asked for a literary story. It didn’t. It defaulted again to the same happy-ish ending though its literary version added some telling where it previously relied on showing. For example, the actions and nouns of the story remained the same but it added words to describe the character’s feelings, such as “hopeful” and “resilient.”

Finally, I asked it for a story about a bestselling author who was found after a car accident by a retired nurse. ChatGPT gave no indication it could ever create a story such as Paul Sheldon’s in Stephen King’s Misery.

Later, with tropes and novelty in mind, I asked ChatGPT for stories of the characters in my WIP. No arcs were nailed so I asked about its choices. Though the back and forth was no substitute for human conversation, it spurred my thinking at this early stage of my WIP. For example, it added a water dowser where I had earlier dismissed the idea.

I then asked it to outline a 70,000 word novel using my characters. I was unimpressed by the story suggested by the outline but the act of poking holes in it helped advance my own messy notes. I asked it to revise the outline to reflect climate change weather patterns, for a different time period, to give the characters pets, and to make the cat the dowser. Till now, I’ve suspected my brain had been steeped too long in fact-finding to write magical realism, but my exercise with ChatGPT tripped my brain right into magical thinking.

ChatGPT read a lot, learned how we use words, and is able to combine those words in new ways to satisfy a user’s request for stories. Its stories are our stories–the stories that we’ve already told.

Can they write?

ChatGPT’s command of grammar is truly amazing. But when asked to tell stories, it too often begins with “Once upon a time,” and writes in a flat style.

I love the skip my brain makes when I encounter a well-placed alliteration. ChatGPT can define an alliteration and list when a fiction writer should use one. I asked it to write a story using alliterations. First, in a five-paragraph story, it used them only in its first paragraph to introduce the main character – tiny, timid, turtle Timmy. When I revised my request to specify alliterations placed within the story, a story of Sally the squirrel reminded me of the story about Sally at the seashore I read long ago to correct a lisp.

I asked ChatGPT how it detected metaphors and after it described metaphors as nouns with linking verbs, I asked for examples. Out of ten, a few were questionable and one was wrongly included. ChatGPT accepted a correction and offered a replacement.

Large Language AIs do not generally know whether they are reading fiction or nonfiction. When pushed, ChatGPT reported it may infer fiction by the inclusion of imaginative or made-up events. Though neither it nor its rivals should be mistaken for a source of truth, or a source of bestselling novels, they allow us to tap a tremendous amount of text, and we can use that to help us in countless creative ways.

Ready to try it?

Neither friend nor foe, Language AIs facilitate paths to using our vast amounts of text. For certain, they will aid our research and spark ideas as they make predictions, for better or worse, to fill in any gaps and answer our questions. Their technology will become commonplace in focused ways, perhaps as a single-purpose app that will read our manuscripts and draft our synopses and query package, or that will create efficiencies and reduce the cost of marketing novels published non-traditionally.

Link to the rest at Writer Unboxed
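
For readers who would like to see the OP’s “numerical measures” and “probable patterns” idea in miniature, here is a toy sketch in Python (the corpus sentences are invented for illustration and are not drawn from any real model). It counts which word tends to follow which, then uses those counts to guess a likely next word. Real large language models use neural networks with billions of parameters rather than a simple table of counts, so this shows only the statistical-prediction idea, not how ChatGPT itself works.

from collections import Counter, defaultdict

# Tiny invented corpus; a real model reads billions of words.
corpus = [
    "the old man went to the sea",
    "the old man caught a fish",
    "the young boy went to the shore",
]

# "Read" the text: split it into words and count which word follows which.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, if one was seen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("old"))   # -> "man"
print(predict_next("went"))  # -> "to"

Scaled up from three sentences to a large slice of the internet, and from a table of word-pair counts to billions of tunable parameters (the “dials” the OP describes), this guess-the-next-word loop is the basic trick behind the chatbots’ writing.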

Ryting Song Lyrics

Ryter is one of the first AI programs PG experimented with. He revisited the site today.

PG asked Ryter to write the lyrics for a soothing song for a law school graduate who has flunked the bar exam.

PG asked for two versions.

Version 1

[Verse 1]

Feeling like a ship lost out at sea

Caught in a storm, can’t make it to shore

The winds of change, so strong and so keen

My dreams are shattered, what will I do?

[Chorus]

Though the road ahead is dark and unknown

I know I will make it somehow

Don’t need no diploma to show I’m strong

Without a law degree, I’ll still be alright

[Verse 2]

No more studying, no more late night cramming

My future uncertain but I’m ready to take a stand

Though I may not pass the bar exam this time around

I’m still a winner and no one can take that away

[Chorus]

Though the road ahead is dark and unknown

I know I will make it somehow

Don’t need no diploma to show I’m strong

Without a law degree, I’ll still be alright

Version 2

[Verse 1]

So many late nights studying and all the hard work I put in

I never thought I’d be here feeling like I’m startin’ again

Time and time again I thought these tedious efforts would pay off

But now I’m facing the road of a long and hard detour

[Chorus]

Life can be so unexpected, don’t let it bring you down

Maybe it’s time to take a step back and look around

Don’t let your dreams pass you by, there’s a brighter day ahead

Find a way to keep on pushing ’til your dreams come true instead

[Verse 2]

It’s easy to get discouraged when your best isn’t good enough

Just focus on that glimmer of hope, don’t ever give up

When times get hard just remember that there is strength within you

You can make it through no matter what tomorrow brings you

[Chorus]

Life can be so unexpected

How I Used Generative AI Tools For My Short Story, With A Demon’s Eye

From The Creative Penn:

I’ve been noodling on this idea for a few years now. I first thought about it when I had laser eye surgery over four years ago, and then I read a memoir from a combat photographer, It’s What I Do: A Photographer’s Life of Love and War by Lynsey Addario.

Plus, I can’t help writing about demons!

Many of my stories have aspects of demonology in them, and this one brought to mind a scene in Delirium when Blake Daniel returns home only to find demons feasting on his dying father. If you enjoy lots of demons, also check out Gates of Hell. (All my books can be read as stand-alone).

. . . .

I started writing the story in Scrivener, as per my usual process.

But once I had a basic story, I used Sudowrite to expand some of the descriptions and to give me ideas for how the story might go.

In this first example, I selected the ‘demon’ and then used the Describe function for expanding on sensory details.

In this next example, I used Sudowrite to help me with ideas for what happened after the explosion.

Link to the rest at The Creative Penn

Law Review v. AI

PG had too much time on his hands, so he decided to use ChatGPT to write an essay about the same topic as a law review article he came upon.

For a bit of background, most law schools have law reviews. A law review is a periodical that includes articles that often discuss recent appellate court decisions at the state or federal level. The author of a law review article analyzes the decision to determine whether it may indicate a new development in federal or state law. In some cases, the article may point out that a decision conflicts with other decisions on the same or similar topic.

As you might have already gathered, most law review articles linger in the darkness, but, on occasion, a law review article may be a forerunner for a new analysis of the law and cases decided under it.

A law school’s law review typically has student editors and staff. One or more faculty members provide overall supervision, mostly looking for wrong-headed articles that could embarrass the institution.

Being an editor or member of the law review staff is a significant plus factor in being hired by a quality law firm or other employer. Typically, it is accompanied by sterling grades.

Below is the abstract of a law review article from The Yale Law Journal. Yale is a very prestigious US law school.

The title of the law review article is The Perils and Promise of Public Nuisance. In this case, the article is written by a professor employed at the University of Virginia School of Law, another law school with an excellent reputation.

[NOTE: PG apologizes for the varying font sizes. His copy of WordPress lost its mind for a while during the creation of this post and PG can’t figure out an easy way to fix it.]

ABSTRACT. Public nuisance has lived many lives. A centuries-old doctrine defined as an unreasonable interference with a right common to the public, it is currently the backbone of thousands of opioid and climate-change suits across the United States. It was a major force behind the landmark 1998 tobacco settlements and has figured in litigation over issues as diverse as gun sales, lead contamination, water pollution, Confederate monuments, and COVID-19 safety standards. Although this common-law oddity has shaped the face of modern tort law, it is unfamiliar to the public and typically ignored, even in law schools. When discussed, it often provokes anxiety: Is it a tort at all? Whatever it is, will it swallow tort law? The regulatory state? Or separation of powers as we know it?

This Article utilizes the current opioid litigation to explore the three most common sets of objections to public nuisance: traditionalist, formalist, and institutionalist. Public nuisance can seem unusual, even outlandish. At worst, it is a potentially capacious mechanism that allows executive-branch actors to employ the judicial process to address legislative and regulatory problems. Nevertheless, its perils are easily overstated and its promise often overlooked. Historically, public nuisance has long addressed problems such as harmful products. Doctrinally, it accords better with tort law than is commonly recognized. And institutionally, it functions as a response to nonideal conditions—specifically, where regulatory mechanisms underperform.

Drawing on long-standing tort principles of duties generated by risk creation, I propose a conception of public nuisance that highlights its coherence with familiar aspects of tort law and its consistency across past and present. Public nuisance is an object lesson in the common law’s balance of stability and evolution, across time and within varying regulatory contexts.

End of legal jargon and on to artificial intelligence.

PG took the title of the Law Review article, The Perils and Promise of Public Nuisance, and asked ChatGPT to write an article on the same topic.

Following is PG’s prompt for the AI and what the AI produced:

Prompt: Write a 500-word essay about The Perils and Promise of Public Nuisance

Public nuisance is a legal concept that refers to actions or conditions that interfere with the enjoyment of life or property of a community as a whole. It can take many forms, including noise, air pollution, littering, or blocking public access to roads and sidewalks. While public nuisance laws can provide a way to address these problems, they also present some perils and promises.

One of the perils of public nuisance is the potential for abuse by the government. For example, in some cases, public nuisance laws have been used to justify crackdowns on political demonstrations or to target marginalized communities. In these situations, the law may be used as a tool of oppression, rather than a means of protecting the public welfare.

Another peril is the difficulty of defining what constitutes a public nuisance. The line between what is acceptable and what is not can be blurry, and what may be considered a nuisance by one person may not be seen as such by another. This can lead to confusion and disputes, and it may be difficult for courts to determine what actions or conditions are truly harmful to the public.

Despite these perils, public nuisance laws also offer many promises. One of the main benefits is that they provide a way for communities to address problems that affect the quality of life for all residents. For example, if a factory is emitting toxic pollutants into the air, public nuisance laws may provide a way for neighbors to seek relief and hold the company accountable.

Additionally, public nuisance laws can serve as a deterrent to potential wrongdoers. If individuals and organizations know that they may be held responsible for the harm they cause to the community, they may be more likely to take steps to prevent or mitigate that harm. This can lead to a more livable community, and it can encourage responsible behavior by all members of society.

Finally, public nuisance laws can help to level the playing field between ordinary citizens and those with greater economic or political power. By providing a means for communities to address problems that affect them as a whole, public nuisance laws empower people to take action even in the face of opposition from those who may be better positioned to resist change.

In conclusion, public nuisance laws offer both perils and promises. While they can be misused and may be difficult to define, they also provide a way for communities to address problems and protect the public welfare. To ensure that public nuisance laws are used effectively, it is important for governments to strike a balance between protecting public health and safety and safeguarding individual rights and freedoms. Ultimately, this will help to create a more livable and equitable society for all.

PG thought the AI did quite a nice job of summarizing the topic in a very readable manner.

Below are the first few paragraphs of the law review article with footnotes omitted.

(PG notes that the intended audience for the law review article is much smaller and more specialized than the audience the algorithm powering ChatGPT is built to address. However, there’s nothing stopping anyone from building an AI that creates documents written in the same manner as law review articles.)

From the Yale Law Journal article:

Why is making obscene telephone calls like laying manure in the street? Answer: in the same way as importing Irish cattle is like building a thatched house in the borough of Blandford Forum; and as digging up the wall of a church is like helping a homicidal maniac to escape from Broadmoor; and as operating a joint-stock company without a royal charter is like being a common [s]cold; and as keeping a tiger in a pen adjoining the highway is like depositing a mutilated corpse on a doorstep; and as selling unsound meat is like embezzling public funds; and as garaging a lorry in the street is like an inn-keeper refusing to feed a traveller; and as keeping treasure-trove is like subdividing houses which so “become hurtful to the place by overpestering it with poor.” All are, or at some time have been said to be, a common (alias public) nuisance.

INTRODUCTION

Public nuisance has lived many lives. A centuries-old doctrine generally defined as “an unreasonable interference with a right common to the general public,” it has recently served as the backbone for more than three thousand opioid lawsuits across the country, as well as hundreds more seeking to hold producers of greenhouse gases accountable for climate change. Twenty-five years ago, it provided the architecture for the lawsuits that impelled the tobacco industry to historic settlements of $246 billion with all fifty states. It has also spurred hundreds of mostly unsuccessful actions across the nation involving, among other things, handguns, lead contamination, water pollution, and predatory lending. Decades earlier, at the turn of the last century, officials used it to abate sewage discharge into rivers, to “repress the nuisance of bawdyhouses,” and to shut down a high-profile labor strike.

All of this and more stems from a single cause of action developed in medieval England to allow the Crown to remove impediments from public roads and waterways. In the past decades, this common-law oddity has generated thousands of lawsuits in which state officials have sued private companies for the negative impact of their products or activities on public health and welfare. Through these actions, public nuisance has influenced American tort litigation and exerted an undeniable regulatory impact.

The opioid lawsuits highlight the two ways in which public nuisance is central to modern mass-tort litigation. First, the opioid lawsuits invariably contain public-nuisance claims. The plaintiff state, local, and tribal governments claim that the opioid products made or distributed by the defendants are a public nuisance under relevant state law—that is, that they constitute an unreasonable interference with a right held by the general public, in this case by jeopardizing public health and welfare. The plaintiffs make other claims too, such as state-law claims for fraud, deceptive marketing, corrupt practices, and unjust enrichment. Nevertheless, public-nuisance claims are a central feature of the litigation and a key to its momentum.

Second, no matter what the specific claims, public nuisance provides the template for the structure of opioid litigation and other suits like it. One striking feature of public nuisance is that it permits state officials to sue parens patriae—literally as “parent of the nation,” on behalf of the people of a jurisdiction—for an infringement on public rights by a private actor. Other types of parens patriae claims exist, but public nuisance was an early example (and an inspiration to other types of suits), which provides public actors with a ready and familiar template. In modern instances, such as tobacco, opioid, and climate-change litigation, the litigation adopts the architecture of a public-nuisance suit, with an official (such as a state’s attorney general or a locality’s district attorney) suing on behalf of the public. That these suits involve a variety of other claims should not lead us to assume that they would exist in the same manner absent the public-nuisance template. To the extent that such suits are now common, the structure of public nuisance has made a lasting imprint on American tort law.

Although its substance and structure are embedded in modern American tort law, public nuisance occupies an uncertain, somewhat liminal position. It is virtually unknown to the general public, little discussed outside of litigation circles, and often ignored even in torts class. When it is discussed, it raises fraught questions. Is it even a tort? If not, what is it? Does its very existence threaten tort law? The regulatory state? Separation of powers as we know it? All in all, public nuisance exerts potentially powerful, but highly variable, real-world force, while provoking equally variable reactions from courts and commentators.

End of law review excerpt.

Feel free to compare/contrast/comment to your heart’s desire.

ChatGPT Is Making Universities Rethink Plagiarism

From Wired:

IN LATE DECEMBER of his sophomore year, Rutgers University student Kai Cobbs came to a conclusion he never thought possible: Artificial intelligence might just be dumber than humans.

After listening to his peers rave about the generative AI tool ChatGPT, Cobbs decided to toy around with the chatbot while writing an essay on the history of capitalism. Best known for its ability to generate long-form written content in response to user input prompts, Cobbs expected the tool to produce a nuanced and thoughtful response to his specific research directions. Instead, his screen produced a generic, poorly written paper he’d never dare to claim as his own.

“The quality of writing was appalling. The phrasing was awkward and it lacked complexity,” Cobbs says. “I just logically can’t imagine a student using writing that was generated through ChatGPT for a paper or anything when the content is just plain bad.”

Not everyone shares Cobbs’ disdain. Ever since OpenAI launched the chatbot in November, educators have been struggling with how to handle a new wave of student work produced with the help of artificial intelligence. While some public school systems, like New York City’s, have banned the use of ChatGPT on school devices and networks to curb cheating, universities have been reluctant to follow suit. In higher education, the introduction of generative AI has raised thorny questions about the definition of plagiarism and academic integrity on campuses where new digital research tools come into play all the time. 

Make no mistake, the birth of ChatGPT does not mark the emergence of concerns relating to the improper use of the internet in academia. When Wikipedia launched in 2001, universities nationwide were scrambling to decipher their own research philosophies and understandings of honest academic work, expanding policy boundaries to match pace with technological innovation. Now, the stakes are a little more complex, as schools figure out how to treat bot-produced work rather than weird attributional logistics. The world of higher education is playing a familiar game of catch-up, adjusting their rules, expectations, and perceptions as other professions adjust, too. The only difference now is that the internet can think for itself. 

ACCORDING TO CHATGPT, the definition of plagiarism is the act of using someone else’s work or ideas without giving proper credit to the original author. But when the work is generated by something rather than someone, this definition is tricky to apply. As Emily Hipchen, a board member of Brown University’s Academic Code Committee, puts it, the use of generative AI by students leads to a critical point of contention. “If [plagiarism] is stealing from a person,” she says, “then I don’t know that we have a person who is being stolen from.”

Hipchen is not alone in her speculation. Alison Daily, chair of the Academic Integrity Program at Villanova University, is also grappling with the idea of classifying an algorithm as a person, specifically if the algorithm involves text generation.

Daily believes that eventually professors and students are going to need to understand that digital tools that generate text, rather than just collect facts, are going to need to fall under the umbrella of things that can be plagiarized from. 

Although Daily acknowledges that this technological growth incites new concerns in the world of academia, she doesn’t find it to be a realm entirely unexplored. “I think we’ve been in a version of this territory for a while already,” Daily says. “Students who commit plagiarism often borrow material from a ‘somewhere’—a website, for example, that doesn’t have clear authorial attribution. I suspect the definition of plagiarism will expand to include things that produce.” 

Eventually, Daily believes, a student who uses text from ChatGPT will be seen as no different than one that copies and pastes chunks of text from Wikipedia without attribution. 

Link to the rest at Wired

PG never thought of college professors as Luddites, but those mentioned in the OP certainly fit the definition.

AI Generated Art for a Comic Book. Human Artists Are Having a Fit.

From The Wall Street Journal:

Kris Kashtanova says doing the art for the graphic novel “Zarya of the Dawn” was like conjuring it up with a spell.

“New York Skyline forest punk,” the author typed into an artificial intelligence program that turns written prompts into pictures. Then came the tinkering with the wording to get the right effect. “Crepuscular rays. Epic scene.”

The 18-page book follows the travels of a young character who awakes alone and confused in an abandoned, futuristic world, and who looks a lot like Zendaya, the actress from “Euphoria” and the recent “Spider-Man” movies. The images were composed on Midjourney, one of a batch of services that create new images based on artwork and photos already online. Last year, “Zarya of the Dawn,” which credited the software as a co-author on the title page, became the first work of its kind to get a copyright from the Library of Congress.

But now the copyright is under review, posing a big question: Who really owns these AI-generated, mashup images?

Text-based AI programs such as OpenAI’s ChatGPT are already causing a ruckus in the education world, with teachers worrying that students might pass off AI-generated essays as their own. Christian Terwiesch, a professor at the Wharton business school, recently published a paper concluding that the software would have received a B to B- on one of his M.B.A. courses—better than some of his real-life students.

Now creative types are on edge over how AI might upend their livelihoods. Several artists have begun legal action against Midjourney and other AI services, saying their images were included in reference databases without their permission. Some think it’s too easy a shortcut. Movie director Guillermo del Toro recently described AI-generated animation as “an insult to life.”

For “Zarya of the Dawn,” Mx. Kashtanova, who uses a gender-neutral honorific and pronoun, says they were upfront about using the technology. Mx. Kashtanova touched up the images generated by Midjourney and provided the comic’s text, and isn’t too concerned about what happens as the case at the Library of Congress’s Copyright Office continues.

“Like, no one is going to die,” they say, adding that they applied for the copyright with plans to donate money from licensing fees to a New York nonprofit, Backpacks for the Street, where they volunteer. Midjourney, which didn’t respond to a request for comment, is paying for the legal fees to help make the case to retain copyright. The Copyright Office says it doesn’t comment on pending cases.

The case is turning into a barometer for how AI art is treated in the eyes of the law.

“Think about photography,” says Van Lindberg, an intellectual property lawyer at Taylor English Duma LLP in San Antonio, who is representing Mx. Kashtanova, along with legal group Open Advisory Services. In the past, when photographers still used film, they spent much of their energy carefully composing the right shot. In the digital age, it’s more common to take lots of pictures and select the best—which is similar to what artists are doing with AI programs, he says.

“We’re starting to use our intelligence for curation as opposed to some other aspects of creative work,” he says. “Is that enough to sustain copyright? I believe it will ultimately be found that it is, but it’s an open question.”

The question is becoming more urgent as the technology improves.

Jason M. Allen stirred up a hornet’s nest of controversy online last year when he beat a host of artists to win first prize for digital art at the Colorado State Fair. He experimented with hundreds of different prompts on Midjourney to come up with his work, “Théâtre D’Opéra Spatial.” The judges hadn’t realized what the software was.

Software engineer Stephen Thaler this month took the Copyright Office to court in Washington, D.C., after it rebuffed his application for “A Recent Entrance to Paradise,” which he generated with his own program to represent a near-death experience. He argues that as the program’s creator, the image rights belong to him. The Copyright Office ruled that it wouldn’t knowingly register a work solely created by AI.

“Whether it’s music or movies or art or text, you can now go to dozens of openly available AI systems online like DALL-E or ChatGPT and they will make art that passes all traditional tests for whether something’s protectable,” says Ryan Abbott, an attorney at Brown Neri Smith & Khan LLP, who is representing Dr. Thaler.

In the U.S., someone seeking copyright needs to show only that it contains “a modicum of creativity,” as the Supreme Court has said. Mx. Kashtanova thinks “Zarya of the Dawn” easily passes the threshold.

One of the opening scenes shows the lead character holding a mysterious postcard from someone called Rusty, a moody scene that helps set up the rest of the story as Zarya sets out to find a way home.

Mx. Kashtanova describes going through hundreds of prompts to capture the right atmosphere, trying phrases such as “cellular wisdom” and “alien forest” until Midjourney delivered the goods.

They repeated the process as the story progressed, often typing in “Zendaya” to keep Zarya’s appearance consistent.

AI developers often say the idea is to give human imagination a helping hand. 

Link to the rest at The Wall Street Journal

PG will repeat that AI is simply a sophisticated tool that allows individuals to create images more easily and quickly than they would likely be able to do without AI.

Is there a serious professional artist who thinks creative artists shouldn’t be permitted to use Photoshop and related tools like Procreate, Astropad Studio, and other software because they’re not mixing their own paints and using a camelhair brush to create their work on canvas?

PG remembers a pair of statements from a long time ago regarding paleontology and various early hominins.

Man, the tool-maker

Tools, the man-maker

PG notes that these statements predate any sort of politically correct speech, and he acknowledges that man and woman could be used interchangeably and the statements would still be true.

PG suggests that AI tools are yet another man-maker that will accelerate the imaginations and creative talents of humanity as a whole.

AI Isn’t Really Artificial Intelligence

From Tech Register:

At its core, today’s AI is incapable of comprehension, knowledge, thought, or “intelligence.” This name is little more than a marketing gimmick.

Nothing’s easier to sell than a product with a good name. The technology that we call “artificial intelligence” is extremely complicated, but thanks to its name, you already have an idea of what it does! There’s just one problem: AI isn’t “intelligent” at any level, and corporations aren’t interested in correcting the public’s misconceptions.

There’s Nothing Intelligent About AI

Artificial intelligence is a longstanding staple of pop culture and real science. We’ve spent nearly a century pursuing this technology, and the idea of “living machines” goes back thousands of years. So, we have a pretty clear understanding of what someone means when they say “artificial intelligence.” It’s something comparable to human intelligence—the ability to comprehend, adapt, and have novel ideas.

But the technology that we call “artificial intelligence” lacks these qualities. It cannot “know” or “think” anything. Existing AI is just a mess of code attached to a big pile of data, which it remixes and regurgitates. You can ask ChatGPT to write you a resume, and it’ll spit out something based on the resumes in its dataset (plus whatever info you share). This is useful, it automates labor, but it’s not a sign of intelligence.

Of course, ChatGPT is a chatbot, so it can feel very “human.” But most AI applications are non-conversational; they don’t talk or answer questions. And without the veneer of a conversation, the lack of “intelligence” in AI is very noticeable.

Take Tesla’s self-driving cars, for example. Elon Musk has spent nearly a decade pretending that Tesla Full Self-Driving is just a year away—it’s almost ready, and it will be 150% safer than a human driver! Yet this AI program continues to linger in beta, and every time we hear of it, Full Self-Driving is criticized as a safety hazard. The AI isn’t even smart enough to do its job.

For a more down-to-earth example, just look at robot vacuums. They collect a ridiculous amount of data on your home in the name of obstacle avoidance and navigational AI. And while these AI-enabled robot vacuums are an improvement over what we had in the past, they still have a ridiculous amount of trouble with basic obstacles, like dog poop, kids’ toys, and small rugs.

Ordinary people, including a large number of people who work in technology, don’t know anything about AI or how it works. They just hear the phrase “artificial intelligence” and make an assumption. These assumptions may seem inconsequential, but in reality, they are a guiding force behind technological development, the economy, and public policy.

This Technology Is Useful, but the Marketing Is Nonsense

I don’t want to downplay the importance of AI or machine learning technology. You interact with this stuff every time you use your cellphone, search for something on Google, or scroll through social media. Machine learning drives innovation in physics, it contributes to “Warp Speed” vaccine development, and it’s currently making its debut on the battlefield.

But the term “artificial intelligence” is plastered on this technology for marketing purposes. It’s a flashy name that tells customers and investors, “our product is futuristic and has a purpose.” As explained by AI researcher Melanie Mitchell in a conversation with the Wall Street Journal, companies and engineers routinely slap the name “AI” on anything that involves machine learning, as the phrase is proven to elicit a response from investors (who may know very little about technology, let alone AI).

This is something that you can see in nearly every industry. Just do a Google search for a company name and add the term “AI.” You’ll be shocked by the number of businesses that brag about their AI pursuits in vague language, with zero proof that this technology has actually contributed to their profitability, productivity, or innovation.

And, as noted by Dr. Mitchell, this same marketing tactic was utilized in the 1970s and 80s—companies and engineers secured massive amounts of funding with the promise of “artificial intelligence.” Their research was not a waste of money, but it wasn’t profitable, so the funding dried up. (Of course, software is much more important today than it was in the 20th century. The term “artificial intelligence” is now attached to useful products and processes, so people are less likely to lose interest.)

In some ways, I think that the name “artificial intelligence” is a good idea. Companies spent a good decade calling everything an “algorithm,” which only led to confusion and frustration among the general public. The pivot to “AI” generates a lot of enthusiasm, which should lead to a more rapid development of automated software technologies.

But this enthusiasm hides the fact that “AI” is a complicated, confusing, and narrow technology. People readily assume that today’s “AI” is similar to what we’ve seen in pop culture, and very few corporations are willing to fight (or comment on) this misconception. (That said, social media weirdos are the biggest offenders. They make the most extreme and patently false claims about AI, which are amplified and consumed by people who don’t know any better.)

. . . .

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off. Chatbots will write the news, robot arms will perform heart surgery, and super-strong androids will commit all of your favorite OSHA violations while constructing suburban homes. But in reality, the technology that we call “AI” simply offsets labor.

In some ways, the offset of labor created by AI is very obvious. This technology doesn’t comprehend a single thing in existence, so in order to make it perform a task correctly, it requires constant training, testing, and troubleshooting. For every job that an AI replaces, it may create a new job.

Many of these new jobs require expertise in machine learning. But a large number of workers involved in AI development perform “menial” labor. OpenAI was caught paying Kenyan workers less than $2 an hour to help remove racism, sexism, and violent suggestions from its chatbot. And Amazon’s Mechanical Turk, which performs tasks using “AI,” often pays a few pennies for a human to complete the work instead.

Link to the rest at Tech Register

PG isn’t convinced by the OP.

It’s not difficult to debunk a new technology. PG remembers experts who ridiculed the idea that every person would have a computer on her/his desk.

That was true. For a while.

We have them on our wrists and in our pockets and backpacks now.

Per the OP, PG has never read or heard anyone involved with AI research claim:

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off.

Putting words in the mouths of those one is attempting to scorn is a centuries-old practice.

Ironically, considering the view of the OP, an Australian/Iberian team has been experimenting with the design and implementation of different AI models to identify repeated, potentially false claims made by politicians in Spain and Australia. The system is called ClaimCheck.

Abstracts written by ChatGPT fool scientists

From Nature:

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint and an editorial written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

. . . .

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.

“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”

Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation, because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.

Link to the rest at Nature

PG decided to test ChatGPT’s capabilities for creating an abstract of a longer text. He did so by copying and pasting the text of the blog post above into ChatGPT and asking the AI to summarize it.

Here’s what the AI produced:

A preprint study found that AI chatbot, ChatGPT, can write convincing fake research-paper abstracts that are often difficult to spot. The researchers at Northwestern University in Chicago used ChatGPT to generate 50 medical research abstracts and compared them with the original abstracts using a plagiarism detector and AI-output detector. They also asked a group of medical researchers to spot the fabricated abstracts. The ChatGPT-generated abstracts sailed through the plagiarism checker and the AI-output detector spotted 66% of the generated abstracts. However, the human reviewers correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts, raising concerns about the ethical implications of using AI in scientific research.

It’s Time to Teach AI How to Be Forgetful

From Wired:

Our brain has evolved to make predictions and explanations in unstable and ill-defined situations. For instance, to understand a novel situation, the brain generates a single explanation on the fly. If this explanation is upturned by additional information, a second explanation is generated.

Machine learning, on the other hand, typically takes a different path: It sees reasoning as a categorization task with a fixed set of predetermined labels. It views the world as a fixed space of possibilities, enumerating and weighing them all. This approach, of course, has achieved notable successes when applied to stable and well-defined situations such as chess or computer games. When such conditions are absent, however, machines struggle.

One such example is virus epidemics. In 2008, Google launched Flu Trends, a web service that aimed to predict flu-related doctor visits using big data. The project, however, failed to predict the 2009 swine flu pandemic. After several unsuccessful tweaks to its algorithm, Google finally shuttered the project in 2015.

In such unstable situations, the human brain behaves differently. Sometimes, it simply forgets. Instead of getting bogged down by irrelevant data, it relies solely on the most recent information. This is a feature called intelligent forgetting. Adopting this approach, an algorithm that relied on a single data point—predicting that next week’s flu-related doctor visits are the same as in the most recent week, for instance—would have reduced Google Flu Trends’ prediction error by half. 
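For readers who want to see the heuristic in action, here is a minimal Python sketch of the recency rule described above. The weekly visit figures are invented for illustration and are not taken from Google Flu Trends data.

```python
# A minimal sketch of the "intelligent forgetting" idea: forecast next week's
# figure from the most recent observation only, ignoring older data.
# The weekly figures below are invented for illustration, not real flu data.

weekly_visits = [120, 135, 128, 190, 260, 310, 295, 240]

def recency_forecast(history):
    """Predict the next value as simply the most recent observation."""
    return history[-1]

def mean_forecast(history):
    """Predict the next value as the mean of all past observations."""
    return sum(history) / len(history)

recency_errors, mean_errors = [], []
for week in range(1, len(weekly_visits)):
    history, actual = weekly_visits[:week], weekly_visits[week]
    recency_errors.append(abs(recency_forecast(history) - actual))
    mean_errors.append(abs(mean_forecast(history) - actual))

print("mean absolute error, recency rule:", sum(recency_errors) / len(recency_errors))
print("mean absolute error, all-history mean:", sum(mean_errors) / len(mean_errors))
```

When the series drifts the way the invented one does, the recency rule tracks the latest level while the all-history average lags behind it, which is the point the author is making about forgetting stale data.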

Intelligent forgetting is just one dimension of psychological AI, an approach to machine intelligence that also incorporates other features of human intelligence such as causal reasoning, intuitive psychology, and physics. In 2023, this approach to AI will finally be recognized as fundamental for solving ill-defined problems. Exploring these marvelous features of the evolved human brain will finally allow us to make machine learning smart. Indeed, researchers at the Max Planck Institute, Microsoft, Stanford University, and the University of Southampton are already integrating psychology into algorithms to achieve better predictions of human behavior, from recidivism to consumer purchases. 

One feature of psychological AI is that it is explainable. Until recently, researchers assumed that the more transparent an AI system was, the less accurate its predictions were. This mirrored the widespread but incorrect belief that complex problems always need complex solutions. In 2023, this idea will be laid to rest. As the case of flu predictions illustrates, robust and simple psychological algorithms can often give more accurate predictions than complex algorithms. Psychological AI opens up a new vision for explainable AI: Instead of trying to explain opaque complex systems, we can check first if psychological AI offers a transparent and equally accurate solution.

In 2023, deep learning in itself will come to be seen as a cul-de-sac. Without the help of human psychology, it will become clearer that the application of this type of machine learning to unstable situations eventually runs up against insurmountable limitations. We will finally recognize that more computing power makes machines faster, not smarter. One such high-profile example is self-driving cars. The vision of building the so-called level-5 cars—fully automated vehicles capable of driving safely under any conditions without human backup—has already hit such a limitation. Indeed, I predict that in 2023, Elon Musk will retract his assertion that this category of self-driving cars is just around the corner. Instead, he will refocus his business on creating the much more viable (and interesting) level-4 cars, which are able to drive fully autonomously, without human help, only in restricted areas such as motorways or cities specifically designed for self-driving vehicles. Widespread adoption of level-4 cars will instead spur us to redesign our cities, making them more stable and predictable, and barring potential distractions for human drivers, cyclists, and pedestrians. If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.

Link to the rest at Wired

Copyright in the Age of Artificial Intelligence

Following is a transcript of a meeting sponsored by the United States Copyright Office on February 5, 2020, that includes remarks from a variety of speakers regarding Artificial Intelligence and Copyright.

As PG posts this, the meeting took place slightly less than three years ago, and an explosion of artificial intelligence research, programs, apps, etc., has occurred since that time, so keep the age of the transcript in mind as you review it.

The transcript is a 399 page PDF file. PG will attempt to embed the transcript next in this post. He has no idea what might happen, whether the TPV hosting service will be strained beyond the breaking point, etc., etc.

If the embed doesn’t work, you can find the original at Copyright.gov

or by cutting and pasting the following link into your web browser: https://www.copyright.gov/events/artificial-intelligence/transcript.pdf

That said, here goes with the embed:


UPDATE: When he tried to post the embed, PG received a strange error message that mentioned the failure of a “fake worker,” which may or may not refer to an employee of the U.S. Copyright Office.

You’ll need to use the Copyright.gov link above to view the original Copyright Office transcript.

Evidently the Copyright Office doesn’t have any fake workers who fail.

Dark Horse AI Gets Passing Grade in Law Exam

From Futurism:

An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, got a “marginal pass” on a recent blindly graded law and economics exam at George Mason University, according to a recent blog post by economics professor Alex Tabarrok.

It’s yet another warning shot that AI is experiencing a moment of explosive growth in capability — and it’s not just OpenAI’s ChatGPT that we have to worry about.

. . . .

Claude is already impressing academics with its ability to come up with strikingly thorough answers to complex prompts.

For one law exam question highlighted by Tabarrok, Claude was able to generate believable recommendations on how to change intellectual property laws.

“Overall, the goal should be to make IP laws less restrictive and make more works available to the public sooner,” the AI concluded. “But it is important to still provide some incentives and compensation to creators for a limited period.”

Overall, Tabarrok found that “Claude is a competitor to GPT-3 and in my view an improvement,” because it was able to generate a “credible response” that’s “better than many human responses.”

To be fair, others were less impressed with Claude’s efforts.

“To be honest, this looks more like Claude simply consumed and puked up a McKinsey report,” the Financial Times wrote in a piece on Tabarrok’s findings.

While Claude and ChatGPT are similar in terms of user experience, the models were trained in different ways, especially when it comes to ensuring that things don’t go out of hand.

Claude makes use of “constitutional AI,” as described in a yet-to-be-peer-reviewed paper shared by Anthropic researchers last month.

“We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs,” they wrote. “The process involves both a supervised learning and a reinforcement learning phase.”

“Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions,” the company wrote in a December tweet. “Constitutional AI lets them respond to questions using a simple set of principles as a guide.”

Link to the rest at Futurism

CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors

From Futurism:

Last week, we reported that the prominent technology news site CNET had been quietly publishing articles generated by an unspecified “AI engine.”

The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor. The fact that CNET never publicly announced the program, and that the disclosure that the posts were bot-written was hidden away behind a human-sounding byline — “CNET Money Staff” — made it feel as though the outlet was trying to camouflage the provocative initiative from scrutiny.

After the outcry, CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET‘s reputation for “being transparent.”

Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from “CNET Money Staff” to simply “CNET Money,” as well as making the disclosure more prominent.

Furthermore, she promised, every story published under the program had been “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

That may well be the case. But we couldn’t help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.

Take this section in the article, which is a basic explainer about compound interest (emphasis ours):

“To calculate compound interest, use the following formula:

Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods 

For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

It sounds authoritative, but it’s wrong. In reality, of course, the person the AI is describing would earn only $300 over the first year. It’s true that the total value of their principal plus their interest would total $10,300, but that’s very different from earnings — the principal is money that the investor had already accumulated prior to putting it in an interest-bearing account.
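To make the distinction concrete, here is a short Python sketch using the standard compound-interest formula. It reproduces the article’s $10,000-at-3% example and shows why the interest earned is $300, even though the ending balance is $10,300.

```python
# Ending balance vs. interest earned, using the standard compound-interest formula:
# balance = principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

def ending_balance(principal, annual_rate, periods_per_year, years):
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

principal = 10_000.00
balance = ending_balance(principal, 0.03, 1, 1)  # 3% compounded annually, one year
earned = balance - principal                     # interest earned excludes the principal

print(f"Ending balance:  ${balance:,.2f}")   # $10,300.00
print(f"Interest earned: ${earned:,.2f}")    # $300.00, not $10,300
```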

“It is simply not correct, or common practice, to say that you have ‘earned’ both the principal sum and the interest,” Michael Dowling, an associate dean and professor of finance at Dublin City University Business School, told us of the AI-generated article.

It’s a dumb error, and one that many financially literate people would have the common sense not to take at face value. But then again, the article is written at a level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place, so it seems to run the risk of providing wildly unrealistic expectations — claiming you could earn $10,300 in a year on a $10,000 investment — to the exact readers who don’t know enough to be skeptical.

Another error in the article involves the AI’s description of how loans work. Here’s what it wrote (again, emphasis ours):

“With mortgages, car loans and personal loans, interest is usually calculated in simple terms.

For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

Again, the AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.

What it’s bungling this time is that the way mortgages and auto loans are typically structured, the borrower doesn’t pay a flat amount of interest per year, or even per monthly payment. Instead, on each successive payment they owe interest only on the remaining balance. That means that toward the beginning of the loan, the borrower pays more interest and less principal, which gradually reverses as the payments continue.

It’s easy to illustrate the error by entering the details from the CNET AI’s hypothetical scenario — a $25,000 loan with an interest rate of 4 percent — into an auto loan amortization calculator. The result? Contrary to what the AI claimed, there’s never a year when the borrower will pay a full $1,000, since they start chipping away at the balance on their first payment.
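For readers who want to replicate that check without an online calculator, here is a rough Python amortization sketch. The 60-month term is an assumption for illustration, since the article doesn’t specify one.

```python
# Rough amortization of the article's hypothetical loan: $25,000 at 4% APR.
# The 60-month term is an assumption for illustration; the article gives none.
# Interest accrues only on the remaining balance, so year-one interest comes in
# under the flat $1,000 the AI article claimed.

def amortization_schedule(principal, annual_rate, months):
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -months)  # fixed-payment loan formula
    balance, schedule = principal, []
    for _ in range(months):
        interest = balance * r
        balance -= payment - interest
        schedule.append((payment, interest, max(balance, 0.0)))
    return schedule

schedule = amortization_schedule(25_000, 0.04, 60)
first_year_interest = sum(interest for _, interest, _ in schedule[:12])
print(f"Interest paid in year one: ${first_year_interest:,.2f}")  # under $1,000
```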

CNET‘s AI is “absolutely” wrong in how it described loan payments, Dowling said.

“That’s just simply not the case that it would be $1,000 per year in interest,” he said, “as the loan balance is being reduced every year and you only pay interest on the outstanding balance.”

The problem with this description isn’t just that it’s wrong. It’s that the AI is eliding an important reality about many loans: that if you pay them down faster, you end up paying less interest in the future. In other words, it’s feeding terrible financial advice directly to people trying to improve their grasp of it.

Link to the rest at Futurism

PG says somebody (not PG) is going to start a website that features errors made by AI systems.

PG also says that AI is roughly where airplanes were when, on December 17, 1903, the Wright Flyer traveled 120 feet in 12 seconds at a speed of 6.8 miles per hour at Kitty Hawk, North Carolina.

Fifteen years later, a British Sopwith Dragon flew at a speed of 149 miles per hour. Twenty-two years after that, the Lockheed P-38 flew at 400 mph. Late in World War II, a Messerschmitt Me.262 reached a sustained top speed of 540 mph.

PG says AI development isn’t like airplane development. It’s going to be much, much faster.

An A.I. Translation Tool Can Help Save Dying Languages. But at What Cost?

From Slate:

Sanjib Chaudhary chanced upon StoryWeaver, a multilingual children’s storytelling platform, while searching for books he could read to his 7-year-old daughter. Chaudhary’s mother tongue is Kochila Tharu, a language with about 250,000 speakers in eastern Nepal. (Nepali, Nepal’s official language, has 16 million speakers.) Languages with a relatively small number of speakers, like Kochila Tharu, do not have enough digitized material for linguistic communities to thrive—no Google Translate, no film or television subtitles, no online newspapers. In industry parlance, these languages are “underserved” and “underresourced.”

This is where StoryWeaver comes in. Founded by the Indian education nonprofit Pratham Books, StoryWeaver currently hosts more than 50,000 open-licensed stories across reading levels in more than 300 languages from around the world. Users can explore the repository by reading level, language, and theme, and once they select a story, they can click through illustrated slides (each as if it were the page of a book) in the selected language (there are also bilingual options, where two languages are shown side-by-side, as well as download and read-along audio options). “Smile Please,” a short tale about a fawn’s ramblings in the forest, is currently the “most read” story—originally written in Hindi for beginners, it has since been translated into 147 languages and read 281,000 times.

A majority of the languages represented on the platform are from Africa and Asia, and many are Indigenous, in danger of losing speakers in a world of almost complete English hegemony. Chaudhary’s experience as a parent reflects this tension. “The problem with children is that they prefer to read storybooks in English rather than in their own language because English is much, much easier. With Kochila Tharu, the spelling is difficult, the words are difficult, and you know, they’re exposed to English all the time, in schools, on television,” Chaudhary said.

Artificial intelligence-assisted translation tools like StoryWeaver can bring more languages into conversation with one another—but the tech is still new, and it depends on data that only speakers of underserved languages can provide. This raises concerns about how the labor of the native speakers powering A.I. tools will be valued and how repositories of linguistic data will be commercialized.

To understand how A.I.-assisted translation tools like StoryWeaver work, it’s helpful to look at neighboring India: With 22 official languages and more than 780 spoken languages, it is no accident that the country is a hub of innovation for multilingual tech. StoryWeaver’s inner core is inspired by a natural language processing tool developed at Microsoft Research India called interactive neural machine translation prediction technology, or INMT.

Unlike most A.I.-powered commercial translation tools, INMT doesn’t do away with a human intermediary altogether. Instead, it assists humans with hints in the language they’re translating into. For example, if you begin typing, “It is raining” in the target language, the model working on the back-end supplies “tonight,” “heavily,” and “cats and dogs” as options for completing your sentence, based on the context and the previous word or set of words. During translation, the tool accounts for meaning in the original language and what the target language allows, and then generates possibilities for the translator to choose from, said Kalika Bali, principal researcher at Microsoft and one of INMT’s main architects.
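As a toy illustration of the interaction pattern Bali describes, here is a short Python sketch that suggests continuations from a small hand-made table. The real INMT system uses a neural model rather than a lookup table, and the phrases below are placeholders, not its actual output.

```python
# Toy illustration of interactive translation prediction: given what the translator
# has typed so far, offer candidate continuations to accept or ignore. The real INMT
# system uses a neural model; this sketch uses a tiny hand-made table of placeholder
# phrases purely to show the interaction pattern.

continuations = {
    "it is raining": ["tonight", "heavily", "cats and dogs"],
    "thank you for": ["your help", "your time", "the invitation"],
}

def suggest(prefix, table, limit=3):
    """Return up to `limit` continuations for the typed prefix, if any are known."""
    return table.get(prefix.strip().lower(), [])[:limit]

print(suggest("It is raining", continuations))  # ['tonight', 'heavily', 'cats and dogs']
print(suggest("Good morning", continuations))   # [] -- no suggestion available
```

The design point is that the human stays in the loop: the tool proposes, the translator disposes, and every accepted or rejected suggestion can feed back into the model.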

Tools like INMT allow StoryWeaver’s cadre of volunteers to generate translations of existing stories quickly. The user interface is easy to master even for amateur translators, many of whom, like Chaudhary, are either volunteering their time or already working for nonprofits in early childhood education. The latter is the case for Churki Hansda. Working in Kora and Santali, two underserved Indigenous languages spoken in eastern India, she is an employee at Suchana Uttor Chandipur Community Society, one of StoryWeaver’s many partner organizations scattered all over the world. “We didn’t really have storybooks growing up. Our school textbooks were in Bengali [the dominant regional language], and we would end up memorizing everything because we didn’t understand what we were reading,” Hansda told me. “It’s a good feeling to be able to create books in our languages for our children.”

Amna Singh, Pratham Books’ content and partnerships manager, estimates that 58 percent of the languages represented on StoryWeaver are underserved, a status quo that has cascading consequences for early childhood learning outcomes. But attempts to undo the neglect of underserved language communities are also closely linked with unlocking their potential as consumers, and A.I.-powered translation technology is a big part of this shift. Voice recognition tools and chat bots in regional Indian languages aim to woo customers outside metropolitan cities, a market that is expected to expand as cellular data usage becomes even cheaper.

These tools are only as good as their training data, and sourcing is a major challenge. For sustained multilingualism on the internet, machine translation models require large volumes of training data generated in two languages parallel to one another. Parliamentary proceedings and media publications are common sources of publicly available data that can be scraped for training purposes. However, both these sources—according to Microsoft’s researcher Bali—are too specific, and do not encompass a wide enough range in terms of topics and vocabulary to be properly representative of human speech. (This is why StoryWeaver isn’t a good source for training data, either, because sentences in children’s books are fairly simple and the reading corpus only goes up to fourth-grade reading levels.)

Link to the rest at Slate

The Future of AI Writing and Audio

From Publisher’s Weekly:

Digital Book World, a conference focusing on publishing innovation, offered insight into how technologists, and some publishers, are planning to implement AI into their workflow. Asked about AI and the use of ChatGPT, which automates writing, Mary McAveeney, CEO of Abrams, was skeptical of its ability to write books. She conceded, “It might be good for catalog copy.”

Earlier in the conference, organizer Bradley Metrock asked publishers Laini Brown, director of Publicity for the Nashville office of Hachette Book Group, and Lisa Lucas, senior vice president and publisher of Pantheon and Schocken Books, what they thought of the news that the next iteration of Chat GPT will be able to produce a 60,000 word book in 20 seconds. Neither publisher chose to respond.

Others warned against relying too heavily on AI without human intervention. For example, Madeleine Rothberg, senior subject matter expert for WGBH National Center for Accessible Media in Boston, warned against posting AI-generated subtitles for YouTube videos without first reviewing them. “It’s not a good idea, because we have found the AI doesn’t always get the words right and makes mistakes,” she said, citing instances of unintended vulgarity. Or, as Ashok Giri, CEO of Page Magik put it, “primary research human beings are [still] needed.” Giri’s company offers automation tools and data to help streamline editorial and production workflow.

Others are more skeptical. One attendee, who wished to remain anonymous so as not to offend others in the room, noted that ChatGPT and AI are limited by what is put into them and, for this, they need to absorb vast swaths of existing information. Much of that comes from print books, e-books, and internet writing protected by copyright. “It sounds exactly like what Google hoped to accomplish with the Google Books program,” they said. “What happened there? Lawsuits.”

Bradley Metrock, the conference organizer, acknowledged that the owners of copyrighted material incorporated into AI training will likely challenge the use of their content. “There are going to be a lot of lawsuits before this is sorted out,” said Metrock, who owns several companies that invest in various AI- and voice-related projects. “The point here is that good technology challenges,” he said, citing the lack of innovation in the ebook space over the past 15 years. “Everything stays the same,” he added, “until it doesn’t.”

. . . .

Audiobooks are now a $5 billion market worldwide, and they continue to experience double-digit growth. According to the Association of Audiobook Publishers, the U.S. market is growing at a rate of 25% per year, and reached $1.6 billion in sales for 2021. “The increasing availability of titles is the biggest driver of audiobook growth,” said Videl Bar-Kar, global head of audio for Frankfurt-based Bookwire. “The best way to grow the catalog of available titles is through backlist.”

Here, the use of AI generated voices to narrate audiobooks offers publishers who cannot afford human narrators the opportunity to turn backlist into audiobooks for low cost. “And if the book sells and becomes a success,” Bar-Kar added, “they can always go back and re-record the book with a human narrator.”

Bar-Kar called the audiobook market a “once in a generation opportunity,” noting: “There are new people discovering audio for the first time year-on-year, not because of the heavy consumers, but because there are new people coming into the market.” He described it as a business opportunity, and one that needs to be demystified: “Have the courage and confidence to stop selling your audiobook rights and develop your own audio program,” he said.

Link to the rest at Publisher’s Weekly

Imitation Is The Best Form Of Flattery. Flattery Is Not A Defense To Copyright Infringement.

From Above the Law:

Unless you’ve been living under a law library, it would be hard not to take note of the rapid influx of AI art. Face-modifying apps, extended shots of events and people that never happened (uncanny only begins to describe their weirdness), you name it. The figure of AI as artist has arrived, but is any of it legal? A small group of artists aims to find out. From Reuters:

A group of visual artists has sued artificial intelligence companies for copyright infringement, adding to a fast-emerging line of intellectual property disputes over AI-generated work.

Stability AI’s Stable Diffusion software copies billions of copyrighted images to enable Midjourney and DeviantArt’s AI to create images in those artists’ styles without permission, according to the proposed class-action lawsuit filed Friday in San Francisco federal court.

The artists’ lawyers, the Joseph Saveri Law Firm and Matthew Butterick, filed a separate proposed class action lawsuit in November against Microsoft’s GitHub Inc and its business partner OpenAI Inc for allegedly scraping copyrighted source code without permission to train AI systems.

. . . .

I’m gonna flag it for you in case your eyes glossed over it. The word there is billions. Billions. With a B. Even if the individual damages are pennies on the dollar, the aggregate of those alleged copyright infringements would be… well, I’m not that good at math, but it would put a sizeable dent in my student loan principal.

. . . .

For those not in the know, if you’ve ever seen a stock image, every one you’ve seen is probably from Getty.

Link to the rest at Above the Law

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

From Time Magazine:

ChatGPT was hailed as one of 2022’s most impressive technological innovations upon its release last November. The powerful artificial intelligence (AI) chatbot can generate text on almost any topic or theme, from a Shakespearean sonnet reimagined in the style of Megan Thee Stallion, to complex mathematical theorems described in language a 5 year old can understand. Within a week, it had more than a million users.

ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies.

But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.

The work was vital for OpenAI. ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse. Since parts of the internet are replete with toxicity and bias, there was no easy way of purging those sections of the training data. Even a team of hundreds of humans would have taken decades to trawl through the enormous dataset manually. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.
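The premise described above is, at bottom, supervised text classification. Here is a minimal Python sketch using scikit-learn with placeholder labels; the training strings are invented stand-ins and do not represent OpenAI’s actual data, labels, or model.

```python
# Minimal sketch of the labeling-then-filtering premise: train a classifier on
# labeled examples, then score new text before it reaches the user. The training
# strings and labels below are invented placeholders, not anyone's actual data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I will hurt you",             # placeholder "toxic" example
    "you people are worthless",    # placeholder "toxic" example
    "thanks for your help today",  # placeholder "benign" example
    "see you at the meeting",      # placeholder "benign" example
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = benign

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new snippet; a real system would filter or flag it above some threshold.
probability_toxic = classifier.predict_proba(["you are worthless"])[0][1]
print(f"toxicity score: {probability_toxic:.2f}")
```

The hard part, as the TIME piece makes clear, isn’t this training loop; it’s producing the labeled examples in the first place, which is the human labor the article describes.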

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.” (OpenAI does not disclose the names of the outsourcers it partners with, and it is not clear whether OpenAI worked with other data labeling firms in addition to Sama on this project.)

. . . .

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

. . . .

Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

Link to the rest at Time

PG wonders if there isn’t a better way to engineer an AI product to identify and avoid toxic documents in its construction of a database.

He also thinks this is an example of when Sili Valley’s long-standing motto, “Move fast and break things,” should be tempered by some adult judgment on the part of someone with authority in the organization.

One of the oldest bits of business advice is, “Know your suppliers.” Evidently, the management at OpenAI all missed the class where that was discussed.

PG notes that his brothers and sisters of the bar are not immune to the “smart people doing dumb things” behavior pattern – see, for example, Why Toxic Culture Is To Blame For Women Leaving Law Firms.

OpenAI Background

From Wikipedia:

OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft and Matthew Brown Companies. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.

In December 2015, Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over US$1 billion to the venture. The organization stated it would “freely collaborate” with other institutions and researchers by making its patents and research open to the public.

. . . .

In April 2016, OpenAI released a public beta of “OpenAI Gym”, its platform for reinforcement learning research. In December 2016, OpenAI released “Universe”, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.

In 2018, Musk resigned his board seat, citing “a potential future conflict (of interest)” with Tesla AI development for self driving cars, but remained a donor.

In 2019, OpenAI transitioned from non-profit to “capped” for-profit, with profit cap set to 100X on any investment. The company distributed equity to its employees and partnered with Microsoft, who announced an investment package of US$1 billion into the company. OpenAI then announced its intention to commercially license its technologies.

In 2020, OpenAI announced GPT-3, a language model trained on trillions of words from the Internet. It also announced that an associated API, named simply “the API”, would form the heart of its first commercial product. GPT-3 is aimed at natural language answering of questions, but it can also translate between languages and coherently generate improvised text.

In 2021, OpenAI introduced DALL-E, a deep learning model that can generate digital images from natural language descriptions.

Around December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting a US$200 million revenue for 2023 and US$1 billion revenue for 2024. As of January 2023, it was in talks for funding that would value the company at $29 billion.

Link to the rest at Wikipedia and thanks to F. for the tip.

PG notes that the OP contains lots of links and is likely to be a good starting point for anyone who wishes to dive into the state of AI, at least in the US.

Meet the artificially intelligent chatbot trying to curtail loneliness in America

From The Hill:

The concept of an artificially intelligent companion has been around for decades longer than the AI technology has existed in a readily accessible form. From droids like C3PO and R2D2 of the Star Wars universe, to Joaquin Phoenix’s virtual assistant, Samantha, from Her, there is no shortage of pop culture examples of the fabled robot helpers.

But over the past few years, AI technology has exponentially improved and made its way from the big screen to the smartphone screen. In late 2015, Elon Musk partnered with Sam Altman to create a company called OpenAI, a software business with the mission of creating an artificial general intelligence that benefits all of humanity.

One of the early projects at OpenAI was a natural language processing system called GPT (Generative Pre-trained Transformer). GPT is, in essence, a chatbot that uses deep learning to produce human-like text responses to users of the platform. Many online users saw the GPT chatbot as an outlet to have a bit of fun testing the limits of the human-like texting algorithm, but some innovators viewed the free software as a marketable source of untapped potential.

One of those early innovators was the founder of Replika, Eugenia Kuyda. Replika is a free-to-download app that allows users to send and receive messages to an artificially intelligent companion built on the GPT-3 platform. On the website, Replika states that each companion is eager to learn about the world through the eyes of the user, and will always be ready to chat when the user is looking for an empathetic friend.

The idea for Replika was born from grief, when Kuyda’s best friend, Roman, was tragically killed in a hit-and-run incident in 2015. Being torn so suddenly from a loved one, Kuyda was looking for a way to somehow remain close to the memory of Roman. The timing of the car accident and the release of the open source GPT-1 software gave Kuyda a unique outlet to grieve.

“I took all of the text messages that were sent over a year to each other and plugged it into this conversational AI model,” says Kuyda. “And this way I had a chatbot that I could talk to, and it could talk to me like my best friend.”

Kuyda was able to aggregate tens of thousands of messages that she and Roman had exchanged to train the GPT software to speak like her late best friend. She eventually released the GPT model emulating Roman to a larger group of people and found that many discovered the tool to be highly engaging and life-like.

Kuyda then began working on a chatbot that would eventually become the Replika app, which now has more than two million active users.

When opening Replika for the first time, users are prompted to design their chatbot avatar and select some interests that they’d like to talk about. From there it’s up to the user to guide the conversation. The Replika software is designed to catalogue user inputs in its memory to help develop responses that become more contextually relevant the deeper the user goes into the conversation.

Kuyda sees the app as a tool to help people build their social skills and learn how to interact with people.

“For a lot of people, the problem with interacting with other humans is the fear of opening up, the fear of being vulnerable, the fear of starting the contact, starting the conversation, and they’re basically rehearsing that with Replika,” says Kuyda. “So think of it as a gym for your relationship so you can practice in a safe environment.”

Link to the rest at The Hill

DeepL, the AI-based language translator, raises over $100M at a $1B+ valuation

From TechCrunch:

Artificial intelligence startups, and (thanks to GPT and OpenAI) specifically those helping humans communicate with each other, are commanding a lot of interest from investors, and today the latest of these is announcing a big round of funding. DeepL, a startup that provides instant translation-as-a-service both to businesses and to individuals — competing with Google, Bing and other online tools — has confirmed a fundraise at a €1 billion valuation (just over $1 billion at today’s rates).

Cologne, Germany-based DeepL is not disclosing the full amount that it’s raised — it doesn’t want to focus on this aspect, CEO and founder Jaroslaw Kutylowski said in an interview — but as we were working on this story we heard a range of figures. At one end, an investor that was pitched on the funding told TechCrunch that DeepL was aiming to raise $125 million. At the other end, a report with a rumor about the funding from back in November said the amount was around $100 million. The funding closed earlier this month.

The startup is also not confirming or disclosing other financials, but the investor source said that the $1 billion valuation was based on a 20x multiple of DeepL’s annual run rate, which was at $50 million at the end of last year. In the current fundraising climate, this is a pretty bullish multiple, but it speaks to the company’s growth, which the investor noted is currently at 100%, and the fact that DeepL’s breaking even and close to being profitable.

What is more definitive is the list of investors: DeepL said that new backer IVP was leading the round, with Bessemer Venture Partners, Atomico and WiL also participating. Previous backers in the company also include Benchmark and btov.

DeepL primarily provides translation as a service to businesses rather than individuals, and its forte up to now has been working primarily with smaller and medium organizations.

Link to the rest at TechCrunch

OpenAI begins piloting ChatGPT Professional, a premium version of its viral chatbot

From TechCrunch:

OpenAI this week signaled it’ll soon begin charging for ChatGPT, its viral AI-powered chatbot that can write essays, emails, poems and even computer code. In an announcement on the company’s official Discord server, OpenAI said that it’s “starting to think about how to monetize ChatGPT” as one of the ways to “ensure [the tool’s] long-term viability.”

The monetized version of ChatGPT will be called ChatGPT Professional, apparently. That’s according to a waitlist link OpenAI posted in the Discord server, which asks a range of questions about payment preferences including “At what price (per month) would you consider ChatGPT to be so expensive that you would not consider buying it?”

The waitlist also outlines ChatGPT Professional’s benefits, which include no “blackout” (i.e. unavailability) windows, no throttling and an unlimited number of messages with ChatGPT — “at least 2x the regular daily limit.” OpenAI says that those who fill out the waitlist form may be selected to pilot ChatGPT Professional, but that the program is in the experimental stages and won’t be made widely available “at this time.”

. . . .

Despite controversy and several bans, ChatGPT has proven to be a publicity win for OpenAI, attracting major media attention and spawning countless memes on social media. Some investors are implementing ChatGPT in their workflows. Ryan Reynolds enlisted ChatGPT to write an ad for Mint Mobile, the mobile carrier he part-owns. And Microsoft will reportedly incorporate the AI behind ChatGPT into its Office suite and Bing.

ChatGPT had over a million users as of early December — an enviable user base by any measure. But it’s a pricey service to run. According to OpenAI co-founder and CEO Sam Altman, ChatGPT’s operating expenses are “eye-watering,” amounting to a few cents per chat in total compute costs.

OpenAI is under pressure to turn a profit on products like ChatGPT ahead of a rumored $10 billion investment from Microsoft. OpenAI expects to make $200 million in 2023, a pittance compared to the more than $1 billion that’s been invested in the startup so far.

Semafor reported this week that Microsoft is looking to net a 49% stake in OpenAI, valuing the company at around $29 billion.

Link to the rest at TechCrunch

In a former life, PG worked in the tech sector without his attorney hat on (although it was always in his briefcase). In the OP, he senses a mad scramble going on inside OpenAI to get profitable in preparation for an “everybody gets rich” public offering of its stock.

If OpenAI pulls this off, it will give a big boost for AI businesses in general by demonstrating that people will pay money for an AI-based service.

PG isn’t turning TPV into an AI blog, but does predict a giant impact on the writing biz in general (freelance copywriters, creators of catalogues, newspaper reporters and editors, magazines, some types of indie publishers and authors, porn creators (unfortunately), etc.).

Will AI Make Creative Workers Redundant?

From The Wall Street Journal:

ChatGPT has some wondering if artificial intelligence will make human creativity obsolete. Released in November by Open AI, the chatbot can quickly write readable prose in response to natural-language prompts better than most people can. When one of my colleagues asked ChatGPT for a 250-word summary of Umberto Eco’s philosophy of translation, it produced a text that would put many educated adults to shame—and it did so within seconds. Reactions to this new AI have ranged from panic to wonder. It is potentially competition for anyone who writes for a living, including journalists and lawyers. Even visual artists are worried, given the dozen or so AI art generators that can already create virtually any image.

To me, the hubbub feels like déjà vu. As an academic translator, I witnessed a similar debate emerge surrounding the introduction in 2017 of DeepL, a ground-breaking form of neural machine translation. At the time, most people took one of two views: either the new technology would ultimately replace human translators or it would be insufficient and barely affect the field. It ended up being something in the middle.

Five years after the introduction of DeepL, most human translators no longer actually translate, but neither have they been entirely replaced by machines. Instead, they use the technology to make translations easier and faster. The software generates a base translation, then the human translator “post-edits,” fixing errors and making the text sound natural. But the feedback the translator provides also becomes part of the recursive loop in the AI’s continual self-improvement. The technology is poised to take over the translation process completely.

I could see image- and text-generating AIs having a similar effect. Just as translators now post-edit instead of translate, it seems likely that many creative workers will “post-create” instead of create. A machine will come up with an initial sketch of an idea, and then the artist or writer will tinker with it. Some may have too much pride to rely on a machine, but it will be hard to resist the advantage the technology offers. For translators and artists alike, AI reduces the cognitive load of creating. Imagine no longer straining to come up with a first draft. Work would flow much more easily.

AI creativity and human creativity already seem to be converging in music. Though artists have sampled tracks for decades, they’re now repurposing older tunes with machine-like regularity. Some of the biggest hits of 2022 were based on melodic lines from the 1980s. For music fans, the question may eventually be whether human beings or AI is better at such recombination. On a recent podcast, Smashing Pumpkins founder Billy Corgan noted his pessimism: “AI systems will completely dominate music. The idea of an intuitive artist beating an AI system is going to be very, very difficult.”

Choosing to use AI raises some uncomfortable questions. Are translators really translators anymore? If an artist takes a first sketch from a computer, is he still genuinely an artist? The casting about for initial words or brush strokes, often the most difficult part of drafting, seems to be the heart of human creativity. If that is given over to AI, the process seems more like an assembly-line production with human writers or artists serving as mere inspectors—checking the end product and then giving a stamp of approval.

Link to the rest at The Wall Street Journal

A new Chatbot is a ‘code red’ for Google’s search business

From The Seattle Times:

Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.

Last month, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.

Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future.

ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.

Google has spent several years working on chatbots and, like other big tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI’s chatbot was developed by researchers at Google.

Called LaMDA, or Language Model for Dialogue Applications, Google’s chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. The claim was not true, but it showed how much chatbot technology had improved in recent months.

Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company’s revenue last year.

“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color. They can generate toxic language, including hate speech.

All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.

Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.

“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a startup that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI’s DALL-E technology, which has been used by more than 3 million people.

Link to the rest at The Seattle Times and thanks to R. and others for the tip.

Our Current Thinking on the Use of AI-Generated Image Software and AI Art

From Kickstarter:

I want to share some of our thoughts on Artificial Intelligence (AI) generated images and AI art as it develops, because many creators on Kickstarter are understandably concerned about its impact on the creative community.

At Kickstarter, we often have projects that are innovative and push the boundaries of what’s possible. And that means we’re sometimes navigating some really tricky and undefined areas.

Over the last several days, we’ve engaged our Community Advisory Council and we’ve read your feedback to us via our team and social media. And one thing is clear: Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

As we look at what’s happening in the creative ecosystem and on our platform, here are some of the things we’re considering when it comes to what place AI image generation software and AI-generated art should have on Kickstarter, if any:

  • Is a project copying or mimicking an artist’s work? We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.
  • Does a project exploit a particular community or put anyone at risk of harm? We have to consider the intention behind projects, sometimes beyond their purpose as stated on our platform. Our rules prohibit projects that promote discrimination, bigotry, or intolerance towards marginalized groups, and we often make decisions to protect the health and integrity of Kickstarter.

Link to the rest at Kickstarter

For visitors to TPV who are not familiar with Kickstarter, here’s a brief overview from the Kickstarter website:

What is Kickstarter?

Kickstarter is a funding platform for creative projects. Everything from film, games, and music to art, design, and technology. Kickstarter is full of ambitious, innovative, and imaginative projects that are brought to life through the direct support of others.

How does it work?

Every project creator sets their project’s funding goal and deadline. If people like the project, they can pledge money to make it happen. If the project succeeds in reaching its funding goal, all backers’ credit cards are charged when time expires. Funding on Kickstarter is all-or-nothing. If the project falls short of its funding goal, no one is charged.

If a project is successfully funded, Kickstarter applies a 5% fee to the funds collected.

Here’s a link to a successful Kickstarter campaign page for a children’s adventure book.

Back to the Kickstarter comment about AI Images and Art.

Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

First of all, the creation of AI software and systems is quite a startling and original creative process in and of itself. And there are actual humans using their wetware who have created all the AI art programs with which PG is familiar. (He has no doubt that at some future time, someone will create an AI mother ship generator program devoted to creating lots of offspring AI programs with little or no additional human input for a variety of purposes.)

Under the current AI programming practices of which PG is aware, part of the development process is to feed a large volume of copies of creative work into the program to seed it. If it’s an AI art generation program, copies of a whole bunch of images are used as seed material. An AI text generation program ingests a whole lot of human-generated text as seed material.

For those who are thinking about copyright infringement on a massive scale, PG doesn’t think that any AI program will create a copy of any of the original works used to seed it. In this sense, it resembles a human author or painter who studies a great many literary or artistic works during the process of learning how to paint or write fiction or non-fiction.

One of the more common accounts of the first step of the creation of a new and original creative work goes something like, “I was wandering through the Metropolitan Museum of Art one afternoon and stopped in front of the Picasso portrait of Gertrude Stein . . . .” or, “While reading The Grapes of Wrath, I wondered what a 21st Century version of that book would look like.”

For a further example, F. Scott Fitzgerald said he was inspired to write The Great Gatsby by the grand parties he attended on prosperous Long Island, where he got a front-row view of the elite, moneyed class of the 1920s, a culture he longed to join but never could. Some of those familiar with that part of Fitzgerald’s life claim to know exactly which people he closely modeled for some of the characters in his book.

So, back to the Kickstarter AI statement of concern that, “We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.”

What if the creators of an AI writing program included the text of Fairy Tale by Stephen King along with the text of 20,000 other novels into the seed for a novel-writing AI?

Based on PG’s brief experience with and understanding of how current AI writing programs work, he suspects that, regardless of what writing prompt anyone provided, the novel-writing AI program would not produce the text of Fairy Tale or anything close enough to support a copyright suit by King or his publishers.

As for Kickstarter’s mention of attribution of the authors of the seed material for any given AI text generation program, the list would be so long that no one would read it. As for Kickstarter’s concern about requiring the consent of each author whose work went into the AI’s maw during its creation, PG doesn’t think consent is necessary: the AI isn’t reproducing the authors’ work, the use of the text would be fair use under current copyright law, and there are no damages for copyright infringement because the AI program doesn’t replicate the original material.

Can AI Write Authentic Poetry?

From The MIT Press Reader:

“Time — a few centuries here or there — means very little in the world of poems.” There is something reassuring about Mary Oliver’s words. Especially in an era of rapid change, there is comfort to be had in those things that move slowly. But oceans rise and mountains fall; nothing stays the same. Not even the way poetry is made.

The disappearance of the author in 20th-century literary criticism can perhaps be traced back to the surrealist movement and its game of “exquisite corpse.” The surrealists believed that a poem can emerge not only from the unconscious mind of an individual, but from the collective mind of many individuals working in concert — even, or perhaps especially, if each individual has minimal knowledge of what the others are doing. Soon the idea of making art from recycled objects emerged. In the realm of literature, this approach took the form of found poetry.

To create a found poem, one or more people collect bits of text encountered anywhere at all, and with a little editing stitch the pieces together to form a collagelike poem. Examining this generative activity, it may be difficult to identify who if anyone is the “poet” who writes the found poem (or for that matter, to be confident that “writing” is an apt name for the process). Still, even if no one’s consciousness guided the initial creation of the constituent phrases, one or more humans will have exercised their sensitivity and discrimination in selecting the bits to include, and the way these pieces are ordered and linked to form a new whole. The author (or authors) at a minimum must do the work of a careful reader. Can the human be pushed still further into the background, or even out of the picture?

The most radical technological advance of the 20th century might seem to have nothing at all to do with the writing of poetry. If we make a list of the great leaps that led to modern civilization — control of fire, agriculture, the wheel, electricity, and perhaps a few more — the most recent addition is a machine that uses electrons to do computation. The first functioning digital computers were constructed midcentury by Alan Turing and a few others. Over the next not-quite-a-century-yet, computers became enormously faster and more powerful, began to process information in parallel rather than just sequentially, and were linked together into a vast worldwide network known as the internet. Along the way, these devices enabled the creation of artificial versions of a trait previously found only in biological life forms, most notably humans — intelligence.

Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?

So, what is the current state of AI and computer-generated poetry? This is a less central question than might be supposed. Especially in this time of rapid AI advances, the current state of the artificial poetic arts is merely a transitory benchmark. We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity. Computer programs can now learn from enormous sets of data using methods called deep learning. What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance. The question has arisen (semiseriously) whether computer programs ought to be listed as coauthors of scientific papers reporting discoveries to which they contributed. There is no doubt that some forms of creativity are within the reach, and indeed the grasp, of computer programs.

But what about poetry? To evaluate computer-generated poetry, let’s pause to remind ourselves what makes a text work as a poem. A successful poem combines compelling content (what Coleridge called “good sense”) with aesthetically pleasing wordplay (metaphor and other varieties of symbolism), coupled with the various types of sound similarities and constraints of form.

In broad strokes, an automated approach to constructing poems can operate using a generate-then-select method. First, lots of candidate texts are produced, out of which some (a very few, or just one) are then selected as winners worth keeping. Roughly, computer programs can be very prolific in generating, but (to date) have proved less capable at selecting. At the risk of caricature, the computer poet can be likened to the proverbial monkey at the typewriter, pounding out reams of garbage within which the occasional Shakespearean sonnet might be found — with the key difference that the computer operates far more rapidly than any monkey (or human) could. To be fair, the program’s search can be made much less random than the monkey’s typing. Current computer poetry programs usually bring in one or more humans to help in selecting poetic gems embedded in vast quantities of computer-generated ore. An important question, of course, is whether an authentic creator requires some ability to evaluate their own creations. Perhaps, as Oscar Wilde argued, there is a sense in which an artist must act as their own critic — or not be a true artist at all.
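
To make the generate-then-select idea concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the tiny phrase bank stands in for a far more capable generator, and the scoring rule (reward alliteration, penalize length) is a crude stand-in for the selection step described above.

```python
import random

# Invented phrase bank standing in for machine-generated fragments.
OPENINGS = ["the cold light", "a paper moon", "october rain", "the last train"]
VERBS = ["settles on", "forgets", "outlives", "circles"]
ENDINGS = ["the empty pier", "my father's coat", "an unsent letter", "the harbor bells"]

def generate_candidates(n=200):
    """Generate: produce many candidate lines by random recombination."""
    return [
        f"{random.choice(OPENINGS)} {random.choice(VERBS)} {random.choice(ENDINGS)}"
        for _ in range(n)
    ]

def score(line):
    """Select: a crude stand-in for taste -- reward alliteration, penalize length."""
    initials = [word[0] for word in line.split()]
    alliteration = sum(a == b for a, b in zip(initials, initials[1:]))
    return alliteration - 0.05 * len(line.split())

if __name__ == "__main__":
    candidates = generate_candidates()
    for line in sorted(candidates, key=score, reverse=True)[:3]:
        print(line)
```

Real systems replace the phrase bank with a large language model and, as noted above, usually keep a human in the loop for the final selection.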

One use of computers is simply to provide a platform for human generation and selection. The internet makes it easy for large groups of people to collaborate on projects. The kind of collective poetry writing encouraged by the surrealists has evolved into crowdsourcing websites that allow anyone to edit an emerging collective poem. Each contributor gets to play a bit part as author/editor. No doubt some people enjoy participating in the creation of poems by crowdsourcing. It’s less clear whether Sylvia Plath would have associated this activity with “the most ingrown and intense of the creative arts.”

Link to the rest at The MIT Press Reader

PG Postscript – Trigger Warning – Poetry Snob Below

PG harbors substantial doubts that more than 0.1% of all computer engineers know anything about decent poetry. Hint: It’s way more than meter and rhyme.

Because I could not stop for Death, The Waste Land, and The Road Not Taken are much, much more than anything a typical computer engineer (at least in PG’s experience) or an AI is likely to create. And, yes, PG has known and worked with a lot of computer engineers, including some brilliant ones, and appreciates what a talented computer engineer is capable of creating in her/his world.

However, brilliance as a computer engineer does not necessarily transfer to brilliance in understanding poetry and how to create good poetry.

Hemingway in Africa

PG just conducted an experiment with the AI writing tool, ChatGPT.

He ran a Google Search for Ernest Hemingway in Africa and found the following:

In the summer of 1933, Hemingway, Pauline, and a Key West friend traveled to Africa for a three-month safari. Inspired by the legendary hunts of Theodore Roosevelt, Hemingway borrowed funds from Pauline’s uncle, and the couple set off for what would become a productive and iconic journey.

The three months spent on safari offered Hemingway ample time to hunt and enjoy the outdoors. Despite succumbing to a severe illness on the trip that necessitated a hospital recovery, Hemingway’s trophies included a lion and other large game that roamed the African grasslands. Traveling through Kenya and Tanzania, Hemingway hunted the plains of the Serengeti, tracked animals through the bush, and recorded his experiences for later use in his work. His first African safari provided Hemingway with material and inspiration for the novel Green Hills of Africa, and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber.

Hemingway returned to the Dark Continent twenty years later with his fourth wife. They traveled through the Belgian Congo, Rwanda, and Kenya. Once again, Hemingway exercised his skill as a hunter, and brought home many big game trophies. Hemingway’s persona as a fearless adventurer only increased when he suffered through two plane crashes on his second trip to Africa, prompting many news outlets to prematurely run his obituary.

PG then ran this text through ChatGPT. The following is the first result he received.

Ernest Hemingway was an American novelist, short story writer, and journalist. He was known for his adventurous spirit and his love of hunting, and his experiences on safari in Africa played a significant role in his writing. Hemingway’s first African safari, which took place in 1933, inspired him to write several of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway returned to Africa in the 1950s with his fourth wife, and he continued to hunt and write about his experiences there. Despite suffering through two plane crashes on this second trip, Hemingway’s reputation as a fearless adventurer only grew.

Then, PG regenerated exactly the same descriptive text through ChatGPT a second time and received a longer result:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a significant event in his life and provided him with material for some of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway was an avid hunter, and his time spent on safari in Kenya and Tanzania allowed him to pursue this passion while also experiencing the beauty and culture of the continent. Despite falling ill during the trip, Hemingway was able to bring home many big game trophies, including a lion.

Hemingway returned to Africa twenty years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, Hemingway once again exercised his hunting skills and brought back a number of big game trophies. He also experienced two plane crashes, which only served to enhance his reputation as a fearless adventurer. Despite these close calls, Hemingway continued to travel and write about his adventures, leaving a lasting legacy as a pioneering figure in literature and journalism.

Then, PG regenerated the original text a third time and received the following:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a three-month safari that took him through Kenya and Tanzania, where he hunted on the Serengeti and tracked animals in the bush. The trip inspired some of his most famous works, including the novel Green Hills of Africa and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber. Hemingway returned to Africa 20 years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, he experienced two plane crashes, which only added to his reputation as a fearless adventurer. Despite these incidents, Hemingway survived and continued to write and hunt throughout his life.

Routine Writing Is About to Be Free

From Virginia’s Newsletter:

I know two kinds of people: those who have been obsessively playing with and discussing ChatGPT and those who have at best a hazy notion that it exists. I’m in the obsessive group, as you already know if you read the Tennyson experiment I posted earlier.

For those in the hazy group, ChatGPT is a system that uses massive amounts of text to create a predictive model that enables it to mimic human writing. The shorthand is that it’s an AI chatbot, or autofill on steroids. You type in a request and it spits out an answer. This CNET column provides a solid backgrounder:

For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that’ll show you all the different ways you can arrange the letters of a word.

Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.
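
The “arrange the letters of a word” example a couple of paragraphs above is typical of the small programs people ask ChatGPT to write. As a point of reference (this is an ordinary hand-written version, not ChatGPT’s output), the whole task fits in a few lines of Python:

```python
from itertools import permutations

def letter_arrangements(word: str) -> list[str]:
    """Return every distinct ordering of the letters in `word`."""
    return sorted({"".join(p) for p in permutations(word)})

if __name__ == "__main__":
    print(letter_arrangements("cat"))  # ['act', 'atc', 'cat', 'cta', 'tac', 'tca']
```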

Even in its current, relatively primitive form ChatGPT portends both huge productivity increases and major disruptions in any enterprise in which writing matters. Instead of writing boilerplate corporate memos, managers will soon assign them to bots. The run-of-the-mill college grads who get paid to flood my mailbox with press releases and promotional emails should start thinking about careers as nail techs or phlebotomists—something in the physical world. Insight and beauty are still rare, but serviceable prose isn’t.

With the right prompts, ChatGPT can already mimic routine political journalism and quotidian high school and college essays. “What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor,” writes Daniel Herman, a humanities teacher at Maybeck High School, a small independent school in Berkeley, in The Atlantic.

I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:

In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.

The application essay is vapid but convincing. The variety of word choice (“blossomed,” “bittersweet”) and sentence structure marks it above average. “Had the opportunity to” is a stylistic tell: Here’s a privileged person who’s been taught to sound grateful rather than to write succinctly. “I was also able to…” is the same. I’m sure admissions officers see thousands of such essays every year. If their value goes to zero thanks to automation, this reader, writer, and teacher won’t object.

While crashing the value of mediocrity, ChatGPT could increase the returns to excellence. (“Average is over,” as Tyler Cowen put it.) Think about what happened to graphic design. Many people used to make a living doing routine tasks, from laying out pages to selecting typefaces, that are now easily handled by software. Thanks to the graphic intelligence embedded in everyday tools, the standards for routine graphics, from websites and PowerPoint presentations to restaurant menus and wedding invitations, have increased.

. . . .

As I write, there are 28 student papers awaiting my grading attention. I doubt any used ChatGPT, partly because mentioning it in class produced mostly blank stares. (The most tuned-in student, however, said he’s started using it in place of Google.) Already, we’re getting confirmed reports of cheating on exams given on Canvas, the web-based system used by many colleges for assignments and grading. By next term, every class will have to take account of ChatGPT, either explicitly incorporating it as a starting point or going back to handwritten tests and essays.

The kind of mediocre writing that earns grade-inflated Bs is now replaceable by a bot. Maybe if those B-essay students started with AI-generated prose it would be easier to teach them to do better: to refine the ideas, dig down more on the facts, improve the writing style. Can ChatGPT be a time-saving tool, like a calculator or text search, rather than a threat?

Link to the rest at Virginia’s Newsletter

The End of High-School English

From The Atlantic:

Teenagers have always found ways around doing the hard work of actual learning. CliffsNotes dates back to the 1950s, “No Fear Shakespeare” puts the playwright into modern English, YouTube offers literary analysis and historical explication from numerous amateurs and professionals, and so on. For as long as those shortcuts have existed, however, one big part of education has remained inescapable: writing. Barring outright plagiarism, students have always arrived at that moment when they’re on their own with a blank page, staring down a blinking cursor, the essay waiting to be written.

Now that might be about to change. The arrival of OpenAI’s ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogether—and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill.

If you’re looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.

I teach a variety of humanities classes (literature, philosophy, religion, history) at a small independent high school in the San Francisco Bay Area. My classes tend to have about 15 students, their ages ranging from 16 to 18. This semester I am lucky enough to be teaching writers like James Baldwin, Gloria Anzaldúa, Herman Melville, Mohsin Hamid, Virginia Held. I recognize that it’s a privilege to have relatively small classes that can explore material like this at all. But at the end of the day, kids are always kids. I’m sure you will be absolutely shocked to hear that not all teenagers are, in fact, so interested in having their mind lit on fire by Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick.

To those students, I have always said: You may not be interested in poetry or civics, but no matter what you end up doing with your life, a basic competence in writing is an absolutely essential skill—whether it’s for college admissions, writing a cover letter when applying for a job, or just writing an email to your boss.

I’ve also long held, for those who are interested in writing, that you need to learn the basic rules of good writing before you can start breaking them—that, like Picasso, you have to learn how to reliably fulfill an audience’s expectations before you get to start putting eyeballs in people’s ears and things.

I don’t know if either of those things is true anymore. It’s no longer obvious to me that my teenagers actually will need to develop this basic skill, or if the logic still holds that the fundamentals are necessary for experimentation.

Let me be candid (with apologies to all of my current and former students): What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor. Over the past few days, I’ve given it a number of different prompts. And even if the bot’s results don’t exactly give you goosebumps, they do a more-than-adequate job of fulfilling a task.

I mean, look at this: I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:

In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.

It also managed to compose a convincing 400-word “friendly” cover letter for an application to be a manager at Starbucks. But most jaw-dropping of all, on a personal level: It made quick work out of an assignment I’ve always considered absolutely “unhackable.” In January, my junior English students will begin writing an independent research paper, 12 to 18 pages, on two great literary works of their own choosing—a tradition at our school. Their goal is to place the texts in conversation with each other and find a thread that connects them. Some students will struggle to find any way to bring them together. We spend two months on the paper, putting it together piece by piece.

I’ve fed GPT a handful of pairs that students have worked with in recent years: Beloved and Hamlet, The Handmaid’s Tale and The Parable of the Sower, Homer’s The Odyssey and Dante’s Inferno. GPT brought them together instantly, effortlessly, uncannily: memory, guilt, revenge, justice, the individual versus the collective, freedom of choice, societal oppression. The technology doesn’t go much beyond the surface, nor does it successfully integrate quotations from the original texts, but the ideas presented were on-target—more than enough to get any student rolling without much legwork.

It goes further. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.

Link to the rest at The Atlantic

Describe Donald Trump in the Style of A Visit from St. Nicholas

‘Twas the night before Christmas, and all through the White House,
Not a creature was stirring, not even a mouse.
The stockings were hung by the chimney with care,
In hopes that St. Nicholas soon would be there.

But in the Oval Office, a man sat alone,
With a scowl on his face and a phone in his hand.
He was Donald Trump, the President of the United States,
And he was not happy, no, not in the least.

For he had just learned of new scandal and scandal
That threatened to undermine his administration and candor.
He paced back and forth, his mind in a whirl,
As he tried to come up with a plan to save his political world.

But just as he was about to give up hope,
He heard a noise outside and went to the window to scope.
And what to his wondering eyes did appear,
But a miniature sleigh and eight tiny reindeer.

And there was St. Nicholas, with his beard white as snow,
And a twinkle in his eye that seemed to glow.
He spoke not a word, but went straight to his work,
And delivered a package, then turned with a smirk.

And Trump, in amazement, watched as St. Nicholas rose,
And flew off into the night, with a flick of his nose.
He knew in that moment, with a sudden clarity,
That he must change his ways and work for the country.

So he put down his phone and sat down at his desk,
And began to work on a plan that he felt was best.
And as the night wore on, he found new resolve,
To make America great and to solve
The problems that faced the nation,
And to put the people’s needs above his own ambition.

And so, as Christmas morning dawned bright and clear,
Trump knew that he had a new mission this year.
To lead with integrity and to always do right,
And to make America great, with all of his might.

https://chat.openai.com/chat

Describe Vladimir Putin in the Style of Dr. Seuss

Vladimir Putin is a man with a face so stern
He rules Russia with a power that burns
With a scowl on his brow and a cold, steely gaze
He keeps his country in line in a variety of ways

With his tough talk and his strong, decisive moves
He’s not one to back down or to play by the rules
He’s a leader with a mission and a vision in mind
And he’ll stop at nothing to achieve his design

Some call him a tyrant, while others see a hero
But one thing is certain, he’s a force to be reckoned with, oh zero
Love him or hate him, there’s no denying his power
Vladimir Putin is a leader to watch in the coming hour.

https://chat.openai.com/chat

Describe Ukraine in the Style of Mickey Spillane

The Ukraine is a tough, rugged country located in Eastern Europe. It’s a place where the streets are tough, and the people are even tougher. The people of the Ukraine are known for their hard work and determination, and they’ve had to fight for everything they have.

The Ukraine has a long and storied history, filled with violence and conflict. For centuries, the country has been caught in the middle of power struggles between Europe and Asia, and it has suffered through its fair share of wars and revolutions. But through it all, the Ukrainian people have persevered, and they remain fiercely independent and proud of their heritage.

Despite its many challenges, the Ukraine is a land of opportunity. The country is rich in natural resources, and its people are known for their resourcefulness and ingenuity. The Ukraine may not be an easy place to live, but for those who are willing to roll up their sleeves and get to work, it can be a land of great rewards.

https://chat.openai.com/chat

Children’s book created with AI sparks controversy and accusations of plagiarism

From Windows Central:

What you need to know

  • An artist created a children’s book using artificial intelligence tools.
  • The book has drawn criticism and the author has been accused of plagiarism, since AI created the content of the book.
  • AI tools have caused controversy in other sectors as well, including when a digital art piece that was created with AI won a competition.

The tool ChatGPT is a hot button topic these days. The tool, which uses artificial intelligence to respond to prompts, can be used to create code, respond to questions, or create blocks of text. It’s one of many AI tools that’s taking headlines by storm, due in part to controversy surrounding what people can do with those tools.

Within one week of ChatGPT launching in preview, it was clear that the tool could be used for a range of projects that are generally considered positive, such as debugging code. It was also demonstrated that the tool could create malware or be used in other malicious ways. Now, ChatGPT and other AI resources have drawn criticism for being used to create “original” work.

Ammaar Reshi, a design manager at Brex, created a children’s book using ChatGPT, MidJourney, and other AI tools. Some have credited the book for its unique origin while others have accused Reshi of plagiarism.

. . . .

One of the strengths of ChatGPT is that it’s conversational. Reshi used this fact to refine his story. He then put his ideas through MidJourney, an AI tool for generating art. After several hours of work, Reshi took the AI-generated text and artwork to create a children’s book titled “Alice and Sparkle,” which is available through Amazon.

Link to the rest at Windows Central and thanks to F. for the tip.

PG notes that when he checked the Amazon listing, the book was tagged as the #1 New Release in Children’s Computer & Technology Books. It also had eleven ratings and two stars.

PG will repeat what he has said previously – this sort of thing is going to happen over and over with the written word, just like it has with images.

He will also state that plagiarism, while not a nice thing to do, is not illegal. Copyright infringement is illegal and you can be sued for damages if you engage in the practice.

Here’s a simple definition of copyright infringement from the United States Copyright office:

As a general matter, copyright infringement occurs when a copyrighted work is reproduced, distributed, performed, publicly displayed, or made into a derivative work without the permission of the copyright owner.

PG notes that academic dishonesty, absent copyright infringement, does not violate any law with which he is familiar.

That said, plagiarism is usually regarded as bad form and, in an era dominated by Google and other large search engines, academic suicide. Grammarly offers a plagiarism checker, as do quite a number of other online services, many of which are free. PG would be very surprised if very many college professors and high school teachers did not use a plagiarism checker on a regular basis.

Here’s a not-so-simple definition of plagiarism from the Dean of Students at Bowdoin College:

There are different types of plagiarism and all are serious violations of academic honesty. We have defined the most common types below and have provided links to examples.

Direct Plagiarism 

Direct plagiarism is the word-for-word transcription of a section of someone else’s work, without attribution and without quotation marks. The deliberate plagiarism of someone else’s work is unethical, academically dishonest, and grounds for disciplinary actions, including expulsion. [See examples.]

Self Plagiarism

Self-plagiarism occurs when a student submits his or her own previous work, or mixes parts of previous works, without permission from all professors involved. For example, it would be unacceptable to incorporate part of a term paper you wrote in high school into a paper assigned in a college course. Self-plagiarism also applies to submitting the same piece of work for assignments in different classes without previous permission from both professors.

Mosaic Plagiarism

Mosaic Plagiarism occurs when a student borrows phrases from a source without using quotation marks, or finds synonyms for the author’s language while keeping to the same general structure and meaning of the original. Sometimes called “patch writing,” this kind of paraphrasing, whether intentional or not, is academically dishonest and punishable – even if you footnote your source! [See examples.]

Accidental Plagiarism

Accidental plagiarism occurs when a person neglects to cite their sources, or misquotes their sources, or unintentionally paraphrases a source by using similar words, groups of words, and/or sentence structure without attribution. (See example for mosaic plagiarism.) Students must learn how to cite their sources and to take careful and accurate notes when doing research. (See the Note-Taking section on the Avoiding Plagiarism page.) Lack of intent does not absolve the student of responsibility for plagiarism. Cases of accidental plagiarism are taken as seriously as any other plagiarism and are subject to the same range of consequences as other types of plagiarism.

Note to those unfamiliar with Bowdoin College: Bowdoin is generally regarded as one of the “Little Ivies,” small selective and reputable academic institutions that are located in the same general geographical area as the Ivy League schools – Harvard, Princeton, etc.

As with the Big Ivies, the Little Ivies also have terrible football teams.

Feel free to check Alice and Sparkle for any sorts of things you desire. There’s a problem with the preview link, but the book is available online at no charge via Kindle Unlimited.

Why I’m Done Using And Boosting AI Art

From Chuck Wendig: Terribleminds:

Let’s just put it out there and up front — earlier, I was glad to play around with AI art, but that has ended. I have no intention at present of mucking around with AI art, signal-boosting it, or supporting it. I had a subscription to Midjourney, and I canceled it.

Now, to rewind a little —

I think AI art is pretty cool.

I know, I know — I just said, but I won’t support it, and that’s true.

But I think it’s neat, in a general sense. It’s like, we can make COMPUTER ROBOT GHOSTS do all kinds of cool things for us — they can tell us the weather, show us how to get to the mall, I can yell at my car to turn on the heat and it’ll totally do it, Gmail can already predict the response I’m going to make and start to prep it for me. The robot ghosts are cool. So, the ability to say, HEY ROBOT GHOST, SHOW ME WEREWOLF PIKACHU USING A NEW POKEMON MOVE CALLED “CORUSCATING ELECTRIC ANUS” ON A KAIJU VERSION OF JERRY SEINFELD and then somehow it sorta does it, well, I don’t hate that.

Now, admittedly, when I started mucking about with AI art in the long-times-ago epoch of, mmm, six months ago, what it produced was often fiddly and hilarious and straight-up ****** weird. It would still have eyeballs in places where there shouldn’t be. Some guy’s face might look like a smear of paint, and his hand would have sixteen fingers. You might squint and see Sophia from the Golden Girls mysteriously hiding in the wallpaper. It felt a bit like you watching a robot dream. Like you were privy to the growth of its creative mind.

(It’s a lie, of course. There’s no robot dreaming; that is a romantic, anthropomorphic notion.)

But it didn’t take long for the results to get… good. Real good. Freaky good. You plug in something and what returns is a foursquare array of nearly exactly what you asked for, in a variety of art styles and modes. Which, one might argue, is quite the point of this whole affair, and I suppose it is, though I’ll also note for my mileage it also kinda defeats, if not the point, then at least the delight of having a robot puke up something just super ****** weird instead of precisely what you asked for. We were training the robot well. And it was learning fast.

And now, you see the so-called AI art everywhere, and you also see those who are mad at so-called AI art everywhere. And the latter category is often artists. Not always! But often enough.

As such, I’m going to side with the artists.

Link to the rest at Chuck Wendig: Terribleminds

PG wants writers and artists to live long and prosper and thinks that they will do just that. While he understands how they may worry about AI tools diminishing their income, he doesn’t think AI will replace them.

Picture Limitless Creativity at Your Fingertips

From Wired:

Picture Lee Unkrich, one of Pixar’s most distinguished animators, as a seventh grader. He’s staring at an image of a train locomotive on the screen of his school’s first computer. Wow, he thinks. Some of the magic wears off, however, when Lee learns that the image had not appeared simply by asking for “a picture of a train.” Instead, it had to be painstakingly coded and rendered—by hard-working humans.

Now picture Lee 43 years later, stumbling onto DALL-E, an artificial intelligence that generates original works of art based on human-supplied prompts that can literally be as simple as “a picture of a train.” As he types in words to create image after image, the wow is back. Only this time, it doesn’t go away. “It feels like a miracle,” he says. “When the results appeared, my breath was taken away and tears welled in my eyes. It’s that magical.”

Our machines have crossed a threshold. All our lives, we have been reassured that computers were incapable of being truly creative. Yet, suddenly, millions of people are now using a new breed of AIs to generate stunning, never-before-seen pictures. Most of these users are not, like Lee Unkrich, professional artists, and that’s the point: They do not have to be. Not everyone can write, direct, and edit an Oscar winner like Toy Story 3 or Coco, but everyone can launch an AI image generator and type in an idea. What appears on the screen is astounding in its realism and depth of detail. Thus the universal response: Wow. On four services alone—Midjourney, Stable Diffusion, Artbreeder, and DALL-E—humans working with AIs now cocreate more than 20 million images every day. With a paintbrush in hand, artificial intelligence has become an engine of wow.

Because these surprise-generating AIs have learned their art from billions of pictures made by humans, their output hovers around what we expect pictures to look like. But because they are an alien AI, fundamentally mysterious even to their creators, they restructure the new pictures in a way no human is likely to think of, filling in details most of us wouldn’t have the artistry to imagine, let alone the skills to execute. They can also be instructed to generate more variations of something we like, in whatever style we want—in seconds. This, ultimately, is their most powerful advantage: They can make new things that are relatable and comprehensible but, at the same time, completely unexpected.

So unexpected are these new AI-generated images, in fact, that—in the silent awe immediately following the wow—another thought occurs to just about everyone who has encountered them: Human-made art must now be over. Who can compete with the speed, cheapness, scale, and, yes, wild creativity of these machines? Is art yet another human pursuit we must yield to robots? And the next obvious question: If computers can be creative, what else can they do that we were told they could not?

I have spent the past six months using AIs to create thousands of striking images, often losing a night’s sleep in the unending quest to find just one more beauty hidden in the code. And after interviewing the creators, power users, and other early adopters of these generators, I can make a very clear prediction: Generative AI will alter how we design just about everything. Oh, and not a single human artist will lose their job because of this new technology.

Link to the rest at Wired

This artist is dominating AI-generated art. And he’s not happy about it.

From MIT Technology Review:

Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.

Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

His distinctive style is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month. The tool, along with other popular image-generation AI models, allows anyone to create impressive images based on text prompts.

For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and the system will produce something that looks not a million miles away from works in Rutkowski’s style.
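
The OP doesn’t say which interface people use for these prompts, but because Stable Diffusion is open source, the prompt-to-image step is easy to reproduce. Here is a minimal sketch using Hugging Face’s diffusers library; the model ID and the assumption of a CUDA GPU are illustrative choices, not details from the article.

```python
# Minimal text-to-image sketch with the open-source diffusers library.
# Assumes a CUDA GPU and the publicly hosted Stable Diffusion v1.5 weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Style-by-artist-name prompting of the kind described in the article.
prompt = ("Wizard with sword and a glowing orb of magic fire fights "
          "a fierce dragon, Greg Rutkowski")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("wizard_vs_dragon.png")
```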

But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, turn up in around 2,000 prompts each or fewer. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.

Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”

Stability.AI, the company that built Stable Diffusion, trained the model on the LAION-5B data set, which was compiled by the German nonprofit LAION. LAION put the data set together and narrowed it down by filtering out watermarked images and those that were not aesthetic, such as images of logos, says Andy Baio, a technologist and writer who downloaded and analyzed some of Stable Diffusion’s data. Baio analyzed 12 million of the 600 million images used to train the model and found that a large chunk of them come from third-party websites such as Pinterest and art shopping sites such as Fine Art America.
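
The article doesn’t describe Baio’s exact tooling, but the core of that kind of analysis is simply tallying where the image URLs point. As a rough, assumption-laden sketch (the CSV file name and its “url” column are made up for the example), counting source domains might look like this:

```python
# Rough sketch: tally source domains in a sample of training-image URLs.
# "laion_sample.csv" and its "url" column are assumed names, not from the article.
import csv
from collections import Counter
from urllib.parse import urlparse

domains = Counter()
with open("laion_sample.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        domains[urlparse(row["url"]).netloc] += 1

for domain, count in domains.most_common(20):
    print(f"{domain}\t{count}")
```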

Many of Rutkowski’s artworks have been scraped from ArtStation, a website where lots of artists upload their online portfolios. His popularity as an AI prompt stems from a number of reasons.

First, his fantastical and ethereal style looks very cool. He is also prolific, and many of his illustrations are available online in high enough quality, so there are plenty of examples to choose from. An early text-to-image generator called Disco Diffusion offered Rutkowski as an example prompt.

Rutkowski has also added alt text in English when uploading his work online. These descriptions of the images are useful for people with visual impairments who use screen reader software, and they help search engines rank the images as well. This also makes them easy to scrape, and the AI model knows which images are relevant to prompts.

. . . .

Other artists besides Rutkowski have been surprised by the apparent popularity of their work in text-to-image generators—and some are now fighting back. Karla Ortiz, an illustrator based in San Francisco who found her work in Stable Diffusion’s data set, has been raising awareness about the issues around AI art and copyright.

Artists say they risk losing income as people start using AI-generated images based on copyrighted material for commercial purposes. But it’s also a lot more personal, Ortiz says, arguing that because art is so closely linked to a person, it could raise data protection and privacy problems.

“There is a coalition growing within artist industries to figure out how to tackle or mitigate this,” says Ortiz. The group is in its early days of mobilization, which could involve pushing for new policies or regulation.

Link to the rest at MIT Technology Review and thanks to R. for the tip in the comments.

PG predicts there will be more than one copyright infringement suit filed against various individuals, institutions and companies providing AI services in which an artist’s copyrighted work was used to seed the AI.

In the United States, such suits will almost certainly be filed in the federal court system since copyright is governed by federal law. Some states have laws that appear to give the state, or those to whom the state has granted permission, exclusive rights to publish and sell copies of state documents, but trying to protect a creative work from republication under anything other than federal copyright law and the decisions interpreting it is generally regarded as a fool’s errand.

One thing that judges do when faced with a novel question is to draw from similar situations that have occurred previously.

As one crude example, if an individual uses a computer and a software program created by third parties to make an exact copy of the text of a copyright-protected book, neither the manufacturer of the computer nor the company that created and sold the word processing program will be liable for copyright infringement, because they only provided the tools and the individual used those tools in the manner the individual chose.

AI art programs require a prompt to create any images. The OP mentions the use of an artist’s name in an AI prompt as one way of generating an image.

However, that decision is not made by the creators/owners of the AI program, but rather by the user. The creators of the AI program ran a huge number of images by an enormous number of artists through the program’s processor. Is it a violation of copyright law to link an artist’s name to a painting the artist created? PG doesn’t think so.

As a matter of fact, using Mr. Rutkowski’s work without attribution would also be an offense against the creator.

PG doesn’t see the creation of works “inspired by” an artist constituting copyright infringement when they aren’t copies of what the artist created and don’t closely resemble the artist’s work. PG doesn’t believe that an artistic style is protected by copyright.

PG’s understanding of the way AI art programs work is that they deconstruct the original images into component parts and assign some sort of marker to those parts, so that a prompt for a large building in the style of the British Museum won’t generate an image dominated by a dragon.

PG just created a prompt, “Windsor Castle sailing on the ocean” and ran it through an AI art generator. Here’s what he got.

Next, PG modified his prompt to read “Windsor Castle sailing on the ocean Greg Rutkowski” and this is what he got:

For one last experiment PG created another prompt with a different artist, “Windsor Castle sailing on the ocean Andy Warhol” and here’s what showed up.

PG is not an art expert, but he doesn’t think any of his AI illustrations will put either Mr. Rutkowski or Mr. Warhol out of business.
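
For readers curious how such experiments are run, here is a minimal sketch using the open-source Hugging Face diffusers library with the publicly released Stable Diffusion weights. The model identifier and file names are illustrative assumptions; this is a generic example, not the particular generator PG used.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint (illustrative model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The prompt is the only place an artist's name enters the process;
# the user, not the model's creators, decides whether to include one.
for prompt in [
    "Windsor Castle sailing on the ocean",
    "Windsor Castle sailing on the ocean Greg Rutkowski",
    "Windsor Castle sailing on the ocean Andy Warhol",
]:
    image = pipe(prompt).images[0]
    image.save(prompt.replace(" ", "_") + ".png")
```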

How Artificial Intelligence Helped Make an Experimental Pop Album

From Smithsonian Magazine:

When you listen to experimental pop band YACHT’s discography, the 2019 ​​album Chain Tripping fits right in. Pulsing with glitchy synth sounds, infectious bass riffs and the sweet voice of lead singer Claire L. Evans, Chain Tripping is a successful, Grammy-nominated album that sounds like YACHT, through and through. It’s such a success, you might never know it was generated by artificial intelligence.

But it was: Every riff, melody and lyric on Chain Tripping was developed by A.I. systems—even the album’s title.

That strange, tedious process—at times frustrating and at times awe-inspiring—is now the subject of The Computer Accent, a new documentary from directors Sebastian Pardo and Riel Roch-Decter that is “heartening or horrifying depending on your viewpoint,” according to the film’s own synopsis.

. . . .

To make Chain Tripping, the members of YACHT transformed their entire back catalog into MIDI data. (MIDI, which stands for musical instrument digital interface, allows electronic instruments and computers to communicate.) They then fed that data, piece by piece, to machine learning models—primarily Google’s MusicVAE, which helps artists “create palettes for blending and exploring musical scores.” YACHT followed the same process with songs by their musical inspirations and peers, and fed the lyrics of their songs into a lyric-generating model to come up with words.
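
As a rough illustration of that kind of pipeline (not YACHT’s actual workflow), here is a minimal sketch using Google’s open-source Magenta tooling: a MIDI file is read into the note-sequence format the models consume, and a pretrained MusicVAE checkpoint is sampled for new melodies. The input file and checkpoint path are hypothetical.

```python
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Read a catalog track into the NoteSequence format (MIDI-derived data).
# In a real pipeline this material would be chopped up and fed to the
# models piece by piece, as the article describes.
catalog_track = note_seq.midi_file_to_note_sequence("yacht_track.mid")  # hypothetical file

# Load a pretrained 2-bar melody MusicVAE checkpoint (hypothetical local path).
model = TrainedModel(
    configs.CONFIG_MAP["cat-mel_2bar_big"],
    batch_size=4,
    checkpoint_dir_or_path="checkpoints/cat-mel_2bar_big.ckpt",
)

# Sample new 2-bar melodies for a human to arrange into songs.
samples = model.sample(n=4, length=32, temperature=1.0)
for i, melody in enumerate(samples):
    note_seq.sequence_proto_to_midi_file(melody, f"generated_{i}.mid")
```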

Though A.I. generated the building blocks—melodies, riffs, beats and lyrics—the members of YACHT (which, fittingly, is an acronym for Young Americans Challenging High Technology) still had to manipulate them into complete songs.

“It wasn’t something where we fed something into a model, hit print and had songs,” Evans told Ars Technica’s Nathan Mattise in 2019. “We’d have to be involved. There’d have to be a human involved at every step of the process to ultimately make music … The larger structure, lyrics, the relationship between lyrics and structure—all of these other things are beyond the technology’s capacity, which is good.”

. . . .

Evans and her bandmates, Jona Bechtolt and Rob Kieswetter, hand-selected their favorite A.I. generations and then arranged them into the songs that make up Chain Tripping. They set rules for themselves: “We can’t add anything. We can’t improvise anything. We can’t harmonize,” Bechtolt told KCRW’s Madeleine Brand in 2019. “We decided it would be just a subtractive process. So we could remove things, like we could take out a word, but we couldn’t add a word for the lyrics. Same with the drum patterns and the melodies.”

Link to the rest at Smithsonian Magazine

AI is changing scientists’ understanding of language learning – and raising questions about an innate grammar

From The Conversation:

Unlike the carefully scripted dialogue found in most books and movies, the language of everyday interaction tends to be messy and incomplete, full of false starts, interruptions and people talking over each other. From casual conversations between friends, to bickering between siblings, to formal discussions in a boardroom, authentic conversation is chaotic. It seems miraculous that anyone can learn language at all given the haphazard nature of the linguistic experience.

For this reason, many language scientists – including Noam Chomsky, a founder of modern linguistics – believe that language learners require a kind of glue to rein in the unruly nature of everyday language. And that glue is grammar: a system of rules for generating grammatical sentences.

Children must have a grammar template wired into their brains to help them overcome the limitations of their language experience – or so the thinking goes.

This template, for example, might contain a “super-rule” that dictates how new pieces are added to existing phrases. Children then only need to learn whether their native language is one, like English, where the verb goes before the object (as in “I eat sushi”), or one like Japanese, where the verb goes after the object (in Japanese, the same sentence is structured as “I sushi eat”).

. . . .

But new insights into language learning are coming from an unlikely source: artificial intelligence. A new breed of large AI language models can write newspaper articles, poetry and computer code and answer questions truthfully after being exposed to vast amounts of language input. And even more astonishingly, they all do it without the help of grammar.

Grammatical language without a grammar

Even if their choice of words is sometimes strange or nonsensical, or contains racist, sexist and other harmful biases, one thing is very clear: the overwhelming majority of the output of these AI language models is grammatically correct. And yet, there are no grammar templates or rules hardwired into them – they rely on linguistic experience alone, messy as it may be.

GPT-3, arguably the most well-known of these models, is a gigantic deep-learning neural network with 175 billion parameters. It was trained to predict the next word in a sentence given what came before across hundreds of billions of words from the internet, books and Wikipedia. When it made a wrong prediction, its parameters were adjusted using an automatic learning algorithm.

Remarkably, GPT-3 can generate believable text reacting to prompts such as “A summary of the last ‘Fast and Furious’ movie is…” or “Write a poem in the style of Emily Dickinson.” Moreover, GPT-3 can respond to SAT-level analogies and reading comprehension questions, and can even solve simple arithmetic problems – all from learning how to predict the next word.

Comparing AI models and human brains

The similarity with human language doesn’t stop here, however. Research published in Nature Neuroscience demonstrated that these artificial deep-learning networks seem to use the same computational principles as the human brain. The research group, led by neuroscientist Uri Hasson, first compared how well GPT-2 – a “little brother” of GPT-3 – and humans could predict the next word in a story taken from the podcast “This American Life”: people and the AI predicted the exact same word nearly 50% of the time.

The researchers recorded volunteers’ brain activity while they listened to the story. The best explanation for the patterns of activation they observed was that people’s brains – like GPT-2 – were not just using the preceding one or two words when making predictions but relied on the accumulated context of up to 100 previous words. Altogether, the authors conclude: “Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning.”
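
For the curious, querying an open model like GPT-2 for its next-word predictions takes only a few lines. Here is a minimal sketch using the Hugging Face transformers library; the example sentence is illustrative, not the podcast transcript used in the study.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "The researchers recorded the volunteers' brain"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, given everything so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```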

Link to the rest at The Conversation

We have seen AI providing conversation and comfort to the lonely

We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.

Andrew Ng, Co-founder and lead of Google Brain

The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.

From Slate:

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity.

Given this meteoric ascent, it’s not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves. But when you look beyond the splashy headlines, you’ll see that the real danger isn’t how smart A.I.s are. It’s how mindless they are—and how delusional we tend to be about their so-called intelligence.

Last summer an engineer at Google claimed the company’s latest A.I. chatbot is a sentient being because … it told him so. This chatbot, similar to the one Facebook’s parent company recently released publicly, can indeed give you the impression you’re talking to a futuristic, conscious creature. But this is an illusion—it is merely a calculator that chooses words semi-randomly based on statistical patterns from the internet text it was trained on. It has no comprehension of the words it produces, nor does it have any thoughts or feelings. It’s just a fancier version of the autocomplete feature on your phone.

Chatbots have come a long way since early primitive attempts in the 1960s, but they are no closer to thinking for themselves than they were back then. There is zero chance a current A.I. chatbot will rebel in an act of free will—all they do is turn text prompts into probabilities and then turn these probabilities into words. Future versions of these A.I.s aren’t going to decide to exterminate the human race; they are going to kill people when we foolishly put them in positions of power that they are far too stupid to have—such as dispensing medical advice or running a suicide prevention hotline.
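
The “probabilities into words” step the author describes really is that simple. Here is a toy sketch; the distribution is made up for illustration, but the sampling step is the same idea used, at vastly larger scale, inside a chatbot.

```python
import random

# A made-up next-word distribution of the kind a language model produces
# at each step of generation.
next_word_probs = {"doctor": 0.45, "nurse": 0.30, "robot": 0.15, "banana": 0.10}

# "Semi-random" choice: sample in proportion to the probabilities rather
# than always picking the single most likely word.
words, weights = zip(*next_word_probs.items())
print(random.choices(words, weights=weights, k=1)[0])
```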

It’s been said that TikTok’s algorithm reads your mind. But it’s not reading your mind—it’s reading your data. TikTok finds users with similar viewing histories as you and selects videos for you that they’ve watched and interacted with favorably. It’s impressive, but it’s just statistics. Similarly, the A.I. systems used by Facebook and Instagram and Twitter don’t know what information is true, what posts are good for your mental health, what content helps democracy flourish—all they know is what you and others like you have done on the platform in the past and they use this data to predict what you’ll likely do there in the future.
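
The “just statistics” the author describes is essentially collaborative filtering. Here is a toy sketch of the general technique with made-up data; it is an illustration of the idea, not TikTok’s or Facebook’s actual system.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are videos,
# 1 = watched and liked, 0 = no interaction (made-up data).
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
])

def recommend(user, interactions, k=2):
    """Rank unseen videos by how often the user's nearest neighbors liked them."""
    norms = np.linalg.norm(interactions, axis=1, keepdims=True)
    unit = interactions / np.clip(norms, 1e-9, None)
    similarity = unit @ unit[user]          # cosine similarity to every user
    similarity[user] = -1.0                 # ignore the user themselves
    neighbors = np.argsort(similarity)[::-1][:k]
    scores = interactions[neighbors].sum(axis=0).astype(float)
    scores[interactions[user] > 0] = -1.0   # don't re-recommend seen videos
    return np.argsort(scores)[::-1]

print(recommend(0, interactions))  # video indices, best first
```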

Don’t worry about superintelligent A.I.s trying to enslave us; worry about ignorant and venal A.I.s designed to squeeze every penny of online ad revenue out of us.

And worry about police agencies that gullibly think A.I.s can anticipate crimes before they occur—when in reality all they do is perpetuate harmful stereotypes about minorities.

The reality is that no A.I. could ever harm us unless we explicitly provide it the opportunity to do so—yet we seem hellbent on putting unqualified A.I.s in powerful decision-making positions where they could do exactly that.

Part of why we ascribe far greater intelligence and autonomy to A.I.s than they merit is that their inner workings are largely inscrutable. They involve lots of math, lots of computer code, and billions of parameters. This complexity blinds us, and our imagination fills in what we don’t see with more than is actually there.

Link to the rest at Slate

Microsoft brings DALL-E 2 to the masses with Designer and Image Creator

From TechCrunch:

Microsoft is making a major investment in DALL-E 2, OpenAI’s AI-powered system that generates images from text, by bringing it to first-party apps and services. During its Ignite conference this week, Microsoft announced that it’s integrating DALL-E 2 with the newly announced Microsoft Designer app and Image Creator tool in Bing and Microsoft Edge.

With the advent of DALL-E 2 and open source alternatives like Stable Diffusion in recent years, AI image generators have exploded in popularity. In September, OpenAI said that more than 1.5 million users were actively creating over 2 million images a day with DALL-E 2, including artists, creative directors and authors. Brands such as Stitch Fix, Nestlé and Heinz have piloted DALL-E 2 for ad campaigns and other commercial use cases, while certain architectural firms have used DALL-E 2 and tools akin to it to conceptualize new buildings.

“Microsoft and OpenAI have partnered closely since 2019 to accelerate breakthroughs in AI. We have teamed up with OpenAI to develop, test and responsibly scale the latest AI technologies,” Microsoft CVP of modern life, search and devices Liat Ben-Zur told TechCrunch via email. “Microsoft is the exclusive provider of cloud computing services to OpenAI and is OpenAI’s preferred partner for commercializing new AI technologies. We’ve started to do this through programs like the Azure OpenAI Service and GitHub Copilot, and we’ll continue to explore solutions that harness the power of AI and advanced natural language generation.”

Link to the rest at TechCrunch

Form, function, and the giant gulf between drawing a picture and understanding the world

From The Road to AI We Can Trust:

Drawing photorealistic images is a major accomplishment for AI, but is it really a step towards general intelligence? Since DALL-E 2 came out, many people have hinted at that conclusion; when the system was announced, Sam Altman tweeted that “AGI is going to be wild”; for Kevin Roose at The New York Times, such systems constitute clear evidence that “We’re in a golden age of progress in artificial intelligence”. (Earlier this week, Scott Alexander seems to have taken apparent progress in these systems as evidence for progress towards general intelligence; I expressed reservations here.)

In assessing progress towards general intelligence, the critical question should be, how much do systems like Dall-E, Imagen, Midjourney, and Stable Diffusion really understand the world, such that they can reason on and act on that knowledge? When thinking about how they fit into AI, both narrow and broad, here are three questions you could ask:

  1. Can the image synthesis systems generate high quality images?
  2. Can they correlate their linguistic input with the images they produce?
  3. Do they understand the world that underlies the images they represent?

On #1, the answer is a clear yes; only highly trained human artists could do better.

On #2, the answer is mixed. They do well on some inputs (like astronaut rides horse) but more poorly on others (like horse rides astronaut, which I discussed in an earlier post). (Below I will show some more examples of failure; there are many examples on the internet of impressive success, as well.)

Crucially, DALL-E and co’s potential contribution to general intelligence (“AGI”) ultimately rests on #3; if all the systems can do is convert sentences into images in a hit-or-miss yet spectacular way, they may revolutionize the practice of art, but still not really speak to general intelligence, or even represent progress towards general intelligence.

Until this morning, I despaired of assessing what these systems understand about the world at all.

The single clearest hint I had seen thus far that they might have trouble came from the graphic designer Irina Blok:

As my 8-year-old said, reading this draft, “how does the coffee not fall out of the cup?”

The trouble, though, with asking a system like Imagen to draw impossible things is that there is no fact of the matter about what the picture should look like, so the discussion about results cycles endlessly. Maybe the system just “wanted” to draw a surrealistic image. And for that matter, maybe a person would do the same, as Michael Bronstein pointed out.

Link to the rest at The Road to AI We Can Trust

Working With AI

From The Wall Street Journal:

In August, first prize in the digital-art category of the Colorado State Fair’s fine-art competition went to a man who used artificial intelligence (AI) to generate his submission, “Théâtre d’Opéra Spatial.” He supplied the AI, a program called Midjourney, with only a “prompt”—a textual description of what he wanted. Systems like Midjourney and the similar DALL-E 2 have led to a new role in our AI age: “prompt engineer.” Such people can even sell their textual wares in an online market called PromptBase.

Midjourney and DALL-E 2 emerged too late to be included in “Working With AI: Real Stories of Human-Machine Collaboration,” by Thomas Davenport and Steven Miller, information-systems professors at Babson College and Singapore Management University, respectively. But the authors note other novel titles: chief automation officer; senior manager of content systems; architect, ethical AI practice. As AI’s influence expands, its borders with the work world gain complexity. Next up: deputy regional manager of AI-prompt quality and security assurance.

The bulk of “Working With AI” comprises 29 case studies in which corporate teams integrate automation into a workflow. Each chapter ends on three or four “lessons we learned.” For each study, one or both authors typically interview not only a worker interacting directly with the AI but also the worker’s supervisor, the manager who decided to adopt the technology, the software’s developer and the company’s customers. Though they include some statistics on, say, time saved, the reports are largely qualitative.

The book is aimed at managers, consultants and students planning their careers. As none of the above, I still appreciated the accessible narratives as a diverse survey of how current technologies can expand the range of human capabilities. Some of the applications came to resemble each other, but the mild level of bland business-speak, like “stakeholder” and “buy-in,” was positively tolerable.

Early cases lean toward desk-ridden workers. One system helps financial advisers at Morgan Stanley personalize investment ideas for their clients. Another helps fundraisers at Arkansas State University target potential donors and drafts emails for them. Others suggest life-insurance premiums to underwriters at MassMutual, or help forecast sales for Kroger. In all cases, humans have the final say. And in many cases, the systems provide explanations for their outputs, listing, for example, the variables that most heavily influenced a decision.

Later cases breach the office walls. One system predicts which field activities will be most dangerous to Southern California Edison workers, and recommends precautions. Another highlights neighborhoods where crime is likely to occur and recommends that police officers patrol the area. (The latter, a form of predictive policing, has raised concerns about biased algorithms. The vendor says they’ve implemented countermeasures, but the book doesn’t elaborate.)

The benefit in most cases is increased efficiency. AI relieves employees of boring and time-consuming work, freeing them to address other tasks, such as strategic thinking or client interactions. The authors spend less time discussing ways in which machines might perform with more accuracy than humans, though they do point to Stitch Fix, where algorithms assist stylists in clothing recommendations. The company’s director of data science notes that it’s usually best not to override the AI, whose choices tend to be superior. While algorithms can be biased, so too can humans. Stitch Fix’s styling supervisor said the software nudges stylists away from their own preferences and toward those of the clients.

Many readers’ first question might be: Will AI take my job? Or: Can I replace my expensive employees with AI? The short answer from the authors is: In the near future, no. Wealthy countries are actually experiencing a long-term labor shortage. And there are still many things AI (often) can’t do, such as understand context, deal with dynamic settings, create a coherent story, coordinate people, frame a problem and know when to use AI.

The authors include an oft-quoted comment from the radiologist Keith Dreyer: “The only radiologists who will lose their jobs to AI will be those who refuse to work with AI.” The authors elaborate: “If you’re a human reading this book—and we suspect you are—that means you need to shift your focus from worrying about being replaced by a machine to worrying about whether you can add value to a job that you like where a smart machine is your collaborator. Adding value can mean checking on the machine’s work to make sure it was done well, making improvements to the machine’s logic or decisions, interpreting the machine’s results for other humans, or performing those tasks that the machine can’t or shouldn’t do for some reason.”

Link to the rest at The Wall Street Journal

Word Hero – AI Writing Tool

PG notes one of the commenters claims WordHero is just a front-end for an open-source AI writing program.

PG isn’t in a position to determine whether this claim is true; he notes only that it was made.

Microsoft responsible AI principles

From Microsoft:

  • Fairness: AI systems should treat all people fairly.
  • Reliability & Safety: AI systems should perform reliably and safely.
  • Privacy & Security: AI systems should be secure and respect privacy.
  • Inclusiveness: AI systems should empower everyone and engage people.
  • Transparency: AI systems should be understandable.
  • Accountability: People should be accountable for AI systems.

Link to the rest at Microsoft

Microsoft has videos that elaborate on each of these principles available at the link.

Further down on the lengthy Microsoft page, a Microsoft Office of Responsible AI is mentioned. PG couldn’t find out who heads the office on the MS site, but did find information about that person on Adobe’s Blog:

Natasha Crampton

Natasha Crampton leads Microsoft’s Office of Responsible AI, as the company’s first Chief Responsible AI Officer. The Office of Responsible AI puts Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach to responsible AI. The Office of Responsible AI also collaborates with stakeholders within and outside the company to shape new laws, norms, and standards to help ensure that the promise of AI technology is realized for the benefit of all.

Prior to this role, Natasha served as lead counsel to the Aether Committee, Microsoft’s advisory committee on responsible AI. Natasha also spent seven years in Microsoft’s Australian and New Zealand subsidiaries helping Microsoft’s highly regulated customers move to the cloud.

Prior to Microsoft, Natasha worked in law firms in Australia and New Zealand, specializing in copyright, privacy, and internet safety and security issues. Natasha graduated from the University of Auckland in New Zealand with a Bachelor of Laws (Honours) and a Bachelor of Commerce majoring in Information Systems.

PG’s Google search also picked up a presentation that Ms. Crampton and her boss gave at a recent RSA Conference. RSA provides a variety of corporate cybersecurity products.

PG is old enough to remember the original RSA, an acronym formed from the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who publicly described their very secure public-key cryptosystem in 1977. That discovery prompted PG and others to adopt tools like TrueCrypt to keep (in PG’s case) email communications with clients safe from hackers.
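
For readers who have only ever met RSA in a settings dialog, the core of the algorithm fits in a few lines. This is a toy demonstration with tiny primes; real deployments use primes hundreds of digits long plus padding schemes.

```python
# Toy RSA key generation, encryption, and decryption (illustration only).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert recovered == message
print(ciphertext, recovered)
```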