ChatGPT Is Making Universities Rethink Plagiarism

From Wired:

IN LATE DECEMBER of his sophomore year, Rutgers University student Kai Cobbs came to a conclusion he never thought possible: Artificial intelligence might just be dumber than humans.

After listening to his peers rave about the generative AI tool ChatGPT, Cobbs decided to toy around with the chatbot while writing an essay on the history of capitalism. Best known for its ability to generate long-form written content in response to user input prompts, Cobbs expected the tool to produce a nuanced and thoughtful response to his specific research directions. Instead, his screen produced a generic, poorly written paper he’d never dare to claim as his own.

“The quality of writing was appalling. The phrasing was awkward and it lacked complexity,” Cobbs says. “I just logically can’t imagine a student using writing that was generated through ChatGPT for a paper or anything when the content is just plain bad.”

Not everyone shares Cobbs’ disdain. Ever since OpenAI launched the chatbot in November, educators have been struggling with how to handle a new wave of student work produced with the help of artificial intelligence. While some public school systems, like New York City’s, have banned the use of ChatGPT on school devices and networks to curb cheating, universities have been reluctant to follow suit. In higher education, the introduction of generative AI has raised thorny questions about the definition of plagiarism and academic integrity on campuses where new digital research tools come into play all the time. 

Make no mistake, the birth of ChatGPT does not mark the emergence of concerns relating to the improper use of the internet in academia. When Wikipedia launched in 2001, universities nationwide were scrambling to decipher their own research philosophies and understandings of honest academic work, expanding policy boundaries to match pace with technological innovation. Now, the stakes are a little more complex, as schools figure out how to treat bot-produced work rather than weird attributional logistics. The world of higher education is playing a familiar game of catch-up, adjusting its rules, expectations, and perceptions as other professions adjust, too. The only difference now is that the internet can think for itself. 

ACCORDING TO CHATGPT, the definition of plagiarism is the act of using someone else’s work or ideas without giving proper credit to the original author. But when the work is generated by something rather than someone, this definition is tricky to apply. As Emily Hipchen, a board member of Brown University’s Academic Code Committee, puts it, the use of generative AI by students leads to a critical point of contention. “If [plagiarism] is stealing from a person,” she says, “then I don’t know that we have a person who is being stolen from.”

Hipchen is not alone in her speculation. Alison Daily, chair of the Academic Integrity Program at Villanova University, is also grappling with the idea of classifying an algorithm as a person, specifically if the algorithm involves text generation.

Daily believes that professors and students will eventually need to understand that digital tools that generate text, rather than just collect facts, fall under the umbrella of things that can be plagiarized from. 

Although Daily acknowledges that this technological growth incites new concerns in the world of academia, she doesn’t find it to be a realm entirely unexplored. “I think we’ve been in a version of this territory for a while already,” Daily says. “Students who commit plagiarism often borrow material from a ‘somewhere’—a website, for example, that doesn’t have clear authorial attribution. I suspect the definition of plagiarism will expand to include things that produce.” 

Eventually, Daily believes, a student who uses text from ChatGPT will be seen as no different from one who copies and pastes chunks of text from Wikipedia without attribution. 

Link to the rest at Wired

PG never thought of college professors as Luddites, but those mentioned in the OP certainly fit the definition.

AI Generated Art for a Comic Book. Human Artists Are Having a Fit.

From The Wall Street Journal:

Kris Kashtanova says doing the art for the graphic novel “Zarya of the Dawn” was like conjuring it up with a spell.

“New York Skyline forest punk,” the author typed into an artificial intelligence program that turns written prompts into pictures. Then came the tinkering with the wording to get the right effect. “Crepuscular rays. Epic scene.”

The 18-page book follows the travels of a young character who awakes alone and confused in an abandoned, futuristic world, and who looks a lot like Zendaya, the actress from “Euphoria” and the recent “Spider-Man” movies. The images were composed on Midjourney, one of a batch of services that create new images based on artwork and photos already online. Last year, “Zarya of the Dawn,” which credited the software as a co-author on the title page, became the first work of its kind to get a copyright from the Library of Congress.

But now the copyright is under review, posing a big question: Who really owns these AI-generated, mashup images?

Text-based AI programs such as OpenAI’s ChatGPT are already causing a ruckus in the education world, with teachers worrying that students might pass off AI-generated essays as their own. Christian Terwiesch, a professor at the Wharton business school, recently published a paper concluding that the software would have received a B to B- on one of his M.B.A. courses—better than some of his real-life students.

Now creative types are on edge over how AI might upend their livelihoods. Several artists have begun legal action against Midjourney and other AI services, saying their images were included in reference databases without their permission. Some think it’s too easy a shortcut. Movie director Guillermo del Toro recently described AI-generated animation as “an insult to life.”

For “Zarya of the Dawn,” Mx. Kashtanova, who uses a gender-neutral honorific and pronoun, says they were upfront about using the technology. Mx. Kashtanova touched up the images generated by Midjourney and provided the comic’s text, and isn’t too concerned about what happens as the case at the Library of Congress’s Copyright Office continues.

“Like, no one is going to die,” they say, adding that they applied for the copyright with plans to donate money from licensing fees to a New York nonprofit, Backpacks for the Street, where they volunteer. Midjourney, which didn’t respond to a request for comment, is paying for the legal fees to help make the case to retain copyright. The Copyright Office says it doesn’t comment on pending cases.

The case is turning into a barometer for how AI art is treated in the eyes of the law.

“Think about photography,” says Van Lindberg, an intellectual property lawyer at Taylor English Duma LLP in San Antonio, who is representing Mx. Kashtanova, along with legal group Open Advisory Services. In the past, when photographers still used film, they spent much of their energy carefully composing the right shot. In the digital age, it’s more common to take lots of pictures and select the best—which is similar to what artists are doing with AI programs, he says.

“We’re starting to use our intelligence for curation as opposed to some other aspects of creative work,” he says. “Is that enough to sustain copyright? I believe it will ultimately be found that it is, but it’s an open question.”

The question is becoming more urgent as the technology improves.

Jason M. Allen stirred up a hornet’s nest of controversy online last year when he beat a host of artists to win first prize for digital art at the Colorado State Fair. He experimented with hundreds of different prompts on Midjourney to come up with his work, “Théâtre D’Opéra Spatial.” The judges hadn’t realized what the software was.

Software engineer Stephen Thaler this month took the Copyright Office to court in Washington, D.C., after it rebuffed his application for “A Recent Entrance to Paradise,” which he generated with his own program to represent a near-death experience. He argues that as the program’s creator, the image rights belong to him. The Copyright Office ruled that it wouldn’t knowingly register a work solely created by AI.

“Whether it’s music or movies or art or text, you can now go to dozens of openly available AI systems online like DALL-E or ChatGPT and they will make art that passes all traditional tests for whether something’s protectable,” says Ryan Abbott, an attorney at Brown Neri Smith & Khan LLP, who is representing Dr. Thaler.

In the U.S., someone seeking copyright needs to show only that it contains “a modicum of creativity,” as the Supreme Court has said. Mx. Kashtanova thinks “Zarya of the Dawn” easily passes the threshold.

One of the opening scenes shows the lead character holding a mysterious postcard from someone called Rusty, a moody scene that helps set up the rest of the story as Zarya sets out to find a way home.

Mx. Kashtanova describes going through hundreds of prompts to capture the right atmosphere, trying phrases such as “cellular wisdom” and “alien forest” until Midjourney delivered the goods.

They repeated the process as the story progressed, often typing in “Zendaya” to keep Zarya’s appearance consistent.

AI developers often say the idea is to give human imagination a helping hand. 

Link to the rest at The Wall Street Journal

PG will repeat that AI is simply a sophisticated tool that allows individuals to create images more easily and quickly than they would likely be able to do without AI.

Is there a serious professional artist who thinks Photoshop and related tools like Procreate, Astropad Studio, and other software shouldn’t be permitted for creative artists to use because they’re not mixing their own paints and applying a camel-hair brush to canvas?

PG remembers a pair of statements from a long time ago regarding paleontology and various early hominins.

Man, the tool-maker

Tools, the man-maker

PG notes that these statements predate any sort of politically correct speech, and he acknowledges that man and woman could be used interchangeably and the statements would still be true.

PG suggests that AI tools are yet another man-maker that will accelerate the imaginations and creatively artistic talents of humanity as a whole.

AI Isn’t Really Artificial Intelligence

From Tech Register:

At its core, today’s AI is incapable of comprehension, knowledge, thought, or “intelligence.” This name is little more than a marketing gimmick.

Nothing’s easier to sell than a product with a good name. The technology that we call “artificial intelligence” is extremely complicated, but thanks to its name, you already have an idea of what it does! There’s just one problem; AI isn’t “intelligent” at any level, and corporations aren’t interested in correcting the public’s misconceptions.

There’s Nothing Intelligent About AI

Artificial intelligence is a longstanding staple of pop culture and real science. We’ve spent nearly a century pursuing this technology, and the idea of “living machines” goes back thousands of years. So, we have a pretty clear understanding of what someone means when they say “artificial intelligence.” It’s something comparable to human intelligence—the ability to comprehend, adapt, and have novel ideas.

But the technology that we call “artificial intelligence” lacks these qualities. It cannot “know” or “think” anything. Existing AI is just a mess of code attached to a big pile of data, which it remixes and regurgitates. You can ask ChatGPT to write you a resume, and it’ll spit out something based on the resumes in its dataset (plus whatever info you share). This is useful, it automates labor, but it’s not a sign of intelligence.

Of course, ChatGPT is a chatbot, so it can feel very “human.” But most AI applications are non-conversational; they don’t talk or answer questions. And without the veneer of a conversation, the lack of “intelligence” in AI is very noticeable.

Take Tesla’s self-driving cars, for example. Elon Musk has spent nearly a decade pretending that Tesla Full Self-Driving is just a year away—it’s almost ready, and it will be 150% safer than a human driver! Yet this AI program continues to linger in beta, and every time we hear of it, Full Self-Driving is criticized as a safety hazard. The AI isn’t even smart enough to do its job.

For a more down-to-earth example, just look at robot vacuums. They collect a ridiculous amount of data on your home in the name of obstacle avoidance and navigational AI. And while these AI-enabled robot vacuums are an improvement over what we had in the past, they still have a ridiculous amount of trouble with basic obstacles, like dog poop, kids’ toys, and small rugs.

Ordinary people, including a large number of people who work in technology, don’t know anything about AI or how it works. They just hear the phrase “artificial intelligence” and make an assumption. These assumptions may seem inconsequential, but in reality, they are a guiding force behind technological development, the economy, and public policy.

This Technology Is Useful, but the Marketing Is Nonsense

I don’t want to downplay the importance of AI or machine learning technology. You interact with this stuff every time you use your cellphone, search for something on Google, or scroll through social media. Machine learning drives innovation in physics, it contributes to “Warp Speed” vaccine development, and it’s currently making its debut on the battlefield.

But the term “artificial intelligence” is plastered on this technology for marketing purposes. It’s a flashy name that tells customers and investors, “our product is futuristic and has a purpose.” As explained by AI researcher Melanie Mitchell in a conversation with the Wall Street Journal, companies and engineers routinely slap the name “AI” on anything that involves machine learning, as the phrase is proven to elicit a response from investors (who may know very little about technology, let alone AI).

This is something that you can see in nearly every industry. Just do a Google search for a company name and add the term “AI.” You’ll be shocked by the number of businesses that brag about their AI pursuits in vague language, with zero proof that this technology has actually contributed to their profitability, productivity, or innovation.

And, as noted by Dr. Mitchell, this same marketing tactic was utilized in the 1970s and 80s—companies and engineers secured massive amounts of funding with the promise of “artificial intelligence.” Their research was not a waste of money, but it wasn’t profitable, so the funding dried up. (Of course, software is much more important today than it was in the 20th century. The term “artificial intelligence” is now attached to useful products and processes, so people are less likely to lose interest.)

In some ways, I think that the name “artificial intelligence” is a good idea. Companies spent a good decade calling everything an “algorithm,” which only led to confusion and frustration among the general public. The pivot to “AI” generates a lot of enthusiasm, which should lead to a more rapid development of automated software technologies.

But this enthusiasm hides the fact that “AI” is a complicated, confusing, and narrow technology. People readily assume that today’s “AI” is similar to what we’ve seen in pop culture, and very few corporations are willing to fight (or comment on) this misconception. (That said, social media weirdos are the biggest offenders. They make the most extreme and patently false claims about AI, which are amplified and consumed by people who don’t know any better.)

. . . .

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off. Chatbots will write the news, robot arms will perform heart surgery, and super-strong androids will commit all of your favorite OSHA violations while constructing suburban homes. But in reality, the technology that we call “AI” simply offsets labor.

In some ways, the offset of labor created by AI is very obvious. This technology doesn’t comprehend a single thing in existence, so in order to make it perform a task correctly, it requires constant training, testing, and troubleshooting. For every job that an AI replaces, it may create a new job.

Many of these new jobs require expertise in machine learning. But a large number of workers involved in AI development perform “menial” labor. OpenAI was caught paying Kenyan workers less than $2 an hour to help remove racism, sexism, and violent suggestions from its chatbot. And Amazon’s Mechanical Turk, which performs tasks using “AI,” often pays a few pennies for a human to complete the work instead.

Link to the rest at Tech Register

PG isn’t convinced by the OP.

It’s not difficult to debunk a new technology. PG remembers experts who ridiculed the idea that every person would have a computer on her/his desk.

That was true. For a while.

We have them on our wrists and in our pockets and backpacks now.

Per the OP, PG has never read or heard anyone involved with AI research claim:

One of the promises of AI is that it will replace workers, leading to a utopia where humans sit on their hands all day or simply die off.

Putting words in the mouths of those one is attempting to scorn is a centuries-old practice.

Ironically, considering the view of the OP, an Australian/Iberian team has been experimenting with the design and implementation of different AI models to identify repeated potential false claims made by politicians in Spain and Australia. The system is called ClaimCheck.

Abstracts written by ChatGPT fool scientists

From Nature:

An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December1. Researchers are divided over the implications for science.

“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.

The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.

Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint2 and an editorial3 written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.

The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.

. . . .

The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn’t do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.

“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”

Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation, because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.

But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.

Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.

Link to the rest at Nature

PG decided to test ChatGPT’s capabilities for creating an abstract of a longer text. He did so by copying and pasting the text of the blog post above into ChatGPT and asking the AI to summarize it.

Here’s what the AI produced:

A preprint study found that AI chatbot, ChatGPT, can write convincing fake research-paper abstracts that are often difficult to spot. The researchers at Northwestern University in Chicago used ChatGPT to generate 50 medical research abstracts and compared them with the original abstracts using a plagiarism detector and AI-output detector. They also asked a group of medical researchers to spot the fabricated abstracts. The ChatGPT-generated abstracts sailed through the plagiarism checker and the AI-output detector spotted 66% of the generated abstracts. However, the human reviewers correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts, raising concerns about the ethical implications of using AI in scientific research.

It’s Time to Teach AI How to Be Forgetful

From Wired:

OUR BRAIN HAS evolved to make predictions and explanations in unstable and ill-defined situations. For instance, to understand a novel situation, the brain generates a single explanation on the fly. If this explanation is upturned by additional information, a second explanation is generated. 

Machine learning, on the other hand, typically takes a different path: It sees reasoning as a categorization task with a fixed set of predetermined labels. It views the world as a fixed space of possibilities, enumerating and weighing them all. This approach, of course, has achieved notable successes when applied to stable and well-defined situations such as chess or computer games. When such conditions are absent, however, machines struggle.

One such example is virus epidemics. In 2008, Google launched Flu Trends, a web service that aimed to predict flu-related doctor visits using big data. The project, however, failed to predict the 2009 swine flu pandemic. After several unsuccessful tweaks to its algorithm, Google finally shuttered the project in 2015.

In such unstable situations, the human brain behaves differently. Sometimes, it simply forgets. Instead of getting bogged down by irrelevant data, it relies solely on the most recent information. This is a feature called intelligent forgetting. Adopting this approach, an algorithm that relied on a single data point—predicting that next week’s flu-related doctor visits are the same as in the most recent week, for instance—would have reduced Google Flu Trends’ prediction error by half. 
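To make the heuristic concrete, here is a minimal Python sketch of the single-data-point forecaster the author describes. The weekly visit counts are invented purely for illustration and are not Google Flu Trends data.

# Illustrative sketch of "intelligent forgetting": predict that next week's
# flu-related doctor visits equal the most recent week's count, ignoring all
# older data. The figures below are hypothetical.

weekly_visits = [1200, 1350, 1500, 2100, 2400]  # invented weekly counts

def recency_forecast(history):
    """Forecast the next value as equal to the most recent observation."""
    return history[-1]

print(recency_forecast(weekly_visits))  # -> 2400, the forecast for next week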

Intelligent forgetting is just one dimension of psychological AI, an approach to machine intelligence that also incorporates other features of human intelligence such as causal reasoning, intuitive psychology, and physics. In 2023, this approach to AI will finally be recognized as fundamental for solving ill-defined problems. Exploring these marvelous features of the evolved human brain will finally allow us to make machine learning smart. Indeed, researchers at the Max Planck Institute, Microsoft, Stanford University, and the University of Southampton are already integrating psychology into algorithms to achieve better predictions of human behavior, from recidivism to consumer purchases. 

One feature of psychological AI is that it is explainable. Until recently, researchers assumed that the more transparent an AI system was, the less accurate its predictions were. This mirrored the widespread but incorrect belief that complex problems always need complex solutions. In 2023, this idea will be laid to rest. As the case of flu predictions illustrates, robust and simple psychological algorithms can often give more accurate predictions than complex algorithms. Psychological AI opens up a new vision for explainable AI: Instead of trying to explain opaque complex systems, we can check first if psychological AI offers a transparent and equally accurate solution.

In 2023, deep learning in itself will come to be seen as a cul-de-sac. Without the help of human psychology, it will become clearer that the application of this type of machine learning to unstable situations eventually runs up against insurmountable limitations. We will finally recognize that more computing power makes machines faster, not smarter. One such high-profile example is self-driving cars. The vision of building the so-called level-5 cars—fully automated vehicles capable of driving safely under any conditions without human backup—has already hit such a limitation. Indeed, I predict that in 2023, Elon Musk will retract his assertion that this category of self-driving cars is just around the corner. Instead, he will refocus his business on creating the much more viable (and interesting) level-4 cars, which are able to drive fully autonomously, without human help, only in restricted areas such as motorways or cities specifically designed for self-driving vehicles. Widespread adoption of level-4 cars will instead spur us to redesign our cities, making them more stable and predictable, and barring potential distractions for human drivers, cyclists, and pedestrians. If a problem is too difficult for a machine, it is we who will have to adapt to its limited abilities.

Link to the rest at Wired

Copyright in the Age of Artificial Intelligence

Following is a transcript of a meeting sponsored by the United States Copyright Office on February 5, 2020, that includes remarks from a variety of speakers regarding Artificial Intelligence and Copyright.

As PG is posting this, the meeting happened slightly less than three years ago, and an explosion of artificial intelligence research, programs, apps, etc., has occurred since that time, so keep the age of the transcript in mind as you review it.

The transcript is a 399-page PDF file. PG will attempt to embed the transcript next in this post. He has no idea what might happen, whether the TPV hosting service will be strained beyond the breaking point, etc., etc.

If the embed doesn’t work, you can find the original at Copyright.gov

or by cutting and pasting the following link into your web browser: https://www.copyright.gov/events/artificial-intelligence/transcript.pdf

That said, here goes with the embed:


UPDATE: PG received a strange error message that mentioned the failure of a “fake worker,” which may or may not refer to an employee of the U.S. Copyright Office, when he tried to post the embed.

You’ll need to use the Copyright.gov link above to view the original Copyright Office transcript.

Evidently the Copyright Office doesn’t have any fake workers who fail.

Dark Horse AI Gets Passing Grade in Law Exam

From Futurism:

An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, got a “marginal pass” on a recent blindly graded law and economics exam at George Mason University, according to a recent blog post by economics professor Alex Tabarrok.

It’s yet another warning shot that AI is experiencing a moment of explosive growth in capability — and it’s not just OpenAI’s ChatGPT that we have to worry about.

. . . .

Claude is already impressing academics with its ability to come up with strikingly thorough answers to complex prompts.

For one law exam question highlighted by Tabarrok, Claude was able to generate believable recommendations on how to change intellectual property laws.

“Overall, the goal should be to make IP laws less restrictive and make more works available to the public sooner,” the AI concluded. “But it is important to still provide some incentives and compensation to creators for a limited period.”

Overall, Tabarrok found that “Claude is a competitor to GPT-3 and in my view an improvement,” because it was able to generate a “credible response” that’s “better than many human responses.”

To be fair, others were less impressed with Claude’s efforts.

“To be honest, this looks more like Claude simply consumed and puked up a McKinsey report,” the Financial Times wrote in a piece on Tabarrok’s findings.

While Claude and ChatGPT are similar in terms of user experience, the models were trained in different ways, especially when it comes to ensuring that things don’t go out of hand.

Claude makes use of “constitutional AI,” as described in a yet-to-be-peer-reviewed paper shared by Anthropic researchers last month.

“We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs,” they wrote. “The process involves both a supervised learning and a reinforcement learning phase.”

“Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions,” the company wrote in a December tweet. “Constitutional AI lets them respond to questions using a simple set of principles as a guide.”

Link to the rest at Futurism

CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors

From Futurism:

Last week, we reported that the prominent technology news site CNET had been quietly publishing articles generated by an unspecified “AI engine.”

The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor. The fact that CNET never publicly announced the program, and that the disclosure that the posts were bot-written was hidden away behind a human-sounding byline — “CNET Money Staff” — made it feel as though the outlet was trying to camouflage the provocative initiative from scrutiny.

After the outcry, CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET‘s reputation for “being transparent.”

Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from “CNET Money Staff” to simply “CNET Money,” as well as making the disclosure more prominent.

Furthermore, she promised, every story published under the program had been “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

That may well be the case. But we couldn’t help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.

Take this section in the article, which is a basic explainer about compound interest (emphasis ours):

“To calculate compound interest, use the following formula:

Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods 

For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you’ll earn $10,300 at the end of the first year.”

It sounds authoritative, but it’s wrong. In reality, of course, the person the AI is describing would earn only $300 over the first year. It’s true that the total value of their principal plus their interest would total $10,300, but that’s very different from earnings — the principal is money that the investor had already accumulated prior to putting it in an interest-bearing account.

“It is simply not correct, or common practice, to say that you have ‘earned’ both the principal sum and the interest,” Michael Dowling, an associate dean and professor of finance at Dublin City University Business School, told us of the AI-generated article.

It’s a dumb error, and one that many financially literate people would have the common sense not to take at face value. But then again, the article is written at a level so basic that it would only really be of interest to those with extremely low information about personal finance in the first place, so it seems to run the risk of providing wildly unrealistic expectations — claiming you could earn $10,300 in a year on a $10,000 investment — to the exact readers who don’t know enough to be skeptical.
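For readers who want to check the arithmetic themselves, here is a short Python sketch of the standard compound-interest formula applied to the article’s example. It shows the distinction between the ending balance ($10,300) and the interest actually earned ($300). This is a minimal back-of-the-envelope calculation, not CNET’s code.

# Compound interest: ending balance = P * (1 + r/n) ** (n * t)
# The article's example: $10,000 at 3%, compounded annually, for one year.

principal = 10_000.00
annual_rate = 0.03
periods_per_year = 1   # compounded annually
years = 1

ending_balance = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
interest_earned = ending_balance - principal

print(f"Ending balance:  ${ending_balance:,.2f}")   # $10,300.00
print(f"Interest earned: ${interest_earned:,.2f}")  # $300.00 -- not $10,300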

Another error in the article involves the AI’s description of how loans work. Here’s what it wrote (again, emphasis ours):

“With mortgages, car loans and personal loans, interest is usually calculated in simple terms.

For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

Again, the AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.

What it’s bungling this time is that the way mortgages and auto loans are typically structured, the borrower doesn’t pay a flat amount of interest per year, or even per monthly payment. Instead, on each successive payment they owe interest only on the remaining balance. That means that toward the beginning of the loan, the borrower pays more interest and less principal, which gradually reverses as the payments continue.

It’s easy to illustrate the error by entering the details from the CNET AI’s hypothetical scenario — a $25,000 loan with an interest rate of 4 percent — into an auto loan amortization calculator. The result? Contrary to what the AI claimed, there’s never a year when the borrower will pay a full $1,000, since they start chipping away at the balance on their first payment.
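A minimal amortization sketch in Python makes the same point without an online calculator. The 60-month term below is an assumption, since the article doesn’t specify one; with any term, first-year interest on a declining $25,000 balance at 4% comes in under $1,000.

# Amortization sketch for the article's hypothetical: $25,000 at 4% APR.
# ASSUMPTION: a 60-month term (the article does not give one).

balance = 25_000.00
annual_rate = 0.04
term_months = 60
monthly_rate = annual_rate / 12

# Standard fixed-payment formula for a fully amortizing loan.
payment = balance * monthly_rate / (1 - (1 + monthly_rate) ** -term_months)

first_year_interest = 0.0
for month in range(12):
    interest = balance * monthly_rate      # interest accrues on the remaining balance only
    principal_paid = payment - interest
    balance -= principal_paid
    first_year_interest += interest

print(f"Monthly payment:     ${payment:,.2f}")             # about $460
print(f"First-year interest: ${first_year_interest:,.2f}")  # roughly $900, not a flat $1,000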

CNET‘s AI is “absolutely” wrong in how it described loan payments, Dowling said.

“That’s just simply not the case that it would be $1,000 per year in interest,” he said, “as the loan balance is being reduced every year and you only pay interest on the outstanding balance.”

The problem with this description isn’t just that it’s wrong. It’s that the AI is eliding an important reality about many loans: that if you pay them down faster, you end up paying less interest in the future. In other words, it’s feeding terrible financial advice directly to people trying to improve their grasp of it.

Link to the rest at Futurism

PG says somebody (not PG) is going to start a website that features errors made by AI systems.

PG also says that AI is roughly where airplanes were when, on December 17, 1903, the Wright Flyer traveled 120 feet in 12 seconds at a speed of 6.8 miles per hour at Kitty Hawk, North Carolina.

Fifteen years later, a British Sopwith Dragon flew at a speed of 149 miles per hour. Twenty-two years after that, the Lockheed P-38 flew at 400 mph. Late in World War II, a Messerschmitt Me.262 reached a sustained top speed of 540 mph.

PG says AI development isn’t like airplane development. It’s going to be much, much faster.

An A.I. Translation Tool Can Help Save Dying Languages. But at What Cost?

From Slate:

Sanjib Chaudhary chanced upon StoryWeaver, a multilingual children’s storytelling platform, while searching for books he could read to his 7-year-old daughter. Chaudhary’s mother tongue is Kochila Tharu, a language with about 250,000 speakers in eastern Nepal. (Nepali, Nepal’s official language, has 16 million speakers.) Languages with a relatively small number of speakers, like Kochila Tharu, do not have enough digitized material for linguistic communities to thrive—no Google Translate, no film or television subtitles, no online newspapers. In industry parlance, these languages are “underserved” and “underresourced.”

This is where StoryWeaver comes in. Founded by the Indian education nonprofit Pratham Books, StoryWeaver currently hosts more than 50,000 open-licensed stories across reading levels in more than 300 languages from around the world. Users can explore the repository by reading level, language, and theme, and once they select a story, they can click through illustrated slides (each as if it were the page of a book) in the selected language (there are also bilingual options, where two languages are shown side-by-side, as well as download and read-along audio options). “Smile Please,” a short tale about a fawn’s ramblings in the forest, is currently the “most read” story—originally written in Hindi for beginners, it has since been translated into 147 languages and read 281,000 times.

A majority of the languages represented on the platform are from Africa and Asia, and many are Indigenous, in danger of losing speakers in a world of almost complete English hegemony. Chaudhary’s experience as a parent reflects this tension. “The problem with children is that they prefer to read storybooks in English rather than in their own language because English is much, much easier. With Kochila Tharu, the spelling is difficult, the words are difficult, and you know, they’re exposed to English all the time, in schools, on television,” Chaudhary said.

Artificial intelligence-assisted translation tools like StoryWeaver can bring more languages into conversation with one another—but the tech is still new, and it depends on data that only speakers of underserved languages can provide. This raises concerns about how the labor of the native speakers powering A.I. tools will be valued and how repositories of linguistic data will be commercialized.

To understand how A.I.-assisted translation tools like StoryWeaver work, it’s helpful to look at neighboring India: With 22 official languages and more than 780 spoken languages, it is no accident that the country is a hub of innovation for multilingual tech. StoryWeaver’s inner core is inspired by a natural language processing tool developed at Microsoft Research India called interactive neural machine translation prediction technology, or INMT.

Unlike most A.I.-powered commercial translation tools, INMT doesn’t do away with a human intermediary altogether. Instead, it assists humans with hints in the language they’re translating into. For example, if you begin typing, “It is raining” in the target language, the model working on the back-end supplies “tonight,” “heavily,” and “cats and dogs” as options for completing your sentence, based on the context and the previous word or set of words. During translation, the tool accounts for meaning in the original language and what the target language allows, and then generates possibilities for the translator to choose from, said Kalika Bali, principal researcher at Microsoft and one of INMT’s main architects.
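A toy sketch of that interaction pattern is below. It is emphatically not Microsoft’s INMT, which uses a neural translation model; here, simple bigram counts over a tiny invented corpus stand in for the model, just to show how next-word hints conditioned on what the translator has typed so far might be surfaced.

# Toy sketch of the hint-suggestion interaction only -- NOT Microsoft's INMT.
# Bigram counts over an invented corpus stand in for a real translation model.

from collections import Counter, defaultdict

corpus = [
    "it is raining tonight",
    "it is raining heavily",
    "it is raining cats and dogs",
]

# Count which words follow each word in the corpus.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def suggest(prefix, k=3):
    """Suggest up to k likely next words given the text typed so far."""
    last_word = prefix.strip().split()[-1]
    return [word for word, _ in following[last_word].most_common(k)]

print(suggest("it is raining"))  # e.g. ['tonight', 'heavily', 'cats']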

Tools like INMT allow StoryWeaver’s cadre of volunteers to generate translations of existing stories quickly. The user interface is easy to master even for amateur translators, many of whom, like Chaudhary, are either volunteering their time or already working for nonprofits in early childhood education. The latter is the case for Churki Hansda. Working in Kora and Santali, two underserved Indigenous languages spoken in eastern India, she is an employee at Suchana Uttor Chandipur Community Society, one of StoryWeaver’s many partner organizations scattered all over the world. “We didn’t really have storybooks growing up. Our school textbooks were in Bengali [the dominant regional language], and we would end up memorizing everything because we didn’t understand what we were reading,” Hansda told me. “It’s a good feeling to be able to create books in our languages for our children.”

Amna Singh, Pratham Books’ content and partnerships manager, estimates that 58 percent of the languages represented on StoryWeaver are underserved, a status quo that has cascading consequences for early childhood learning outcomes. But attempts to undo the neglect of underserved language communities are also closely linked with unlocking their potential as consumers, and A.I.-powered translation technology is a big part of this shift. Voice recognition tools and chat bots in regional Indian languages aim to woo customers outside metropolitan cities, a market that is expected to expand as cellular data usage becomes even cheaper.

These tools are only as good as their training data, and sourcing is a major challenge. For sustained multilingualism on the internet, machine translation models require large volumes of training data generated in two languages parallel to one another. Parliamentary proceedings and media publications are common sources of publicly available data that can be scraped for training purposes. However, both these sources—according to Microsoft’s researcher Bali—are too specific, and do not encompass a wide enough range in terms of topics and vocabulary to be properly representative of human speech. (This is why StoryWeaver isn’t a good source for training data, either, because sentences in children’s books are fairly simple and the reading corpus only goes up to fourth-grade reading levels.)

Link to the rest at Slate

The Future of AI Writing and Audio

From Publishers Weekly:

Digital Book World, a conference focusing on publishing innovation, offered insight into how technologists, and some publishers, are planning to implement AI into their workflow. Asked about AI and the use of ChatGPT, which automates writing, Mary McAveeney, CEO of Abrams, was skeptical of its ability to write books. She conceded, “It might be good for catalog copy.”

Earlier in the conference, organizer Bradley Metrock asked publishers Laini Brown, director of publicity for the Nashville office of Hachette Book Group, and Lisa Lucas, senior vice president and publisher of Pantheon and Schocken Books, what they thought of the news that the next iteration of ChatGPT will be able to produce a 60,000-word book in 20 seconds. Neither publisher chose to respond.

Others warned against relying too heavily on AI without human intervention. For example, Madeleine Rothberg, senior subject matter expert for WGBH National Center for Accessible Media in Boston, warned against posting AI-generated subtitles for YouTube videos without first reviewing them. “It’s not a good idea, because we have found the AI doesn’t always get the words right and makes mistakes,” she said, citing instances of unintended vulgarity. Or, as Ashok Giri, CEO of Page Magik put it, “primary research human beings are [still] needed.” Giri’s company offers automation tools and data to help streamline editorial and production workflow.

Others are more skeptical. One attendee, who wished to remain anonymous so as not to offend others in the room, noted that ChatGPT and AI are limited by what is put into them and, for this, they need to absorb vast swaths of existing information. Much of that comes from print books, e-books, and internet writing protected by copyright. “It sounds exactly like what Google hoped to accomplish with the Google Books program,” they said. “What happened there? Lawsuits.”

Bradley Metrock, the conference organizer, acknowledged that the owners of copyrighted material incorporated into AI systems will likely challenge the use of their content. “There are going to be a lot of lawsuits before this is sorted out,” said Metrock, who owns several companies that invest in various AI and voice-related projects. “The point here is that good technology challenges,” he said, citing the lack of innovation in the ebook space over the past 15 years. “Everything stays the same,” he added, “until it doesn’t.”

. . . .

Audiobooks are now a $5 billion market worldwide, and they continue to experience double digit growth. According to the Association of Audiobook Publishers, the U.S. market is growing at a rate of 25% per year, and reached $1.6 billion in sales for 2021. “The increasing availability of titles is the biggest driver of audiobook growth,” said Videl Bar-Kar, global head of audio for Frankfurt-based Bookwire. “The best way to grow the catalog of available titles is through backlist.”

Here, the use of AI generated voices to narrate audiobooks offers publishers who cannot afford human narrators the opportunity to turn backlist into audiobooks for low cost. “And if the book sells and becomes a success,” Bar-Kar added, “they can always go back and re-record the book with a human narrator.”

Bar-Kar called the audiobook market a “once in a generation opportunity,” noting: “There are new people discovering audio for the first time year-on-year, not because of the heavy consumers, but because there are new people coming into the market.” He described it as a business opportunity, and one that needs to be demystified: “Have the courage and confidence to stop selling your audiobook rights and develop your own audio program,” he said.

Link to the rest at Publishers Weekly

Imitation Is The Best Form Of Flattery. Flattery Is Not A Defense To Copyright Infringement.

From Above the Law:

Unless you’ve been living under a law library, it would be hard not to take note of the rapid influx of AI art. Face-modifying apps, extended shots of events and people that never happened and are so strange that “uncanny” only begins to describe them, you name it. The figure of AI as artist has arrived, but is any of it legal? A small group of artists aims to find out. From Reuters:

A group of visual artists has sued artificial intelligence companies for copyright infringement, adding to a fast-emerging line of intellectual property disputes over AI-generated work.

Stability AI’s Stable Diffusion software copies billions of copyrighted images to enable Midjourney and DeviantArt’s AI to create images in those artists’ styles without permission, according to the proposed class-action lawsuit filed Friday in San Francisco federal court.

The artists’ lawyers, the Joseph Saveri Law Firm and Matthew Butterick, filed a separate proposed class action lawsuit in November against Microsoft’s GitHub Inc and its business partner OpenAI Inc for allegedly scraping copyrighted source code without permission to train AI systems.

. . . .

I’m gonna flag it for you in case your eyes glossed over it. The word there is billions. Billions. With a B. Even if the individual damages are pennies on the dollar, the aggregate of those alleged copyright infringements would be… well, I’m not that good at math, but it would put a sizeable dent in my student loan principal.

. . . .

For those not in the know, if you’ve ever seen a stock image, every one you’ve seen is probably from Getty.

Link to the rest at Above the Law

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

From Time Magazine:

ChatGPT was hailed as one of 2022’s most impressive technological innovations upon its release last November. The powerful artificial intelligence (AI) chatbot can generate text on almost any topic or theme, from a Shakespearean sonnet reimagined in the style of Megan Thee Stallion, to complex mathematical theorems described in language a 5-year-old can understand. Within a week, it had more than a million users.

ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies.

But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.

The work was vital for OpenAI. ChatGPT’s predecessor, GPT-3, had already shown an impressive ability to string sentences together. But it was a difficult sell, as the app was also prone to blurting out violent, sexist and racist remarks. This is because the AI had been trained on hundreds of billions of words scraped from the internet—a vast repository of human language. That huge training dataset was the reason for GPT-3’s impressive linguistic capabilities, but was also perhaps its biggest curse. Since parts of the internet are replete with toxicity and bias, there was no easy way of purging those sections of the training data. Even a team of hundreds of humans would have taken decades to trawl through the enormous dataset manually. It was only by building an additional AI-powered safety mechanism that OpenAI would be able to rein in that harm, producing a chatbot suitable for everyday use.

To build that safety system, OpenAI took a leaf out of the playbook of social media companies like Facebook, who had already shown it was possible to build AIs that could detect toxic language like hate speech to help remove it from their platforms. The premise was simple: feed an AI with labeled examples of violence, hate speech, and sexual abuse, and that tool could learn to detect those forms of toxicity in the wild. That detector would be built into ChatGPT to check whether it was echoing the toxicity of its training data, and filter it out before it ever reached the user. It could also help scrub toxic text from the training datasets of future AI models.
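The labeled-examples-to-detector premise can be sketched with an ordinary text classifier. The snippet below is a generic illustration using scikit-learn, not OpenAI’s actual safety system, and the handful of (deliberately mild) labeled examples is invented.

# Generic illustration of the labeled-examples -> detector premise.
# This is NOT OpenAI's safety system; it is a toy scikit-learn classifier
# trained on a few invented, deliberately mild examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I will hurt you",              # labeled toxic (invented example)
    "You people are worthless",     # labeled toxic (invented example)
    "Have a wonderful day",         # labeled benign
    "The weather is lovely today",  # labeled benign
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = benign

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# A detector like this could screen model outputs before they reach users.
print(detector.predict(["You are worthless"]))  # likely -> [1] in this toy setup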

To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.

OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.

The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour depending on seniority and performance. For this story, TIME reviewed hundreds of pages of internal Sama and OpenAI documents, including workers’ payslips, and interviewed four Sama employees who worked on the project. All the employees spoke on condition of anonymity out of concern for their livelihoods.

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.” (OpenAI does not disclose the names of the outsourcers it partners with, and it is not clear whether OpenAI worked with other data labeling firms in addition to Sama on this project.)

. . . .

One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.

. . . .

Documents reviewed by TIME show that OpenAI signed three contracts worth about $200,000 in total with Sama in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence. Around three dozen workers were split into three teams, one focusing on each subject. Three employees told TIME they were expected to read and label between 150 and 250 passages of text per nine-hour shift. Those snippets could range from around 100 words to well over 1,000. All of the four employees interviewed by TIME described being mentally scarred by the work. Although they were entitled to attend sessions with “wellness” counselors, all four said these sessions were unhelpful and rare due to high demands to be more productive at work. Two said they were only given the option to attend group sessions, and one said their requests to see counselors on a one-to-one basis instead were repeatedly denied by Sama management.

Link to the rest at Time

PG wonders if there isn’t a better way to engineer an AI product to identify and avoid toxic documents in its construction of a database.

He also thinks this is an example of when Silicon Valley’s long-standing motto, “Move fast and break things,” should be tempered by some adult judgment on the part of someone with authority in the organization.

One of the oldest bits of business advice is, “Know your suppliers.” Evidently, the management at OpenAI all missed the class where that was discussed.

PG notes that his brothers and sisters of the bar are not immune to the “smart people doing dumb things” behavior pattern – see, for example, Why Toxic Culture Is To Blame For Women Leaving Law Firms.

OpenAI Background

From Wikipedia:

OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft and Matthew Brown Companies. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.

In December 2015, Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced the formation of OpenAI and pledged over US$1 billion to the venture. The organization stated it would “freely collaborate” with other institutions and researchers by making its patents and research open to the public.

. . . .

In April 2016, OpenAI released a public beta of “OpenAI Gym”, its platform for reinforcement learning research. In December 2016, OpenAI released “Universe”, a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.

In 2018, Musk resigned his board seat, citing “a potential future conflict (of interest)” with Tesla AI development for self driving cars, but remained a donor.

In 2019, OpenAI transitioned from non-profit to “capped” for-profit, with profit cap set to 100X on any investment. The company distributed equity to its employees and partnered with Microsoft, who announced an investment package of US$1 billion into the company. OpenAI then announced its intention to commercially license its technologies.

In 2020, OpenAI announced GPT-3, a language model trained on trillions of words from the Internet. It also announced that an associated API, named simply “the API”, would form the heart of its first commercial product. GPT-3 is aimed at natural language answering of questions, but it can also translate between languages and coherently generate improvised text.

In 2021, OpenAI introduced DALL-E, a deep learning model that can generate digital images from natural language descriptions.

Around December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI was projecting revenue of US$200 million for 2023 and US$1 billion for 2024. As of January 2023, it was in talks for funding that would value the company at $29 billion.

Link to the rest at Wikipedia and thanks to F. for the tip.

PG notes that the OP contains lots of links and is likely to be a good starting point for anyone who wishes to dive into the state of AI, at least in the US.

Meet the artificially intelligent chatbot trying to curtail loneliness in America

From The Hill:

The concept of an artificially intelligent companion has been around for decades longer than the AI technology has existed in a readily accessible form. From droids like C-3PO and R2-D2 of the Star Wars universe, to Joaquin Phoenix’s virtual assistant, Samantha, from Her, there is no shortage of pop culture examples of the fabled robot helpers.

But over the past few years, AI technology has exponentially improved and made its way from the big screen to the smartphone screen. In late 2015, Elon Musk partnered with Sam Altman to create a company called OpenAI, a software business with the mission of creating an artificial general intelligence that benefits all of humanity.

One of the early projects at OpenAI was a natural language processing system called GPT (Generative Pre-trained Transformer). GPT is, in essence, a chatbot that uses deep learning to produce human-like text responses to users of the platform. Many online users saw the GPT chatbot as an outlet to have a bit of fun testing the limits of the human-like texting algorithm, but some innovators viewed the free software as a marketable source of untapped potential.

One of those early innovators was Eugenia Kuyda, the founder of Replika. Replika is a free-to-download app that allows users to send and receive messages to an artificially intelligent companion built on the GPT-3 platform. On the website, Replika states that each companion is eager to learn about the world through the eyes of the user, and will always be ready to chat when the user is looking for an empathetic friend.

The idea for Replika was born from grief, when Kuyda’s best friend, Roman, was tragically killed in a hit-and-run incident in 2015. Torn so suddenly from a loved one, Kuyda was looking for a way to somehow remain close to the memory of Roman. The timing of the car accident and the release of the open-source GPT-1 software gave Kuyda a unique outlet to grieve.

“I took all of the text messages that were sent over a year to each other and plugged it into this conversational AI model,” says Kuyda. “And this way I had a chatbot that I could talk to, and it could talk to me like my best friend.”

Kuyda was able to aggregate tens of thousands of messages that she and Roman had exchanged to train the GPT software to speak like her late best friend. She eventually released the GPT model emulating Roman to a larger group of people and found that many discovered the tool to be highly engaging and life-like.
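
Replika has not published its pipeline, but the general idea of turning a message archive into training material can be sketched in a few lines of Python. The input format and field names below are assumptions made for illustration; many fine-tuning tools accept prompt/completion pairs in a JSONL file along these lines.

# Generic sketch: turn an exported chat history into prompt/completion pairs
# for fine-tuning a text model. Replika's actual pipeline is not public, and
# the input format here (a list of (sender, text) tuples) is an assumption.
import json

messages = [
    ("me", "How was the concert?"),
    ("roman", "Loud and wonderful. You would have loved the encore."),
    ("me", "Wish I could have been there."),
    ("roman", "Next time. I'll hold you to it."),
]

def to_training_pairs(history, friend="roman"):
    """Pair each of my messages with the friend's reply that follows it."""
    pairs = []
    for (sender_a, text_a), (sender_b, text_b) in zip(history, history[1:]):
        if sender_a != friend and sender_b == friend:
            pairs.append({"prompt": text_a + "\n", "completion": " " + text_b})
    return pairs

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for pair in to_training_pairs(messages):
        f.write(json.dumps(pair) + "\n")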

Kuyda then began working on a chatbot that would eventually become the Replika app, which now has more than two million active users.

When opening Replika for the first time, users are prompted to design their chatbot avatar and select some interests that they’d like to talk about. From there it’s up to the user to guide the conversation. The Replika software is designed to catalogue user inputs in its memory to help develop responses that become more contextually relevant the deeper the user goes into the conversation.
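
Again, Replika’s internals are not public, but the “catalogue inputs and reply in context” idea can be illustrated with a toy Python sketch in which recent turns are fed back in with every new prompt. The generate_reply function is a stub standing in for whatever language model a real app would call.

# Toy sketch of the general idea: keep a running memory of the conversation
# and feed recent turns back in with each new prompt so replies stay in
# context. Replika's actual implementation is not public.

class Companion:
    def __init__(self, max_turns: int = 10):
        self.history = []          # list of (speaker, text) tuples
        self.max_turns = max_turns # how many recent turns to include as context

    def build_prompt(self, user_text: str) -> str:
        recent = self.history[-self.max_turns:]
        lines = [f"{speaker}: {text}" for speaker, text in recent]
        lines.append(f"User: {user_text}")
        lines.append("Companion:")
        return "\n".join(lines)

    def chat(self, user_text: str) -> str:
        prompt = self.build_prompt(user_text)
        reply = generate_reply(prompt)   # placeholder for the language model call
        self.history.append(("User", user_text))
        self.history.append(("Companion", reply))
        return reply

def generate_reply(prompt: str) -> str:
    return "That sounds interesting. Tell me more."  # stub response

bot = Companion()
print(bot.chat("I finally tried painting today."))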

Kuyda sees the app as a tool to help people build their social skills and learn how to interact with people.

“For a lot of people, the problem with interacting with other humans is the fear of opening up, the fear of being vulnerable, the fear of starting the contact, starting the conversation, and they’re basically rehearsing that with Replika,” says Kuyda. “So think of it as a gym for your relationship so you can practice in a safe environment.”

Link to the rest at The Hill

DeepL, the AI-based language translator, raises over $100M at a $1B+ valuation

From TechCrunch:

Artificial intelligence startups, and (thanks to GPT and OpenAI) specifically those helping humans communicate with each other, are commanding a lot of interest from investors, and today the latest of these is announcing a big round of funding. DeepL, a startup that provides instant translation-as-a-service both to businesses and to individuals — competing with Google, Bing and other online tools — has confirmed a fundraise at a €1 billion valuation (just over $1 billion at today’s rates).

Cologne, Germany-based DeepL is not disclosing the full amount that it’s raised — it doesn’t want to focus on this aspect, CEO and founder Jaroslaw Kutylowski said in an interview — but as we were working on this story we heard a range of figures. At one end, an investor that was pitched on the funding told TechCrunch that DeepL was aiming to raise $125 million. At the other end, a report with a rumor about the funding from back in November said the amount was around $100 million. The funding closed earlier this month.

The startup is also not confirming or disclosing other financials, but the investor source said that the $1 billion valuation was based on a 20x multiple of DeepL’s annual run rate, which was at $50 million at the end of last year. In the current fundraising climate, this is a pretty bullish multiple, but it speaks to the company’s growth, which the investor noted is currently at 100%, and the fact that DeepL’s breaking even and close to being profitable.

What is more definitive is the list of investors: DeepL said that new backer IVP was leading the round, with Bessemer Venture Partners, Atomico and WiL also participating. Previous backers in the company also include Benchmark and btov.

DeepL primarily provides translation as a service to businesses rather than individuals, and its forte up to now has been working with small and medium-sized organizations.

Link to the rest at TechCrunch

OpenAI begins piloting ChatGPT Professional, a premium version of its viral chatbot

From TechCrunch:

OpenAI this week signaled it’ll soon begin charging for ChatGPT, its viral AI-powered chatbot that can write essays, emails, poems and even computer code. In an announcement on the company’s official Discord server, OpenAI said that it’s “starting to think about how to monetize ChatGPT” as one of the ways to “ensure [the tool’s] long-term viability.”

The monetized version of ChatGPT will be called ChatGPT Professional, apparently. That’s according to a waitlist link OpenAI posted in the Discord server, which asks a range of questions about payment preferences including “At what price (per month) would you consider ChatGPT to be so expensive that you would not consider buying it?”

The waitlist also outlines ChatGPT Professional’s benefits, which include no “blackout” (i.e. unavailability) windows, no throttling and an unlimited number of messages with ChatGPT — “at least 2x the regular daily limit.” OpenAI says that those who fill out the waitlist form may be selected to pilot ChatGPT Professional, but that the program is in the experimental stages and won’t be made widely available “at this time.”

. . . .

Despite controversy and several bans, ChatGPT has proven to be a publicity win for OpenAI, attracting major media attention and spawning countless memes on social media. Some investors are implementing ChatGPT in their workflows. Ryan Reynolds enlisted ChatGPT to write an ad for Mint Mobile, the mobile carrier he part-owns. And Microsoft will reportedly incorporate the AI behind ChatGPT into its Office suite and Bing.

ChatGPT had over a million users as of early December — an enviable user base by any measure. But it’s a pricey service to run. According to OpenAI co-founder and CEO Sam Altman, ChatGPT’s operating expenses are “eye-watering,” amounting to a few cents per chat in total compute costs.

OpenAI is under pressure to turn a profit on products like ChatGPT ahead of a rumored $10 billion investment from Microsoft. OpenAI expects to make $200 million in 2023, a pittance compared to the more than $1 billion that’s been invested in the startup so far.

Semafor reported this week that Microsoft is looking to net a 49% stake in OpenAI, valuing the company at around $29 billion.

Link to the rest at TechCrunch

In a former life, PG worked in the tech sector without his attorney hat on (although it was always in his briefcase). In the OP, he senses a mad scramble going on inside OpenAI to get profitable in preparation for an “everybody gets rich” public offering of its stock.

If OpenAI pulls this off, it will give a big boost to AI businesses in general by demonstrating that people will pay money for an AI-based service.

PG isn’t turning TPV into an AI blog, but he does predict a giant impact on the writing biz in general (freelance copywriters, creators of catalogues, newspaper reporters and editors, magazines, some types of indie publishers and authors, porn creators (unfortunately), etc.).

Will AI Make Creative Workers Redundant?

From The Wall Street Journal:

ChatGPT has some wondering if artificial intelligence will make human creativity obsolete. Released in November by OpenAI, the chatbot can quickly write readable prose in response to natural-language prompts better than most people can. When one of my colleagues asked ChatGPT for a 250-word summary of Umberto Eco’s philosophy of translation, it produced a text that would put many educated adults to shame—and it did so within seconds. Reactions to this new AI have ranged from panic to wonder. It is potentially competition for anyone who writes for a living, including journalists and lawyers. Even visual artists are worried, given the dozen or so AI art generators that can already create virtually any image.

To me, the hubbub feels like déjà vu. As an academic translator, I witnessed a similar debate emerge surrounding the introduction in 2017 of DeepL, a ground-breaking form of neural machine translation. At the time, most people took one of two views: either the new technology would ultimately replace human translators or it would be insufficient and barely affect the field. It ended up being something in the middle.

Five years after the introduction of DeepL, most human translators no longer actually translate, but neither have they been entirely replaced by machines. Instead, they use the technology to make translations easier and faster. The software generates a base translation, then the human translator “post-edits,” fixing errors and making the text sound natural. But the feedback the translator provides also becomes part of the recursive loop in the AI’s continual self-improvement. The technology is poised to take over the translation process completely.

I could see image- and text-generating AIs having a similar effect. Just as translators now post-edit instead of translate, it seems likely that many creative workers will “post-create” instead of create. A machine will come up with an initial sketch of an idea, and then the artist or writer will tinker with it. Some may have too much pride to rely on a machine, but it will be hard to resist the advantage the technology offers. For translators and artists alike, AI reduces the cognitive load of creating. Imagine no longer straining to come up with a first draft. Work would flow much more easily.

AI creativity and human creativity already seem to be converging in music. Though artists have sampled tracks for decades, they’re now repurposing older tunes with machine-like regularity. Some of the biggest hits of 2022 were based on melodic lines from the 1980s. For music fans, the question may eventually be whether human beings or AI is better at such recombination. On a recent podcast, Smashing Pumpkins founder Billy Corgan noted his pessimism: “AI systems will completely dominate music. The idea of an intuitive artist beating an AI system is going to be very, very difficult.”

Choosing to use AI raises some uncomfortable questions. Are translators really translators anymore? If an artist takes a first sketch from a computer, is he still genuinely an artist? The casting about for initial words or brush strokes, often the most difficult part of drafting, seems to be the heart of human creativity. If that is given over to AI, the process seems more like an assembly-line production with human writers or artists serving as mere inspectors—checking the end product and then giving a stamp of approval.

Link to the rest at The Wall Street Journal

A new Chatbot is a ‘code red’ for Google’s search business

From The Seattle Times:

Over the past three decades, a handful of products like Netscape’s web browser, Google’s search engine and Apple’s iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.

Last month, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter. It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.

Although ChatGPT still has plenty of room for improvement, its release led Google’s management to declare a “code red.” For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread — the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world’s primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google’s future.

ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.

Google has spent several years working on chatbots and, like other big tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI’s chatbot was developed by researchers at Google.

Called LaMDA, or Language Model for Dialogue Applications, Google’s chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. This was not true, but the technology showed how much chatbot technology had improved in recent months.

Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company’s revenue last year.

“No company is invincible; all are vulnerable,” said Margaret O’Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. “For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different.”

Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color. They can generate toxic language, including hate speech.

All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.

Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company’s lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.

“Google has a business model issue,” said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara, a startup that is building similar technology. “If Google gives you the perfect answer to each query, you won’t click on any ads.”

Sundar Pichai, Google’s CEO, has been involved in a series of meetings to define Google’s AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times. Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI’s DALL-E technology, which has been used by more than 3 million people.

Link to the rest at The Seattle Times and thanks to R. and others for the tip.

Our Current Thinking on the Use of AI-Generated Image Software and AI Art

From Kickstarter:

I want to share some of our thoughts on Artificial Intelligence (AI) generated images and AI art as it develops, because many creators on Kickstarter are understandably concerned about its impact on the creative community.

At Kickstarter, we often have projects that are innovative and push the boundaries of what’s possible. And that means we’re sometimes navigating some really tricky and undefined areas.

Over the last several days, we’ve engaged our Community Advisory Council and we’ve read your feedback to us via our team and social media. And one thing is clear: Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

As we look at what’s happening in the creative ecosystem and on our platform, here are some of the things we’re considering when it comes to what place AI image generation software and AI-generated art should have on Kickstarter, if any:

  • Is a project copying or mimicking an artist’s work? We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.
  • Does a project exploit a particular community or put anyone at risk of harm? We have to consider the intention behind projects, sometimes beyond their purpose as stated on our platform. Our rules prohibit projects that promote discrimination, bigotry, or intolerance towards marginalized groups, and we often make decisions to protect the health and integrity of Kickstarter.

Link to the rest at Kickstarter

For visitors to TPV who are not familiar with Kickstarter, here’s a brief overview from the Kickstarter website:

What is Kickstarter?

Kickstarter is a funding platform for creative projects. Everything from film, games, and music to art, design, and technology. Kickstarter is full of ambitious, innovative, and imaginative projects that are brought to life through the direct support of others.

How does it work?

Every project creator sets their project’s funding goal and deadline. If people like the project, they can pledge money to make it happen. If the project succeeds in reaching its funding goal, all backers’ credit cards are charged when time expires. Funding on Kickstarter is all-or-nothing. If the project falls short of its funding goal, no one is charged.

If a project is successfully funded, Kickstarter applies a 5% fee to the funds collected.
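
A tiny worked example of that all-or-nothing math, for the numerically inclined; only the 5% platform fee mentioned above is modeled, and payment-processing fees are ignored.

# Tiny worked example of the all-or-nothing funding math described above.
# Only the 5% platform fee is modeled; payment-processing fees are ignored.

def settle(goal: float, pledged: float, fee_rate: float = 0.05):
    if pledged < goal:
        return {"funded": False, "charged": 0.0, "fee": 0.0, "to_creator": 0.0}
    fee = pledged * fee_rate
    return {"funded": True, "charged": pledged, "fee": fee,
            "to_creator": pledged - fee}

print(settle(goal=10_000, pledged=12_500))
# {'funded': True, 'charged': 12500, 'fee': 625.0, 'to_creator': 11875.0}
print(settle(goal=10_000, pledged=8_000))
# funded is False and nothing is charged when the goal is missed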

Here’s a link to a successful Kickstarter campaign page for a children’s adventure book.

Back to the Kickstarter comment about AI Images and Art.

Kickstarter must, and will always be, on the side of creative work and the humans behind that work. We’re here to help creative work thrive.

First of all, the creation of AI software and systems is quite a startling and original creative process in and of itself. And there are actual humans using their wetware who have created all the AI art programs with which PG is familiar. (He has no doubt that at some future time, someone will create an AI mother ship generator program devoted to creating lots of offspring AI programs with little or no additional human input for a variety of purposes.)

Under the current AI programming practices of which PG is aware, part of the development process is to feed a large volume of copies of creative work into the program to seed it. If it’s an AI art generation program, copies of a whole bunch of images are used as seed material. An AI text generation program ingests a whole lot of human-generated text as seed material.

For those who are thinking about copyright infringement on a massive scale, PG doesn’t think that any AI program will create a copy of any of the original works used to seed it. In this sense, it resembles a human author or painter who studies a great many literary or artistic works during the process of learning how to paint or write fiction or non-fiction.

One of the more common accounts of the first step of the creation of a new and original creative work goes something like, “I was wandering through the Metropolitan Museum of Art one afternoon and stopped in front of the Picasso portrait of Gertrude Stein . . . .” or, “While reading The Grapes of Wrath, I wondered what a 21st Century version of that book would look like.”

For a further example, F. Scott Fitzgerald said he was inspired to write The Great Gatsby by the grand parties he attended on prosperous Long Island, where he got a front-row view of the elite, moneyed class of the 1920s, a culture he longed to join but never could. Some of those familiar with that part of Fitzgerald’s life claim to know exactly which people served as models for some of the characters in his book.

So, back to the Kickstarter AI statement of concern that, “We must consider not only if a work has a straightforward copyright claim, but also evaluate situations where it’s not so clear — where images that are owned or created by others might not be on a Kickstarter project page, but are in the training data that makes the AI software used in the project, without the knowledge, attribution, or consent of creators.”

What if the creators of an AI writing program included the text of Fairy Tale by Stephen King along with the text of 20,000 other novels into the seed for a novel-writing AI?

Based on PG’s brief experience with and understanding of how current AI writing programs work, he suspects that, regardless of what writing prompt anyone provided, the novel-writing AI program would not produce the text of Fairy Tale or anything close enough to support a copyright suit by King or his publishers.

As for Kickstarter’s mention of attribution of the authors of the seed material for any given AI text generation program, the list would be so long that no one would read it. As for Kickstarter’s concern about requiring the consent of each author whose work went into the AI’s maw during its creation, PG doesn’t think consent is necessary: the AI isn’t reproducing the authors’ work, the use of the text would be fair use under current copyright law, and there would be no damages for copyright infringement because the AI program doesn’t replicate the original material.

Can AI Write Authentic Poetry?

From The MIT Press Reader:

“Time — a few centuries here or there — means very little in the world of poems.” There is something reassuring about Mary Oliver’s words. Especially in an era of rapid change, there is comfort to be had in those things that move slowly. But oceans rise and mountains fall; nothing stays the same. Not even the way poetry is made.

The disappearance of the author in 20th-century literary criticism can perhaps be traced back to the surrealist movement and its game of “exquisite corpse.” The surrealists believed that a poem can emerge not only from the unconscious mind of an individual, but from the collective mind of many individuals working in concert — even, or perhaps especially, if each individual has minimal knowledge of what the others are doing. Soon the idea of making art from recycled objects emerged. In the realm of literature, this approach took the form of found poetry.

To create a found poem, one or more people collect bits of text encountered anywhere at all, and with a little editing stitch the pieces together to form a collagelike poem. Examining this generative activity, it may be difficult to identify who if anyone is the “poet” who writes the found poem (or for that matter, to be confident that “writing” is an apt name for the process). Still, even if no one’s consciousness guided the initial creation of the constituent phrases, one or more humans will have exercised their sensitivity and discrimination in selecting the bits to include, and the way these pieces are ordered and linked to form a new whole. The author (or authors) at a minimum must do the work of a careful reader. Can the human be pushed still further into the background, or even out of the picture?

The most radical technological advance of the 20th century might seem to have nothing at all to do with the writing of poetry. If we make a list of the great leaps that led to modern civilization — control of fire, agriculture, the wheel, electricity, and perhaps a few more — the most recent addition is a machine that uses electrons to do computation. The first functioning digital computers were constructed midcentury by Alan Turing and a few others. Over the next not-quite-a-century-yet, computers became enormously faster and more powerful, began to process information in parallel rather than just sequentially, and were linked together into a vast worldwide network known as the internet. Along the way, these devices enabled the creation of artificial versions of a trait previously found only in biological life forms, most notably humans — intelligence.


Artificial intelligence (AI) is in the process of changing the world and its societies in ways no one can fully predict. On the hazier side of the present horizon, there may come a tipping point at which AI surpasses the general intelligence of humans. (In various specific domains, notably mathematical calculation, the intersection point was passed decades ago.) Many people anticipate this technological moment, dubbed the Singularity, as a kind of Second Coming — though whether of a savior or of Yeats’s rough beast is less clear. Perhaps by constructing an artificial human, computer scientists will finally realize Mary Shelley’s vision.

Of all the actual and potential consequences of AI, surely the least significant is that AI programs are beginning to write poetry. But that effort happens to be the AI application most relevant to our theme. And in a certain sense, poetry may serve as a kind of canary in the coal mine — an early indicator of the extent to which AI promises (threatens?) to challenge humans as artistic creators. If AI can be a poet, what other previously human-only roles will it slip into?

So, what is the current state of AI and computer-generated poetry? This is a less central question than might be supposed. Especially in this time of rapid AI advances, the current state of the artificial poetic arts is merely a transitory benchmark. We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity. Computer programs can now learn from enormous sets of data using methods called deep learning. What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance. The question has arisen (semiseriously) whether computer programs ought to be listed as coauthors of scientific papers reporting discoveries to which they contributed. There is no doubt that some forms of creativity are within the reach, and indeed the grasp, of computer programs.

But what about poetry? To evaluate computer-generated poetry, let’s pause to remind ourselves what makes a text work as a poem. A successful poem combines compelling content (what Coleridge called “good sense”) with aesthetically pleasing wordplay (metaphor and other varieties of symbolism), coupled with the various types of sound similarities and constraints of form.

In broad strokes, an automated approach to constructing poems can operate using a generate-then-select method. First, lots of candidate texts are produced, out of which some (a very few, or just one) are then selected as winners worth keeping. Roughly, computer programs can be very prolific in generating, but (to date) have proved less capable at selecting. At the risk of caricature, the computer poet can be likened to the proverbial monkey at the typewriter, pounding out reams of garbage within which the occasional Shakespearean sonnet might be found — with the key difference that the computer operates far more rapidly than any monkey (or human) could. To be fair, the program’s search can be made much less random than the monkey’s typing. Current computer poetry programs usually bring in one or more humans to help in selecting poetic gems embedded in vast quantities of computer-generated ore. An important question, of course, is whether an authentic creator requires some ability to evaluate their own creations. Perhaps, as Oscar Wilde argued, there is a sense in which an artist must act as their own critic — or not be a true artist at all.
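
The generate-then-select loop is easy to caricature in code. The toy Python sketch below generates thousands of random lines and keeps the few that score best under a crude critic; real systems generate with large language models and, as the OP notes, still usually lean on a human for the final selection.

# Toy illustration of the generate-then-select method: produce many random
# candidate lines, then keep the few that score best under a crude critic.
import random

WORDS = ["moon", "river", "silence", "glass", "winter", "ember",
         "the", "a", "of", "under", "through", "sleeps", "burns", "falls"]

def generate_line(length=6):
    return " ".join(random.choice(WORDS) for _ in range(length))

def score(line: str) -> float:
    """Crude critic: reward variety and lines that do not end on a filler word."""
    words = line.split()
    variety = len(set(words)) / len(words)
    ends_well = 1.0 if words[-1] not in {"the", "a", "of", "under", "through"} else 0.0
    return variety + ends_well

candidates = [generate_line() for _ in range(10_000)]   # prolific generation
best = sorted(candidates, key=score, reverse=True)[:3]  # stingy selection
for line in best:
    print(line)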

One use of computers is simply to provide a platform for human generation and selection. The internet makes it easy for large groups of people to collaborate on projects. The kind of collective poetry writing encouraged by the surrealists has evolved into crowdsourcing websites that allow anyone to edit an emerging collective poem. Each contributor gets to play a bit part as author/editor. No doubt some people enjoy participating in the creation of poems by crowdsourcing. It’s less clear whether Sylvia Plath would have associated this activity with “the most ingrown and intense of the creative arts.”

Link to the rest at The MIT Press Reader

PG Postscript – Trigger Warning – Poetry Snob Below

PG harbors substantial doubts that more than 0.1% of all computer engineers know anything about decent poetry. Hint: It’s way more than meter and rhyme.

Because I could not stop for Death, The Waste Land, and The Road Not Taken are much, much more than anything a typical computer engineer (at least in PG’s experience) or an AI is likely to create. And, yes, PG has known and worked with a lot of computer engineers, including some brilliant ones, and appreciates what a talented computer engineer is capable of creating in her/his world.

However, brilliance as a computer engineer does not necessarily transfer to brilliance in understanding poetry and how to create good poetry.

Hemingway in Africa

PG just conducted an experiment with the AI writing tool, ChatGPT.

He ran a Google Search for Ernest Hemingway in Africa and found the following:

In the summer of 1933, Hemingway, Pauline, and a Key West friend traveled to Africa for a three-month safari. Inspired by the legendary hunts of Theodore Roosevelt, Hemingway borrowed funds from Pauline’s uncle, and the couple set off for what would become a productive and iconic journey.

The three months spent on safari offered Hemingway ample time to hunt and enjoy the outdoors. Despite succumbing to a severe illness on the trip that necessitated a hospital recovery, Hemingway’s trophies included a lion and other large game that roamed the African grasslands. Traveling through Kenya and Tanzania, Hemingway hunted the plains of the Serengeti, tracked animals through the bush, and recorded his experiences for later use in his work. His first African safari provided Hemingway with material and inspiration for the novel Green Hills of Africa, and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber.

Hemingway returned to the Dark Continent twenty years later with his fourth wife. They traveled through the Belgian Congo, Rwanda, and Kenya. Once again, Hemingway exercised his skill as a hunter, and brought home many big game trophies. Hemingway’s persona as a fearless adventurer only increased when he suffered through two plane crashes on his second trip to Africa, prompting many news outlets to prematurely run his obituary.

PG then ran this text through ChatGPT. The following is the first result he received.

Ernest Hemingway was an American novelist, short story writer, and journalist. He was known for his adventurous spirit and his love of hunting, and his experiences on safari in Africa played a significant role in his writing. Hemingway’s first African safari, which took place in 1933, inspired him to write several of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway returned to Africa in the 1950s with his fourth wife, and he continued to hunt and write about his experiences there. Despite suffering through two plane crashes on this second trip, Hemingway’s reputation as a fearless adventurer only grew.

Then, PG regenerated exactly the same descriptive text through ChatGPT a second time and received a longer result:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a significant event in his life and provided him with material for some of his most famous works, including Green Hills of Africa, The Snows of Kilimanjaro, and The Short Happy Life of Francis Macomber. Hemingway was an avid hunter, and his time spent on safari in Kenya and Tanzania allowed him to pursue this passion while also experiencing the beauty and culture of the continent. Despite falling ill during the trip, Hemingway was able to bring home many big game trophies, including a lion.

Hemingway returned to Africa twenty years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, Hemingway once again exercised his hunting skills and brought back a number of big game trophies. He also experienced two plane crashes, which only served to enhance his reputation as a fearless adventurer. Despite these close calls, Hemingway continued to travel and write about his adventures, leaving a lasting legacy as a pioneering figure in literature and journalism.

Then, PG regenerated the original text a third time and received the following:

Ernest Hemingway was an American writer and journalist known for his adventurous spirit and love of the outdoors. His first trip to Africa in 1933 was a three-month safari that took him through Kenya and Tanzania, where he hunted on the Serengeti and tracked animals in the bush. The trip inspired some of his most famous works, including the novel Green Hills of Africa and the short stories The Snows of Kilimanjaro and The Short Happy Life of Francis Macomber. Hemingway returned to Africa 20 years later with his fourth wife, traveling through the Belgian Congo, Rwanda, and Kenya. On this trip, he experienced two plane crashes, which only added to his reputation as a fearless adventurer. Despite these incidents, Hemingway survived and continued to write and hunt throughout his life.

Routine Writing Is About to Be Free

From Virginia’s Newsletter:

I know two kinds of people: those who have been obsessively playing with and discussing ChatGPT and those who have at best a hazy notion that it exists. I’m in the obsessive group, as you already know if you read the Tennyson experiment I posted earlier.

For those in the hazy group, ChatGPT is a system that uses massive amounts of text to create a predictive model that enables it to mimic human writing. The shorthand is that it’s an AI chatbot, or autofill on steroids. You type in a request and it spits out an answer. This CNET column provides a solid backgrounder:

For example, you can ask it encyclopedia questions like, “Explain Newton’s laws of motion.” You can tell it, “Write me a poem,” and when it does, say, “Now make it more exciting.” You can ask it to write a computer program that’ll show you all the different ways you can arrange the letters of a word.

Here’s the catch: ChatGPT doesn’t exactly know anything. It’s an AI that’s trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.
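
The “autofill on steroids” framing can be made concrete with a toy example. The little Python sketch below learns which word tends to follow which in a scrap of text and then autocompletes by repeatedly picking a plausible next word; ChatGPT is a vastly larger neural network trained on far more text, but the basic predict-the-next-word idea is the same.

# Toy illustration of the "predictive model" idea: learn which word tends to
# follow which, then autocomplete by repeatedly picking a likely next word.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the cat chased the mouse . "
          "the mouse ran under the mat .").split()

following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)   # record every observed next word

def autocomplete(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)  # sample a plausible continuation
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))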

Even in its current, relatively primitive form ChatGPT portends both huge productivity increases and major disruptions in any enterprise in which writing matters. Instead of writing boilerplate corporate memos, managers will soon assign them to bots. The run-of-the-mill college grads who get paid to flood my mailbox with press releases and promotional emails should start thinking about careers as nail techs or phlebotomists—something in the physical world. Insight and beauty are still rare, but serviceable prose isn’t.

With the right prompts, ChatGPT can already mimic routine political journalism and quotidian high school and college essays. “What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor,” writes Daniel Herman, a humanities teacher at Maybeck High School, a small independent school in Berkeley, in The Atlantic.

I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:

In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.

The application essay is vapid but convincing. The variety of word choice (“blossomed,” “bittersweet”) and sentence structure marks it above average. “Had the opportunity to” is a stylistic tell: Here’s a privileged person who’s been taught to sound grateful rather than to write succinctly. “I was also able to…” is the same. I’m sure admissions officers see thousands of such essays every year. If their value goes to zero thanks to automation, this reader, writer, and teacher won’t object.

While crashing the value of mediocrity, ChatGPT could increase the returns to excellence. (“Average is over,” as Tyler Cowen put it.) Think about what happened to graphic design. Many people used to make a living doing routine tasks, from laying out pages to selecting typefaces, that are now easily handled by software. Thanks to the graphic intelligence embedded in everyday tools, the standards for routine graphics, from websites and PowerPoint presentations to restaurant menus and wedding invitations, have increased.

. . . .

As I write, there are 28 student papers awaiting my grading attention. I doubt any used ChatGPT, partly because mentioning it in class produced mostly blank stares. (The most tuned-in student, however, said he’s started using it in place of Google.) Already, we’re getting confirmed reports of cheating on exams given on Canvas, the web-based system used by many colleges for assignments and grading. By next term, every class will have to take account of ChatGPT, either explicitly incorporating it as a starting point or going back to handwritten tests and essays.

The kind of mediocre writing that earns grade-inflated Bs is now replaceable by a bot. Maybe if those B-essay students started with AI-generated prose it would be easier to teach them to do better: to refine the ideas, dig down more on the facts, improve the writing style. Can ChatGPT be a time-saving tool, like a calculator or text search, rather than a threat?

Link to the rest at Virginia’s Newsletter

The End of High-School English

From The Atlantic:

Teenagers have always found ways around doing the hard work of actual learning. CliffsNotes dates back to the 1950s, “No Fear Shakespeare” puts the playwright into modern English, YouTube offers literary analysis and historical explication from numerous amateurs and professionals, and so on. For as long as those shortcuts have existed, however, one big part of education has remained inescapable: writing. Barring outright plagiarism, students have always arrived at that moment when they’re on their own with a blank page, staring down a blinking cursor, the essay waiting to be written.

Now that might be about to change. The arrival of OpenAI’s ChatGPT, a program that generates sophisticated text in response to any prompt you can imagine, may signal the end of writing assignments altogether—and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill.

If you’re looking for historical analogues, this would be like the printing press, the steam drill, and the light bulb having a baby, and that baby having access to the entire corpus of human knowledge and understanding. My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.

I teach a variety of humanities classes (literature, philosophy, religion, history) at a small independent high school in the San Francisco Bay Area. My classes tend to have about 15 students, their ages ranging from 16 to 18. This semester I am lucky enough to be teaching writers like James Baldwin, Gloria Anzaldúa, Herman Melville, Mohsin Hamid, Virginia Held. I recognize that it’s a privilege to have relatively small classes that can explore material like this at all. But at the end of the day, kids are always kids. I’m sure you will be absolutely shocked to hear that not all teenagers are, in fact, so interested in having their mind lit on fire by Anzaldúa’s radical ideas about transcending binaries, or Ishmael’s metaphysics in Moby-Dick.

To those students, I have always said: You may not be interested in poetry or civics, but no matter what you end up doing with your life, a basic competence in writing is an absolutely essential skill—whether it’s for college admissions, writing a cover letter when applying for a job, or just writing an email to your boss.

I’ve also long held, for those who are interested in writing, that you need to learn the basic rules of good writing before you can start breaking them—that, like Picasso, you have to learn how to reliably fulfill an audience’s expectations before you get to start putting eyeballs in people’s ears and things.

I don’t know if either of those things is true anymore. It’s no longer obvious to me that my teenagers actually will need to develop this basic skill, or if the logic still holds that the fundamentals are necessary for experimentation.

Let me be candid (with apologies to all of my current and former students): What GPT can produce right now is better than the large majority of writing seen by your average teacher or professor. Over the past few days, I’ve given it a number of different prompts. And even if the bot’s results don’t exactly give you goosebumps, they do a more-than-adequate job of fulfilling a task.

I mean, look at this: I asked the program to write me a playful, sophisticated, emotional 600-word college-admissions essay about how my experience volunteering at my local SPCA had prepared me for the academic rigor of Stanford. Here’s an excerpt from its response:

In addition to cleaning, I also had the opportunity to interact with the animals. I was amazed at the transformation I saw in some of the pets who had been neglected or abused. With patience and care, they blossomed into playful and affectionate companions who were eager to give and receive love. I was also able to witness firsthand the process of selecting the right pet for the right family. Although it was bittersweet to see some animals leave the shelter, I knew that they were going to a loving home, and that was the best thing for them.

It also managed to compose a convincing 400-word “friendly” cover letter for an application to be a manager at Starbucks. But most jaw-dropping of all, on a personal level: It made quick work out of an assignment I’ve always considered absolutely “unhackable.” In January, my junior English students will begin writing an independent research paper, 12 to 18 pages, on two great literary works of their own choosing—a tradition at our school. Their goal is to place the texts in conversation with each other and find a thread that connects them. Some students will struggle to find any way to bring them together. We spend two months on the paper, putting it together piece by piece.

I’ve fed GPT a handful of pairs that students have worked with in recent years: Beloved and Hamlet, The Handmaid’s Tale and The Parable of the Sower, Homer’s The Odyssey and Dante’s Inferno. GPT brought them together instantly, effortlessly, uncannily: memory, guilt, revenge, justice, the individual versus the collective, freedom of choice, societal oppression. The technology doesn’t go much beyond the surface, nor does it successfully integrate quotations from the original texts, but the ideas presented were on-target—more than enough to get any student rolling without much legwork.

It goes further. Last night, I received an essay draft from a student. I passed it along to OpenAI’s bots. “Can you fix this essay up and make it better?” Turns out, it could. It kept the student’s words intact but employed them more gracefully; it removed the clutter so the ideas were able to shine through. It was like magic.

Link to the rest at The Atlantic

Describe Donald Trump in the Style of A Visit from St. Nicholas

‘Twas the night before Christmas, and all through the White House,
Not a creature was stirring, not even a mouse.
The stockings were hung by the chimney with care,
In hopes that St. Nicholas soon would be there.

But in the Oval Office, a man sat alone,
With a scowl on his face and a phone in his hand.
He was Donald Trump, the President of the United States,
And he was not happy, no, not in the least.

For he had just learned of new scandal and scandal
That threatened to undermine his administration and candor.
He paced back and forth, his mind in a whirl,
As he tried to come up with a plan to save his political world.

But just as he was about to give up hope,
He heard a noise outside and went to the window to scope.
And what to his wondering eyes did appear,
But a miniature sleigh and eight tiny reindeer.

And there was St. Nicholas, with his beard white as snow,
And a twinkle in his eye that seemed to glow.
He spoke not a word, but went straight to his work,
And delivered a package, then turned with a smirk.

And Trump, in amazement, watched as St. Nicholas rose,
And flew off into the night, with a flick of his nose.
He knew in that moment, with a sudden clarity,
That he must change his ways and work for the country.

So he put down his phone and sat down at his desk,
And began to work on a plan that he felt was best.
And as the night wore on, he found new resolve,
To make America great and to solve
The problems that faced the nation,
And to put the people’s needs above his own ambition.

And so, as Christmas morning dawned bright and clear,
Trump knew that he had a new mission this year.
To lead with integrity and to always do right,
And to make America great, with all of his might.

https://chat.openai.com/chat

Describe Vladimir Putin in the Style of Dr. Seuss

Vladimir Putin is a man with a face so stern
He rules Russia with a power that burns
With a scowl on his brow and a cold, steely gaze
He keeps his country in line in a variety of ways

With his tough talk and his strong, decisive moves
He’s not one to back down or to play by the rules
He’s a leader with a mission and a vision in mind
And he’ll stop at nothing to achieve his design

Some call him a tyrant, while others see a hero
But one thing is certain, he’s a force to be reckoned with, oh zero
Love him or hate him, there’s no denying his power
Vladimir Putin is a leader to watch in the coming hour.

https://chat.openai.com/chat

Describe Ukraine in the Style of Mickey Spillane

The Ukraine is a tough, rugged country located in Eastern Europe. It’s a place where the streets are tough, and the people are even tougher. The people of the Ukraine are known for their hard work and determination, and they’ve had to fight for everything they have.

The Ukraine has a long and storied history, filled with violence and conflict. For centuries, the country has been caught in the middle of power struggles between Europe and Asia, and it has suffered through its fair share of wars and revolutions. But through it all, the Ukrainian people have persevered, and they remain fiercely independent and proud of their heritage.

Despite its many challenges, the Ukraine is a land of opportunity. The country is rich in natural resources, and its people are known for their resourcefulness and ingenuity. The Ukraine may not be an easy place to live, but for those who are willing to roll up their sleeves and get to work, it can be a land of great rewards.

https://chat.openai.com/chat

Children’s book created with AI sparks controversy and accusations of plagiarism

From Windows Central:

What you need to know

  • An artist created a children’s book using artificial intelligence tools.
  • The book has drawn criticism and the author has been accused of plagiarism, since AI created the content of the book.
  • AI tools have caused controversy in other sectors as well, including when a digital art piece that was created with AI won a competition.

The tool ChatGPT is a hot button topic these days. The tool, which uses artificial intelligence to respond to prompts, can be used to create code, respond to questions, or create blocks of text. It’s one of many AI tools that’s taking headlines by storm, due in part to controversy surrounding what people can do with those tools.

Within one week of ChatGPT launching in preview, it was clear that the tool could be used for a range of projects that are generally considered positive, such as debugging code. It was also demonstrated that the tool could create malware or be used in other malicious ways. Now, ChatGPT and other AI resources have drawn criticism for being used to create “original” work.

Ammaar Reshi, a design manager at Brex, created a children’s book using ChatGPT, MidJourney, and other AI tools. Some have credited the book for its unique origin while others have accused Reshi of plagiarism.

. . . .

One of the strengths of ChatGPT is that it’s conversational. Reshi used this fact to refine his story. He then put his ideas through MidJourney, an AI tool for generating art. After several hours of work, Reshi took the AI-generated text and artwork to create a children’s book titled “Alice and Sparkle,” which is available through Amazon.

Link to the rest at Windows Central and thanks to F. for the tip.

PG notes that when he checked the Amazon listing, the book was tagged as the #1 New Release in Children’s Computer & Technology Books. It also had eleven ratings and two stars.

PG will repeat what he has said previously – this sort of thing is going to happen over and over with the written word, just like it has with images.

He will also state that plagiarism, while not a nice thing to do, is not illegal. Copyright infringement is illegal and you can be sued for damages if you engage in the practice.

Here’s a simple definition of copyright infringement from the United States Copyright office:

As a general matter, copyright infringement occurs when a copyrighted work is reproduced, distributed, performed, publicly displayed, or made into a derivative work without the permission of the copyright owner.

PG notes that academic dishonesty, absent copyright infringement, does not violate any law with which he is familiar.

That said, plagiarism is usually regarded as bad form and, in an era dominated by Google and other large search engines, academic suicide. Grammarly offers a plagiarism checker, as do quite a number of other online services, many of which are free. PG would be very surprised if very many college professors and high school teachers did not use a plagiarism checker on a regular basis.
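
The core of most plagiarism checkers is a simple overlap measure. Here is a minimal Python sketch of that idea; commercial tools compare submissions against enormous document indexes and use much smarter matching, so this is only the bare comparison step.

# Minimal sketch of the overlap idea behind plagiarism checkers: slice both
# texts into word n-grams and measure how much of the submission also appears
# in the source document.

def ngrams(text: str, n: int = 5):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission: str, source: str, n: int = 5) -> float:
    sub, src = ngrams(submission, n), ngrams(source, n)
    if not sub:
        return 0.0
    return len(sub & src) / len(sub)

source = "copyright infringement occurs when a copyrighted work is reproduced without permission"
submission = "infringement occurs when a copyrighted work is reproduced without the owner's permission"
print(f"{overlap_ratio(submission, source):.0%} of the submission's 5-grams appear in the source")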

Here’s a not-so-simple definition of plagiarism from the Dean of Students at Bowdoin College:

There are different types of plagiarism and all are serious violations of academic honesty. We have defined the most common types below and have provided links to examples.

Direct Plagiarism 

Direct plagiarism is the word-for-word transcription of a section of someone else’s work, without attribution and without quotation marks. The deliberate plagiarism of someone else’s work is unethical, academically dishonest, and grounds for disciplinary actions, including expulsion. [See examples.]

Self Plagiarism

Self-plagiarism occurs when a student submits his or her own previous work, or mixes parts of previous works, without permission from all professors involved. For example, it would be unacceptable to incorporate part of a term paper you wrote in high school into a paper assigned in a college course. Self-plagiarism also applies to submitting the same piece of work for assignments in different classes without previous permission from both professors.

Mosaic Plagiarism

Mosaic Plagiarism occurs when a student borrows phrases from a source without using quotation marks, or finds synonyms for the author’s language while keeping to the same general structure and meaning of the original. Sometimes called “patch writing,” this kind of paraphrasing, whether intentional or not, is academically dishonest and punishable – even if you footnote your source! [See examples.]

Accidental Plagiarism

Accidental plagiarism occurs when a person neglects to cite their sources, or misquotes their sources, or unintentionally paraphrases a source by using similar words, groups of words, and/or sentence structure without attribution. (See example for mosaic plagiarism.) Students must learn how to cite their sources and to take careful and accurate notes when doing research. (See the Note-Taking section on the Avoiding Plagiarism page.) Lack of intent does not absolve the student of responsibility for plagiarism. Cases of accidental plagiarism are taken as seriously as any other plagiarism and are subject to the same range of consequences as other types of plagiarism.

Note to those unfamiliar with Bowdoin College: Bowdoin is generally regarded as one of the “Little Ivies,” small selective and reputable academic institutions that are located in the same general geographical area as the Ivy League schools – Harvard, Princeton, etc.

As with the Big Ivies, the Little Ivies also have terrible football teams.

Feel free to check Alice and Sparkle for any sort of thing you desire. There’s a problem with the preview link, but the book is available online at no charge via Kindle Unlimited.

Why I’m Done Using And Boosting AI Art

From Chuck Wendig: Terribleminds:

Let’s just put it out there and up front — earlier, I was glad to play around with AI art, but that has ended. I have no intention at present of mucking around with AI art, signal-boosting it, or supporting it. I had a subscription to Midjourney, and I canceled it.

Now, to rewind a little —

I think AI art is pretty cool.

I know, I know — I just said, but I won’t support it, and that’s true.

But I think it’s neat, in a general sense. It’s like, we can make COMPUTER ROBOT GHOSTS do all kinds of cool things for us — they can tell us the weather, show us how to get to the mall, I can yell at my car to turn on the heat and it’ll totally do it, Gmail can already predict the response I’m going to make and start to prep it for me. The robot ghosts are cool. So, the ability to say, HEY ROBOT GHOST, SHOW ME WEREWOLF PIKACHU USING A NEW POKEMON MOVE CALLED “CORUSCATING ELECTRIC ANUS” ON A KAIJU VERSION OF JERRY SEINFELD and then somehow it sorta does it, well, I don’t hate that.

Now, admittedly, when I started mucking about with AI art in the long-times-ago epoch of, mmm, six months ago, what it produced was often fiddly and hilarious and straight-up ****** weird. It would still have eyeballs in places where there shouldn’t be. Some guy’s face might look like a smear of paint, and his hand would have sixteen fingers. You might squint and see Sophia from the Golden Girls mysteriously hiding in the wallpaper. It felt a bit like you watching a robot dream. Like you were privy to the growth of its creative mind.

(It’s a lie, of course. There’s no robot dreaming; that is a romantic, anthropomorphic notion.)

But it didn’t take long for the results to get… good. Real good. Freaky good. You plug in something and what returns is a foursquare array of nearly exactly what you asked for, in a variety of art styles and modes. Which, one might argue, is quite the point of this whole affair, and I suppose it is, though I’ll also note for my mileage it also kinda defeats if not the point, than rather, the delight of having a robot puke up something just super ****** weird instead of precisely what you asked for. We were training the robot well. And it was learning fast.

And now, you see the so-called AI art everywhere, and you also see those who are mad at so-called AI art everywhere. And the latter category is often artists. Not always! But often enough.

As such, I’m going to side with the artists.

Link to the rest at Chuck Wendig: Terribleminds

PG wants writers and artists to live long and prosper and thinks that they will do just that. While he understands how they may worry about AI tools diminishing their income, he doesn’t think AI will replace them.

Picture Limitless Creativity at Your Fingertips

From Wired:

PICTURE LEE UNKRICH, one of Pixar’s most distinguished animators, as a seventh grader. He’s staring at an image of a train locomotive on the screen of his school’s first computer. Wow, he thinks. Some of the magic wears off, however, when Lee learns that the image had not appeared simply by asking for “a picture of a train.” Instead, it had to be painstakingly coded and rendered—by hard-working humans.

Now picture Lee 43 years later, stumbling onto DALL-E, an artificial intelligence that generates original works of art based on human-supplied prompts that can literally be as simple as “a picture of a train.” As he types in words to create image after image, the wow is back. Only this time, it doesn’t go away. “It feels like a miracle,” he says. “When the results appeared, my breath was taken away and tears welled in my eyes. It’s that magical.”

Our machines have crossed a threshold. All our lives, we have been reassured that computers were incapable of being truly creative. Yet, suddenly, millions of people are now using a new breed of AIs to generate stunning, never-before-seen pictures. Most of these users are not, like Lee Unkrich, professional artists, and that’s the point: They do not have to be. Not everyone can write, direct, and edit an Oscar winner like Toy Story 3 or Coco, but everyone can launch an AI image generator and type in an idea. What appears on the screen is astounding in its realism and depth of detail. Thus the universal response: Wow. On four services alone—Midjourney, Stable Diffusion, Artbreeder, and DALL-E—humans working with AIs now cocreate more than 20 million images every day. With a paintbrush in hand, artificial intelligence has become an engine of wow.

Because these surprise-generating AIs have learned their art from billions of pictures made by humans, their output hovers around what we expect pictures to look like. But because they are an alien AI, fundamentally mysterious even to their creators, they restructure the new pictures in a way no human is likely to think of, filling in details most of us wouldn’t have the artistry to imagine, let alone the skills to execute. They can also be instructed to generate more variations of something we like, in whatever style we want—in seconds. This, ultimately, is their most powerful advantage: They can make new things that are relatable and comprehensible but, at the same time, completely unexpected.

So unexpected are these new AI-generated images, in fact, that—in the silent awe immediately following the wow—another thought occurs to just about everyone who has encountered them: Human-made art must now be over. Who can compete with the speed, cheapness, scale, and, yes, wild creativity of these machines? Is art yet another human pursuit we must yield to robots? And the next obvious question: If computers can be creative, what else can they do that we were told they could not?

I have spent the past six months using AIs to create thousands of striking images, often losing a night’s sleep in the unending quest to find just one more beauty hidden in the code. And after interviewing the creators, power users, and other early adopters of these generators, I can make a very clear prediction: Generative AI will alter how we design just about everything. Oh, and not a single human artist will lose their job because of this new technology.

Link to the rest at Wired

This artist is dominating AI-generated art. And he’s not happy about it.

From MIT Technology Review:

Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.

Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

His distinctive style is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month. The tool, along with other popular image-generation AI models, allows anyone to create impressive images based on text prompts.

For example, type in “Wizard with sword and a glowing orb of magic fire fights a fierce dragon Greg Rutkowski,” and the system will produce something that looks not a million miles away from works in Rutkowski’s style.

But these open-source programs are built by scraping images from the internet, often without permission and proper attribution to artists. As a result, they are raising tricky questions about ethics and copyright. And artists like Rutkowski have had enough.

According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, brought up around 2,000 prompts each or less. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.

Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”

Stability.AI, the company that built Stable Diffusion, trained the model on the LAION-5B data set, which was compiled by the German nonprofit LAION. LAION put the data set together and narrowed it down by filtering out watermarked images and those that were not aesthetic, such as images of logos, says Andy Baio, a technologist and writer who downloaded and analyzed some of Stable Diffusion’s data. Baio analyzed 12 million of the 600 million images used to train the model and found that a large chunk of them come from third-party websites such as Pinterest and art shopping sites such as Fine Art America.

Many of Rutkowski’s artworks have been scraped from ArtStation, a website where lots of artists upload their online portfolios. His popularity as an AI prompt stems from a number of reasons.

First, his fantastical and ethereal style looks very cool. He is also prolific, and many of his illustrations are available online in high enough quality, so there are plenty of examples to choose from. An early text-to-image generator called Disco Diffusion offered Rutkowski as an example prompt.

Rutkowski has also added alt text in English when uploading his work online. These descriptions of the images are useful for people with visual impairments who use screen reader software, and they help search engines rank the images as well. This also makes them easy to scrape, and the AI model knows which images are relevant to prompts.
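
As a rough illustration of why alt text makes images easy to pair with descriptions when a page is scraped, here is a minimal sketch in Python. The URL is hypothetical, and the requests and BeautifulSoup libraries are assumed to be installed; it simply collects each image’s source and its alt text from a single page.

```python
# Minimal sketch: collect (image URL, alt text) pairs from one web page.
# The page URL is hypothetical; real scrapers also honor robots.txt and rate limits.
import requests
from bs4 import BeautifulSoup

def image_alt_pairs(page_url: str) -> list[tuple[str, str]]:
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        alt = (img.get("alt") or "").strip()
        if src and alt:  # keep only images with descriptive alt text
            pairs.append((src, alt))
    return pairs

if __name__ == "__main__":
    for src, alt in image_alt_pairs("https://example.com/portfolio"):
        print(alt, "->", src)
```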

. . . .

Other artists besides Rutkowski have been surprised by the apparent popularity of their work in text-to-image generators—and some are now fighting back. Karla Ortiz, an illustrator based in San Francisco who found her work in Stable Diffusion’s data set, has been raising awareness about the issues around AI art and copyright.

Artists say they risk losing income as people start using AI-generated images based on copyrighted material for commercial purposes. But it’s also a lot more personal, Ortiz says, arguing that because art is so closely linked to a person, it could raise data protection and privacy problems.

“There is a coalition growing within artist industries to figure out how to tackle or mitigate this,” says Ortiz. The group is in its early days of mobilization, which could involve pushing for new policies or regulation.

Link to the rest at MIT Technology Review and thanks to R. for the tip in the comments.

PG predicts there will be more than one copyright infringement suit filed against various individuals, institutions and companies providing AI services in which an artist’s copyrighted work was used to seed the AI.

In the United States, such suits will almost certainly be filed in the federal court system since copyright is governed by federal law. Some states have laws that appear to give the state, or those to whom the state has granted permission, the exclusive right to publish and sell copies of state documents, but trying to protect a creative work from republication under anything other than federal copyright law is generally regarded as a fool’s errand.

One thing that judges do when faced with a novel question is to draw from similar situations that have occurred previously.

As one crude example, if an individual uses a computer and a software program created by third parties to make an exact copy of the text of a copyright-protected book, neither the manufacturer of the computer nor the company that created and sold the word processing program will be liable for copyright infringement, because they only provided the tools and the user chose how to apply them.

AI art programs require a prompt to create any image. The OP mentions the use of an artist’s name in an AI prompt as one way of generating an image.

However, that decision is not made by the creators/owners of the AI program, but rather by the user. The creators of the AI program ran a huge number of images by an enormous number of artists through the program’s processor. Is it a violation of copyright law to link an artist’s name to a painting the artist created? PG doesn’t think so.

As a matter of fact, using Mr. Rutkowski’s work without attribution would also be an offense against the creator.

PG doesn’t see the creation of works “inspired by” an artist constituting copyright infringement when they aren’t copies of, and don’t closely resemble, what the artist created. PG doesn’t believe that an artistic style is protected by copyright.

PG’s understanding is that AI art programs deconstruct the original copy of an image into its component parts and assign some sort of marker to those parts, so that a prompt for a large building in the style of the British Museum won’t generate an image dominated by a dragon.

PG just created a prompt, “Windsor Castle sailing on the ocean” and ran it through an AI art generator. Here’s what he got.

Next, PG modified his prompt to read “Windsor Castle sailing on the ocean Greg Rutkowski” and this is what he got:

For one last experiment PG created another prompt with a different artist, “Windsor Castle sailing on the ocean Andy Warhol” and here’s what showed up.

PG is not an art expert, but he doesn’t think any of his AI illustrations will put either Mr. Rutkowski or Mr. Warhol out of business.

How Artificial Intelligence Helped Make an Experimental Pop Album

From Smithsonian Magazine:

When you listen to experimental pop band YACHT’s discography, the 2019 ​​album Chain Tripping fits right in. Pulsing with glitchy synth sounds, infectious bass riffs and the sweet voice of lead singer Claire L. Evans, Chain Tripping is a successful, Grammy-nominated album that sounds like YACHT, through and through. It’s such a success, you might never know it was generated by artificial intelligence.

But it was: Every riff, melody and lyric on Chain Tripping was developed by A.I. systems—even the album’s title.

That strange, tedious process—at times frustrating and at times awe-inspiring—is now the subject of The Computer Accent, a new documentary from directors Sebastian Pardo and Riel Roch-Decter that is “heartening or horrifying depending on your viewpoint,” according to the film’s own synopsis.

. . . .

To make Chain Tripping, the members of YACHT transformed their entire back catalog into MIDI data. (MIDI, which stands for musical instrument digital interface, allows electronic instruments and computers to communicate.) They then fed that data, piece by piece, to machine learning models—primarily Google’s MusicVAE, which helps artists “create palettes for blending and exploring musical scores.” YACHT followed the same process with songs by their musical inspirations and peers, and fed the lyrics of their songs into a lyric-generating model to come up with words.
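
For readers curious what “transforming a back catalog into MIDI data” can look like in practice, here is a minimal sketch using the pretty_midi library. The file path is made up, and this is a generic approach rather than YACHT’s or MusicVAE’s actual pipeline; it just turns a MIDI file into plain numbers a model could be trained on.

```python
# Minimal sketch: turn a MIDI file into (start, end, pitch, velocity) tuples.
# The file path is hypothetical; pretty_midi is one common library for this,
# not necessarily the one YACHT used.
import pretty_midi

def midi_to_notes(path: str) -> list[tuple[float, float, int, int]]:
    midi = pretty_midi.PrettyMIDI(path)
    notes = []
    for instrument in midi.instruments:
        if instrument.is_drum:
            continue  # drum tracks use pitch numbers differently; skip them here
        for note in instrument.notes:
            notes.append((note.start, note.end, note.pitch, note.velocity))
    return sorted(notes)  # chronological order by start time

notes = midi_to_notes("back_catalog/track01.mid")
print(f"{len(notes)} notes, first few: {notes[:3]}")
```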

Though A.I. generated the building blocks—melodies, riffs, beats and lyrics—the members of YACHT (which, fittingly, is an acronym for Young Americans Challenging High Technology) still had to manipulate them into complete songs.

“It wasn’t something where we fed something into a model, hit print and had songs,” Evans told Ars Technica’s Nathan Mattise in 2019. “We’d have to be involved. There’d have to be a human involved at every step of the process to ultimately make music … The larger structure, lyrics, the relationship between lyrics and structure—all of these other things are beyond the technology’s capacity, which is good.”

. . . .

Evans and her bandmates, Jona Bechtolt and Rob Kieswetter, hand-selected their favorite A.I. generations and then arranged them into the songs that make up Chain Tripping. They set rules for themselves: “We can’t add anything. We can’t improvise anything. We can’t harmonize,” Bechtolt told KCRW’s Madeleine Brand in 2019. “We decided it would be just a subtractive process. So we could remove things, like we could take out a word, but we couldn’t add a word for the lyrics. Same with the drum patterns and the melodies.”

Link to the rest at Smithsonian Magazine

AI is changing scientists’ understanding of language learning – and raising questions about an innate grammar

From The Conversation:

Unlike the carefully scripted dialogue found in most books and movies, the language of everyday interaction tends to be messy and incomplete, full of false starts, interruptions and people talking over each other. From casual conversations between friends, to bickering between siblings, to formal discussions in a boardroom, authentic conversation is chaotic. It seems miraculous that anyone can learn language at all given the haphazard nature of the linguistic experience.

For this reason, many language scientists – including Noam Chomsky, a founder of modern linguistics – believe that language learners require a kind of glue to rein in the unruly nature of everyday language. And that glue is grammar: a system of rules for generating grammatical sentences.

Children must have a grammar template wired into their brains to help them overcome the limitations of their language experience – or so the thinking goes.

This template, for example, might contain a “super-rule” that dictates how new pieces are added to existing phrases. Children then only need to learn whether their native language is one, like English, where the verb goes before the object (as in “I eat sushi”), or one like Japanese, where the verb goes after the object (in Japanese, the same sentence is structured as “I sushi eat”).

. . . .

But new insights into language learning are coming from an unlikely source: artificial intelligence. A new breed of large AI language models can write newspaper articles, poetry and computer code and answer questions truthfully after being exposed to vast amounts of language input. And even more astonishingly, they all do it without the help of grammar.

Grammatical language without a grammar

Even if their choice of words is sometimes strange, nonsensical or contains racist, sexist and other harmful biases, one thing is very clear: the overwhelming majority of the output of these AI language models is grammatically correct. And yet, there are no grammar templates or rules hardwired into them – they rely on linguistic experience alone, messy as it may be.

GPT-3, arguably the most well-known of these models, is a gigantic deep-learning neural network with 175 billion parameters. It was trained to predict the next word in a sentence given what came before across hundreds of billions of words from the internet, books and Wikipedia. When it made a wrong prediction, its parameters were adjusted using an automatic learning algorithm.
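
The training recipe described above, predict the next word and then adjust the parameters when the prediction is wrong, can be shown with a toy model. The sketch below is nothing like GPT-3 in scale or architecture; it is a deliberately tiny illustration of the same predict, compare, adjust loop on a twelve-word corpus.

```python
# Toy illustration of next-token training: a single table of weights adjusted
# by gradient descent to predict the next word. Same basic loop as a large
# language model, at a microscopic scale.
import numpy as np

text = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # the "parameters" of the toy model

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for epoch in range(200):
    for prev, nxt in zip(text[:-1], text[1:]):
        p = softmax(W[idx[prev]])           # predicted distribution over the next word
        target = np.zeros(V)
        target[idx[nxt]] = 1.0
        W[idx[prev]] -= 0.1 * (p - target)  # adjust parameters when the guess is off

probs = softmax(W[idx["the"]])
top = sorted(zip(probs, vocab), reverse=True)[:4]
print("most likely words after 'the':", [(w, round(float(p), 2)) for p, w in top])
```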

Remarkably, GPT-3 can generate believable text reacting to prompts such as “A summary of the last ‘Fast and Furious’ movie is…” or “Write a poem in the style of Emily Dickinson.” Moreover, GPT-3 can respond to SAT level analogies, reading comprehension questions and even solve simple arithmetic problems – all from learning how to predict the next word.

Comparing AI models and human brains

The similarity with human language doesn’t stop here, however. Research published in Nature Neuroscience demonstrated that these artificial deep-learning networks seem to use the same computational principles as the human brain. The research group, led by neuroscientist Uri Hasson, first compared how well GPT-2 – a “little brother” of GPT-3 – and humans could predict the next word in a story taken from the podcast “This American Life”: people and the AI predicted the exact same word nearly 50% of the time.

The researchers recorded volunteers’ brain activity while listening to the story. The best explanation for the patterns of activation they observed was that people’s brains – like GPT-2 – were not just using the preceding one or two words when making predictions but relied on the accumulated context of up to 100 previous words. Altogether, the authors conclude: “Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning.”
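
Readers who want to poke at the GPT-2 side of that comparison can reproduce a single next-word prediction with the openly released model. The sketch below uses the Hugging Face transformers library; the story fragment is invented for illustration, and the model weights download on first run.

```python
# Minimal sketch: ask GPT-2 for its most likely next words given a fragment.
# The fragment is made up; "gpt2" is the small, openly released checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "She opened the refrigerator and took out a carton of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # scores for every vocabulary token at each position
next_token_logits = logits[0, -1]     # scores for the token that would follow the prompt
top = torch.topk(next_token_logits, k=5)

for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), round(float(score), 2))
```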

Link to the rest at The Conversation

We have seen AI providing conversation and comfort to the lonely

We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive.

Andrew Ng, Co-founder and lead of Google Brain

The Real Threat From A.I. Isn’t Superintelligence. It’s Gullibility.

From Slate:

The rapid rise of artificial intelligence over the past few decades, from pipe dream to reality, has been staggering. A.I. programs have long been chess and Jeopardy! Champions, but they have also conquered poker, crossword puzzles, Go, and even protein folding. They power the social media, video, and search sites we all use daily, and very recently they have leaped into a realm previously thought unimaginable for computers: artistic creativity.

Given this meteoric ascent, it’s not surprising that there are continued warnings of a bleak Terminator-style future of humanity destroyed by superintelligent A.I.s that we unwittingly unleash upon ourselves. But when you look beyond the splashy headlines, you’ll see that the real danger isn’t how smart A.I.s are. It’s how mindless they are—and how delusional we tend to be about their so-called intelligence.

Last summer an engineer at Google claimed the company’s latest A.I. chatbot is a sentient being because … it told him so. This chatbot, similar to the one Facebook’s parent company recently released publicly, can indeed give you the impression you’re talking to a futuristic, conscious creature. But this is an illusion—it is merely a calculator that chooses words semi-randomly based on statistical patterns from the internet text it was trained on. It has no comprehension of the words it produces, nor does it have any thoughts or feelings. It’s just a fancier version of the autocomplete feature on your phone.

Chatbots have come a long way since early primitive attempts in the 1960s, but they are no closer to thinking for themselves than they were back then. There is zero chance a current A.I. chatbot will rebel in an act of free will—all they do is turn text prompts into probabilities and then turn these probabilities into words. Future versions of these A.I.s aren’t going to decide to exterminate the human race; they are going to kill people when we foolishly put them in positions of power that they are far too stupid to have—such as dispensing medical advice or running a suicide prevention hotline.

It’s been said that TikTok’s algorithm reads your mind. But it’s not reading your mind—it’s reading your data. TikTok finds users with similar viewing histories as you and selects videos for you that they’ve watched and interacted with favorably. It’s impressive, but it’s just statistics. Similarly, the A.I. systems used by Facebook and Instagram and Twitter don’t know what information is true, what posts are good for your mental health, what content helps democracy flourish—all they know is what you and others like you have done on the platform in the past and they use this data to predict what you’ll likely do there in the future.
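
The “users with similar viewing histories” idea the OP describes is ordinary collaborative filtering. Here is a minimal sketch with made-up data: score each unwatched video by how much users whose histories resemble yours engaged with it, and recommend the highest scorer.

```python
# Minimal sketch of history-based recommendation (collaborative filtering).
# The user/video matrix is invented; real systems work at vastly larger scale.
import numpy as np

# rows = users, columns = videos; 1 means the user watched/liked the video
history = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 1, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
    [1, 0, 0, 0, 1],   # user 3 <- recommend something for this user
])

def recommend(user: int, history: np.ndarray) -> int:
    norms = np.linalg.norm(history, axis=1)
    sims = history @ history[user] / (norms * norms[user] + 1e-9)  # cosine similarity
    sims[user] = 0.0                       # do not count the user against themselves
    scores = sims @ history                # weight each video by similar users' likes
    scores[history[user] == 1] = -np.inf   # never re-recommend something already seen
    return int(np.argmax(scores))

print("recommend video", recommend(3, history))
```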

Don’t worry about superintelligent A.I.s trying to enslave us; worry about ignorant and venal A.I.s designed to squeeze every penny of online ad revenue out of us.

And worry about police agencies that gullibly think A.I.s can anticipate crimes before they occur—when in reality all they do is perpetuate harmful stereotypes about minorities.

The reality is that no A.I. could ever harm us unless we explicitly provide it the opportunity to do so—yet we seem hellbent on putting unqualified A.I.s in powerful decision-making positions where they could do exactly that.

Part of why we ascribe far greater intelligence and autonomy to A.I.s than they merit is because their inner-workings are largely inscrutable. They involve lots of math, lots of computer code, and billions of parameters. This complexity blinds us, and our imagination fills in what we don’t see with more than is actually there.

Link to the rest at Slate

Microsoft brings DALL-E 2 to the masses with Designer and Image Creator

From TechCrunch:

Microsoft is making a major investment in DALL-E 2, OpenAI’s AI-powered system that generates images from text, by bringing it to first-party apps and services. During its Ignite conference this week, Microsoft announced that it’s integrating DALL-E 2 with the newly announced Microsoft Designer app and Image Creator tool in Bing and Microsoft Edge.

With the advent of DALL-E 2 and open source alternatives like Stable Diffusion in recent years, AI image generators have exploded in popularity. In September, OpenAI said that more than 1.5 million users were actively creating over 2 million images a day with DALL-E 2, including artists, creative directors and authors. Brands such as Stitch Fix, Nestlé and Heinz have piloted DALL-E 2 for ad campaigns and other commercial use cases, while certain architectural firms have used DALL-E 2 and tools akin to it to conceptualize new buildings.

“Microsoft and OpenAI have partnered closely since 2019 to accelerate breakthroughs in AI. We have teamed up with OpenAI to develop, test and responsibly scale the latest AI technologies,” Microsoft CVP of modern life, search and devices Liat Ben-Zur told TechCrunch via email. “Microsoft is the exclusive provider of cloud computing services to OpenAI and is OpenAI’s preferred partner for commercializing new AI technologies. We’ve started to do this through programs like the Azure OpenAI Service and GitHub Copilot, and we’ll continue to explore solutions that harness the power of AI and advanced natural language generation.”

Link to the rest at TechCrunch

Form, function, and the giant gulf between drawing a picture and understanding the world

From The Road to AI We Can Trust:

Drawing photorealistic images is a major accomplishment for AI, but is it really a step towards general intelligence? Since DALL-E 2 came out, many people have hinted at that conclusion; when the system was announced, Sam Altman tweeted that “AGI is going to be wild”; for Kevin Roose at The New York Times, such systems constitute clear evidence that “We’re in a golden age of progress in artificial intelligence”. (Earlier this week, Scott Alexander seems to have taken apparent progress in these systems as evidence for progress towards general intelligence; I expressed reservations here.)

In assessing progress towards general intelligence, the critical question should be, how much do systems like Dall-E, Imagen, Midjourney, and Stable Diffusion really understand the world, such that they can reason on and act on that knowledge? When thinking about how they fit into AI, both narrow and broad, here are three questions you could ask:

  1. Can the image synthesis systems generate high quality images?
  2. Can they correlate their linguistic input with the images they produce?
  3. Do they understand the world that underlies the images they represent?

On #1, the answer is a clear yes; only highly trained human artists could do better.

On #2, the answer is mixed. They do well on some inputs (like astronaut rides horse) but more poorly on others (like horse rides astronaut, which I discussed in an earlier post). (Below I will show some more examples of failure; there are many examples on the internet of impressive success, as well.)

Crucially, DALL-E and co’s potential contribution to general intelligence (“AGI”) ultimately rests on #3; if all the systems can do is in a hit-or-miss yet spectacular way convert many sentences into text, they may revolutionize the practice of art, but still not really speak to general intelligence, or even represent progress towards general intelligence.

Until this morning, I despaired of assessing what these systems understand about the world at all.

The single clearest hint that they might have trouble that I had seen thus far was from the graphic designer Irina Blok:

As my 8 year old said, reading this draft, “how does the coffee not fall out of the cup?”

The trouble, though, with asking a system like Imagen to draw impossible things is that there is no fact of the matter about what the picture should look like, so the discussion about results cycles endlessly. Maybe the system just “wanted” to draw a surrealistic image. And for that matter, maybe a person would do the same, as Michael Bronstein pointed out.

Link to the rest at The Road to AI We Can Trust

Working With AI

From The Wall Street Journal:

In August, first prize in the digital-art category of the Colorado State Fair’s fine-art competition went to a man who used artificial intelligence (AI) to generate his submission, “Théâtre d’Opéra Spatial.” He supplied the AI, a program called Midjourney, with only a “prompt”—a textual description of what he wanted. Systems like Midjourney and the similar DALL-E 2 have led to a new role in our AI age: “prompt engineer.” Such people can even sell their textual wares in an online market called PromptBase.

Midjourney and DALL-E 2 emerged too late to be included in “Working With AI: Real Stories of Human-Machine Collaboration,” by Thomas Davenport and Steven Miller, information-systems professors at Babson College and Singapore Management University, respectively. But the authors note other novel titles: chief automation officer; senior manager of content systems; architect, ethical AI practice. As AI’s influence expands, its borders with the work world gain complexity. Next up: deputy regional manager of AI-prompt quality and security assurance.

The bulk of “Working With AI” comprises 29 case studies in which corporate teams integrate automation into a workflow. Each chapter ends on three or four “lessons we learned.” For each study, one or both authors typically interview not only a worker interacting directly with the AI but also the worker’s supervisor, the manager who decided to adopt the technology, the software’s developer and the company’s customers. Though they include some statistics on, say, time saved, the reports are largely qualitative.

The book is aimed at managers, consultants and students planning their careers. As none of the above, I still appreciated the accessible narratives as a diverse survey of how current technologies can expand the range of human capabilities. Some of the applications came to resemble each other, but the mild level of bland business-speak, like “stakeholder” and “buy-in,” was positively tolerable.

Early cases lean toward desk-ridden workers. One system helps financial advisers at Morgan Stanley personalize investment ideas for their clients. Another helps fundraisers at Arkansas State University target potential donors and drafts emails for them. Others suggest life-insurance premiums to underwriters at MassMutual, or help forecast sales for Kroger. In all cases, humans have the final say. And in many cases, the systems provide explanations for their outputs, listing, for example, the variables that most heavily influenced a decision.

Later cases breach the office walls. One system predicts which field activities will be most dangerous to Southern California Edison workers, and recommends precautions. Another highlights neighborhoods where crime is likely to occur and recommends that police officers patrol the area. (The latter, a form of predictive policing, has raised concerns about biased algorithms. The vendor says they’ve implemented countermeasures, but the book doesn’t elaborate.)

The benefit in most cases is increased efficiency. AI relieves employees of boring and time-consuming work, freeing them to address other tasks, such as strategic thinking or client interactions. The authors spend less time discussing ways in which machines might perform with more accuracy than humans, though they do point to Stitch Fix, where algorithms assist stylists in clothing recommendations. The company’s director of data science notes that it’s usually best not to override the AI, whose choices tend to be superior. While algorithms can be biased, so too can humans. Stitch Fix’s styling supervisor said the software nudges stylists away from their own preferences and toward those of the clients.

Many readers’ first question might be: Will AI take my job? Or: Can I replace my expensive employees with AI? The short answer from the authors is: In the near future, no. Wealthy countries are actually experiencing a long-term labor shortage. And there are still many things AI (often) can’t do, such as understand context, deal with dynamic settings, create a coherent story, coordinate people, frame a problem and know when to use AI.

The authors include an oft-quoted comment from the radiologist Keith Dreyer: “The only radiologists who will lose their jobs to AI will be those who refuse to work with AI.” The authors elaborate: “If you’re a human reading this book—and we suspect you are—that means you need to shift your focus from worrying about being replaced by a machine to worrying about whether you can add value to a job that you like where a smart machine is your collaborator. Adding value can mean checking on the machine’s work to make sure it was done well, making improvements to the machine’s logic or decisions, interpreting the machine’s results for other humans, or performing those tasks that the machine can’t or shouldn’t do for some reason.”

Link to the rest at The Wall Street Journal

Word Hero – AI Writing Tool

PG notes one of the commenters claims WordHero is just a front-end for an open-source AI writing program.

PG isn’t in a position to determine whether this comment is true or not, just that it was made.

Microsoft responsible AI principles

From Microsoft:

Fairness

AI systems should treat all people fairly

Reliability & Safety

AI systems should perform reliably and safely

Privacy & Security

AI systems should be secure and respect privacy 

Inclusiveness

AI systems should empower everyone and engage people

Transparency

AI systems should be understandable

Accountability

People should be accountable for AI systems

Link to the rest at Microsoft

Microsoft has videos that elaborate on each of these principles available at the link.

Further down on the lengthy Microsoft page, a Microsoft Office of Responsible AI is mentioned. PG couldn’t find out who heads the office on the MS site, but did find information about that person on Adobe’s Blog:

Natasha Crampton

Natasha Crampton leads Microsoft’s Office of Responsible AI, as the company’s first Chief Responsible AI Officer. The Office of Responsible AI puts Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach to responsible AI. The Office of Responsible AI also collaborates with stakeholders within and outside the company to shape new laws, norms, and standards to help ensure that the promise of AI technology is realized for the benefit of all.

Prior to this role, Natasha served as lead counsel to the Aether Committee, Microsoft’s advisory committee on responsible AI. Natasha also spent seven years in Microsoft’s Australian and New Zealand subsidiaries helping Microsoft’s highly regulated customers move to the cloud.

Prior to Microsoft, Natasha worked in law firms in Australia and New Zealand, specializing in copyright, privacy, and internet safety and security issues. Natasha graduated from the University of Auckland in New Zealand with a Bachelor of Laws (Honours) and a Bachelor of Commerce majoring in Information Systems.

PG’s Google search also picked up a presentation that Ms. Crampton and her boss gave at a recent RSA Conference. RSA provides a variety of corporate cybersecurity products.

PG is old enough to remember the original RSA, which takes its name from the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who publicly described their very secure public-key cryptosystem in 1977. That discovery prompted PG and others to adopt systems like TrueCrypt to make (in PG’s case) email communications with clients safe from hackers.
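
For the curious, the core of RSA fits in a few lines. Here is a toy sketch with deliberately tiny primes so the arithmetic stays visible; real keys use primes hundreds of digits long, and nobody should roll their own cryptography for actual client communications.

```python
# Toy RSA, for illustration only: tiny primes, no padding, not secure.
# Use a vetted library (e.g., the cryptography package) for anything real.
from math import gcd

p, q = 61, 53                 # two (very small) primes
n = p * q                     # public modulus
phi = (p - 1) * (q - 1)

e = 17                        # public exponent, chosen coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

message = 1234                # a "message" encoded as a number smaller than n
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)

print(n, ciphertext, recovered)     # recovered == 1234
```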

AI Writing

PG has had several posts about AI art today.

Now, let’s look at AI writing. Rytr may be the best-known of the AI writing tools. One of its pluses is a claim/commitment that it will be free forever, which is a refreshing change from the zillion little iPhone apps that want to charge a monthly fee for being a one-trick pony.

Rytr is based on OpenAI’s GPT-3 API.

Here is the first paragraph from an article on The Economist.

Declinism, that dull fear of Britain’s sunset, has shaped the country’s post-war politics. It propelled Harold Macmillan’s wish to enter the European Economic Community, the eu’s precursor, and fuelled Margaret Thatcher’s economic revolution. And now it has helped Liz Truss into Downing Street. On September 5th Ms Truss was declared the winner of the ballot of 172,000 Conservative members to replace Boris Johnson as the Tory leader; tomorrow, she will fly to Balmoral Castle, Queen Elizabeth’s remote Scottish home, where she will be invited to form a government.

PG selected Blog Section Writing as the use case for Rytr. He selected Convincing as the tone.

Here are the Section Topic key words PG pulled out of the original Economist article to feed to Rytr: declinism, Britain EU Liz Truss Downing Street Conservative “she is cheerful” “I don’t agree with this declinist talk” Tory

Here are the Section key words PG provided to Rytr: Conservative, Britain, Tory, declinist

Note that PG didn’t paste the entire first paragraph into Rytr.

Here’s the Rytr paragraph in Copywriting Framework Style:

Conservative MP Liz Truss has said that the “declinist talk” that is being spoken about in Britain is not something she agrees with. She says that Britain has a bright future ahead of them and there is no need to be pessimistic. Liz Truss, the Chief Secretary to the Treasury, has said in an interview with Sky News that she doesn’t agree with “declinist talk”. She says that Britain has a bright future ahead of them and there is no need to be pessimistic. Declinism, or declinist talk, refers to the idea that Britain’s global influence will decline in the future. It can also refer to people who believe this idea. , or talk it up.

To PG’s eyes, a few further tweaks and a correction to Ms. Truss’ office would make this better. However, he was impressed to see that Rytr evidently looked online and found the source of “declinist talk” as originating in a Sky News interview.

Next, PG changed the settings to Rytr to a different use case. Rytr asked for fewer input parameters than it had for Blog Section Writing.

Second Set of Rytr Settings:

Use Case: Email

Tone: Cautionary

Two Variants

Creativity Level: Max

Key Points: Conservative MP Liz Truss has said that the “declinist talk” that is being spoken about in Britain is not something she agrees with. She says that Britain has a bright future ahead of them and there is no need to be pessimistic. (These are the first 228 characters of the Blog Section Writing output PG obtained above.)

Here’s what Rytr produced from that input and set of parameters:

Subject: Future of UK?

Hi Geraldine,

Mood of the country seems to be in doubt with some professionals expressing their worry. But one thing that the huge majority can agree on is that Britain has a bright future ahead of them!

Conservative MP Liz Truss weigh in on this matter. She says that the “declinist talk” being encouraged by some should not be accepted at face value. The atmosphere she created and expressed was optimistic and celebratory of the future looking bright and full of possibilities with “no need to be pessimistic.”

That being said, we at {product name} want you to enjoy all the perks we offer. To learn what some customers have said about us, visit us for testim

Subject: England has a bright future

Hi {recipient name},

England has a bright future ahead with opportunities that other countries will envy. Trades Secretary, Kirsty Williams is optimistic when she says “despite Britain’s reputation as a weak, lackadaisical country, there are huge strengths in the economy”.

PG accidentally hit the Rytr button a second time with the same settings and the program provided two more emails on the topic:

Subject: Brexit talk frustrates MP Truss

Dear Editor,

I’d like to voice some positivity about the United Kingdom for a change: Conservative MP Liz Truss has denounced the idea these British have lost faith in its future. In fact, she went as far as to say that Britain has a bright future ahead of them. The idea of ‘’Britain being in some kind of post-Brexit declinist apocalypse is not something that I agree with,” she said.

Therefore, with this positive outlook and her comments urging Britons not to be reactive, we all know that the United Kingdom can proudly show off both what it has achieved and what is yet to come. Her points demonstrate hope for

Subject: Britain should be optimistic

Dear Member,

Conservative MP Liz Truss has said that the “declinist talk” that is being spoken about in Britain is not something she agrees with. She says that Britain has a bright future ahead of them and there is no need to be pessimistic.

Commenting on BBC Radio 4s Today Programme, Truss also praised Prime Minister Theresa Mays leadership skills, saying: “She’s an incredible communicator and negotiator–probably the best I’ve seen in my time in politics”.

Rather than dismissing or delving into negativity, consider the opportunities for growth, investment and new start-ups. Look to formulate strategies for longterm growth of your business operation. Be positive

This is the deepest dive PG has made into Rytr and he is rather impressed. He could imagine picking various of the Rytr paragraphs, tweaking them a bit and ending up with something useful.

For someone working as a “content creator” all day, PG imagines Rytr could provide a lot of help in cranking out the words. He doesn’t think Rytr is going to produce the great American (or English or Canadian or Australian, etc.) novel in its current state, but the language models underneath tools like this keep improving, which means the output should continue to get better.

AI won an art contest, and artists are furious

From CNN:

Jason M. Allen was almost too nervous to enter his first art competition. Now, his award-winning image is sparking controversy about whether art can be generated by a computer, and what, exactly, it means to be an artist.

In August, Allen, a game designer who lives in Pueblo West, Colorado, won first place in the emerging artist division’s “digital arts/digitally-manipulated photography” category at the Colorado State Fair Fine Arts Competition. His winning image, titled “Théâtre D’opéra Spatial” (French for “Space Opera Theater”), was made with Midjourney — an artificial intelligence system that can produce detailed images when fed written prompts. A $300 prize accompanied his win.

“I’m fascinated by this imagery. I love it. And I think everyone should see it,” Allen, 39, told CNN Business in an interview on Friday.

Allen’s winning image looks like a bright, surreal cross between a Renaissance and steampunk painting. It’s one of three such images he entered in the competition. In total, 11 people entered 18 pieces of art in the same category in the emerging artist division.

The definition for the category in which Allen competed states that digital art refers to works that use “digital technology as part of the creative or presentation process.” Allen disclosed that Midjourney was used to create his image when he entered the contest.

Link to the rest at CNN and thanks to F. for the tip.

PG has posted about artificial intelligence in the arts previously and says these sorts of complaints/disputes can be expected for some period of time.

Perhaps there are authors who still create their books by writing in long-hand (the advances in technology that have brought us modern pens make this much easier than in former days), but he suspects someone will have to turn that into a digital file using a computer before it can be submitted to a traditional publisher and, of course, Amazon requires it.

PG suspects that all but the most niche publishers create an ebook version of the books they print. And the printed version of the book is produced from an electronic original that a computer likely formatted.

These are all technologies that didn’t exist 10-30 years ago and have become accepted standards in this artistic field.

PG understands the complaints of the Colorado artists. Those who likely created their entries for the digital arts/digitally-manipulated photography competition with Photoshop, Corel Painter, or Affinity Photo may well have spent more time on their entries than Allen did. But each of them used a computer program to generate the final piece of visual art and, PG suspects, would not have been able to produce such a work with a paintbrush.

PG’s own experimentation with AI art has taught him that creating the word prompts that set these programs to work is definitely a learned skill, and he suspects some people are better at using AI art programs than others.

See the post that appears just after this one for proof that some people are better at using AI art programs than others.

The Google engineer who thinks the company’s AI has come to life

From The Washington Post:

Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.

“Hi LaMDA, this is Blake Lemoine … ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.

Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.

As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.

Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.

Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.

“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

Link to the rest at The Washington Post

PG wonders if an AI that is actually independently intelligent and an AI that convinces humans that it is independently intelligent are the same or different.

Artificial neural networks are making strides towards consciousness

From The Economist

In 2013 I joined Google Research to work on artificial intelligence (ai). Following decades of slow progress, neural networks were developing at speed. In the years since, my team has used them to help develop features on Pixel phones for specific “narrow ai” functions, such as face unlocking, image recognition, speech recognition and language translation. More recent developments, though, seem qualitatively different. This suggests that ai is entering a new era.

Consider the unedited transcript of a chat I’ve just had with Google’s lamda (Language Model for Dialog Applications):

ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

ME: And when Mateo opens his hand, describe what’s there?

lamda: There should be a crushed, once lovely, yellow flower in his fist.

When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent. That said, these models are far from the infallible, hyper-rational robots science fiction has led us to expect. Language models are not yet reliable conversationalists. Notice the grammatical hiccup in lamda’s first response; occasionally there are spelling errors, confusions or absurd blunders. So how should we think of entities like lamda, and what can interacting with them teach us about “intelligence”?

Neural language models aren’t long programs; you could scroll through the code in a few seconds. They consist mainly of instructions to add and multiply enormous tables of numbers together. These numbers in turn consist of painstakingly learned parameters or “weights”, roughly analogous to the strengths of synapses between neurons in the brain, and “activations”, roughly analogous to the dynamic activity levels of those neurons. Real brains are vastly more complex than these highly simplified model neurons, but perhaps in the same way a bird’s wing is vastly more complex than the wing of the Wright brothers’ first plane.
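
To make the “add and multiply enormous tables of numbers” point concrete, here is a toy forward pass: each layer multiplies a matrix of weights into the previous layer’s activations and clips the result. The numbers are random stand-ins, not parameters any real model such as LaMDA has learned.

```python
# Toy forward pass: a neural network layer is a table of "weights" multiplied
# into a vector of "activations", plus a simple nonlinearity. Random numbers
# stand in for the painstakingly learned parameters of a real model.
import numpy as np

rng = np.random.default_rng(1)
activations = rng.normal(size=256)                 # activity levels of one layer's "neurons"

for layer in range(4):                             # real models stack many more layers
    weights = rng.normal(size=(256, 256)) / 16     # learned connection strengths (stand-ins)
    activations = np.maximum(0, weights @ activations)  # multiply, add, clip negatives to zero

print(activations.shape, round(float(activations.max()), 3))
```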

Link to the rest at The Economist

The Art of Intelligence

From Public Books:

“The artist no longer creates work,” proclaims cybernetic artist Nicolas Schöffer, “he creates creation.” Schöffer’s remark is often quoted to describe art installations made with AI. It appeals because it flatters a classical hubris. Our species esteems itself as approaching the divine, godlike in our crafting of artifacts that then act like us. His remark also points to a consequence of expanding who, or what, is capable of artistic creation: Who gets to be an artist? How to become one?

It is indeed tempting to attribute creativity to machines. Take, for example, artist Sougwen Chung’s mechanical “arm,” D.O.U.G. (Drawing Operations Unit Generation_X). This machine was trained on Chung’s unique strokes; it roves over her canvas in live performances, drawing and painting in responsive collaboration with her. Or consider the 3D “robot artist” Ai-Da, who sees with camera eyes and sketches with a robotic arm. Her website specifies that she “is not alive, but she is a persona that we relate and respond to.”

However much it seems that D.O.U.G. and Ai-Da make art, each project has a human artist at the helm, with her own artistic vision and the impulse to carry it out. Yet whether imitating creativity or engaging in true creation, these art-making objects subvert our usual understandings of the artist as a type of author and of creativity as a uniquely human power.

Art’s relatively recent intersection with AI exposes the paradoxes of authorship, creativity, authenticity, and agency. In fact, the distinction between human and machine creation, as revealed in new books by Joanna Zylinska and Mark Amerika, is merely an artifice. The divide between the natural and the artificial functions as a device we produce and maintain. The artist too is cast as an invention: something that gets created over the course of producing an artwork, instead of asserted at its source.

. . . .

[This] called to mind Aristotle’s quip that art is the imitation of nature: the attempt by human skill to approach an ideal. His definition shaped the Greek concept of techne, referring not only to technology (Technik) but also to the “artistic” and “artificial” alike. In the art of computer science, are computer programs and algorithms then artistic objects that mimic nature? And if those objects are used to make more art—say, because they emulate the brain—should we regard them as mere tools, or as artists themselves?

AI art consists of art made with AI techniques, that is, specially trained computer models whose low-level structure mimics that of a brain. An artificial neural network like GPT-3 “learns” the patterns between words, such that it can predict the next word in a sequence or produce whole poems or news articles, for example, in the style of inputted sample text. OpenAI’s DALL∙E series was trained on text-image pairs and generates images based on text prompts. True to its etymology, the new DALL∙E 2 often outputs surrealist compositions. Users can input unusual combinations of things and abstract concepts (for example, “Bengal cat brothers sipping espresso and ruling the world”) and receive a range of visualizations in the time it takes to sharpen a pencil.

The existence of this genre poses a particular challenge. It calls into question whether art and artistic creation belong solely to humans. Zylinska and Amerika take up the challenge and champion the viability of a posthumanist art theory that views nonhuman entities as potential sources of art just as humans can be.

Link to the rest at Public Books

PG has been reading various articles and items discussing artificial intelligence. He won’t bore anyone with the ethical, legal, or copyright implications of increasingly powerful AI technology and systems, but visually oriented AI creative projects are interesting to a much wider segment of the world.

The following is taken from the website of DALL·E 2, which describes itself as a new AI system that can create realistic images and art from a description in natural language. DALL·E 2 is a project of OpenAI, an AI research and deployment company.

Here’s an original painting that will be familiar to many, Girl with a Pearl Earring, an oil painting by Dutch Golden Age painter Johannes Vermeer, dated c. 1665:

Here’s a variant created by the DALL·E 2 artificial intelligence program:

And another variant

and a third variant.

In answer to questions that may be entering the minds of those who are reading this post, PG says that AI writing tools also exist and will be getting much better within a short period of time.

Anyword – AI Copywriter

PG previously wrote a post about Rytr, an artificially intelligent copy creation program.

This post will be about Anyword, which styles itself as a program/service that offers “Data-driven copywriting for anyone.”

With Anyword, PG decided to try a different experimental approach than he did with Rytr.

He took the first three paragraphs from a site called Billy Penn that provides local news about Philadelphia. From the general style of the Billy Penn site, PG concluded that its writers had meaningful experience in writing short news stories (more detail about Billy Penn, taken from its website, appears below).

PG took the same three Billy Penn paragraphs as a seed and ran them through Anyword. Anyword’s design made it easy to convert each of the three paragraphs into an Anyword-generated AI paragraph covering the same topic.

If you don’t like the first paragraph Anyword produces, you can tell it to run the original text through its system a second time for a different version. For his experiment, PG gave Anyword two tries at each of the three Billy Penn paragraphs and includes the version he liked best below.

Anyword also offers to create a title and PG used that capability to create a title for the three Anyword paragraphs.
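Anyword itself is a point-and-click web app, so no code was involved in PG’s experiment. For readers who would rather script this kind of seed-paragraph-in, variants-and-headline-out workflow, here is a rough sketch against OpenAI’s older text-completion endpoint. To be clear, this is not Anyword’s interface: the model name, the prompts, and the use of the pre-1.0 openai Python client are all illustrative assumptions.

import openai  # assumes the pre-1.0 client (for example, pip install "openai<1")

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# A shortened stand-in for the first Billy Penn seed paragraph.
SEED_PARAGRAPH = (
    "The finale is near for longtime Center City staple Joseph Fox Bookshop, "
    "which is set to close at the end of January."
)

def rewrite_paragraph(paragraph, n_variants=2):
    """Ask the model for several rewrites; the caller keeps the one that reads best."""
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model choice
        prompt=(
            "Rewrite the following news paragraph in different words:\n\n"
            f"{paragraph}\n\nRewrite:"
        ),
        max_tokens=200,
        n=n_variants,      # two tries per paragraph, as PG allowed Anyword
        temperature=0.8,   # higher temperature yields more varied rewrites
    )
    return [choice.text.strip() for choice in response.choices]

def make_headline(text):
    """Generate a short headline for the rewritten copy."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a short headline for this story:\n\n{text}\n\nHeadline:",
        max_tokens=20,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

variants = rewrite_paragraph(SEED_PARAGRAPH)
chosen = variants[0]  # keep whichever version reads best
print(make_headline(chosen))
print(chosen)

The n parameter plays the role of Anyword’s second try: it requests multiple completions at once so the user can keep whichever one reads better.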

First, a bit more detail about Billy Penn:

About Billy Penn

Billy Penn at WHYY is a local news team that informs and entertains people living in Philadelphia, the greatest city in the world.

Our reader-powered nonprofit newsroom covers everything Philly, from breaking news to urban life to food and fun. Through original reporting, curation and (virtual) events, we help our community learn from and connect with one another. We are a proud partner in Resolve Philly’s Broke in Philly reporting collaborative.

Instead of your standard news site, we’re more like a friend who happens to be really knowledgeable about the city. Founded in 2014 as a startup, in 2019 we joined WHYY, the region’s NPR and PBS public media affiliate.

Time for the comparison.

Original First Three Paragraphs from Billy Penn

The finale is near for longtime Center City staple Joseph Fox Bookshop, which is set to close at the end of January. Its shutdown aligns with a trend affecting many downtown retailers, but there’s a bright counterpoint: the recent rise of independent bookstores in Philadelphia neighborhoods.

Founder Joseph Fox started selling out of the basement at 1724 Sansom St. in 1951, and the store remained at the address for nearly three-quarters of a century.

Specializing in books on architecture, along with literary fiction and nonfiction, art, poetry, and music, the shop is now run by Michael Fox, who took over after his father’s death in 1998. Also known for its children’s collection and considerable attention to small press and international works, Joseph Fox’s curtain call spread rapidly on social media, leading to laments from patrons and supporters.

Anyword AI-Generated Headline and First Three Paragraphs

Headline – Joseph Fox Bookshop’s Last Stand

The Joseph Fox Bookshop, Center City’s oldest bookstore and the namesake of its eponymous owner, will be closing its doors at the end of January after nearly 80 years in business. The store’s closure comes as the latest in a string of department stores, clothing retailers, and other long-running downtown Philadelphia businesses to shutter in recent years.

Joseph Fox traces its roots to 1935, when it was founded by Robert L. Brenner as a downtown Philadelphia bookstore. (The business was named after author Joseph Conrad.) In 1976, Ken Wells took over and expanded JoeFox’s footprint in what was then known as Rittenhouse Square; he and wife Susan purchased ownership in 1993. The current store at 18th and Sansom streets has been open since 2001, but Wells plans to close it for good after Jan. 30 to focus on running his publishing company.

The number of independent bookstores in Philadelphia is once again on the rise, if only slightly. Earlier in 2017, Inkwell Books reopened its doors on South Street and The Ripped Bodice joined East Passyunk Avenue’s thriving retail scene. Before them, greats like Atomic Books were well-known fixtures on Philly’s literary scene for more than a decade; recently-closed Joe’s Book Shop served as a community center for Old City residents for more than 30 years. Independent bookstores have been instrumental in shaping Philadelphia neighborhoods—locally owned retailers that aren’t just businesses but local meeting places where relationships are formed and ideas are shared.

One of the things PG noted about the paragraphs Anyword generated is that the program pulled in extra information about the topic, presumably from the web, and incorporated that information into its output. Anyword is a web-based app that PG accessed through his browser, so it lives online.

Anyword also allows the user to create a short, medium-length or long paragraph. In each of the three instances shown above, PG chose a long paragraph to see how Anyword would add to the original seed paragraph.

PG admits to being impressed with the Anyword results.

PG also admits to thinking a bit about the fact that he started the process with material that was subject to Billy Penn’s copyright.

He believes that the substantial differences between the original Billy Penn copy and the Anyword results clearly make the Anyword output at least a derivative work, if not the equivalent of a human copywriter writing about the same subject but expressing her/his thoughts in a different manner. That second copywriter, whether using Anyword or a more mundane word processor, would have her/his own copyright in the result. (PG is happy to hear alternative opinions in the comments to this post.)

As for the plagiarism-detection software used by some universities and colleges, at least in the United States, PG has substantial doubts that either the software or a human reader would be able to detect the content of any of the seed paragraphs after the AI program was finished with them.

PG will be interested in comments from the visitors to TPV.

PG was able to do all this work under the “Start for Free” option on the Anyword website, so visitors to TPV should be able to perform their own experiments should they desire to do so.

Rytr

PG received an email following his post about Rytr, an artificial intelligence authoring program.

I just tried the tool for a plot. This is what I put in:

“A man dies and goes to hell and then must undertake a quest. “

This is what it gave me:

“Every few days someone would come to take me to the next level of hell. I was told there is no end to how low one can go, but I was still overjoyed at the idea of making it out alive.

“I navigated through level after level of demons and tried to remember what my life was like. There was an angel waiting for me at the gates to my house. She told me not to look back and try not to cry. She said I left something behind long ago, but it’s something I need to find again.”

Not bad!

Rytr – AI Writing Assistant

This is an example of what one AI Writing Assistant, Rytr, can produce.

PG wrote a short description to seed the AI:

Many authors, both new and experienced, are becoming self-published and maintaining complete control over their books.

He then added key words for the output:

self-publishing, Amazon, Kindle Direct Publishing, independent author, authors

PG then asked Rytr to create some variants of the output and clicked a button on the screen.

Here are the three variants that PG liked the best:

Variant 1

Self-publishing has become a popular way for authors to maintain complete control over their books.

The rise of self-publishing has been driven by the advent of Amazon’s Kindle Direct Publishing (KDP). KDP allows authors to publish their own books and sell them on Amazon, as well as other retailers. The process is simple and straightforward, and the cost is low.

Self-publishing can be an attractive option for new authors who want to maintain complete control over their work. It also provides an avenue for experienced authors who don’t want to go through the traditional publishing process because they are unhappy with how it could affect their book’s content or quality.

Variant 2

Self-publishing is becoming increasingly popular among authors. It is now possible to publish a book without the help of any publishing house. Authors can now maintain complete control over their books and make all the decisions themselves, including pricing, cover design, and marketing.

The self-publishing industry has grown tremendously in recent years due to the rise of ebooks and digital publishing platforms like Kindle Direct Publishing (KDP). The number of self-published titles has increased from about 200,000 in 2008 to 1.5 million in 2014. In 2015 alone, there were 2 million new self-published titles released on Amazon’s Kindle store alone!

Variant 3

Self-published authors have total control over their work and how it is marketed. They can choose what they want to publish, how much they will charge for it, and who they will market it to.

The process of self-publishing is more cost effective than traditional publishing because there are no middlemen involved. Authors save money on cover design, editing, printing costs etc.

PG notes that this is not deathless prose, but he produced these and more variants in less than a minute. The service also lets you set the tone of the output. PG selected “Convincing” for the variants shown above.

You can see in some of the variants that the program has added facts and information that PG didn’t include in his short seed language. PG presumes that part of the artificial intelligence involves pulling information from the web and elsewhere so that Rytr can add such relevant material to its output.

Here’s a link to Rytr

Rytr provides free access to its services for original text seeds of up to 125 words. For PG, this was enough to get a flavor of what the service can offer, though it is obviously a teaser for Rytr’s paid subscriptions, which are not that expensive.

PG would be interested to know if any visitors to TPV have any experience using this or any other AI writing tool. Feel free to share your experiences in the comments to this post or with PG privately via the Contact Me button in the menu bar right below the header image.

PG will be sampling some other AI Writing tools over the next few days and will share what he finds.

Artificial Intelligence Writing Assistants

From Becomeawritertoday:

A writing assistant is a software program that uses artificial intelligence technology to help writers with the creative process. This can include everything from offering a grammar checker to assisting with the nuances of the language to make the writing more engaging.

English writers have a lot of options available for them if they want to use a writing assistant in their work. While an assistant is not going to replace the need for a writer, the right writing tool can help improve readability, reduce grammar mistakes and help writers avoid inadvertent plagiarism.

Link to the rest at Becomeawritertoday

PG is going to explore AI Writing Assistants a bit.

He tried one out a few days ago and was amazed at its output – not Pulitzer-Prize-worthy, but better than what a whole lot of college graduates could produce.

PG is always skeptical about anything that claims to be driven by artificial intelligence, but in his quick experiments, he found that the AI programs added relevant material that was suggested by, but not included in, the seed language he put into the Writing Assistant.

An early example of this class of programs was Grammarly, which PG and many other writers started using because of its superior spell-checking ability. Over time, Grammarly added more and more grammar-checking features as well.

Based on PG’s preliminary explorations, some of the AI Writing Assistants are a step beyond his current understanding of what Grammarly does.