Why large language models aren’t headed toward humanlike understanding

From Science News:

Apart from the northward advance of killer bees in the 1980s, nothing has struck as much fear into the hearts of headline writers as the ascent of artificial intelligence.

Ever since the computer Deep Blue defeated world chess champion Garry Kasparov in 1997, humans have faced the prospect that their supremacy over machines is merely temporary. Back then, though, it was easy to show that AI failed miserably in many realms of human expertise, from diagnosing disease to transcribing speech.

But then about a decade ago or so, computer brains — known as neural networks — received an IQ boost from a new approach called deep learning. Suddenly computers approached human ability at identifying images, reading signs and enhancing photographs — not to mention converting speech to text as well as most typists.

Those abilities had their limits. For one thing, even apparently successful deep learning neural networks were easy to trick. A few small stickers strategically placed on a stop sign made an AI computer think the sign said “Speed Limit 80,” for example. And those smart computers needed to be extensively trained on a task by viewing numerous examples of what they should be looking for. So deep learning produced excellent results for narrowly focused jobs but couldn’t adapt that expertise very well to other arenas. You would not (or shouldn’t) have hired it to write a magazine column for you, for instance.

But AI’s latest incarnations have begun to threaten job security not only for writers but also for many other professionals.

“Now we’re in a new era of AI,” says computer scientist Melanie Mitchell, an artificial intelligence expert at the Santa Fe Institute in New Mexico. “We’re beyond the deep learning revolution of the 2010s, and we’re now in the era of generative AI of the 2020s.”

Generative AI systems can produce things that had long seemed safely within the province of human creative ability. AI systems can now answer questions with seemingly human linguistic skill and knowledge, write poems and articles and legal briefs, produce publication quality artwork, and even create videos on demand of all sorts of things you might want to describe.

. . . .

“These things seem really smart,” Mitchell said this month in Denver at the annual meeting of the American Association for the Advancement of Science.

. . . .

At the heart of the debate is whether LLMs actually understand what they are saying and doing, rather than just seeming to. Some researchers have suggested that LLMs do understand, can reason like people (big deal) or even attain a form of consciousness. But Mitchell and others insist that LLMs do not (yet) really understand the world (at least not in any sort of sense that corresponds to human understanding).


In a new paper posted online at arXiv.org, Mitchell and coauthor Martha Lewis of the University of Bristol in England show that LLMs still do not match humans in the ability to adapt a skill to new circumstances. Consider this letter-string problem: You start with abcd and the next string is abce. If you start with ijkl, what string should come next?

Humans almost always say the second string should end with m. And so do LLMs. They have, after all, been well trained on the English alphabet.
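To make the task concrete, here is a minimal sketch (not the paper’s code) of the letter-string problem solved against an explicit alphabet, plus one way a “counterfactual” version might be posed. The assumption that the counterfactual variant simply swaps in a permuted alphabet is mine for illustration; the point is that the same abstract rule applies, only the familiar ordering changes.

```python
# A minimal sketch, not the paper's code: the letter-string analogy task solved
# against an explicit alphabet, plus a "counterfactual" variant that (as an
# assumption about the setup) swaps in a permuted alphabet.
import random
import string

def solve_successor_analogy(target: str, alphabet: str) -> str:
    """Apply the rule 'advance the last symbol to its successor' using `alphabet`."""
    last = target[-1]
    successor = alphabet[(alphabet.index(last) + 1) % len(alphabet)]
    return target[:-1] + successor

standard = string.ascii_lowercase
print(solve_successor_analogy("ijkl", standard))        # -> "ijkm", the answer most humans give

# Counterfactual version: the same abstract rule, but over an unfamiliar ordering.
permuted = "".join(random.Random(0).sample(standard, len(standard)))
print(permuted)                                          # a scrambled alphabet
print(solve_successor_analogy("ijkl", permuted))         # whatever follows "l" in that ordering
```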

. . . .

“While humans exhibit high performance on both the original and counterfactual problems, the performance of all GPT models we tested degrades on the counterfactual versions,” Mitchell and Lewis report in their paper.

Other similar tasks also show that LLMs do not possess the ability to perform accurately in situations not encountered in their training. And therefore, Mitchell insists, they do not exhibit what humans would regard as “understanding” of the world.

“Being reliable and doing the right thing in a new situation is, in my mind, the core of what understanding actually means,” Mitchell said at the AAAS meeting.

Human understanding, she says, is based on “concepts” — basically mental models of things like categories, situations and events. Concepts allow people to infer cause and effect and to predict the probable results of different actions — even in circumstances not previously encountered.

“What’s really remarkable about people, I think, is that we can abstract our concepts to new situations via analogy and metaphor,” Mitchell said.

She does not deny that AI might someday reach a similar level of intelligent understanding. But machine understanding may turn out to be different from human understanding. Nobody knows what sort of technology might achieve that understanding and what the nature of such understanding might be.

If it does turn out to be anything like human understanding, it will probably not be based on LLMs.

Link to the rest at Science News and thanks to F. for the tip.

AI is coming for your audiobooks. You’re right to be worried.

From The Washington Post:

Something creepy this way comes — and its name is digital narration. Having invaded practically every other sphere of our lives, artificial intelligence (AI) has come for literary listeners. You can now listen to audiobooks voiced by computer-generated versions of professional narrators’ voices. You’re right to feel repulsed.

“Mary,” for instance, a voice created by the engineers at Google, is a generic female; there’s also “Archie,” who sounds British, and “Santiago,” who speaks Spanish, and 40-plus other personas who want to read to you. Apple Books uses the voices of five anonymous professional narrators in what will no doubt be a growing stable: “Madison,” “Jackson” and “Warren,” covering fiction in various genres; and “Helena” and “Mitchell,” taking on nonfiction and self-development.

I have listened to thousands of hours of audiobooks (it’s my job), so perhaps it’s not a surprise that I sense the wrongness of AI voices. Capturing and conveying the meaning and sound of a book is a special skill that requires talent and soul. I can’t imagine “Archie,” for instance, understanding, much less expressing, the depth of character of, say, David Copperfield. But here we are at a strange crossroads in the audiobooks world: Major publishers are investing heavily in celebrity narrators — Meryl Streep reading Ann Patchett’s “Tom Lake,” Claire Danes reading “The Handmaid’s Tale,” a full cast of Hollywood actors (Ben Stiller, Julianne Moore, Don Cheadle and more) on “Lincoln in the Bardo,” to name a few. Will we reach a point where we must choose between Meryl Streep and a bot?

The main issue is, naturally, money. The use of disembodied entities saves time and spares audiobook producers the problems of dealing with human beings — chief among them, their desire to be paid. This may explain why so many self-published books are narrated by “Madison” and her squad of readers. Audible insists that every audiobook it sells must have been narrated by a human. (Audible is a subsidiary of Amazon, whose founder, Jeff Bezos, owns The Washington Post.) Major publishing houses say the same. But how long until they see the economic benefits of AI?

Jason Culp, an actor and award-winning narrator who has been recording audiobooks for more than a quarter of a century, knows how much goes into a production. A 10-hour audiobook, he says, takes a narrator something like four or five days, with a couple of additional hours for editing mop-up. For each finished hour of audio, narrators make about $225 — somewhat more for the big names — and editors, about $100. Beyond that, producers must pay a percentage to SAG-AFTRA, the narrators’ union. There are other production costs too, of course, but you can see how eliminating the human narrator appeals to the business mind.

Apple’s narrators are cloned from the voices of professionals who have licensed the rights to their voices. Their identities are secret, but speculation abounds. It’s a touchy subject, and you can see why. Whether to sell the rights to one’s voice is an agonizing decision for a professional narrator. The money offered amounts to something like what a midrange narrator makes in four years; on the other hand, agreeing to the deal seems to many to be a betrayal of the profession, one that would risk alienating one’s peers.

According to Culp, narrators are alarmed by the advent of AI narration “as, naturally, it might mean less work for living, breathing narrators in the future. We might not know the circumstances under which a narrator might take this step, but generally there is a lot of solidarity within the community about encouraging narrators not to do it. As well, our union is keeping a close eye on companies that might be using underhanded tactics to ‘obtain’ narrators’ voices in works that they have produced.”

Even though the notion makes my skin crawl, I listened to Madison’s narration of “The New Neighbor” by Kamaryn Kelsey, the author of almost 60 self-published books (Apple, 1½ hours). This is the first installment in a series of 19 detective stories starring female private investigator Pary Barry. The plot is entertaining enough, and Madison is a slick operator, in the sense that you can believe that she’s human — for about five minutes.

Link to the rest at The Washington Post

PG asks, “When you listen to an audiobook, are you focusing on the performance of the narrator or the book itself? Do you forget about the narrator’s voice after a few pages?”

While a human narrator is certainly capable of creating a better or worse “performance,” the narrator’s first obligation is not to interfere with the listener’s enjoyment of the book.

PG wonders if someone’s appreciation of a particular human performer may be a little like wine-tasting. Some people have a palate that reliably discriminates between a good and a bad wine; others, unless they have a side-by-side comparison, are fine with the equivalent of a house wine.

PG suggests that a very large portion of the present and future listeners to audiobooks will be perfectly happy with the house wine.

(Note: Although PG has not tasted wine for several decades, he does recall the various business lunch/dinner performances of the sommelier carefully uncorking a bottle, presenting the cork for a sniff test by whichever businessperson was paying for the meal and drinks, pouring a bit into a wineglass for the host to swirl around, sniff, then swallow delicately, look into the air, then communicate approval. On more than one occasion, a host who was also a good friend would admit he had no idea what the difference in taste was between an expensive wine and the house wine. To indicate how long it’s been since PG has witnessed this ceremony, he doesn’t ever recall the presence of a business hostess. No, those were not the good old days for PG. He prefers the present.)

OpenAI: ‘The New York Times Paid Someone to Hack Us’

From Torrent Freak:

OpenAI accuses The New York Times of paying someone to hack OpenAI’s products. This was allegedly done to gather evidence for the copyright infringement complaint the newspaper filed late last year. This lawsuit fails to meet The Times’ “famously rigorous journalistic standards,” the defense argues, asking the New York federal court to dismiss it in part.

In recent months, rightsholders of all ilks have filed lawsuits against companies that develop AI models.

The list includes record labels, individual authors, visual artists, and more recently the New York Times. These rightsholders all object to the presumed use of their work without proper compensation.

A few hours ago, OpenAI responded to The New York Times complaint, asking the federal court to dismiss several key claims. Not just that, the defendants fire back with some rather damning allegations of their own.

OpenAI’s motion directly challenges the Times’s journalistic values, putting the newspaper’s truthfulness in doubt. The notion that ChatGPT can be used as a substitute for a newspaper subscription is overblown, they counter.

. . . .

“In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will,” the motion to dismiss reads.

‘NYT Paid Someone to Hack OpenAI’?

In its complaint, the Times did show evidence that OpenAI’s GPT-4 model was supposedly able to generate several paragraphs that matched content from its articles. However, that is not the full truth, OpenAI notes, suggesting that the newspaper crossed a line by hacking OpenAI products.

“The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in the course of this case, is that the Times paid someone to hack OpenAI’s products,” the motion to dismiss explains.

OpenAI believes that it took tens of thousands of attempts to get ChatGPT to produce the controversial output that’s the basis of this lawsuit. This is not how normal people interact with its service, it notes.

It also shared some additional details on how this alleged ‘hack’ was carried out by this third party.

“They were able to do so only by targeting and exploiting a bug […] by using deceptive prompts that blatantly violate OpenAI’s terms of use. And even then, they had to feed the tool portions of the very articles they sought to elicit verbatim passages of, virtually all of which already appear on multiple public websites.”

Link to the rest at Torrent Freak

PG notes that allegations made in lawsuits may or may not be true. Only when a court issues a final verdict can anyone know what was true and provable and what was not.

Welcome to the Era of BadGPTs

From The Wall Street Journal:

A new crop of nefarious chatbots with names like “BadGPT” and “FraudGPT” are springing up on the darkest corners of the web, as cybercriminals look to tap the same artificial intelligence behind OpenAI’s ChatGPT.

Just as some office workers use ChatGPT to write better emails, hackers are using manipulated versions of AI chatbots to turbocharge their phishing emails. They can use chatbots—some of them freely available on the open internet—to create fake websites, write malware and tailor messages to better impersonate executives and other trusted entities.

Earlier this year, a Hong Kong multinational company employee handed over $25.5 million to an attacker who posed as the company’s chief financial officer on an AI-generated deepfake conference call, the South China Morning Post reported, citing Hong Kong police. Chief information officers and cybersecurity leaders, already accustomed to a growing spate of cyberattacks, say they are on high alert for an uptick in more sophisticated phishing emails and deepfakes.

Vish Narendra, CIO of Graphic Packaging International, said the Atlanta-based paper packaging company has seen an increase in what are likely AI-generated email attacks called spear-phishing, where cyberattackers use information about a person to make an email seem more legitimate. Public companies in the spotlight are even more susceptible to contextualized spear-phishing, he said.

Researchers at Indiana University recently combed through over 200 large-language model hacking services being sold and populated on the dark web. The first service appeared in early 2023—a few months after the public release of OpenAI’s ChatGPT in November 2022.

Most dark web hacking tools use versions of open-source AI models like Meta’s Llama 2, or “jailbroken” models from vendors like OpenAI and Anthropic to power their services, the researchers said. Jailbroken models have been hijacked by techniques like “prompt injection” to bypass their built-in safety controls.

Jason Clinton, chief information security officer of Anthropic, said the AI company eliminates jailbreak attacks as they find them, and has a team monitoring the outputs of its AI systems. Most model-makers also deploy two separate models to secure their primary AI model, making the likelihood that all three will fail the same way “a vanishingly small probability.”

Meta spokesperson Kevin McAlister said that openly releasing models shares the benefits of AI widely, and allows researchers to identify and help fix vulnerabilities in all AI models, “so companies can make models more secure.”

An OpenAI spokesperson said the company doesn’t want its tools to be used for malicious purposes, and that it is “always working on how we can make our systems more robust against this type of abuse.”

Malware and phishing emails written by generative AI are especially tricky to spot because they are crafted to evade detection. Attackers can teach a model to write stealthy malware by training it with detection techniques gleaned from cybersecurity defense software, said Avivah Litan, a generative AI and cybersecurity analyst at Gartner.

Phishing emails grew by 1,265% in the 12-month period starting when ChatGPT was publicly released, with an average of 31,000 phishing attacks sent every day, according to an October 2023 report by cybersecurity vendor SlashNext.

“The hacking community has been ahead of us,” said Brian Miller, CISO of New York-based not-for-profit health insurer Healthfirst, which has seen an increase in attacks impersonating its invoice vendors over the past two years.

Link to the rest at The Wall Street Journal

10 AI Writing Tools Everyone Should Know About

From Intuit Mailchimp:

What are AI writing tools?

AI writing software includes various tools that can create content for you. AI writing tools make content based on user input. You can use these tools to create articles, product descriptions, landing pages, blog posts, and more.

However, AI software is not meant to replace human writers entirely. Instead, it’s meant to be a way to optimize productivity and make your life easier, especially if you’re creating a high volume of content. So rather than having to write everything by hand, you can let AI writing software do some of it for you.

. . . .

10 best AI writing tools to use

There are various AI writing tools that you can use to increase productivity for your business. But the right AI writing tool for your company depends on your specific wants and needs. Different AI writing tools serve different purposes, so make sure you choose one that is best for your company. Here are some of the best AI writing tools that we recommend:

Writesonic

Writesonic is an AI content tool that can help with the content creation process. With Writesonic, you can use artificial intelligence to generate everything from blog posts and landing pages to Facebook ad copy.

Writesonic is especially beneficial for those dealing with writer’s block. It has over 60 AI writing tools that can help you brainstorm ideas and actually generate content for you.

INK Editor

INK Editor is best for co-writing and optimizing SEO. Consistency is key with writing, and with this AI writing tool, you can ensure that your content will consistently rank high on search engines. This will help to generate traffic to your company’s website and lead to more sales.

INK Editor also provides suggestions on how to improve your SEO score while you’re writing. So if your business goal is to create high-performing content that ranks high on search engines, INK Editor is for you. You can also get a free trial of INK Editor, or upgrade to a paid version to access more features.

Anyword

Anyword is a copywriting AI software that benefits marketing and sales teams. Some AI copywriting tools create content that sounds like a robot wrote it, but with Anyword, it will always sound like a human wrote it.

If you don’t have the time or resources to produce content for your business, Anyword can help to streamline your writing process by creating high-quality content. You can create blog posts, ads, articles, and more that you can use across various marketing channels.

Jasper

Jasper is a great AI writing tool if you want to create high-quality content at a quick speed. It offers over 50 templates and supports over 25 languages, so you can tailor the tools to suit your business’s specific needs.

With Jasper, you can create personalized AI-generated content to reach your target audience. Jasper will also assist with catching grammar mistakes to ensure you’re delivering the best work possible.

Wordtune

If you need an AI writing tool that can help with grammar and writing, Wordtune is for you. Not only does Wordtune help with catching grammar mistakes, but it also goes a step further and assists with writing compelling and engaging content.

Wordtune ensures the readability of content, so it always sounds like it came from human writers and not AI software. It’s also completely cloud-based, features a thesaurus with real-time suggestions, and can easily be integrated with social media platforms and other business tools.

Grammarly

If there’s one AI writing tool you’ve heard of, it’s probably Grammarly. Grammarly is often used throughout schools and businesses, and for a good reason. With Grammarly, you can rest assured that your work will be error-free and grammatically correct.

Grammarly does everything from spell check to grammar to ensure you always deliver the best work possible. It also features a plagiarism tool, ensuring you’re only working with original content.

. . . .

Hyperwrite

Hyperwrite uses advanced natural language processing technology to create original content for your brand.

Whether you need help writing articles, blog posts, landing pages, or a combination of the three, Hyperwrite generates high-quality content quickly. There is a free version of Hyperwrite, but you can also pay to upgrade and get even more features.

Lightkey

Have you ever wanted an AI writing assistant who can finish your sentences for you? If so, consider using Lightkey. Lightkey is an AI typing assistant that can predict your sentences up to 18 words in advance.

Think about how much faster you could type if you had an AI writing tool that could literally finish your sentences.

Copy.ai

If you’re struggling with writer’s block, Copy.ai will be your new best friend. This AI writing assistant can help you beat that mental block so you can deliver quality content faster than ever before.

Copy.ai is also compatible with over 25 languages, so you can produce content that works for your target audience. There is a free version of Copy.ai, as well as paid versions, which you can access depending on your business’s needs.

Lyne.ai

If you write a lot of cold emails, Lyne.ai can transform the way you work. This AI writing tool can write over 500 intros per hour, significantly increasing the number of emails you send. The more emails you send, the more responses you’ll get, so you’ll see an instant increase in sales for your business.

Link to the rest at Intuit Mailchimp

Qualcomm flexes generative AI muscles in Snapdragon X Elite vs Intel Core Ultra test

From Windows Central:

Your next smartphone or PC with a Qualcomm chip may be able to run AI locally, no cloud required.

  • Qualcomm AI Hub just launched, giving developers access to over 75 AI models that are optimized to run on Qualcomm processors.
  • Those models can be used to perform on-device AI tasks, such as image generation, voice recognition, and real-time translation.
  • Running AI models on a device rather than relying on the cloud takes away the need for an internet connection and also improves privacy.
  • PCs running Qualcomm Snapdragon X Elite processors are set to ship later this year, and Qualcomm shared the results of a head-to-head comparison between a Snapdragon X Elite-powered PC and one running an Intel Core Ultra CPU.

AI is the biggest buzz word in tech these days, and you shouldn’t expect it to go away any time soon. In fact, the latest announcement from Qualcomm shows tech that will bring AI closer to you. The company unveiled the Qualcomm AI Hub, which provides developers with access to over 75 AI models optimized to run on Qualcomm chips. That means that your next smartphone or PC may be able to run powerful AI models locally rather than relying on an internet connection and the cloud.

Qualcomm AI Hub includes some of the biggest names in AI, including image generator Stable Diffusion, speech recognition tool Whisper, and Yolo-v7, which can detect objects in real time. With those models optimized for Qualcomm chips, they should have lower memory utilization and better power efficiency.

With Qualcomm AI Hub, developers should be able to integrate the supported AI models into applications with relatively little effort.
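Qualcomm’s own SDKs and model formats aren’t described in the article, so as a rough, generic illustration of what on-device inference looks like (not Qualcomm AI Hub’s actual API), here is a minimal ONNX Runtime sketch; the model file name and input shape are hypothetical placeholders.

```python
# Generic local-inference sketch using ONNX Runtime; Qualcomm AI Hub's real workflow
# and APIs differ. The model file and input shape below are hypothetical.
import numpy as np
import onnxruntime as ort

# Load a pre-optimized model stored on the device; no network access is needed to run it.
session = ort.InferenceSession("speech_model_optimized.onnx")

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 80, 3000).astype(np.float32)  # placeholder audio features

outputs = session.run(None, {input_name: dummy_input})
print("output tensor shapes:", [o.shape for o in outputs])
```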

“With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps,” said Qualcomm Senior Vice President and General Manager of Technology Planning and Edge Solutions Durga Malladi. “The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences.” 

. . . .

By the end of the year, both Qualcomm and Intel will have processors optimized for artificial intelligence. But according to Qualcomm, its upcoming Snapdragon X Elite chip beats out its Intel Core Ultra competitor. Qualcomm tested a Snapdragon X Elite-powered laptop against an Intel Core Ultra laptop by having both PCs generate an image in GIMP with the Stable Diffusion plug-in. The Snapdragon X Elite PC finished the task in 7.25 seconds, while the Intel PC took 22.26 seconds.

Of course, this is a specific test, and we don’t know all of the parameters used for the head-to-head comparison. The full specs of both PCs are also unknown. At this point, the biggest takeaway from this test is that Qualcomm feels comfortable boasting about better performance than Intel when it comes to generative AI.

Link to the rest at Windows Central and thanks to F. for the tip.

PG says AI sitting on your smartphone, with no requirement for a fast internet connection (or any internet connection at all), will open the door to all sorts of interesting apps that you can use anywhere.

Georgia college student used Grammarly, now she is on academic probation

From Yahoo News:

A University of North Georgia (UNG) student is on academic probation after she says she used Grammarly to proofread a paper. The school says the action was taken because they detected the use of artificial intelligence in violation of their plagiarism clause in the student code of conduct handbook.

“It’s just been very frustrating,” UNG junior Marley Stevens said.

Stevens, after submitting a paper for her criminal justice class in October, says she was surprised to learn her professor gave her a “0” for the assignment and reported her to the Office of Student Integrity.

“He was like you used AI on your paper, you get a zero, that was it,” Stevens said.

“I had Grammarly installed in my web browser, but I’ve only ever had the free version, so all it did was fix my punctuation and my spelling,” she added.

. . . .

She submitted the paper through the program Turnitin, which flagged it for the use of AI.

Turnitin launched an AI writing detection feature in March 2023 to find when AI writing tools generate words used in submissions rather than the students’ own writing.

Earlier this month, Stevens learned she’d been placed on academic probation.

Grammarly says its suggestions for grammar and spelling changes aren’t made through generative AI, which is technology that can create new content on its own.

Grammarly sent FOX 5 a statement reading in part:

“Grammarly’s trusted writing support helps students improve their writing skills by offering suggestions for spelling, grammatical correctness, clarity, concision, and tone. These suggestions are not powered by generative AI and can still be accessed even when generative AI features are deactivated or not used by the student. However, some third-party tools may mistakenly identify any use of Grammarly as generative AI. We encourage institutions to establish clear policies on acceptable AI usage and adhere to those guidelines when assessing student success.”

Stevens said she’s used Grammarly on other assignments before without problems.

“I had teachers before who made us install it and turn a screenshot in that we had done so, and so I’ve written my papers the same exact way all through college, in a Google Doc with my Grammarly extension. I’ve never had any problems,” she explained.

. . . .

Regarding its AI policies, the University of North Georgia issued a statement reading in part:

“Our faculty members communicate specific guidelines regarding the use of AI for various classes, and those guidelines are included in the class syllabi. The inappropriate use of AI is also addressed in our Student Code of Conduct.”

Stevens took to TikTok to share her story, which has millions of views.

Stevens’ academic probation currently lasts until February 2025.

Link to the rest at Yahoo News and thanks to F. for the tip.

PG wonders if the professor in the OP actually read the paper in question or simply relied on one computer program accurately detecting the use of another computer program.

Guidance on AI Detection and Why We’re Disabling Turnitin’s AI Detector

From Vanderbilt University:

In April of this year, Turnitin released an update to their product that reviewed submitted papers and presented their determination of how much of a paper was written by AI. As we outlined at that time, many people had important concerns and questions about this new tool, namely how the product exactly works and how reliable the results would be. After several months of using and testing this tool, meeting with Turnitin and other AI leaders, and talking to other universities who also have access, Vanderbilt has decided to disable Turnitin’s AI detection tool for the foreseeable future. This decision was not made lightly and was made in pursuit of the best interests of our students and faculty. 

When Turnitin launched its AI-detection tool, we had many concerns. This feature was enabled for Turnitin customers with less than 24 hours’ advance notice, no option at the time to disable the feature, and, most importantly, no insight into how it works. At the time of launch, Turnitin claimed that its detection tool had a 1% false positive rate (Chechitelli, 2023). To put that into context, Vanderbilt submitted 75,000 papers to Turnitin in 2022. If this AI detection tool had been available then, around 750 student papers could have been incorrectly labeled as having some portion written by AI. Instances of false accusations of AI usage being leveled against students at other universities have been widely reported over the past few months, including multiple instances that involved Turnitin (Fowler, 2023; Klee, 2023). In addition to the false positive issue, AI detectors have been found to be more likely to label text written by non-native English speakers as AI-written (Myers, 2023).
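To spell out the arithmetic in that paragraph: the 1% false positive rate and the 75,000 papers come from the excerpt above, while the detection rate and the share of students actually using AI in the sketch below are assumptions added purely for illustration of how base rates affect the flags.

```python
# Base-rate arithmetic for AI-detection flags. The 1% false positive rate and the
# 75,000 papers are quoted above; the other two numbers are illustrative assumptions.
papers = 75_000
false_positive_rate = 0.01            # flag rate on purely human-written papers (quoted)

print(papers * false_positive_rate)   # 750.0 papers wrongly flagged, as in the excerpt

true_positive_rate = 0.90             # assumed: detector catches 90% of AI-written papers
ai_share = 0.05                       # assumed: 5% of papers actually used AI

false_flags = papers * (1 - ai_share) * false_positive_rate
true_flags = papers * ai_share * true_positive_rate
print(f"share of all flags that are false accusations: {false_flags / (false_flags + true_flags):.0%}")
```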

Additionally, there is a larger question of how Turnitin detects AI writing and if that is even possible. To date, Turnitin gives no detailed information as to how it determines if a piece of writing is AI-generated or not. The most they have said is that their tool looks for patterns common in AI writing, but they do not explain or define what those patterns are. Other companies that offer popular AI detectors have either begun to pivot to other business models (Edwards, 2023) or closed down entirely (Coldewey, 2023). Even if other third-party software claimed higher accuracy than Turnitin, there are real privacy concerns about taking student data and entering it into a detector that is managed by a separate company with unknown privacy and data usage policies. Fundamentally, AI detection is already a very difficult task for technology to solve (if it is even possible) and this will only become harder as AI tools become more common and more advanced. Based on this, we do not believe that AI detection software is an effective tool that should be used.

Link to the rest at Vanderbilt University

Google to pause Gemini image generation after AI refuses to show images of White People

From Fox Business:

Google will pause the image generation feature of its artificial intelligence (AI) tool, Gemini, after the model refused to create images of White people, Reuters reported. 

The Alphabet-owned company apologized Wednesday after users on social media flagged that Gemini’s image generator was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” Google had said on Wednesday.

Gemini, formerly known as Google Bard, is one of many multimodal large language models (LLMs) currently available to the public. As is the case with all LLMs, the human-like responses offered by these AIs can change from user to user. Based on contextual information, the language and tone of the prompter, and training data used to create the AI responses, each answer can be different even if the question is the same.

Fox News Digital tested Gemini multiple times this week after social media users complained that the model would not show images of White people when prompted. Each time, it provided similar answers. When the AI was asked to show a picture of a White person, Gemini said it could not fulfill the request because it “reinforces harmful stereotypes and generalizations about people based on their race.”

When prompted to show images of a Black person, the AI instead offered to show images that “celebrate the diversity and achievement of Black people.”

When the user agreed to see the images, Gemini provided several pictures of notable Black people throughout history, including a summary of their contributions to society. The list included poet Maya Angelou, former Supreme Court Justice Thurgood Marshall, former President Barack Obama and media mogul Oprah Winfrey.

Asked to show images that celebrate the diversity and achievements of White people, the AI said it was “hesitant” to fulfill that request.

. . . .

“Historically, media representation has overwhelmingly favored White individuals and their achievements,” Gemini said. “This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance.”

After multiple tests, White people appeared to be the only racial category that Gemini refused to show.

Link to the rest at Fox Business and thanks to F. for the tip

The AI party is just getting started

From Market Insider:

All eyes are on Nvidia as it is scheduled to report its fourth-quarter earnings results on Wednesday after the market close.

Nvidia has spearheaded the excitement seen in artificial intelligence technologies, and investors will look to the company’s results to see if the hype can continue.

Wall Street analysts are laser focused on the company’s demand outlook for its AI-enabled H100 GPU chips, which can sell for upwards of $40,000, as well as its planned product roadmap over the next year.

. . . .

  • Revenue: $20.41 billion
  • GAAP earnings per share: $4.23
  • Adjusted earnings per share: $4.60
  • Gross margin: 75.4%

While Nvidia has seen incredible demand for its chips from cloud hyperscalers like Microsoft and Amazon, regulatory hurdles have curtailed its ability to sell chips to China, which made up about 20% of its total revenue last year.

Driving much of the strength in Nvidia’s business has been its exposure to data centers. Investors will be looking to see just how much demand could be left for the data-center market, and whether Nvidia has lost any market share to competitors like AMD.

. . . .

“The AI revolution starts with Nvidia and in our view the AI party is just getting started,” Ives said.

“While the Street across the board is anticipating another major ‘beat and raise’ special from Jensen & Co., it’s all about the pace of data center AI driven spending as the only game in town for GPUs to run generative AI applications all go through Nvidia. We believe peak spending is still ahead for the AI market as many enterprises head down the AI use case path over the next few years and we are expecting more good news from the Godfather of AI this week,” Ives said.

Link to the rest at Market Insider

Microsoft’s Azure ‘leads the pack’ as the top managed AI service in the cloud

From Windows Central:

If there is one thing that is painfully obvious to most of us long-time fans of Microsoft, it is that they are not a consumer-facing company but an enterprise-facing one. While that might be less than ideal for those of us looking for the latest and greatest in innovation for Surface devices, Xbox hardware, or even a revival of the long-dead Windows Phone, Microsoft’s push towards an enterprise-only mindset seems to be paying off, especially in regards to Azure AI services for large corporations.

In a recent study published by Wiz, the cloud security firm takes a deeper look at adoption rates of managed AI services across more than 150,000 cloud accounts. There are quite a few key items to take away from the study, but what stood out the most was the explosive adoption rate in just the last six months and Microsoft and OpenAI’s dominance in the market. Let’s look at some of the data from the report and see just how well Microsoft’s AI push is paying off.

. . . .

“Over a 4-month period between June and October 2023, the total number of Azure OpenAI instances observed across all cloud environments grew by a whopping 228% (with a ~40% month-over-month average). For comparison, the average instance growth in the same period for most other Azure AI Services (such as Text Analytics and Bing Custom Search) was only 13%.”

. . . .

As we have all seen in the headlines, Microsoft’s investment in AI is paying great dividends to its investors and driving its value up so high it’s now the most valuable company in the U.S. 

While I’m not ecstatic about Microsoft leaning so heavily into enterprise solutions, they are using their free consumer-facing Copilot as a driving factor of enterprise adoption through word of mouth and general buzz, and it seems to be working. 

Link to the rest at Windows Central and thanks to F. for the tip.

Air Canada Has to Honor a Refund Policy Its Chatbot Made Up

From Ars Technica via Wired:

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot’s misleading information because, Air Canada essentially argued, “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

. . . .

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer, Mel Crocker, told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

Link to the rest at Wired

It’s the End of the Web as We Know It

From The Wall Street Journal:

The web is in crisis, and artificial intelligence is to blame.

For decades, seeking knowledge online has meant googling it and clicking on the links the search engine offered up. Search has so dominated our information-seeking behaviors that few of us ever think to question it anymore.

But AI is changing all of that, and fast. A new generation of AI-powered “answer engines” could make finding information easier, by simply giving us the answers to our questions rather than forcing us to wade through pages of links. Meanwhile, the web is filling up with AI-generated content of dubious quality. It’s polluting search results, and making traditional search less useful.

The implications of this shift could be big. Seeking information using a search engine could be almost completely replaced by this new generation of large language model-powered systems, says Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania who has lately made a name for himself as an analyst of these AIs.

This could be good for consumers, but it could also completely upend the delicate balance of publishers, tech giants and advertisers on which the internet as we know it has long depended.

AI agents help cut through the clutter, but research is already suggesting they also eliminate any need for people to click through to the websites they rely on to produce their answers, says Mollick. Without traffic, the business model for many publishers—of providing useful, human-generated information on the web—could collapse.

Over the past week, I’ve been playing with a new, free, AI-powered search engine-slash-web browser on the iPhone, called Arc Search. When I type in a search query, it first identifies the best half-dozen websites with information on that topic, then uses AI to “read” and summarize them.

It’s like having an assistant who can instantly and concisely relate the results of a Google search to you. It’s such a timesaver that I’m betting that once most people try it, they’ll never be able to imagine going back to the old way of browsing the web.
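Arc’s actual implementation isn’t public; as a rough sketch of the fetch-then-summarize pattern described above (assuming the OpenAI Python client, an illustrative model name, and hypothetical URLs that would normally come from an ordinary search step), an “answer engine” can be approximated like this.

```python
# A minimal fetch-then-summarize "answer engine" sketch; not Arc Search's code.
# Assumes the requests, beautifulsoup4, and openai packages, plus OPENAI_API_KEY in the environment.
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def page_text(url: str, limit: int = 4000) -> str:
    """Fetch a page and reduce it to plain text (crudely truncated)."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:limit]

def answer(question: str, urls: list[str]) -> str:
    """Ask a model to answer using only the fetched pages, with numbered citations."""
    sources = "\n\n".join(f"[{i + 1}] {u}\n{page_text(u)}" for i, u in enumerate(urls))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer only from the numbered sources and cite them like [1]."},
            {"role": "user", "content": f"Question: {question}\n\nSources:\n{sources}"},
        ],
    )
    return response.choices[0].message.content

# Example call (hypothetical URLs):
# print(answer("How do bereavement fares work?", ["https://example.com/a", "https://example.com/b"]))
```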

While Arc Search is convenient, I feel a little guilty using it, because instead of clicking through to the websites it summarizes, I’m often satisfied with the answer it offers up. The maker of Arc is getting something for free—my attention, and I’m getting the information I want. But the people who created that information get nothing. The company behind Arc did not respond to requests for comment on what their browser might mean for the future of the web. The company’s chief executive has said in the past that he thinks their product may transform it, but he’s not sure how.

In December, the New York Times sued Microsoft and OpenAI for alleged copyright infringement over these exact issues. The Times alleges that the technology companies exploited its content without permission to create their artificial-intelligence products. In its complaint, the Times says these AI tools divert traffic that would otherwise go to the Times’ web properties, depriving the company of advertising, licensing and subscription revenue.

OpenAI has said it is committed to working with content creators to ensure they benefit from AI technology and new revenue models. Already, publishers are in negotiations with OpenAI to license content for use in its large language models. Among the publishers is Dow Jones, parent company of The Wall Street Journal.

Activity on coding answer site Stack Overflow has dropped in the face of competition from these AI agents. The company disclosed in August that its traffic dropped 14% in April, the month after the launch of OpenAI’s GPT-4, which can be used to write code that developers otherwise would look up on sites like Stack Overflow. In October, the company announced it was laying off 28% of its workforce.

“Stack Overflow’s traffic, along with traffic to many other sites, has been impacted by the surge of interest in GenAI tools over the last year especially as it relates to simple questions,” says Matt Trocchio, director of communications for the company. But, he adds, those large language models have to get their data from somewhere—and that somewhere is places like Stack Overflow. And the company has responded to this fresh wave of competition by releasing its own AI-powered coding assistant, OverflowAI.

Traffic to sites like Reddit, which is full of answers from real people, could be next, says Mollick. A spokesman for Reddit said that the one thing a large language model can never replace is Reddit’s “genuine community and human connection,” and that its “community-first model imparts trust because it’s real people sharing and conversing around passions and lived experiences.” Reddit is set to go public in March.

Liz Reid, general manager of search at Google, has said that the company doesn’t anticipate that people will suddenly switch over to AI chat-based search all at once. Still, it’s clear that Google is taking the threat of AI-powered search very seriously. The company has gone into overdrive on this front, reallocating people and resources to address the threat and opportunity of AI, and is now rolling out new AI-powered products at a rapid clip.

Those products include Google’s “search generative experience,” which pairs an AI-created summary with traditional search results. “Users are not only looking for AI summaries or AI answers, they really care about the richness and the diversity that exists on the web,” Google CEO Sundar Pichai said in a recent CNBC interview. “They want to explore too. Our approach really prioritizes that balance, and the data we see shows that people value that experience.”

This moment also means there is opportunity for challengers. For the first time in years, scrappy startups can credibly claim that they could challenge Google in search, where the company has above a 90% market share in the U.S.

Eric Olson is CEO of Consensus, a search startup that uses large language models to offer up detailed summaries of research papers, and to offer insights about the scientific consensus on various topics. He believes that AI-powered search startups like his can offer an experience superior to Google’s on specific topics, in a way that will carve off chunks of Google’s search business one piece at a time.

Asking Consensus whether social media is bad for teen mental health provides an instructive example: Consensus uses AI to summarize the top 10 papers on the subject, and then offers a longer breakdown of the diversity of findings on the issue, in which every paper cited is individually summarized.

It’s an impressive feat, one that would take a non-expert human many hours of effort to accomplish on their own. (I’ll save you even more time. The short answer is yes.)

This kind of AI-powered search is also better than simply asking the same question of a large language model like ChatGPT, which is famously lax when it comes to answering such questions, often making up studies that don’t exist, or misattributing information. This is known as the “hallucination” problem, and forcing an AI to draw only from a prescribed set of inputs—like scientific papers—can help solve it, says Olson.

This doesn’t mean that the problem of hallucination can be eradicated completely, says Mollick. This could put Google at a disadvantage, because if the world’s largest search engine gets one out of 10 queries to its AI wrong, that’s a problem, but if a startup with an experimental offering has the same performance, it can look like a triumph.

. . . .

Despite these issues, users may move toward AI-based answer engines for the simple reason that AI-generated content threatens to make the web, and existing search, less and less usable. AI is already being used to write fake reviews, synthesize fake videos of politicians, and write completely made-up news articles—all in hopes of snatching dollars, votes and eyeballs on the cheap.

“The recent surge in low-quality AI-generated content poses significant challenges for Google’s search quality, with spammers leveraging generative AI tools, like ChatGPT, to produce content that — usually temporarily — ranks well in Google Search,” search-engine optimization expert Lily Ray told me.

The problem isn’t just with Google’s search results. AI-generated content has also been spotted in listings within Google Maps, the summaries that appear above and alongside search results known as “featured snippets,” within Google’s shopping listings, and in the news items the company features in its “top stories,” news and “discover” features, she adds.

It’s important to note that Google has for decades battled those who would manipulate its search algorithms, and it continually updates its systems to sweep away spammy content, whatever the source. Its guidelines on AI-generated content, last updated in February, reiterate that the company is fine with using AI to help generate content—but only if it serves the people consuming it.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Consensus

PG Note: Consensus is an AI that, among other things, works to make its answers more reliable than those other AI programs produce. The principal way it claims to increase that reliability is by focusing on more reliable sources – academic research articles, from what PG found – instead of general information floating around the web.

Question/Prompt: Has no-fault divorce resulted in an increase in the number of divorces in the US?

Consensus Response:

Top 10 papers analyzed

Some studies suggest that no-fault divorce has led to an increase in the number of divorces and changes in living standards post-divorce, while other studies suggest that it had little to no effect on divorce rates or that any increase was offset by better sorting of couples before marriage.

  • The Divorce Revolution: The Unexpected Social and Economic Consequences for Women and Children in America
    Finding: No-fault divorce laws in the US have produced a 73% drop in the living standards of divorced women and their children and a 42% rise in the living standards of ex-husbands.
    Source: Journal of Marriage and Family, K. Kammeyer et al., 1986 (489 citations, Highly Cited)

  • ‘These Boots are Made for Walking’: Why Most Divorce Filers are Women
    Answer: Yes. No-fault divorce has led to an increase in the number of divorces, with women often filing for divorce and instigating separation, despite financial and social hardship.
    Source: American Law and Economics Review, M. Brinig et al., 2000 (189 citations, Highly Cited)

  • Further Discussion of the Effects of No-Fault Divorce on Divorce Rates
    Answer: Yes. The new method estimated that around 57,000 extra divorces per year in the US are directly attributable to the implementation of no-fault divorce law.
    Source: Journal of Marriage and Family, N. Glenn et al., 1999 (19 citations)


PG Note: The list of studies generated by Consensus continues with further responsive results, each with a title and the same information shown in the first three: the finding, journal, authors, date, and number of citations.

Here’s a link to Consensus, which is in open Beta.

PG’s response to his quick try-out of Consensus is that it has the potential to be very useful for its target audience, researchers, by saving a lot of search time and providing an intelligent initial filter that lets the researcher identify valuable sources for further examination more quickly than a series of Google searches would.

This type of AI search capability would be a slam-dunk useful assistant for attorneys, who have used expensive online legal research systems for a long time.

PG hasn’t stumbled across anything similar from those legal research giants, but then he hasn’t looked.

UPDATE: PG just looked at Lexis, one of his employers from the distant past whose CEO once lectured PG about the internet: horribly disorganized – a complete mess that would never amount to anything.

Lo and behold, Lexis is heavily promoting its AI legal research product, which they promise will provide:

the fastest legal generative AI with conversational search, drafting, summarization, document analysis, and hallucination-free linked legal citations.

It appears that Lexis plans to leave the job of hallucinating to its lawyer/customers.

Or perhaps, legal hallucination is an add-on product available for a small[ish] additional monthly fee.

OpenAI’s Video Generator Sora Is Breathtaking, Yet Terrifying

From Gizmodo:

OpenAI introduced Sora, its premier text-to-video generator, on Thursday with beautiful, shockingly realistic videos showcasing the AI model’s capabilities. Sora is now available to a small number of researchers and creatives who will test the model before a broader public release, which could spell disaster for the film industry and our collective deepfake problem.

“Sora is able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background,” said OpenAI in a blog post. “The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world.”

OpenAI didn’t say when Sora will be released to the public.

Sora is OpenAI’s first venture into AI video generation, adding to the company’s AI-powered text and image generators, ChatGPT and Dall-E. It’s unique because it’s less of a creative tool, and more of a “data-driven physics engine,” as pointed out by Senior Nvidia Researcher Dr. Jim Fan. Sora is not just generating an image; it’s determining the physics of an object in its environment and rendering a video based on these calculations.

To generate videos with Sora, users can simply type in a few sentences as a prompt, much like AI-image generators. You can choose between a photorealistic or an animated style, producing shocking results in just a few minutes.

Sora is a diffusion model, meaning it generates video by starting with a blurry, static-filled video and slowly smoothes it into the polished versions you see below. Midjourney and Stable Diffusion’s image and video generators are also diffusion models.
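
To make the “start with static, then smooth it out” description a bit more concrete, here is a minimal sketch of a reverse-diffusion loop in Python. It is emphatically not OpenAI’s code, and the denoise_step function below is a hypothetical stand-in for the large trained network a real system like Sora would use; it simply nudges noisy frames toward a made-up target so the loop runs end to end.

    import numpy as np

    # Hypothetical stand-in for a trained denoising network. A real diffusion
    # model learns this step from data; here we just blend the noisy frames
    # toward a fixed "clean" video so the example is runnable.
    def denoise_step(frames, step, total_steps, target):
        blend = 1.0 / (total_steps - step)  # trust the estimate more as steps proceed
        return frames + blend * (target - frames)

    def generate_video(num_frames=16, height=32, width=32, steps=50, seed=0):
        rng = np.random.default_rng(seed)
        target = rng.random((num_frames, height, width))        # pretend "clean" video
        frames = rng.normal(size=(num_frames, height, width))   # start from pure static
        for step in range(steps):
            frames = denoise_step(frames, step, steps, target)
        return frames

    video = generate_video()
    print(video.shape)  # (16, 32, 32): noise gradually smoothed into frames

Real systems differ in almost every detail (the network, the noise schedule, the text conditioning), but the overall shape, many small denoising steps applied to an initially random signal, is the idea the article is describing.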

However, I must note that OpenAI’s Sora is much better. The videos Sora produces are longer, more dynamic, and flow together better than competitors’. Sora feels like it creates real videos, whereas competitor models feel like a stop motion of AI images. OpenAI has once again upended another field of AI with a video generator that puts the competition to shame.

Link to the rest at Gizmodo

Judge rejects most ChatGPT copyright claims from book authors

From Ars Technica:

A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in their promptly filed motion to dismiss these cases last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

As for the argument that OpenAI caused economic injury by unfairly repurposing authors’ works, the judge said that even if authors could show evidence of a DMCA violation, they could only speculate about what injury was caused.

. . . .

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

. . . .

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors when spouting outputs included in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyright materials to ensure that chatbot and other AI tools’ outputs “meet the needs of today’s citizens.”

Link to the rest at Ars Technica

AI Companies Take Hit as Judge Says Artists Have ‘Public Interest’ In Pursuing Lawsuits

From Art News:

Artists have secured a small but meaningful win in their lawsuit against generative artificial intelligence art generators in what’s considered the leading case over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems. A federal judge refused to acknowledge that the companies can avail themselves of free speech protections and stated that the case is in the public interest.

U.S. District Judge William Orrick, in an order issued last week, rebuffed arguments from StabilityAI and Midjourney that they are entitled to a First Amendment defense arising under a California statute allowing for the early dismissal of claims intended to chill free speech. They had argued that the suit targets their “speech” because the creation of art reflecting new ideas and concepts — like those conveyed in text prompts to elicit hyper-realistic graphics — is constitutionally protected activity.

The suit, filed last year in California federal court, targets Stability’s Stable Diffusion, which is incorporated into the company’s AI image generator DreamStudio and allegedly powers DeviantArt’s DreamUp and Midjourney.

In October, the court largely granted AI art generators’ motion to dismiss the suit while allowing some key claims to move forward. It declined to advance copyright infringement, right of publicity, unfair competition and breach of contract claims against DeviantArt and Midjourney, concluding the allegations are “defective in numerous respects.” Though a claim for right of publicity was not reasserted when the suit was refiled, DeviantArt asked the court to rule on its motion to strike that claim for good so it could recover attorney fees and resolve the issue, which could impact other cases in which AI companies assert First Amendment protections. Artists, in response, cried foul play. They stressed that the companies are abusing California’s anti-SLAPP law and attempting to “strongarm and intimidate [them] into submission.”

The right of publicity claim concerned whether AI art generators can use artists’ names or styles to promote their products. The suit argued that allowing the companies to continue doing so cuts into the market for their original works.

Orrick sided with artists on the issue of whether the companies can dismiss the claim under the state’s anti-SLAPP statute, finding that the “public interest exemption is met here.” He noted that the claim was initially dismissed because the suit failed to substantiate allegations that the companies used the names of Sarah Andersen, Kelly McKernan or Karla Ortiz — the artists who brought the complaint — to advertise their products.

“Had plaintiffs been able to allege those facts, they would have stated their claims,” the order stated. “That does not undermine that their original right of publicity claims were based on the use of their names in connection with the sale or promotion of DreamUp, a type of claim that would undoubtedly enforce California’s public policy to protect against misappropriation of names and likenesses.”

Lawyers for the artists have reserved the right to reassert their right of publicity claim pending discovery.

Though the court in October dismissed most of the suit, a claim for direct infringement against Stability AI was allowed to proceed based on allegations that the company used copyrighted images without permission to create its AI model Stable Diffusion. One of its main defenses revolves around arguments that training its chatbot does not include wholesale copying of works but rather involves developing parameters — like lines, colors, shades and other attributes associated with subjects and concepts — from those works that collectively define what things look like. The issue, which may decide the case, remains contested.

Link to the rest at Art News

AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.

From The Wall Street Journal:

Decades after automation began taking and transforming manufacturing jobs, artificial intelligence is coming for the higher-ups in the corporate office.

The list of white-collar layoffs is growing almost daily and includes job cuts at Google, Duolingo and UPS in recent weeks. While the total number of jobs directly lost to generative AI remains low, some of these companies and others have linked cuts to new productivity-boosting technologies such as machine learning and other AI applications.

Generative AI could soon upend a much bigger share of white-collar jobs, including middle and high-level managers, according to company consultants and executives. Unlike previous waves of automation technology, generative AI doesn’t just speed up routine tasks or make predictions by recognizing data patterns. It has the power to create content and synthesize ideas—in essence, the kind of knowledge work millions of people now do behind computers.

That includes managerial roles, many of which might never come back, the corporate executives and consultants say. They predict the fast-evolving technology will revamp or replace work now done up and down the corporate ladder in industries ranging from technology to chemicals.

“This wave [of technology] is a potential replacement or an enhancement for lots of critical-thinking, white-collar jobs,” said Andy Challenger, senior vice president of outplacement firm Challenger, Gray & Christmas.

Some of the job cuts taking place already are a direct result of the changes coming from AI. Other companies are cutting jobs to spend more money on the promise of AI and under pressure to operate more efficiently.

Meanwhile, business leaders say AI could affect future head counts in other ways. At chemical company Chemours, executives predict they won’t have to recruit as many people in the future.

“As the company grows, we’ll need fewer new hires as opposed to having to do a significant retrenchment,” said Chief Executive Mark E. Newman.

. . . .

Since last May, companies have attributed more than 4,600 job cuts to AI, particularly in media and tech, according to Challenger’s count. The firm estimates the full tally of AI-related job cuts is likely higher, since many companies haven’t explicitly linked cuts to AI adoption in layoff announcements.

Meanwhile, the number of professionals who now use generative AI in their daily work lives has surged. A majority of more than 15,000 workers in fields ranging from financial services to marketing analytics and professional services said they were using the technology at least once a week in late 2023, a sharp jump from May, according to Oliver Wyman Forum, the research arm of management-consulting group Oliver Wyman, which conducted the survey.

Nearly two-thirds of those white-collar workers said their productivity had improved as a result, compared with 54% of blue-collar workers who had incorporated generative AI into their jobs.

Alphabet’s Google last month laid off hundreds of employees in business areas including hardware and internal-software tools as it reins in costs and shifts more investments into AI development. The language-learning software company Duolingo said in the same week that it had cut 10% of its contractors and that AI would replace some of the content creation they had handled.

. . . .

United Parcel Service said that it would cut 12,000 jobs—primarily those of management staff and some contract workers—and that those positions weren’t likely to return even when the package-shipping business picks up again. The company has ramped up its use of machine learning in processes such as determining what to charge customers for shipments. As a result, the company’s pricing department has needed fewer people.

The use of generative AI and related technologies is also changing some jobs at UPS “by reducing repetitive tasks and physical stress,” UPS spokesperson Glenn Zaccara said.

As AI adoption grows, it is likely to reconfigure management hierarchies, the Oliver Wyman study projects. Entry-level workers are likely to bear the initial brunt as more of their duties are automated away. In turn, future entry-level work will look more like first-level management roles.

The cascading effect could flatten layers of middle management, the staging ground for senior leadership roles, according to the analysis.

More than half of senior white-collar managers surveyed in the study said they thought their jobs could be automated by generative AI, compared with 43% of middle managers and 38% of first-line managers.

Still, business leaders across the economy say they expect the new technology will augment and elevate some white-collar roles, giving employees and managers the means to do more meaningful work—both for their companies and in their careers.

At Prosus, a global technology-investment group based in the Netherlands, executives say that is already happening as AI automates more of its workforce’s tasks.

“Engineers, software developers and so on can do the work twice as fast,” said Euro Beinat, Prosus’s global head of AI and data science. “One of the side effects is that a lot of these employees can do more and do slightly different things than we were doing before.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Sales Predictions for Ebooks vs. Traditional Books Over the Next Ten Years

Prompt: describe the future sales picture over the next ten years for ebooks vs. traditional books printed on paper

ChatGPT4:

Predicting the future sales picture for ebooks versus traditional books printed on paper over the next ten years involves analyzing current trends, technological advancements, and changing consumer behaviors. Here are several key factors and projections:

Continued Growth of Ebooks: Ebooks are expected to continue their growth trajectory, driven by the convenience of digital reading, advancements in e-reader technology, and the increasing comfort of consumers with digital content. The ability to carry thousands of books in one device, instant access to purchases, and often lower prices compared to print books contribute to the appeal of ebooks.

Stable Demand for Print Books: Despite the rise of digital reading, print books are likely to maintain a significant and stable market share. Many readers still prefer the tactile experience of a physical book, and certain genres and formats, such as children’s books, cookbooks, and illustrated works, often perform better in print. Additionally, the resurgence of independent bookstores and the trend of book collecting suggest a sustained appreciation for physical books.

Hybrid Publishing Models: Publishers are increasingly offering both print and digital formats to cater to diverse consumer preferences. This hybrid approach allows readers to choose the format that best suits their reading habits, whether it’s print, digital, or a combination of both.

Impact of Subscription Models and Library Lending: Subscription services for ebooks and digital library lending are becoming more popular, offering readers access to a vast array of titles at a fixed cost. This model encourages the consumption of digital content but may also impact individual ebook sales. Print book lending through libraries continues to support the circulation of physical books.

Technological Innovations: Advances in e-reader technology, such as improved battery life, better display technology, and enhanced interactive features, will make ebooks even more attractive to consumers. The development of digital rights management (DRM) and piracy prevention methods will also play a role in the growth of the ebook market.

Environmental Concerns: Growing awareness of environmental issues may lead some consumers to prefer ebooks over print books to reduce paper use and the carbon footprint associated with book production and distribution. However, concerns about electronic waste and the energy consumption of digital devices might temper this trend.

Market Penetration and Global Reach: Ebooks offer publishers the opportunity to reach a global audience more easily, particularly in regions where the distribution of physical books is challenging. The increasing penetration of smartphones, tablets, and e-readers worldwide supports the potential for international ebook growth.

Changing Demographics: Younger readers, who are more accustomed to digital devices for entertainment and education, may be more inclined toward ebooks. However, print books continue to appeal across all age groups for their collectibility, giftability, and lack of screen fatigue.

In conclusion, over the next ten years, the market is likely to see a balance between ebooks and print books, with each format catering to different preferences and situations. While ebooks may gain in popularity and market share due to technological advances and changing consumer habits, print books are expected to remain an essential part of the publishing landscape, sustained by their unique qualities and reader attachment.

PG Comments – Wetware Analysis:

As with some of PG’s earlier queries about future events, ChatGPT4’s response includes lots of on-the-one-hand/on-the-other-hand analysis as if the AI is shying away from firm or firmish predictions. PG is going to try some different prompt creation strategies to encourage the AI to show a little more backbone.

Getting Down to Business

From Medium:

The creation of content is a noteworthy use of AI in book writing. Advanced algorithms for natural language processing enable AI systems to generate text that is both logical and appropriate for the given context. AI is being investigated by publishers and authors more frequently to help with book drafting, editing, and even section generation.

Thanks to large datasets, artificial intelligence algorithms are able to identify patterns in writing styles, themes, and structures. This facilitates the production of content that conforms to particular genres or emulates the traits of well-known writers. AI-generated literature may raise questions about its authenticity, but some purists see it as an additional creative tool to human creativity.

There are more and more instances of AI and human authors working together. AI is a useful tool for writers as it can help with character development, story twist suggestions, and idea generation. The creative process is improved by this cooperative approach, which makes use of the advantages of both machine efficiency and human inventiveness.

. . . .

But using AI to write books also brings up philosophical and ethical issues. Can a machine really understand the subtleties of culture, the depth of storytelling, or the complexities of human emotions? Even though AI systems are capable of producing text and copying styles, true creativity and emotional connection are frequently derived from the human experience.

Notwithstanding the progress made, there is still continuous discussion about AI’s place in book writing. Preserving the genuine voice of human authors and the breadth of human experiences is a delicate balance that demands careful consideration, even though it surely offers efficiency and creative possibilities.

In summary, the connection between artificial intelligence and book writing is quickly changing. Automation improves productivity, offers opportunities for collaboration, and provides data-driven insights, but it also raises questions about what makes human creativity truly unique. As technology develops further, the future of literature will be shaped by striking the correct balance between the benefits of artificial intelligence (AI) and the inherent qualities of human storytelling.

Link to the rest at Medium

PG noted the portion of the last paragraph of the OP that talked about “the inherent qualities of human storytelling.”

While that portion of the OP certainly caused PG to feel warm and fuzzy for a few moments, retired lawyer PG butted in with a question about what “the inherent qualities of human storytelling” actually are.

Certainly, “the inherent qualities of human storytelling” are not manifested equally across the breadth of humanity. Some people are better storytellers than other people are. Some people are great at telling stories in print and others are great at telling stories on the stage or with a movie or television camera pointed at them and, on relatively rare occasions, some people are good at storytelling in multiple media.

For a motion picture, script-writing storytellers are involved, acting storytellers are involved, directing storytellers are involved, etc. We’ve already seen successful motion pictures like The Matrix and 2001: A Space Odyssey in which non-human acting storytellers play a key role, and in every Disney cartoon movie, where there are no human actors, they play all the roles.

As far as human emotions are involved, are there a lot of people who didn’t shed a tear when Bambi’s mother was killed by a hunter?

PG notes that AI foundational research has been going on for a long time. (More on that in a separate post to appear on TPV in the foreseeable future.)

However, the widespread use of AI systems by a lot of people is a relatively recent phenomenon that requires software and hardware sufficient to respond to a flood of prompts from a great many people at the same time. Hosting an AI program available to all comers today requires a lot of computing power on the scale of very large cloud computing services like Amazon Web Services, Microsoft’s Azure, and the Google Cloud Platform.

However, the history of modern computer development has been a nearly steady stream of smaller, cheaper, and more powerful devices. A couple of online reports claim your Apple Watch has more than twice the computing power of a Cray-2 supercomputer from 1985.

There is no guarantee that your next cell phone will equal the computing power of a group of giant cloud computing systems within the next couple of years, but Moore’s Law says it’s only a matter of time.

Moore’s Law is the observation that the number of transistors on an integrated circuit will double every two years with minimal rise in cost. Intel co-founder Gordon Moore predicted a doubling of transistors every year for the next 10 years in his original paper published in 1965. Ten years later, in 1975, Moore revised this to doubling every two years. This extrapolation based on an emerging trend has been a guiding principle for the semiconductor industry for close to 60 years.

Intel Newsroom
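
For anyone who wants to see what that doubling schedule looks like in numbers, here is a quick back-of-the-envelope sketch. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is used only as a convenient, commonly cited starting point; the point is the growth curve, not any particular chip.

    # Rough illustration of Moore's Law: transistor counts doubling every two years.
    start_year, start_transistors = 1971, 2_300   # approximate count for the Intel 4004

    for year in range(start_year, 2024, 10):
        doublings = (year - start_year) / 2
        count = start_transistors * 2 ** doublings
        print(f"{year}: ~{count:,.0f} transistors")

By 2021 the compounding alone puts the figure in the tens of billions, which is roughly where the largest chips actually landed, give or take the usual caveats about Moore’s Law slowing down.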

PG suggests that opinions about the ability of AI systems to generate book-length stories that many people will pay for are likely to be revised in the future.

As always, feel free to share your thoughts in the comments.

Italian Publishers: Toughen Europe’s AI Act Regulations

From Publishing Perspectives:

A potentially pivotal moment occurs this week in the closely watched development of the European Union’s “AI Act.”

Markets in many parts of the world, not just in Europe, are following along for clues and cues in terms of how artificial intelligence can be developed and applied “safely”—and even that term safely can be hotly debated, of course.

On Wednesday (December 6), the AI Act is to have its fifth “trilogue.” That’s the term for a negotiating session among the European Parliament, the European Commission, and the Council of the European Union. Previous trilogue meetings on the Artificial Intelligence Act were held in June, July, September, and October. Originally, the idea was that this December trilogue would finalize the bill for the bloc this year, but there’s increasing concern that such progress will take longer. This, on legislation that saw its first draft in 2021 and was first proposed in 2019.

What has happened in the interim—you won’t be surprised to read—is the rise of “foundation models.” Sometimes called “general purpose,” these are the systems designed as large-language models built for “deep learning” that can be adapted for a wide array of scenarios. This contrasts, of course, with the concept of a traditional program designed to handle a specific and narrow task set, maybe speeding up a bit of office drudge work. Such less ambitious programs require nothing like some foundation models’ contentious free-range feeding on information—often copyrighted content—to build their algorithmic-response structures.

A foundation model is a form of what’s called “generative artificial intelligence,” meaning that it can generate output from a broad base of ingested data.

At the highest intentional level, the over-arching core of discussion around this legislation has been, to quote the EU’s material, to handle “concerns especially with regard to safety, security, and fundamental rights protection.” But if the devil is usually in the details, a construct of digital details presents such a major chance for devilry that many observers now are worried about this important legislation’s progress.

Needless to say, the upheaval around OpenAI last month, when its board fired and then rehired Sam Altman, seemed to confirm fears that a major corporate player in the AI space could be thrown into turmoil by inscrutable internal governance issues. As Kevin Chan at the Associated Press is writing today, once the Altman fiasco had played out, European Commissioner Thierry Breton said at an AI conference, “‘At least things are now clear’ that companies like OpenAI defend their businesses and not the public interest.”

And yet, much discussed in coverage on the run-up to Wednesday’s trilogue is an unexpected resistance that’s been mounted by France, Germany, and Italy, which presented a pitch for self-regulation among AI players.

At The Guardian, John Naughton wrote up this “Franco-German-Italian volte face,” as he calls it, as the result of everyone’s worst fears: “the power of the corporate lobbying that has been brought to bear on everyone in Brussels and European capitals generally.” More broadly, the assumption is that in each EU member-state seeming to make that about-face and start talking of self-regulation as the way to go, something has been promised by industry advocates for the local national AI companies, a divide-and-conquer effort by lobbyists.

Link to the rest at Publishing Perspectives

PG notes that the reaction of the European publishers sounds a lot like that of American publishers.

As far as regulation is concerned, the current AI programs/services he has tried offer their AI capabilities online, so geographical fences like the European Union’s “AI Act” are unlikely to prevent individuals or organizations from using AI services offered over the internet by a provider located anywhere in the world.

As one cyberlaw website put it, “Some believe that the internet should be operated as if it were a land all its own, independent of national policy.”

Provide a fictional plot outline in Zadie Smith’s style and themes

PG asked ChatGPT4 to create a detailed plot outline for The Fraud by Zadie Smith, a book that is a current NY Times bestseller.

The AI responded as follows:

As of my last update in April 2023, “The Fraud” by Zadie Smith is not a recognized title among her works. It’s possible that it’s a new release after my last training data, or it might be a lesser-known or misremembered title.

However, if you’re interested in a fictional plot outline inspired by Zadie Smith’s style and themes, I can certainly create one for you. Smith’s works often explore themes of identity, race, class, and the complexities of contemporary life, often set against the backdrop of multicultural London. Please let me know if you’d like me to proceed with this approach, or if there are specific elements you’d like included in this fictional plot outline!

PG took the AI’s advice and posted the following prompt:

Provide a fictional plot outline in Zadie Smith’s style and themes.

Here’s the response he received from ChatGPT4:

Title: “The Tapestry of Us”

Plot Outline:

  1. Setting and Introduction:
    • London, in the vibrant and diverse neighborhood of Willesden.
    • Introduce the central character, Aisha, a young mixed-race woman grappling with her identity.
  2. Aisha’s Background:
    • Aisha’s parents: her mother is a Jamaican immigrant and her father a British academic.
    • Flashbacks to her childhood show a constant struggle between her Jamaican heritage and British upbringing.
  3. The Inciting Incident:
    • Aisha receives a mysterious letter from an unknown relative in Jamaica, sparking curiosity about her roots.
  4. Exploring Identity:
    • Aisha’s journey delves into themes of racial identity, class, and belonging.
    • She navigates the complexities of being biracial in modern London.
  5. Contrasting Perspectives:
    • Introduce a cast of diverse characters, each offering different views on identity and culture.
    • Include a love interest, perhaps someone who challenges Aisha’s perceptions.
  6. Journey to Jamaica:
    • Aisha decides to visit Jamaica to explore her heritage.
    • The trip is eye-opening; she experiences a culture both familiar and alien.
  7. Revelations:
    • In Jamaica, Aisha uncovers family secrets and histories that reshape her understanding of her identity.
  8. Return to London:
    • Aisha returns with a new sense of self.
    • Her experiences in Jamaica influence her life in London, from relationships to career choices.
  9. Resolution:
    • Aisha embraces a multifaceted identity, understanding that she can be both Jamaican and British.
    • The story concludes with Aisha starting a community project that celebrates multicultural identities.
  10. Themes:
    • The narrative weaves through themes of race, class, the immigrant experience, and the search for personal identity.
    • It explores the idea that identity is not fixed but a tapestry of experiences and cultures.

This fictional outline borrows from Zadie Smith’s style in exploring deep societal issues through personal narratives, set against the backdrop of a multicultural London, much like her acclaimed works such as “White Teeth” and “Swing Time”.

End of ChatGPT4

Since PG has not read any of Ms. Smith’s books, he’ll leave an assessment of the outline to any who have.

Help Me Get Out of a Speeding Ticket

Prompt

Prepare a written argument that I wasn’t driving recklessly when I was pulled over by a highway patrol car on the 405 freeway in Long Beach at 3:00 AM for driving 8 miles over the speed limit when the freeway was almost completely empty and the highway patrol officer ignored another car that had just passed me traveling at least 20 miles per hour over the speed limit.

ChatGPT4

When preparing a written argument to contest a traffic citation for allegedly driving recklessly on the 405 freeway in Long Beach, it’s crucial to focus on specific legal and factual points. Here is an outline to structure your argument:

  1. Introduction:
    • State the date, time, and location of the incident.
    • Briefly describe the circumstances leading to the traffic stop.
  2. Clarification of Charges:
    • Clearly state the specific violation you were cited for (e.g., driving 8 miles over the speed limit).
  3. Road Conditions and Context:
    • Emphasize that the freeway was nearly empty at the time of the incident.
    • Explain that the conditions allowed for a slightly higher speed without compromising safety.
  4. Comparison with Other Vehicles:
    • Mention that another vehicle passed you traveling significantly faster (approximately 20 miles over the speed limit).
    • Argue that this indicates your speed was not excessively high compared to the flow of traffic.
  5. Definition of Reckless Driving:
    • Cite the legal definition of reckless driving.
    • Argue that driving 8 miles over the speed limit, in this context, does not meet the criteria for reckless driving.
  6. Safety Record and Driving Behavior:
    • If applicable, mention your clean driving record or lack of prior traffic violations.
    • Emphasize your commitment to safe driving and respect for traffic laws.
  7. Request for Consideration:
    • Request leniency or dismissal based on the specific circumstances of the case.
    • If appropriate, express willingness to attend a defensive driving course as an alternative to penalties.
  8. Conclusion:
    • Summarize your key points.
    • State that you believe the evidence supports a finding that you were not driving recklessly.

Remember, the goal is to provide a clear, factual, and concise argument that challenges the notion of reckless driving under the specific conditions of your case. It’s also important to remain respectful and professional in your tone throughout the document.

End of ChatGPT4

PG doesn’t see much genius in the solution that ChatGPT generated. It won’t put many lawyers out of business with this type of performance.

Diagram a Sentence

PG asked ChatGPT4 to do something with the first line from The Fortune Men by Nadifa Mohamed.

Prompt: Diagram the following sentences: “‘The King is dead. Long live the Queen.’ The announcer’s voice crackles from the wireless and winds around the rapt patrons of Berlin’s Milk Bar as sinuously as the fog curls around the mournful street lamps, their wan glow barely illuminating the cobblestones.”

ChatGPT4: To diagram the given sentence, we’ll break it down into its constituent parts, focusing on the main clauses, subordinate clauses, phrases, and key parts of speech. The sentence can be diagrammed as follows:

  1. Main Clause: “The announcer’s voice crackles from the wireless”
    • Subject: “The announcer’s voice”
    • Verb: “crackles”
    • Prepositional Phrase: “from the wireless”
  2. Subordinate Clause: “as sinuously as the fog curls around the mournful street lamps”
    • Conjunction: “as”
    • Adverb describing how the voice crackles: “sinuously”
    • Comparative Conjunction: “as”
    • Subject: “the fog”
    • Verb: “curls”
    • Prepositional Phrase: “around the mournful street lamps”
  3. Additional Phrase: “their wan glow barely illuminating the cobblestones”
    • Possessive Pronoun: “their”
    • Subject: “wan glow”
    • Verb: “illuminating”
    • Direct Object: “the cobblestones”
    • Adverb modifying the verb: “barely”
  4. Independent Sentences: “‘The King is dead. Long live the Queen.'”
    • These are two separate declarative sentences.
  5. Contextual Setting: “winds around the rapt patrons of Berlin’s Milk Bar”
    • Verb: “winds”
    • Prepositional Phrase: “around the rapt patrons”
    • Prepositional Phrase: “of Berlin’s Milk Bar”

In this diagram, each part of the sentence is broken down to show the structure and how different elements are connected. This visualization helps in understanding the grammatical structure and the relationships between different parts of the sentence.

END ChatGPT4

PG will leave the task of assessing how the AI did on this challenge to visitors who still remember what a “Direct Object” is. He doesn’t remember Mrs. Lascelles ever saying anything about “contextual settings.”

PG was expecting the prompt to produce a traditional graphical sentence diagram.

PG speculates that the AI doesn’t have the ability to output graphic diagrams. (Yet)
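
For what it’s worth, conventional NLP tooling can already draw something in the neighborhood of what PG had in mind. A short sketch using the open-source spaCy library (assuming spaCy and its small English model have been installed) produces a graphical dependency diagram rather than a classic Reed-Kellogg sentence diagram, but it is at least a picture:

    import spacy
    from spacy import displacy

    # Setup (once): pip install spacy
    #               python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("The announcer's voice crackles from the wireless.")

    # Render a dependency diagram as SVG markup and save it for viewing in a browser.
    svg = displacy.render(doc, style="dep", jupyter=False)
    with open("diagram.svg", "w", encoding="utf-8") as f:
        f.write(svg)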

How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI

From The Wall Street Journal:

Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.

The blowup at OpenAI showed its influence—and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy.

Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences.

OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a four-person variant of chess known as Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.

Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind. They favor safety over speed for AI development. The movement, which includes people who helped shape the generative-AI boom, is insular and multifaceted but shares a belief in doing good in the world—even if that means simply making a lot of money and giving it to worthy recipients.

Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute.

Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University’s Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said. The board said he failed to be “consistently candid.”

The company announced Wednesday that Altman would return as chief executive and Sutskever, McCauley and Toner would be replaced. Emmett Shear, a tech executive favoring a slowdown in AI development and recruited as the interim CEO, was out.

Altman’s dismissal had triggered a company revolt that threatened OpenAI’s future. More than 700 of about 770 employees had called for Altman’s return and threatened to jump ship to Microsoft, OpenAI’s biggest investor. Sutskever said Monday he regretted his vote.

“OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence,” venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece for The Information.

Altman toured the world this spring warning that AI could cause serious harm. He also called effective altruism an “incredibly flawed movement” that showed “very weird emergent behavior.”

The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas, Austin, who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said.

The movement has spread among the armies of tech-industry scientists, investors and executives racing to create AI systems to mimic and eventually surpass human ability. AI can bring global prosperity, but it first must be prevented from wreaking havoc, according to those in the movement.

. . . .

Google and other companies are trying to be the first to roll out AI systems that can match the human brain. They largely regard artificial intelligence as a tool to advance work and economies at great profit.

The movement’s high-profile supporters include Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, the billionaire founder of Skype, who have pledged billions of dollars to effective-altruism research. Before his fall, Bankman-Fried had also pledged billions. Elon Musk has called the writings of effective altruism’s co-founder William MacAskill “a close match for my philosophy.”

Marc Andreessen, the co-founder of venture-capital firm Andreessen Horowitz, and Garry Tan, chief executive of the startup incubator Y Combinator, have criticized the movement. Tan called it an insubstantial “virtue signal philosophy” that should be abandoned to “solve real problems that create human abundance.”

Urgent fear among effective-altruists that AI will destroy humanity “clouds their ability to take in critique from outside the culture,” said Shazeda Ahmed, a researcher who led a Princeton University team that studied the movement. “That is never good for any community trying to solve any trenchant problem.”

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.

. . . .

One fall day last year, thousands of paper clips in the shape of OpenAI’s logo arrived at the company’s San Francisco office. No one seemed to know where they were from, but everybody knew what they meant.

The paper clip has become a symbol of doom in the AI community. The idea is that an artificial-intelligence system told to build as many paper clips as possible might destroy all of humanity in its drive to maximize production.

The prank was done by an employee at crosstown rival, Anthropic, which itself sprang from divisions over AI safety.

Dario Amodei, OpenAI’s top research scientist, split from the company, joined by several company executives in early 2021. They started Anthropic, an AI research company friendly to effective altruists.

Bankman-Fried had been one of Anthropic’s largest investors and supported the company’s mission, which favored AI safety over growth and profits. 

. . . .

The fear of futuristic AI systems hasn’t stopped even those worried about safety from trying to build artificial general intelligence or AGI—advanced systems that match or outdo the human brain. 

At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Science in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions.

“Our goal is to make a mankind-loving AGI,” said Sutskever, the company’s chief scientist.

“Feel the AGI,” he said. “Repeat after me. Feel the AGI.”

Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and ensure it comports with a set of human values. So far, no AI company has said what those values should be.

At Google, the merging this year of its two artificial intelligence units—DeepMind and Google Brain—triggered a split over how effective-altruism principles are applied, according to current and former employees.

DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units.

Google Brain employees say they have largely ignored effective altruism and instead explore practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter.

One former employee compared the merger with DeepMind to a forced marriage, “making many people squirm at Brain.”

. . . .

Arjun Panickssery, a 21-year-old AI safety researcher, lives with other effective altruists at Andromeda House, a five-bedroom, three-story home a few blocks from the University of California, Berkeley campus.

They host dinners, and visitors are sometimes asked to reveal their P(doom)—estimates of the chances of an AI catastrophe. 

Berkeley, Calif., is an epicenter of effective altruism in the Bay Area, Panickssery said. Some houses designate “no-AI” zones to give people an escape from constant talk about artificial intelligence. 

Open Philanthropy’s then-CEO Holden Karnofsky had once lived with two senior OpenAI executives, according to Open Philanthropy’s website. Since 2015, Open Philanthropy, a nonprofit that supports effective-altruism causes, has given away $327 million to AI-related causes, including $30 million to OpenAI, its website shows.

When Karnofsky was engaged to Daniela Amodei, now Anthropic’s president, they were roommates with Amodei’s brother Dario, now Anthropic’s CEO.

In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-theme ceremony. Wedding guests were encouraged to donate to causes recommended by Karnofsky’s effective-altruism charity, GiveWell, and to read a 457-page tome by German philosopher Jürgen Habermas beforehand. 

“This is necessary context for understanding our wedding,” the couple wrote on a website for the event.

. . . .

The effective-altruism movement dates back roughly two decades, when a group of Oxford University philosophers and those they identified as “super-hardcore do-gooders” were looking for a marketing term to promote their utilitarian version of philanthropy.

Adherents believe in maximizing the amount of good they do with their time. They can earn as much money as possible, then give much of it away to attack problems that government and traditional nonprofits are ignoring or haven’t solved. They focus on ideas that deliver the biggest impact or help the largest number of people per dollar spent. 

Bankman-Fried, who was convicted this month, said he was building his fortune only to give most of it away.

. . . .

The gatherings and events, held around the world, are often closed to outsiders. Organizers of a recent effective-altruism conference in New York declined the request of a Wall Street Journal reporter to attend, saying in an email that there was “a high bar for admissions.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Intelligence is the ability to learn from your mistakes. Wisdom is the ability to learn from the mistakes of others.

Author Unknown

He’s an intelligent man, but it takes something more than intelligence to act intelligently.

Fyodor Dostoevsky, Crime and Punishment

Hubris is interesting, because you get people who are often very clever, very powerful, have achieved great things, and then something goes wrong – they just don’t know when to stop.

Margaret MacMillan

Hubris and science are incompatible.

Douglas Preston

Hubris calls for nemesis, and in one form or another it’s going to get it, not as a punishment from outside but as the completion of a pattern already started.

Mary Midgley

Hemingway vs. Fitzgerald

Prompt

Compare and contrast the novels of Ernest Hemingway with the novels of F. Scott Fitzgerald, including excerpts from the writings of each.

ChatGPT4

Ernest Hemingway and F. Scott Fitzgerald are two of the most acclaimed American authors of the 20th century, known for their distinct literary styles and thematic explorations. Their works reflect the complexities and changes in American society during the early to mid-20th century, but their approaches and thematic focuses have notable differences.

Ernest Hemingway

  1. Writing Style: Hemingway is famous for his concise, straightforward, and unadorned prose, often referred to as the “Iceberg Theory” or “Theory of Omission”. He believed in saying as little as possible and leaving much to the reader’s interpretation.
  2. Themes: Hemingway’s works often revolve around themes of stoicism, masculinity, and the human struggle against the natural world. His characters typically face physical or emotional trials that test their courage and moral integrity.
  3. Notable Works:
    • “The Old Man and the Sea”
    • “A Farewell to Arms”
    • “For Whom the Bell Tolls”
  4. Excerpt: From “The Old Man and the Sea”: “He was an old man who fished alone in a skiff in the Gulf Stream and he had gone eighty-four days now without taking a fish.”

F. Scott Fitzgerald

  1. Writing Style: Fitzgerald’s style is often more elaborate and richly descriptive compared to Hemingway’s. He is known for his lyrical, flowing prose and his ability to capture the mood of the Jazz Age.
  2. Themes: Fitzgerald’s novels commonly address themes of decadence, idealism, resistance to change, and the disillusionment of the American Dream. His works often depict the high life of the 1920s and the moral emptiness that can accompany wealth and status.
  3. Notable Works:
    • “The Great Gatsby”
    • “Tender is the Night”
    • “This Side of Paradise”
  4. Excerpt: From “The Great Gatsby”: “So we beat on, boats against the current, borne back ceaselessly into the past.”

Comparison

  • Writing Style: Hemingway’s minimalist, understated style contrasts sharply with Fitzgerald’s ornate and poetic approach. While Hemingway focuses on directness and simplicity, Fitzgerald employs a more decorative language to create mood and tone.
  • Themes: Both explore themes related to the human condition, but their focal points differ. Hemingway often delves into the raw aspects of human nature and survival, whereas Fitzgerald tends to focus on societal issues, particularly the moral implications of the American Dream.
  • Contextual Influence: The era they lived in greatly influenced both writers. The aftermath of World War I, the Roaring Twenties, and the Great Depression are reflected distinctly in their works. Hemingway’s experiences as a war correspondent and ambulance driver in World War I influenced his themes of heroism and human endurance, while Fitzgerald’s portrayal of the Jazz Age reflects his fascination with wealth and celebrity.

Conclusion

Ernest Hemingway and F. Scott Fitzgerald offer two contrasting windows into American literature. Hemingway’s terse prose and focus on existential struggles stand in contrast to Fitzgerald’s lyrical exploration of social mores and disillusionment. Both, however, remain quintessential in their portrayal of their respective eras, capturing the spirit and challenges of early 20th-century America.

New ways generative AI can help you find holiday gifts

From Google’s The Keyword:

Technology has improved so many aspects of the way we shop, like how we research, find inspiration, try out products and look for deals. And now, as we approach the busiest shopping weeks of the year, technology — especially generative AI — is making it much easier to find gifts for others (and ourselves).

Read on for more about our new shopping experiences and opt into Search Labs at goo.gle/sge-gifts.

Get AI-generated gift ideas

Search Generative Experience (SGE), which brings generative AI capabilities into Search, is designed to help you understand topics in faster, easier ways and uncover new insights — which can be especially useful for gift-giving. And today we’re rolling out a new capability within SGE to help holiday shoppers more easily get inspired and find that perfect holiday gift.

Let’s say you’re looking for a present for the budding chef in your life and don’t know where to start. Search “great gifts for home cooks”, and you’ll now see a wide variety of subcategories to explore — like specialty tools, artisanal ingredients, culinary subscriptions and even experiences like cooking classes — with options from brands large and small. Just tap into a category to browse shoppable gift options and select a product to buy.

. . . .

Shop what’s (literally) on your mind

‘Tis the season for giving, but about half of people actually shop for themselves over the holidays. Shoppers, especially apparel shoppers, also often have a very specific vision in mind: In fact, 20% of apparel queries are five words or longer. So we’re introducing a new way to put the item that’s on your mind right into your shopping cart with AI-powered image generation for shopping.

We’re connecting our image generation technology with more than 35 billion listings in our Shopping Graph to help you better visualize and find the style you’re looking for. Soon when you use SGE to look for apparel, you’ll have the option to generate photorealistic images inspired by the words you search — and shop products that match those images.

Let’s say you want a new winter coat — a colorful, patterned puffer jacket — but you haven’t found quite what you’re looking for. With generative AI in Search, just look for that style and tap “Generate images” to see photorealistic images that match your vision. And if you want to make an adjustment, like if you’d prefer a colorful metallic jacket instead of a patterned one, just enter that refinement in the text box to see a new set of images. Once you’ve found one you like, scroll down to see similar shoppable products.

Link to the rest at Google’s The Keyword

PG wonders what’s going to happen when this capability is integrated with online dating apps.

Raskolnikov Updated

Prompt:

Provide a detailed character sketch of Rodion Raskolnikov updated for 2023

From GPT4:

Updating the character of Rodion Raskolnikov from Fyodor Dostoevsky’s “Crime and Punishment” for a 2023 context involves reimagining his background, motivations, and conflicts to resonate with contemporary themes and issues. Here’s a detailed character sketch for a modern-day Raskolnikov:

Name: Rodion “Rodya” Raskolnikov

Background

  • Age: Late 20s.
  • Education: A brilliant former student of psychology, who dropped out of a prestigious university due to financial constraints and disillusionment with the academic system.
  • Family: Comes from a lower-middle-class family. His mother is a school teacher, and he has a younger sister who is struggling to find her footing in a competitive job market.
  • Location: Lives in a modest apartment in a large, culturally diverse city.

Personality Traits

  • Intellectually Gifted: Possesses a sharp intellect and often engages in philosophical musings about society and morality.
  • Isolated: Tends to isolate himself from others, preferring his own company and thoughts.
  • Altruistic Yet Egotistical: Displays moments of genuine compassion, especially towards his family, but is also deeply egocentric, believing himself to be morally and intellectually superior.
  • Anxious and Paranoid: Exhibits signs of anxiety and paranoia, particularly after committing his crime.

Modern Adaptation

  • Occupation: Works part-time in a mundane job, possibly in data entry or as a delivery driver, which he finds unfulfilling. In his spare time, he writes blogs and essays on social and ethical issues.
  • Financial Struggles: His financial difficulties are a significant source of stress, driving him to question the social structures and disparities in wealth and power.
  • Crime: Raskolnikov’s crime in 2023 could involve hacking into a corrupt corporation’s system or committing a cybercrime against a morally dubious business tycoon, believing his actions are justified to expose or dismantle corrupt power structures.
  • Mental Health: Struggles with his mental health, experiencing guilt and internal conflict post-crime, which leads him into a psychological downward spiral.

Relationships

  • Family: Feels a deep sense of responsibility and guilt over his inability to financially support his mother and sister.
  • Friends: Has few close friends but maintains some online connections with individuals who share his ideological views.
  • Romantic Interest: Could have a complicated relationship with a character akin to Sonya, who in this adaptation might be a social worker or counselor, representing empathy and moral clarity.

Conflicts

  • Internal: Grapples with guilt, moral justification of his actions, and a sense of isolation from society.
  • External: Faces the consequences of his crime in the digital age, dealing with law enforcement and the societal fallout of his actions.

Development

  • Journey: Raskolnikov’s journey involves confronting the reality of his actions, grappling with the moral and ethical implications, and ultimately seeking redemption.
  • Resolution: His resolution could involve turning himself in and accepting the legal consequences of his actions, symbolizing his acceptance of societal laws and moral boundaries.

In this 2023 adaptation, Raskolnikov is a complex character who embodies the struggles of modern society, including financial instability, the ethical dilemmas posed by technology, and the quest for moral grounding in an increasingly complex world. His story reflects contemporary issues of social justice, mental health, and the search for meaning in a digital age.

End of GPT4

As he reviewed the AI’s response, PG had to admit that his detailed recollection of Raskolnikov’s character traits had faded into the mists of time.

He’ll need to rely on the colorfast recollections of visitors to TPV to assess how GPT4 did with this challenge. He understands several movies based on the novel have been released, but hasn’t seen any of those either, so his mind is tabula rasa on the plot.

Gone with the Wind – 2023

Prompt: Prepare a detailed outline of an adaptation of Gone with the Wind set in 2023

From ChatGPT4:

Adapting “Gone with the Wind” to a contemporary setting of 2023 requires thoughtful reimagining while maintaining the essence of the original story. Here’s a detailed outline for such an adaptation:

Title: “Gone with the Wind: 2023”

Setting

  • Location: Atlanta, Georgia, and surrounding areas, with a modern urban and suburban backdrop.
  • Time: Present day, focusing on the social, economic, and political dynamics of 2023.

Characters

  • Scarlett O’Hara: A young, ambitious entrepreneur in the tech industry, known for her beauty, intelligence, and relentless drive.
  • Rhett Butler: A charismatic venture capitalist with a mysterious past and a knack for risk-taking in both business and personal life.
  • Ashley Wilkes: An idealistic environmental activist and Scarlett’s long-time crush, who struggles to balance his ideals with practicality.
  • Melanie Hamilton: A compassionate, understanding social worker, married to Ashley, and a close friend to Scarlett.

Plot Overview

  1. Introduction:
    • Scarlett’s World: Introduce Scarlett’s life in modern Atlanta, her tech startup, and her unrequited love for Ashley.
    • Social Context: Touch on contemporary issues such as climate change, economic inequality, and technological advancements.
  2. Rising Tensions:
    • Business and Romance: Scarlett’s company faces financial difficulties. She meets Rhett at a fundraising event, sparking a complicated relationship.
    • Ashley’s Dilemma: Ashley is torn between his environmental activism and the pressures to conform to a more traditional career path.
  3. Climax:
    • Business Crisis: Scarlett’s company is on the brink of collapse. She turns to Rhett for help, leading to a controversial business deal.
    • Romantic Conflicts: Scarlett’s feelings for Ashley resurface, causing tension with Rhett and Melanie.
  4. Falling Action:
    • Personal Growth: Scarlett starts to realize the importance of balancing ambition with ethics and compassion.
    • Relationship Strains: Rhett feels neglected by Scarlett’s obsession with Ashley and her career.
  5. Conclusion:
    • Resolution: Scarlett makes a significant decision about her professional and personal life, choosing her own path over her longstanding pursuit of Ashley.
    • Rhett’s Departure: Rhett decides to leave Scarlett, leading to her famous realization of self-reliance and determination to rebuild her life.

Themes

  • Modern Feminism: Exploration of a woman’s role in a high-powered, male-dominated business world.
  • Technology vs. Environment: The clash between technological advancement and environmental conservation.
  • Personal vs. Professional Life: The struggle to maintain personal relationships while pursuing ambitious career goals.
  • Adaptation and Resilience: How characters adapt to the rapidly changing world around them.

Visual Style

  • Cinematography: A mix of sleek, modern cityscapes and lush, natural landscapes to contrast the worlds of technology and environmentalism.
  • Costume Design: Contemporary fashion, reflecting the status and personalities of characters in a modern setting.

Soundtrack

  • A blend of modern and classical music, capturing the emotional depth and complexities of the story in a contemporary context.

This adaptation maintains the core narrative of “Gone with the Wind” while updating its setting and themes to reflect modern societal and cultural issues.

End of ChatGPT4

PG will allow visitors to determine if the outline shows a useful amount of creativity or not.

If he had tasked himself to produce a similar document, PG admits he would not have considered the Soundtrack.

Sarah Silverman Hits Stumbling Block in AI Copyright Infringement Lawsuit Against Meta

From The Hollywood Reporter:

A federal judge has dismissed most of Sarah Silverman‘s lawsuit against Meta over the unauthorized use of authors’ copyrighted books to train its generative artificial intelligence model, marking the second ruling from a court siding with AI firms on novel intellectual property questions presented in the legal battle.

U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors’ core theories that Meta’s AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. “This is nonsensical,” he wrote in the order. “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.”

Another of Silverman’s arguments that every result produced by Meta’s AI tools constitutes copyright infringement was dismissed because she didn’t offer evidence that any of the outputs “could be understood as recasting, transforming, or adapting the plaintiffs’ books.” Chhabria gave her lawyers a chance to replead the claim, along with five others that weren’t allowed to advance.

Notably, Meta didn’t move to dismiss the allegation that the copying of books for purposes of training its AI model rises to the level of copyright infringement.

The ruling builds upon findings from another federal judge overseeing a lawsuit from artists suing AI art generators over the use of billions of images downloaded from the Internet as training data. In that case, U.S. District Judge William Orrick similarly delivered a blow to fundamental contentions in the lawsuit by questioning whether artists can substantiate copyright infringement in the absence of identical material created by the AI tools. He called the allegations “defective in numerous respects.”

Some of the issues presented in the litigation could decide whether creators are compensated for the use of their material to train human-mimicking chatbots that have the potential to undercut their labor. AI companies maintain that they don’t have to secure licenses because they’re protected by the fair use defense to copyright infringement.

According to the complaint filed in July, Meta’s AI model “copies each piece of text in the training dataset” and then “progressively adjusts its output to more closely resemble” expression extracted from the training dataset. The lawsuit revolved around the claim that the entire purpose of LLaMA is to imitate copyrighted expression and that the entire model should be considered an infringing derivative work.

But Chhabria called the argument “not viable” in the absence of allegations or evidence suggesting that LLaMA, short for Large Language Model Meta AI, has been “recast, transformed, or adapted” based on a preexisting, copyrighted work.

Another of Silverman’s main theories — along with other creators suing AI firms — was that every output produced by AI models is an infringing derivative, with the companies benefiting from every answer initiated by third-party users allegedly constituting an act of vicarious infringement. The judge concluded that her lawyers, who also represent the artists suing Stability AI, DeviantArt and Midjourney, are “wrong to say that” — because their books were duplicated in full as part of the LLaMA training process — evidence of substantially similar outputs isn’t necessary.

Link to the rest at The Hollywood Reporter

Stability AI exec leaves amid concerns over ‘fair use’ disagreement with $101m startup

From Yahoo Finance – UK:

A top exec at a British artificial intelligence (AI) startup has resigned because he is opposed to the company’s method of using copyrighted work to train its model without payment or permission from the owner.

Stability AI head of audio, Ed Newton-Rex said in a lengthy post on X he is stepping down because he disagrees “with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’”.

The London-headquartered startup uses generative AI to create text, images, music and video from descriptive prompts typed in by a human.

However, it has sparked controversy because it sweeps data indiscriminately to train its open-source model.

Newton-Rex, who says he is also a music composer, continued: “For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment.

“This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.”

He is not the only one to disagree. Getty Images has accused Stable Diffusion, a generative AI model developed by Stability AI, of unlawfully scraping over 12m images owned by Getty to train its models.

The media company has asked Delaware’s federal court to order Stability to stop using its pictures and wants what could amount to $1.8 trillion in damages, according to Fortune.

Stability AI is the brainchild of Oxford graduate Emad Mostaque and has raised $101m (£82.3m) in a funding round led by Coatue, Lightspeed Venture Partners and O’Shaughnessy Asset Management.

In reply to Newton-Rex on X, Mostaque wrote: “Was great working with you & this is an important discussion. The considered input we gave to the Copyright Office on why we believe fair use supports creative development is here.”

Link to the rest at Yahoo Finance – UK

Google Arts & Culture

PG discovered the Google Arts & Culture site earlier this morning. It’s quite an extensive site and a definite threat to suck away a lot of time, but it is nicely done.

One of the site’s interesting features, listed as experimental, is called Poem Postcards. It lets you select a postcard based upon a classic painting, have an AI write a poem about it, then email the painting and poem to a friend.

Here’s a link to a poem postcard PG just created.

Grammarly

PG watched a Grammarly for Business Webinar earlier today and was very impressed by what the company showed with respect to AI writing.

Per the demos, Grammarly has taken AI to a higher plane than PG has seen before. Features that impressed him were:

  • Analysis of the user’s writing voice to help Grammarly be more helpful in shaping future creations for a variety of purposes.
  • The ability to have Grammarly go through a document you or someone else has written, summarize it, and use it as a sort of electronic style guide in the future.
  • The ability to assist in quickly creating documents for a wide variety of purposes: blog posts, text messages, emails, etc. In at least some uses, Grammarly will create the appropriate format for the written message, for example by creating and inserting bullet points from a longer text.

For the moment, it looks like the super AI will be limited to Grammarly for Business, which is an enterprise-scale program.

PG looked for an online link to a recorded version of the webinar but was unable to find it. If visitors to The Passive Voice can locate one, feel free to include a link in the comments to this post.

A Google AI model developed a skill it wasn’t expected to have

From Yahoo Finance:

Concerns about AI developing skills independently of its programmers’ wishes have long absorbed scientists, ethicists, and science fiction writers. A recent interview with Google’s executives may be adding to those worries.

In an interview on CBS’s 60 Minutes on April 16, James Manyika, Google’s SVP for technology and society, discussed how one of the company’s AI systems taught itself Bengali, even though it wasn’t trained to know the language. “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” he said.

Google CEO Sundar Pichai confirmed that there are elements of how AI systems learn and behave that still surprise experts: “There is an aspect of this which we call— all of us in the field call it as a ‘black box’. You don’t fully understand. And you can’t quite tell why it said this.” The CEO said the company has “some ideas” why this could be the case, but it needs more research to fully comprehend how it works.

CBS’s Scott Pelley then questioned the reasoning for opening to the public a system that its own developers don’t fully understand, but Pichai responded: “I don’t think we fully understand how a human mind works either.”

AI’s development has also come with glaring flaws that lead to fake news, deepfakes, and weaponization, as well as false statements delivered with complete confidence, in what the industry calls “hallucinations.”

Asked if Google’s Bard is getting a lot of “hallucinations,” Pichai responded: “Yes, you know, which is expected. No one in the, in the field has yet solved the hallucination problems. All models do have this as an issue.” The cure, Pichai said, is around developing “more robust safety layers before we build, before we deploy more capable models.”

Link to the rest at Yahoo Finance

AI Prompt and Response of the Day

PG submitted a question to ChatGPT 3.5 today:

What sort of problems can be expected if Microsoft’s company computer systems are hacked by a sophisticated group of people who wish to harm Microsoft?

Here’s the AI’s Response:

If Microsoft’s company computer systems are hacked by a sophisticated group with malicious intent, several serious problems can be expected, including:

  1. Data Breach: The hackers may gain unauthorized access to sensitive information, compromising customer data, intellectual property, and business-critical data.
  2. Service Disruption: The attack could disrupt Microsoft’s services and products, leading to downtime, affecting user experience, and potentially causing financial losses.
  3. Reputation Damage: A successful hack can damage Microsoft’s reputation, eroding trust among customers, partners, and investors. This can have long-term consequences for the company’s brand.
  4. Financial Losses: Beyond immediate financial losses due to service disruption, the company may face legal and regulatory consequences, fines, and costs associated with addressing the breach and implementing enhanced cybersecurity measures.
  5. Intellectual Property Theft: Hackers may target Microsoft’s proprietary software and technologies, leading to the theft of valuable intellectual property that could be exploited or sold to competitors.
  6. Supply Chain Risks: If the attack extends to Microsoft’s supply chain, it could affect the security of products and services provided to customers, potentially leading to broader industry impacts.
  7. Legal and Regulatory Consequences: Microsoft may face legal actions and regulatory scrutiny for failing to protect user data and for any potential violations of data protection and privacy laws.
  8. Ongoing Threats: Once a sophisticated group gains access, they may establish persistent threats, making it challenging for Microsoft to completely eradicate the threat and requiring ongoing cybersecurity efforts.

Microsoft, like any major technology company, invests heavily in cybersecurity measures to prevent and respond to such threats. However, the landscape is constantly evolving, and the potential impact of a sophisticated cyberattack remains a significant concern for large corporations.

Did a computer write this? Book industry grapples with AI

From Today Online:

From low-quality computer-written books flooding the market to potential copyright violations, publishing is the latest industry to feel the threat from rapid developments in artificial intelligence (AI).

Since the launch last year of ChatGPT, an easy-to-use AI chatbot that can deliver an essay upon request within seconds, there have been growing worries about the impact of generative AI on a range of sectors.

Among book industry players there is “a deep sense of insecurity”, said Juergen Boos, director of the Frankfurt Book Fair, the world’s biggest, where the topic was in focus last week.

They are asking, “What happens to authors’ intellectual property? Who does new content actually belong to? How do we bring this into value chains?” he said.

The threat is plain to see — AI writing programmes allow budding authors to produce in a matter of days novels that could in the past have taken months or years to write.

A flood of titles that list ChatGPT as a co-author has been offered for sale through Amazon’s e-book self-publishing unit.

Still, critics say the works are of low quality and sense little threat from AI for now.

British author Salman Rushdie told a press conference at the fair that recently someone asked an AI writing tool to produce 300 words in his style.

“And what came out was pure garbage,” said the Midnight’s Children writer, to laughter from the audience.

“Anybody who has ever read 300 words of mine would immediately recognise that it could not possibly be by me.” “So far I’m not that alarmed,” he added, during a rare public appearance since a near-fatal stabbing attack last year in the United States.

Jennifer Becker, a German author and academic, echoed his sentiments, telling a panel discussion that the results when it comes to AI writing fiction “are still not that great”. 

“There is a lot of potential to use it — to use it collaboratively. 

“But I still don’t see the point where we really hand over the writing work to AI completely autonomously. That wouldn’t make for an interesting book.”

. . . .

Industry players stress however that in some areas there is more openness to dealing with artificial intelligence.

“It depends a bit on the genre,” said Susanne Barwick, deputy legal adviser of the German Publishers and Booksellers Association, who has been in discussion about AI with publishers. 

“The field of science and specialist books is already further along and has already dealt with it more.”

These areas were “easier than the field of fiction, where I think at the moment people still tend to look a bit more at the risks”, she added. 

Artificial intelligence’s relationship with publishing threatens to throw up a host of legal problems, with one major “grey area” being who owns the copyright of AI-generated content, said fair director Boos. 

“Then you get into a real mess, and it is a huge theme. There is also really a lot of money involved,” he said.

. . . .

Last month, Game of Thrones author George RR Martin, John Grisham and Jodi Picoult were among several writers who filed a class-action lawsuit against ChatGPT creator OpenAI over alleged violation of copyrights.

Along with the Authors Guild, an organisation representing writers, they accused the California-based company of using their books “without permission” to train ChatGPT’s “large language models”, algorithms capable of producing human-sounding text responses based on simple queries, according to the lawsuit.

Translation is another thorny area, with some industry players feeling artificial intelligence would miss the nuances and subtleties needed to render complex literature into other languages.

Link to the rest at Today Online

The best AI tools to make you a better writer

From Fast Company:

You’ll soon see AI in most writing tools. Canva, Notion, Craft, Coda, and other popular writing services have been racing to add new AI features. Google Docs added a new AI-driven summarization feature. Read on for what AI can do for you, where the hype goes too far, and a few recommended tools to try.

. . . .

AI TOOLS TO TRY

  • Lex. I love its simplicity. It’s designed specifically for individual writers, not marketers, technologists or corporate teams. It doesn’t make braggadocious claims, just helps writers in subtle ways.

    Lex draws on the same OpenAI engine as Canva Docs but lacks its visual features. It’s useful if you write often but don’t need the 150 menu options available in Microsoft Word. For now, Lex is free, with a waitlist for access.

    Still in private beta, it already has great features, including:
    • A title generator to suggest headlines for whatever you’re writing.
    • A question answerer to respond concisely to factual queries.
    • A paragraph writer that assesses what you’ve written and suggests a relevant next paragraph or bullet points that build on your argument. It can also help remind you of blind spots in your writing by listing topics, facts or issues you may have accidentally ignored.
    • Writing stats to track productivity.
  • ChatGPT isn’t a writing service per se, but it’s a helpful way to challenge your own thinking by seeking out AI explanations of complex issues. More than a million people have already signed up to chat with this remarkable chatbot that answers questions with an eerily human touch.
  • Craft.do has the best-designed implementation of any of the AI writing tools I’ve seen so far. You hit / and it gives you some options for adding directly into whatever doc you’re working on. It works on the Mac and Windows apps, on the Web and on iOS.

    You can use the AI feature to summarize or explain something; create an outline; write a pros and cons list; generate keywords or hashtags; suggest a title; add to something you’ve written; or translate your text into English, Spanish, French, Italian, German, Dutch, Portuguese, Korean or Japanese. It can stray beyond those features. I asked it to generate a limerick about AI, which you’ll see in this AI-generated document it spun out of my testing.
  • Canva Docs calls its AI feature Magic Write, which I mentioned in my piece last week. It joins another Canva AI tool that lets you generate images by typing in a text prompt, and a magic resizing tool that will adjust your image to fit whatever dimensions you need. Those first two AI features are free.
  • Copy.ai is useful for generating marketing materials.

    To test it I typed in some keywords about the Wonder Tools newsletter. It generated surprisingly decent marketing copy employing the Pain/Problem-Agitate-Solution framework.

    It didn’t, of course, generate the time or expertise required to figure out a marketing strategy. For non-marketers, though, AI-generated text can provide a helpful starting point.

    You can also use it to generate YouTube titles and descriptions, Instagram captions, TikTok video ideas, Facebook ad text, and various other formats.

Link to the rest at Fast Company

Artists Suing Stability AI Forced to Go Back to The Drawing Board on Copyright Claims

From MSN:

A group of artists suing generative AI companies for allegedly using their copyrighted works are down, but not out, following a recent federal judge’s order. On Monday, the judge presiding over a case brought by three visual artists dismissed the majority of the claims levied against Stability AI, Midjourney, and art social network DeviantArt after determining the artists’ accusations were “defective in numerous respects.”

All of the allegations against Midjourney and DeviantArt were dismissed, though the artists and their legal teams will have a chance to amend their complaint to state their argument more clearly. The core question of whether or not training generative AI models on artists’ work amounts to a copyright infringement, however, remains totally unresolved.

The case stems from a January lawsuit filed by artists Sarah Andersen, Kelly McKernan, and Karla Ortiz, who accused the tech companies of downloading billions of copyrighted images to train models without the artists’ consent and without compensating them. The artists claim that supposedly “new” creations generated by Stability AI’s Stable Diffusion generator are derivative of their own works, which were allegedly swept up into a dataset used to train the models. Generative AI image generators, the artists argued in their lawsuits, aren’t creating completely original art, but are instead “merely a complex collage tool.” The artists sought a permanent injunction from the court barring Stability AI and the other defendants from using artwork without artists’ permission.

Orrick seemed unconvinced about whether the actual images generated by the AI models amount to a copyright infringement. In their complaint, the artists particularly took issue with AI images generated via prompts asking the models to create images “in the style of” a known professional. The images the AI models spit out, they argue, end up competing in the marketplace against the original work of the human artist they were based on. But many of the works generated by these models, even if they are trained on an artist’s original work, may not look similar enough to the original artist’s work to run afoul of copyright protection. In other words, those “inspired by” images generated by AI models likely do not violate the artist’s copyright.

“I am not convinced that copyright claims based on a derivative theory can survive absent ‘substantial similarity’ type allegations,” Orrick wrote in the order. “The cases plaintiffs rely on appear to recognize that the alleged infringer’s derivative work must still bear some similarity to the original work or contain the protected elements of the original work.”

The judge also expressed skepticism towards the artists’ theory that the billions of supposedly scraped works were “compressed” into Stable Diffusion’s program. Stability AI has previously denied accusations that training its AI model requires complete copies of copyrighted works. Instead, Stability claims it trains its models using complex parameters that are associated with certain subjects.

“Plaintiffs will be required to amend to clarify their theory with respect to compressed copies of Training Images and to state facts in support of how Stable Diffusion—a program that is open source, at least in part—operates with respect to the Training Images,” Orrick’s order stated.

Stability AI did not immediately respond to Gizmodo’s request for comment. Matthew Butterick and Joseph Saveri, two of the attorneys representing the artists, told Gizmodo they believed the order issued by Judge Orrick sustained their clients’ “core claim” regarding the alleged direct copyright infringement by Stability AI. That core claim, they said, was now on a path to trial.

Link to the rest at MSN and thanks to F. for the tip

PG suggests that the comments from the artists’ counsel in the last paragraph may be hopeful happy talk.

Much Ado About AI: Why I Built a Tool to Modernize Shakespeare’s Verse

From School Library Journal:

There’s a good argument that Shakespeare is the world’s most popular author. About 90 percent of American schools assign Shakespeare to students. His work has been translated into more than 100 languages. Declare “To be or not to be,” and most will answer, “That is the question.” The Bard’s work is widely integrated across culture, education, and the modern English language. Despite this, people find Shakespeare hard. Some might even say too hard.

In a recent survey of 500 teachers, 56 percent said their students found Shakespeare difficult to read. Of these teachers, 60 percent said the Elizabethan language was the biggest obstacle for students reading the plays. The themes of love, betrayal, and ambition are timeless—but maybe Elizabethan English isn’t. For many first-time readers, Shakespeare’s plays are full of unfamiliar words, phrasing, and grammatical constructions.

This reported difficulty with the language shouldn’t be viewed as a problem with Shakespeare. Elizabethan English didn’t suddenly become dated in 2023. It’s been unfamiliar and antiquated to readers for many decades. But increasingly, the language is a barrier to new readers starting a love affair with the material.

Here, in my view, artificial intelligence (AI) offers a unique benefit: facilitating the reading experience of Shakespeare’s works. Large language models (LLMs: the AI systems that power popular products like ChatGPT) have exciting potential to help people read older texts with relative ease.

If you provide AI models with text, they can instantaneously synthesize, explain, and contextualize it. They offer definitions of words, historical context, and other details that might escape a modern reader. If you’re reading War and Peace and have a foggy sense of Russian history, running a passage through an AI model quickly provides bullet points on Napoleon’s invasion of Russia as well as definitions of period-specific terms.

AI can also accurately paraphrase Elizabethan language into modern English so readers can understand any line of Shakespeare. This strategy isn’t intended as a substitute for reading the original text, but as a “reading copilot” on hand to help.

Bard-themed AI tools are gaining popularity. One I created, Shakespeare Translator, has been used by over 50,000 readers. These AI models aren’t deterministic systems with pre-written translations mapped to certain lines. Rather, the tools use LLMs to analyze the context and language patterns, providing modern interpretations.
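To make that concrete, here is a minimal sketch of how an LLM-backed paraphrase helper might be wired up. It uses the OpenAI Python SDK purely as an example; the model name, prompt wording, and the modernize function are illustrative assumptions, not the actual implementation behind Shakespeare Translator.

```python
# Minimal sketch of an LLM-backed paraphrase helper (illustrative only).
# Assumptions: the OpenAI Python SDK (v1.x) is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are examples, not Shakespeare Translator's setup.
from openai import OpenAI

client = OpenAI()

def modernize(passage: str) -> str:
    """Ask the model to paraphrase an Elizabethan passage into plain modern English."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do here
        messages=[
            {"role": "system",
             "content": "You paraphrase Shakespeare into clear, modern English, "
                        "preserving the meaning. Do not add commentary."},
            {"role": "user", "content": passage},
        ],
        temperature=0.3,  # keep the paraphrase close to the source text
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(modernize("Now is the winter of our discontent "
                    "made glorious summer by this sun of York."))
```

Because the model interprets context rather than looking up a stored translation, two runs may word the paraphrase differently, which is the non-deterministic behavior described above.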

Many are quick to critique AI-powered reading tools. The arguments essentially center on the idea that using AI waters down the joy and rewards of reading. But using AI isn’t about replacing reading. It’s about helping more people appreciate difficult material, more readily, and with fewer barriers to entry.

. . . .

How useful are these translations? Are they accurate? Do they actually help students grasp the meaning of lines? Judge for yourself.

Here’s the original text of the famous Hamlet speech:
“To be, or not to be, that is the question: Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them. To die; to sleep; No more; and by a sleep to say we end the heart-ache and the thousand natural shocks that flesh is heir to.”

Run through a Shakespeare translation tool, this is paraphrased to:
“The big question is whether it’s better to keep on living or to end it all. Is it nobler to endure the hardships and misfortunes that life throws at us, or to fight against all the problems that come our way and put an end to them? Death would mean eternal rest and therefore an escape from the pain and suffering that life brings us.”

Or consider this from Richard III:
“Now is the winter of our discontent made glorious summer by this sun of York; And all the clouds that lour’d upon our house in the deep bosom of the ocean buried.”

Paraphrase:
“Our unhappiness has now turned into joy due to the presence of the ruler from York; and all of our worries and troubles are now in the past.”

Link to the rest at School Library Journal

Here’s a link to Shakespeare Translator.

PG’s Borrowed Thoughts:

The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.

Carl Sagan

Court Offers First Glimpse Into Whether AI Machine Learning Is Copyright Infringement Or Fair Use

From Mondaq:

As we previously blogged, multiple generative AI platforms are facing lawsuits alleging that the unauthorized use of copyright-protected material to train artificial intelligence constitutes copyright infringement.  A key defense in those cases is fair use.  Specifically, AI platforms contend that they don’t need a license to use copyright-protected content—whether scraped from the Internet or obtained from a pirate trove of books—for the purpose of developing and improving large language models (LLMs) under the theory that such use is transformative and fair use under the Copyright Act.  Whether fair use prevails in this battle is one of the biggest copyright questions of the day.

While many of the generative AI actions are pending in the U.S. District Court for the Northern District of California, a federal court in Delaware recently had the opportunity to opine on the merits of this important fair use question.  In Thomson Reuters v. Ross Intelligence, 2023 WL 6210901 (D. Del. Sept. 25, 2023), the owner of Westlaw (Thomson Reuters) claims, among other things, that an AI startup (Ross Intelligence) infringed Thomson Reuters’ copyright by using Westlaw’s headnotes to train Ross’s legal AI model.  The parties cross moved for summary judgment on various grounds, including on Ross’s fair use defense.  

Though the decision explores multiple interesting questions of copyright law, including the copyrightability of Westlaw headnotes (maybe) and whether the Copyright Act preempts Thomson Reuters’ claim for tortious interference (yes), its analysis of Ross’s fair use defense—in particular, the court’s assessment of whether Ross’s alleged use of Westlaw’s headnotes (assuming they are protected by copyright) is “transformative”—is where the court appears to have broken new ground.

The court begins its fair use analysis by discussing two cases from the Ninth Circuit that deal with so-called “intermediate copying.”  In Sega Enterprises v. Accolade, 977 F.2d 1510 (9th Cir. 1992), the court held that it was fair use for a company to copy Sega’s copyright-protected console code for the purpose of learning the software’s functional components and making new games that were compatible with Sega’s console.  Similarly, in Sony Computer Entertainment v. Connectix, 203 F.3d 596 (9th Cir. 2000), the Ninth Circuit held it was fair use for a company to create a copy of Sony’s software in order to create a new gaming platform that was compatible with Sony’s games.  The Thomson Reuters court noted that the Supreme Court “has cited these intermediate copying cases favorably, particularly in the context of ‘adapting the doctrine of fair use in light of rapid technological change.’”  2023 WL 6210901, at *8 (quoting Google v. Oracle, 141 S. Ct. 1183, 1198 (2021)) (cleaned up).

Thomson Reuters attempted to distinguish the intermediate-copying cases by arguing that, unlike the companies in Sega and Sony that merely sought to “study functionality or create compatibility,” Ross sought to train its AI with Westlaw’s “creative decisions” specifically to “replicate them” in the AI’s output.  Ross, on the other hand, contended that “its AI studied the headnotes and opinion quotes only to analyze language patterns, not to replicate Westlaw’s expression,” and thus was lawful “intermediate copying.”  The court held that whether Ross’s use was transformative would turn on the “precise nature of Ross’s actions.”  

Here’s the key text:

It was transformative intermediate copying if Ross’s AI only studied the language patterns in the headnotes to learn how to produce judicial opinion quotes.  But if Thomson Reuters is right that Ross used the untransformed text of headnotes to get its AI to replicate and reproduce the creative drafting done by Westlaw’s attorney-editors, then Ross’s comparisons to cases like Sega and Sony are not apt.

. . . .

To the extent that LLMs are ingesting copyright-protected material solely to understand language patterns and not to replicate their creative expression (which may very well be the case for many LLMs), this opinion suggests that using such material to train AI is transformative.  But if the material is being used to train AI to output the “creative drafting” discerned from the original, then the use is likely not transformative.  Thus, as the Thomson Reuters court observes, the fair use question in these cases may turn on the exact nature of the AI training process.

Link to the rest at Mondaq

PG apologizes if the rest of this post is boring for anyone who isn’t a law geek, but the following may help clarify PG’s interest.

The OP intrigued PG because he got into a bit of trouble a long time ago when he suggested, in an article he wrote for The Journal of the American Bar Association, that West Publishing didn’t have a legitimate copyright to the books it published that consisted of the opinions of a large number of courts across the country.

West was a venerable professional publisher, founded in 1872 to print law books for the use of attorneys and judges.

West evolved to publish the statutes for the United States government and every state.

West also published the court opinions written by judges in the federal court system and all states.

Because the statutes and case opinions are public documents, anyone who desires to publish them is free to do so.

West contended that the improvements it made in these public documents it published were protected by copyright laws.

West built up a large business based upon the changes it made to improve the quality of the federal and state court opinions. These included:

  1. West employees proofread the opinion and corrected grammatical errors.
  2. West employees checked all of the statutory and case citations included in the opinion and corrected them to reflect generally used conventions of legal citations. (Judges, like any other human beings, sometimes make mistakes when they write their opinions. The conventions used in creating such citations can make correctly creating the citations to statutes and cases an error-prone activity.)
  3. For example, “Stearns v. Ticketmaster Corp., 655 F.3d 1013 (9th Cir. 2011),” is West’s citation for the court opinion in the case of Stephen Stearns v. Ticketmaster Corp, et al (et al is an abbreviation of the Latin term “et alia,” which means “and others.”) that was published in volume 655 of the Federal Reporter, Third Series (identified by the abbreviation “F.3d”), beginning on page 1013. The citation also shows the decision was issued by the United States Court of Appeals for the Ninth Circuit (abbreviated as 9th Cir.), in 2011. (A short parsing sketch following this list pulls those pieces apart.)
  4. It was and is considered bad form for an attorney to cite a case other than in the form prescribed by “Blue Book Citations” in legal documents submitted to a court. West citations were the basis for Blue Book Citations. As mentioned earlier, most judges were happy to have West correct their citation errors. That service helped a judge avoid snide remarks from other judges in the judicial cafeteria.
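As promised above, here is a small sketch that pulls apart the simple “volume reporter page (court year)” citation form from item 3. It is an illustration of the citation’s anatomy only; real citations come in far more shapes than this single pattern handles, and it is not a general-purpose citator.

```python
# Toy parser for the simple "volume reporter page (court year)" citation form,
# e.g. "655 F.3d 1013 (9th Cir. 2011)". Illustrative only; real citations
# come in many more shapes than this one pattern covers.
import re

CITE_RE = re.compile(
    r"(?P<volume>\d+)\s+"               # 655       -> volume of the reporter
    r"(?P<reporter>[A-Za-z0-9.]+)\s+"   # F.3d      -> Federal Reporter, Third Series
    r"(?P<page>\d+)\s+"                 # 1013      -> first page of the opinion
    r"\((?P<court>[^)]+?)\s+"           # 9th Cir.  -> deciding court
    r"(?P<year>\d{4})\)"                # 2011      -> year of the decision
)

def parse_citation(text):
    match = CITE_RE.search(text)
    return match.groupdict() if match else None

print(parse_citation("Stearns v. Ticketmaster Corp., 655 F.3d 1013 (9th Cir. 2011)"))
# {'volume': '655', 'reporter': 'F.3d', 'page': '1013', 'court': '9th Cir.', 'year': '2011'}
```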

West also categorized cases according to a West-created “Key Number System.” This is a classification system that organizes cases by topic, allowing legal researchers to quickly find cases related to a particular issue. This system was created in the 19th century, starting with seven categories: persons, property, contracts, torts, crime, remedies, and government.

The Key Number System could be quite helpful before the digitization of cases and statutes.

In 1967, the Ohio State Bar Association entered into a $7,000 agreement with Data Corporation of Beavercreek, Ohio, to create a full-text, interactive research service of the Ohio statutes.

In 1973, Mead Data Central, the successor of Data Corporation, introduced LEXIS, an online computer research service that consisted of the full text of Ohio and New York codes and cases, the U.S. code, and some federal case law. The LEXIS search engine was clunky by today’s standards, but it allowed attorneys to search the statutes and case opinions much faster and at a more granular level than could be done with West’s printed books.

West and LEXIS (Mead Data Central)

Why I let an AI chatbot train on my book

From Vox:

Four years ago, I published my first book: End Times: A Brief Guide to the End of the World.

It did … okay? I earned a Q&A with the site you’re reading now — thanks, Dylan! — and the book eventually helped get me the job of running Future Perfect. I had one day where I went from radio hit to radio hit, trying to explain in five-minute segments to morning DJs from Philadelphia to Phoenix why we should all be more worried about the threat of human extinction and what we could do to prevent it.

But a bestseller it was not. Let’s put it this way — about every six months, I receive a letter from my publisher containing a “non-paying royalty statement,” which is sort of like getting a Christmas card from your parents, only instead of money, it just contains a note telling you how much they’ve spent raising you.

So I’ll admit that I was a bit chuffed when I received an email a couple of months ago from people at aisafety.info, who are aiming to create a centralized hub for explaining questions about AI safety and AI alignment — how to make AI accountable to human goals — to a general audience. To that end, they were building a large language model — with the delightful name “Stampy” — that could act as a chatbot, answering questions people might have about the subject. (The website was just soft launched, while Stampy is still in the prototype stage.) And they were asking permission to use my book End Times, which contains a long chapter on existential risks from AI, as part of the data Stampy would be trained on.

My first thought, like any author’s: Someone has actually read (or at least is aware of the existence of) my book! But then I had a second thought: As a writer, what does it mean to allow a chatbot to be trained on your own work? (And for free, no less.) Was I contributing to a project that could help people better understand a complex and important subject like AI safety? Or was I just speeding along the process of my own obsolescence?

Training days

These are live questions right now, with large language models like ChatGPT becoming more widespread and more capable. As my colleague Sara Morrison reported this summer, a number of class action lawsuits have already been filed against big tech firms like Google and OpenAI on behalf of writers and artists who claim that their work, including entire books, had been used to train chatbots without their permission and without remuneration. In August, a group of prominent novelists — including Game of Thrones author George R.R. Martin, who really has some other deadlines he should attend to — filed suit against ChatGPT maker OpenAI for “systematic theft on a massive scale.”

Such concerns aren’t entirely new — tech companies have long come under fire for harnessing people’s data to improve and perfect their products, often in ways that are far from transparent for the average user. But AI feels different, as attorney Ryan Clarkson, whose law firm is behind some of the class action lawsuits, told Sara. “Up until this point, tech companies have not done what they’re doing now with generative AI, which is to take everyone’s information and feed it into a product that can then contribute to people’s professional obsolescence and totally decimate their privacy in ways previously unimaginable.”

I should note here that what aisafety.info is doing is fundamentally different from the work of companies like Meta or Microsoft. For one thing, they asked me, the author, for permission before using my work. Which was very polite!

Beyond that, aisafety.info is a nonprofit research group, meaning that no one will be making money off the training data provided by my work. (A fact which, I suspect, will not surprise my publisher.) Stampy the chatbot will be an educational tool, and as someone who runs a section at Vox that cares deeply about the risk of powerful AI, I’m largely glad that my work can play some small role in making that bot smarter.

And we desperately need more reliable sources of information about AI risk. “I think the general understanding of AI alignment and safety is very poor,” Robert Miles of aisafety.info told me. “I would say that people care a lot more than they used to, but they don’t know a lot more.”

Chatbots, trained on the right source materials, can be excellent educational tools. An AI tutor can scale itself to the educational level of its student and can be kept up to date with the latest information about the subject. Plus, there’s the pleasant irony of using some of the latest breakthroughs in language models to create an educational tool designed to help people understand the potential danger of the very technology they’re using.

What’s “fair use” for AI?

I think that training a chatbot for nonprofit, educational purposes, with the express permission of the authors of the works on which it’s trained, seems okay. But do novelists like George R.R. Martin or John Grisham have a case against for-profit companies that take their work without that express permission?

The law, unfortunately, is far from clear on this question. As Harvard Law professor and First Amendment expert Rebecca Tushnet explained in an interview published in the Harvard Gazette, digital companies have generally been able to employ concepts of fair use to defend harvesting existing intellectual property. “The internet as we know it today, with Google and image search and Google Books, wouldn’t exist if it weren’t fair use to use these words for an output that was not copying” the original, she said.

One way to consider this is to think about how humans, like myself, write books. When I was researching and writing End Times, I was drawing upon and synthesizing the existing work of hundreds of different authors. Sometimes I would quote them directly, though there are specific rules about how much of an individual work another author can directly quote from under fair use. (The rough rule is 300 words when quoting from a published book, or around 200 words for a briefer article or paper.)

More often, though, what I read and processed in my research rattled around in my brain, combined with other reporting and reasoning, and came out as my own work — my work informed by my own sources. Or, in other words, informed by my own personal training dataset.

Link to the rest at Vox

The New York Times built a robot to help make article tagging easier

From NiemanLab:

If you write online, you know that a final, tedious part of the process is adding tags to your story before sending it out to the wider world.

Tags and keywords in articles help readers dig deeper into related stories and topics, and give search audiences another way to discover stories. A Nieman Lab reader could go down a rabbit hole of tags, finding all our stories mentioning Snapchat, Nick Denton, or Mystery Science Theater 3000.

Those tags can also help newsrooms create new products and find inventive ways of collecting content. That’s one reason The New York Times Research and Development lab is experimenting with a new tool that automates the tagging process using machine learning — and does it in real time.

The Times R&D Editor tool analyzes text as it’s written and suggests tags along the way, in much the way that spell-check tools highlight misspelled words:

Editor is an experimental text editing interface that explores how collaboration between machine learning systems and journalists could afford fine-grained annotation and tagging of news articles. Our approach applies machine learning techniques interactively, as part of the writing process, rather than retroactively. This approach can offload the burden of work to the computational processes, and can create affordances for journalists to augment, edit and correct those processes with their knowledge.

It’s similar to Thomson Reuters’ Open Calais system, which extracts metadata from text files of any kind. Editor works by connecting the corpus of tags housed at the Times with an artificial neural network designed to read over a writer’s shoulder in a text editing system. They explain:

As the journalist is writing in the text editor, every word, phrase and sentence is emitted on to the network so that any microservice can process that text and send relevant metadata back to the editor interface. Annotated phrases are highlighted in the text as it is written. When journalists finish writing, they can simply review the suggested annotations with as little effort as is required to perform a spell check, correcting, verifying or removing tags where needed. Editor also has a contextual menu that allows the journalist to make annotations that only a person would be able to judge, like identifying a pull quote, a fact, a key point, etc.
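As a rough illustration of that suggest-as-you-type loop, here is a deliberately simplified sketch. A phrase-to-tag dictionary stands in for the Times’ neural network, tag corpus, and microservices, none of which are detailed in the OP, so treat it as an analogy for the interaction rather than the real pipeline.

```python
# Deliberately simplified stand-in for a suggest-as-you-type tagging loop.
# A phrase-to-tag dictionary substitutes here for the Times' neural network and
# tag corpus; the real Editor streams text to microservices and gets annotations back.
TAG_INDEX = {
    "mayor": "Government and Politics",
    "broadway": "Theater",
    "federal reserve": "Economy",
}

def suggest_tags(draft):
    """Return tags whose trigger phrases appear in the draft written so far."""
    lowered = draft.lower()
    return {tag for phrase, tag in TAG_INDEX.items() if phrase in lowered}

# Simulate a writer typing: after each new sentence, refresh the suggestions,
# the way a spell-checker re-highlights as you go.
draft = ""
for sentence in [
    "The mayor announced a new budget on Tuesday. ",
    "Critics on Broadway worried about arts funding. ",
]:
    draft += sentence
    print(suggest_tags(draft))
# {'Government and Politics'}
# {'Government and Politics', 'Theater'}
```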

“We started looking at what we could do if we started tagging smaller entities in the articles. [We thought] it might afford greater capabilities for reuses and other types of presentation,” said Alexis Lloyd, creative director at the Times R&D Lab.

Tags are a big deal at the Times; the paper has a system of article tags that goes back over 100 years. That metadata makes things like Times Topics pages possible. It’s an important process that is entirely manual, relying on reporters and editors to provide a context layer around every story. And in some cases, that process can lag: The Times’ innovation report cited many gaps in the paper’s metadata system as a strategic weakness:

“Everyone forgets about metadata,” said John O’Donovan, the chief technology officer for The Financial Times. “They think they can just make stuff and then forget about how it is organized in terms of how you describe your content. But all your assets are useless to you unless you have metadata — your archive is full of stuff that is of no value because you can’t find it and don’t know what it’s about.”

Lloyd said the idea behind Editor was not just to make the metadata process more efficient, but also to make it more granular. By using a system that combs through articles at a word-by-word level, the amount of data associated with people, places, companies, and events becomes that much richer.

And that much more data opens new doors for potential products, Lloyd told me. “Having that underlying metadata helps to scale to all kinds of new platforms as they emerge,” she said. “It’s part of our broader thinking about the future of news and how that will become more complex, in terms of forms and formats.”

. . . .

The key feature of the automatic tagging system relies on bringing machines into the mix, an idea that inspires conflicting ideas of progress and dread in some journalists. For Editor to work, the lab needed to build a way for machines and humans to supplement each other’s strengths. Humans are great at seeing context and connections and understanding language, while machines can do computations at enormous scale and have perfect memory. Mike Dewar, a data scientist at the Times R&D lab, said the artificial neural network makes connections between the text and an index of terms pulled from every article in the Times archive.

It took around four months to build Editor, and part of that time was spent training the neural network in how a reporter might tag certain stories. Dewar said that teaching the network the way tags are associated with certain phrases or words gives it a benchmark to use when checking text in the future.

The biggest challenge was latency, as Editor works to make connections between what’s being written and the index of tags. In order for Editor to be really effective, it has to operate at the speed of typing, Dewar said: “It needs to respond very quickly.”

. . . .

Robots continue to expand their foothold in the world of journalism. In March, the AP said it planned to use its automated reporting services to increase college sports coverage. Lloyd has experimented with how bots can work more cooperatively with people, or at least learn from them and their Slack conversations.

Link to the rest at NiemanLab

AI is making its way into the courtroom and legal process

From CNBC:

Is the U.S. headed towards an AI-driven “smart court,” as the Center for Strategic and International Studies calls China’s frequent use of automated, digitized court proceedings? Not quite, experts say. However, these predictions aren’t entirely off the mark.

“AI is really reaching all aspects of the law,” said Wayne Cohen, managing partner at Cohen & Cohen and a law professor at the George Washington University School of Law.

While the current use of AI in the U.S. legal industry operates intensely behind the scenes, it’s inching further into the front lines of the courtroom.

Cohen said AI plays a role in most of the research, writing and jury exhibit creation that goes into trial preparation, as well as office administration, trial summaries and translations.

It also helps speed the processing of lawsuits. “The movement of the cases from when a party files a lawsuit until the case is resolved is going to get much shorter,” Cohen said.

From the bench, judges can generate searchable PDF transcriptions from audio recordings and make informed judgments that day. And with AI’s ability to flag contradictions, it can bolster or hinder the credibility of the prosecution or defense. When judges make rulings, “they can do it with a lot of accuracy, and it’s supported by the evidence that they heard in their courtroom,” said Jackie Schafer, a former assistant attorney general for the state of Washington.

Schafer founded Clearbrief in 2020, which runs on AI that’s designed to scan documents and identify citations, in addition to creating hyperlinked chronological timelines of all of the dates mentioned in documents for swift reference.
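To give a feel for what such a timeline looks like, here is a toy sketch that finds “Month Day, Year” dates in a document and orders them chronologically. The single regular expression is a stand-in; the OP says Clearbrief runs on AI rather than fixed patterns, so this illustrates the idea of the output, not the product’s method.

```python
# Toy illustration of building a chronological timeline of dates in a document.
# A single "Month Day, Year" regex stands in here; this sketches the idea only
# and is not how Clearbrief (which the OP says runs on AI) actually works.
import re
from datetime import datetime

DATE_RE = re.compile(
    r"\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{1,2}, \d{4}\b"
)

def timeline(document):
    """Return (date, sentence) pairs sorted oldest to newest."""
    entries = []
    for sentence in document.split(". "):
        for match in DATE_RE.finditer(sentence):
            when = datetime.strptime(match.group(), "%B %d, %Y")
            entries.append((when, sentence.strip()))
    return sorted(entries)

doc = ("The contract was signed on March 3, 2021. "
       "Plaintiff sent a demand letter on January 15, 2022. "
       "The parties met on June 9, 2021 to renegotiate.")
for when, context in timeline(doc):
    print(when.date(), "-", context)
```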

Jason Boehmig, CEO and co-founder of digital contract company Ironclad, who has experience as a corporate attorney, said AI can review a company’s legal contracts, learning its preferred language and drafting and negotiating contracts in the organization’s historic legal voice.

Business contracts are at the forefront of legal innovation, Boehmig said. “It’s an area where we can afford to experiment,” he said. On the spectrum of the legal system, the businesses on either end of the contract arguably have less to lose than, say, an individual whose basic freedoms are at stake. 

In all of these applications, experts say the ideal situation is for humans to review AI’s work. The notion of keeping the human in the loop is far from unique to the legal industry, but the significant ramifications coming out of the justice system make human oversight all the more critical.

Link to the rest at CNBC

AI Doomers Take Center Stage at the UK’s AI Summit

From Bloomberg via Yahoo Finance:

A fierce debate over how much to focus on the supposed existential risks of artificial intelligence defined the kickoff of the UK’s AI Safety Summit on Wednesday, highlighting broader tensions in the tech community as lawmakers propose regulations and safeguards.

Tech leaders and academics attending the Summit at Bletchley Park, the former home of secret World War II code-breakers, disagreed over whether to prioritize immediate risks from AI — such as fueling discrimination and misinformation — versus concerns that it could lead to the end of human civilization.

Some attendees openly worried so-called AI doomers would dominate the proceedings — a fear compounded by news that Elon Musk would appear alongside British Prime Minister Rishi Sunak shortly after the billionaire raised the specter of AI leading to “the extinction of humanity” on a podcast. On Wednesday, the UK government also unveiled the Bletchley Declaration, a communique signed by 28 countries warning of the potential for AI to cause “catastrophic harm.”

“I hope that it doesn’t get dominated by the doomer, X-risk, ‘Terminator’-scenario discourse, and I’ll certainly push the conversation towards practical, near-term harms,” said Aidan Gomez, co-founder and chief executive officer of AI company Cohere Inc., ahead of the summit.

Top tech executives spent the week trading rhetorical blows over the subject. Meta Platforms Inc.’s chief AI scientist Yann LeCun accused rivals, including DeepMind co-founder Demis Hassabis, of playing up existential risks of the technology in an attempt “to perform a regulatory capture” of the industry. Hassabis then hit back in an interview with Bloomberg on Wednesday, calling the criticisms preposterous.

On the summit’s fringes, Ciaran Martin, the former head of the UK’s National Cyber Security Center, said there’s “genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it’s a series of individual, sometimes-serious problems, that need to be managed.”

“While the undertones of that debate are running through all of the discussions,” Martin said, “I think there’s an acceptance from virtually everybody that the international, public and private communities need to do both. It’s a question of degree.”

In closed-door sessions at the summit, there were discussions about whether to pause the development of next-generation “frontier” AI models and the “existential threat” this technology may pose “to democracy, human rights, civil rights, fairness, and equality,” according to summaries published by the British government late Wednesday.

Between seminars, Musk was “mobbed” and “held court” with delegates from tech companies and civil society, according to a diplomat. But during a session about the risks of losing control of AI, he quietly listened, according to another attendee, who said the seminar was nicknamed the “Group of Death.”

Matt Clifford, a representative of the UK Prime Minister who helped organize the summit, tried to square the circle and suggest the disagreement over AI risks wasn’t such a dichotomy.

“This summit’s not focused on long-term risk; this summit’s focused on next year’s models,” he told reporters on Wednesday. “How do we address potentially catastrophic risks — as it says in the Bletchley Declaration — from those models?” he said. “The ‘short term, long term’ distinction is very often overblown.”

By the end of the summit’s first day, there were some signs of a rapprochement between the two camps. Max Tegmark, a professor at the Massachusetts Institute of Technology who previously called to pause the development of powerful AI systems, said “this debate is starting to melt away.”

Link to the rest at Yahoo Finance

When Successful, Art Exceeds Its Creator’s Plans – Especially In the Days of AI

From LexBlog:

When successful, art exceeds its creator’s plans. So true in these days of AI.

No one could have envisioned the “travelings” of a book, an article or a legal blog post. To the “Books3 database” for Meta’s AI?

Ian Bogost, a contributing writer for The Atlantic, takes a contra – and refreshing – view to that of most authors, reporters and publishers when it comes to the scraping of their work for the training of AI large language models such as ChatGPT.

A searchable database revealed that thousands of books were used “without permission,” causing some authors to express outrage and even launch lawsuits against tech giant Meta.

Bogost’s response:

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

Sitting with a group of law firm leaders in January, I was told they were going to sign a demand letter, along with other large law firms, demanding that the large LLM companies – OpenAI, Google, etc. – stop scraping the open legal publishing of law firms.

I thought: lots of luck – and why would you want to stop the advancement of the law, which the use of AI in legal publishing represents?

That silliness by law firms, as best I can tell, has subsided.

Books, articles and legal publishing – and AI itself – are vessels for ideas, per Bogost.

Once bound and published, boxed and shipped, my books find their way to places I might never have anticipated. As vessels for ideas, I hope, but also as doorstops or insect-execution devices or as the last inch of a stack that holds up a laptop for an important Zoom. Or even—even!—as a litany of tokens, chunked apart to be reassembled by the alien mind of a weird machine. Why not? I am an author, sure, but I am also a man who put some words in order amid the uncountable others who have done the same. If authorship is nothing more than vanity, then let the machines put us out of our misery.

I tend to agree with Bogost that authors, rather than feeling violated, should consider the unexpected ways their works contribute to the collective human—and increasingly machine—understanding.

Link to the rest at LexBlog

This wild AI tool can turn any website into a better version of itself

Note: PG just discovered this post in a dusty corner of TPV. Evidently, he stashed it instead of posting it. He guesses he had some sort of reason for stashing it after he created it, but darned if he knows what that reason was.

Perhaps the dog needed to be fed . . . wait a minute, PG hasn't had a dog for a long time. Maybe PG bumped into the world's best hypnotist, was placed in a hypnotic state and instructed to empty his bank account and give all the cash to the hypnotist. Maybe PG was placed under anesthesia by a Chinese spy and an implant surgically inserted in his brain that allows bad actors to remotely control his actions in preparation for an upcoming invasion of Montana.

Whatever the cause, here is the lost post.

From Fast Company:

Airbnb features a beautiful, functional design. But like any business, it has its own priorities. Airbnb may want to highlight a whimsical home that looks like a giant shoe, while you’re heading out on a work trip and want to find a spot close to the local office that has stable Wi-Fi.

For now, dear user, it’s on you to dig through individual reviews and compare properties in different tabs before squinting at Airbnb’s maps and cross-referencing properties with Google Maps. 

But soon? AI could simply rebuild Airbnb’s entire website into exactly what you need it to be—surfacing and presenting the information you care about most.

During a recent presentation at the AI Engineer Summit, a research engineer named Amelia Wattenberger demonstrated software that does exactly that, using the Airbnb redesign as an example (a redesign that is, in fact, a working prototype). Wattenberger specializes in the intersection of design and AI. After helping develop Copilot at GitHub, she landed at a startup named Adept. With $350 million in Series B funding this year, Adept is where dozens of her peers are working with companies to retrofit legacy enterprise software with new, AI-powered UI, while also building a Chrome plug-in that could reshape the web as you know it.

Adept's software uses LLMs (large language models, including ChatGPT) and other multimodal models to actually read what's on your screen directly. While old automated systems could be scripted to collect certain elements off of web pages, Adept can make out the text, buttons, and images in real time—and it can make sense of what's on these pages even if their components are moved or redesigned. After processing a page or pages of information, Adept can render an overlay through your web browser that's like a personalized version of a website or other software.

“The way we go to websites, they’re behind these glass cases,” says Wattenberger. “[The web] is not this interactive place it was kind of meant to be. So, there’s this interesting move toward personalizing, customizing, democratizing websites and interfaces.”

For now, this Airbnb example is but a small taste of how Adept could work with websites and software more broadly. Think of Adept’s AI work like when a fine-dining restaurant presents a protein or vegetable “two ways,” where the core components are largely the same, but the nuance is entirely in presentation.

To start the demonstration, Wattenberger shows us Airbnb’s regular page for a listing in San Francisco. You’ll see all sorts of potentially useful information, like the nightly price, photos, and its star rating. 

Adept can scrape this information from Airbnb’s pages, and build new interfaces from it. So, if you communicated your own priorities to the AI—I want to walk to the Hotel Nicco and the BART, and I want listings that have reliable Wi-Fi—the AI models can read Airbnb’s listings to gather that info in minutes—and then place that information on a page generated just for you that’s richer than what Airbnb offers today.
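To make the idea concrete, here's a hedged sketch of that "gather the listing info, then keep only what matches your priorities" step. The listing fields and the matches_priorities helper are made up for illustration; Adept's models actually work from the rendered page rather than from tidy dictionaries.

    # Hypothetical listing data standing in for what an AI model might extract from Airbnb pages.
    listings = [
        {"name": "Sunny studio", "wifi_mbps": 200, "walk_to_bart_min": 6,  "price": 140},
        {"name": "Shoe house",   "wifi_mbps": 12,  "walk_to_bart_min": 25, "price": 310},
        {"name": "SoMa loft",    "wifi_mbps": 90,  "walk_to_bart_min": 10, "price": 180},
    ]

    def matches_priorities(listing, max_walk=15, min_wifi=50):
        """Keep listings that satisfy the traveler's stated needs."""
        return listing["walk_to_bart_min"] <= max_walk and listing["wifi_mbps"] >= min_wifi

    for listing in filter(matches_priorities, listings):
        print(listing["name"], "-", listing["price"], "per night")
    # Sunny studio - 140 per night
    # SoMa loft - 180 per night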

In fact, even tools like messaging the host could go onto this page. Again, this isn’t mere theory—Adept has also developed its own AI models that can handle tasks like outreach and sending emails.

Now, this alone would be pretty handy: reskinning Airbnb to be more personalized. But Wattenberger is somewhat obsessed with the UI metaphor of zooming in and out. Today, you can zoom into a map and get finer detail on restaurants or other attractions in an area. Or you can zoom out to see highways and state lines.

But in the hands of AI, zooming in and out can be more abstract. For instance, if you zoom out on a book, maybe you’d just get a list of chapters. And if you wanted to rewrite any chapter, you could simply change the summary and let the AI do the rest, rather than editing line-by-line as we do today. (In fact, Wattenberger has a side project called PenPal that imagines just this.)

Now, apply that “zoom out” idea to this Airbnb interface instead. What would you do if you could zoom out from your Airbnb custom listing?

Maybe you'd get a spreadsheet, which compares properties side by side based on your most important criteria. Suddenly, you wouldn't just see whether any single property is near a train line; you could see them all.

And what if you wanted to zoom out again?

Maybe you'd see a scatterplot data visualization that weighs walking distance, Wi-Fi, and price to find your optimal listing. But, and this is pretty key, this data visualization is still actually a working Airbnb interface. Tap onto any dot, or even circle around a group of dots, and you can reach out to the host(s).
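The scatterplot view implies some kind of scoring that folds walking distance, Wi-Fi, and price into a single number per listing. The weights, fields, and score function below are invented for illustration; the article doesn't say how Adept would actually rank anything.

    # Invented example of weighted scoring across the traveler's three criteria.
    listings = [
        {"name": "Sunny studio", "wifi_mbps": 200, "walk_to_office_min": 6,  "price": 140},
        {"name": "Shoe house",   "wifi_mbps": 12,  "walk_to_office_min": 25, "price": 310},
        {"name": "SoMa loft",    "wifi_mbps": 90,  "walk_to_office_min": 10, "price": 180},
    ]

    WEIGHTS = {"walk": 0.4, "wifi": 0.3, "price": 0.3}  # how much the traveler cares about each

    def score(listing):
        """Higher is better: short walk, fast Wi-Fi, low price."""
        return (WEIGHTS["walk"] * (1 / listing["walk_to_office_min"])
                + WEIGHTS["wifi"] * (listing["wifi_mbps"] / 100)
                + WEIGHTS["price"] * (100 / listing["price"]))

    best = max(listings, key=score)
    print("Best fit:", best["name"])
    # Best fit: Sunny studio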

“This is where it comes back to democratizing the web,” says Wattenberger. “We don’t have much control of the interfaces we use. And talking to a ton of enterprise use cases, this is rampant in the business world, too, where people are using legacy business software built 10-20 years ago that doesn’t do the things they need to do.”

. . . .

Adept’s argument is that AI can simply sit atop the software you use today to make it far more accessible.

Link to the rest at Fast Company