Hagrid – Harry Potter and the Philosopher’s Stone

Prompt: If the motorcycle was huge, it was nothing to the man sitting astride it. He was twice as tall as a normal man and at least five times as wide. He looked simply too big to be allowed, and so wild — long tangles of bushy black hair and beard hid most of his face, he had hands the size of trash can lids, and his feet in their leather boots were like baby dolphins

DALL-E Image

Charles Dickens AI

Prompt: a boat of dirty and disreputable appearance, with two figures in it, floated on the Thames, between Southwark Bridge, which is of iron, and London Bridge which is of stone, as an autumn evening was closing in. The figures in the boat were those of a strong man with ragged grizzled hair and a sun-browned face, and a dark girl of nineteen or twenty

DALL-E Images

Dashiell Hammett AI

Prompt: write the first paragraph of a noir detective novel in the style of Dashiell Hammett

Chat GPT 3.5: The night draped itself over the city like a worn-out trench coat, all shadows and whispers. I sat behind my desk, the dim glow of a solitary lamp flickering like the last ember in a dying fire. The cigarette smoke curled lazily, weaving tales of sin and secrets into the air. The rain tapped on the window, a rhythm that mirrored the clandestine dance of crime on the mean streets. The dame who walked through my door was trouble, and I could smell it like cheap perfume in a crowded gin joint. She had legs that stretched for miles, and eyes that had seen one too many double-crosses. As she spoke, her words cut through the smoke, revealing a plot thicker than the fog rolling in from the waterfront. Another night in this city of broken dreams, where the only thing certain was the uncertainty of the next twist in the dark alleys of deception.

And two DALL-E images generated from the first two sentences of the ChatGPT response above:

First PC AI accelerator cards from MemryX, Kinara debut at CES

From PC World:

Since Intel doesn’t plan a desktop CPU with AI capabilities until later this year, PC makers are turning to chip startups instead – and the future may be in the Lenovo ThinkCentre Neo Ultra, potentially with AI cards from MemryX and Kinara inside.

Lenovo will launch the ThinkCentre Neo Ultra in June for about $1,000, product manager Bryan Lin said from Lenovo’s booth at CES 2024. While Lenovo’s documentation does not officially include either AI processor, it’s likely they will be, and the small content-creation desktop was at CES showcasing both AI cards.

While AMD, Intel and Qualcomm have all shown mobile processors with integrated AI NPUs, only AMD has announced a desktop Ryzen processor with an NPU inside. Intel, which holds the dominant share in the PC processor industry, will have to wait until the launch of Arrow Lake to make an NPU available for desktop PC makers.

Meanwhile, more PC makers are realizing that an “AI PC” can actually be constructed with just a CPU and a GPU, while NPUs provide more power-efficient AI. If you’re a desktop PC maker, with traditionally fewer concerns about power consumption, that may be sufficient. But businesses, which want to apply AI to making money, want AI now — and they do care about minimizing power consumption at scale. In this, at least, the business market may push ahead of consumer PCs.

Link to the rest at PC World and thanks to F. for the tip.

Everyone in the publishing industry should be watching CES this month. Because AI is not about us. We’re just a sideshow.

From The New Publishing Standard:

How many of us today are doing our job in the same way it was done late last century? More importantly, how many of us today are doing jobs that even existed last century?

“This Year’s CES Will Totally Be All AI, All the Time.”

That’s according to an opinion piece over at Lifewire, where Charlie Sorrel last week advised, as if we needed telling, that AI might pop up here and there at CES 2024 in Las Vegas.

It’s the kind of place I would not be seen dead in. I still struggle to find the flashlight on my mobile phone, and I have to ask the kids at nursery school to adjust volume or ringtones. I think Fred Flintstone was its previous owner. I use it for making calls, for the very occasional text message, and for the Kindle app, which is the sole reason I need or want a smartphone.

But fortunately for me, my job as a publishing industry observer and analyst doesn’t require hands-on familiarity with every new tech development, any more than being a good driver means knowing the mysteries of the internal combustion engine, or a connoisseur of art needs to be good with a paintbrush and easel.

So I’m glued to the CES reportage this year, when the local internet lets me, for anything that might directly or indirectly impact the publishing industry, because, per Charlie Sorrel’s headline, CES this year is “totally” about AI. But so far (the event finishes Jan 12) you’d be hard-pressed to find much mention of publishing (although kudos to Kaleeg Hainsworth and the Legible team for being there!).

And there’s a reason for that, one that many in the publishing industry seem not to grasp.

Let me whisper this quietly: AI, generative or otherwise, is not actually about the publishing industry. In fact, AI barely knows we exist, and cares even less.

. . . .

For the past fifteen months I’ve seen authors, artists, translators, editors and publishers running about like headless chickens screaming that the sky is falling, convinced AI is coming for their jobs, their families and their pet dog. And that it is going to destroy civilisation as we know it (because as we all know, civilisation didn’t exist until publishers came along).

They are convinced the AI bullet has “publishing industry” written on it, and they alone, among all the world’s people and all the world’s industries, have been singled out for extinction by this new techno-menace.

We so love to play the victim card!

And this is precisely why everyone in the publishing industry should be watching the CES reportage closely. And why they should be looking more closely at all their non-publishing activities and interests. Because pretty much everything we do, hear and see is already being heavily influenced by AI, in just about every field imaginable.

It’s the first week of 2024 and we’d have to be a monk living in a cave on a desert island not to have AI working its magic for us each day, without our even realising it.

And we’re still in the early hours of AI Day One.

From Business Insider: “AI is saving sales professionals more than two hours of work each day.”

No, this wasn’t specifically about publishing sales people. And yes, that will be Business Insider as owned by Axel Springer, which recently signed a deal allowing OpenAI to lawfully train on and use its content.

. . . .

In my second life as a teacher, I’m constantly having to reassure education sector people that the AI bullet does not have their name on it either.

Articles like this, from a teacher in Ireland, help to allay their irrational fears:

“I’m a high school math and science teacher who uses ChatGPT, and it’s made my job much easier.”

There’s an endless list. But not all fears are irrational.

Yes, jobs will go. No question.

So what else is new?

How many of us today are doing our job in the same way it was done late last century? Or even a decade ago?

. . . .

Yes, audiobook narrators, I’m looking at you, among others.

Audiobook narration has been a sideshow for the industry for most of the time audiobooks have existed (1932, since you ask). Audiobooks were not on any publishers’ lists of priorities. Only the blind read them, and if you were blind you probably weren’t working so couldn’t afford them anyway.

As BookBeat CEO Niclas Sandin said, “No market survey said anyone wanted audiobooks.”

. . . .

But with the arrival of meaningfully-sized mobile devices, digital audio took off. Suddenly we have an audiobook boom and tons of new jobs are being created for narrators.

This graphic from Statista, showing audiobook production in the US between 2007 and 2021, gives an idea of the scale of change in the audiobook narration job market.

Put simply, there was almost no work for narrators in 2007, and many people who now call themselves narrators probably had never even considered the idea back before 2010.

. . . .

So out of nowhere, new jobs suddenly materialised. And how we rejoiced! More narrators were needed. And more translators. And of course authors and publishers were taking full advantage. New jobs were being created in audio studios, and more jobs in distribution. Entire new platforms were being created around digital audio that simply could never have existed twenty years ago.

But how did all this come about?

Technological evolution

. . . .

Now we’re all screaming foul because technological change is threatening those same jobs that mostly did not exist ten years ago, and certainly did not come with lifetime guarantees.

Not all jobs will go. The publishing apocalypse being touted by the Luddite fringe is just knee-jerk nonsense. Publishing will change, adapt and grow in new ways. Just as it changed with, adapted to, and delivered the audiobook market that exists today.

In fact, the audio market will just get bigger and better, assisted by the very AI the Luddite fringe are resisting. And the more adaptable among the narrators, translators, production crews and authors and publishers will move with the times and emerge stronger for the change.

. . . .

Right now the smart guys and gals in the publishing industry are asking themselves how AI can benefit their careers. They are taking courses, reading everything they can, writing and refining their prompting skills, and having great fun while they do it.

. . . .

But per the headline for this op-ed, AI is not about us. It is not about the publishing industry. We are just a sideshow.

And we need to understand that, when we make our irrational rants against AI, we are not just selfishly saying that our jobs are more important than anyone else’s. We are saying that we really don’t have a clue about what AI is, because we are already using it every day, and we are certainly going to be taking full advantage of it in the future, with absolutely no regard for how many jobs it costs anyone not in our own “special” industry.

Link to the rest at The New Publishing Standard

PG has seen a great deal of technological change during his working career. At the first law firm he worked in, there were lots of secretaries, each using an IBM Selectric typewriter.

The first word processors PG saw were in a prestigious law firm headquartered in a high-rise building in Beverly Hills. The firm had a lot of entertainment lawyers, and PG rode in an elevator with Marlon Brando on one occasion. Brando was very fluent with body language that said, “Don’t talk to me or look at me.”

The firm’s word processors and their operators were located in a separate, sound-proofed room. There were eight machines operated by the fastest typists PG had ever seen, each wearing a set of headphones which PG guessed was for listening to dictation. The soundproofing was necessary because eight very noisy printers were going full-speed all the time. The firm had enough fast typists to keep the printers going 24 hours per day, seven days a week. PG didn’t ask if the room was staffed on holidays.

When PG opened his own practice, he was the first attorney in the area to purchase a word processor for his secretary. It made him a lot of money. Ditto for being the first attorney to have a personal computer at his desk. He was grateful on many occasions that his mother had insisted he take typing in high school.

Just Say No to Artificial Intelligence in Your Creative Pursuits, Please

From Chuck Wendig: Terrible Minds:

Art is about people.

This is obvious and simplistic on the face of it but I think it’s important to remind ourselves of this–

Art is about people.

It is by people. It is for people. Art — and by proxy, storytelling — is a conduit between the maker of the art and the witness to that art. I made this, the maker says, and they did so for myriad possible reasons. They did it because it was beautiful, because it was horrible, because it scared them or enraged them or titillated them, or some combination of all of that. They were driven to portray a thing, or subvert a thing, or invent a thing.

The art forms a connection. The witness to the art — the one on the other end of that connection — experiences it however they must. They relate to it. They rebuke it. They adore it. They obsess over it. They detest it even as they can’t look away. Art, story, music — they form this ephemeral thing that is a way for us to talk to each other metatextually, across spans of distance great and small, and even across time itself. We scream our strange creations out into the void in the hope of being heard. A signal that we’re not alone. And we witness art in much the same way: as a reminder that we are not alone.

. . . .

The introduction of so-called “artificial intelligence” — which, really, is just a keenly-designed high-tech mimeograph — has gunked up the conduit between artist and audience with great clotted gobs of digital snot. It’s a pipe crawling with the Too-Many-Fingers monsters waggling their many bent digits at you while screaming twee authorial pablum and dipshitted disinformation in your ear. It’s gunk. It’s a mess.

I’ve spoken before about how “artificial intelligence” is really about the fetishization of idea —

. . . .

Artificial intelligence isn’t a person. It’s not even really, despite how I describe it, a machine. It’s the representative of a company. It’s the tool of not just one corporation, but many.

And it only exists because real people did real art.

Without something to chew up, it has nothing to spit out.

It steals our stuff, milks it, and kicks it aside, then shows it proudly to the world as if it did anything other than bleed an actual artist dry. It turns the artist and the art into dirt, then just regrows stuff from that same earth.

It’s a thief.

Link to the rest at Chuck Wendig: Terrible Minds

PG says that Chuck reliably produces some of the best rants on the internet.

However, PG believes that AI is simply another tool an author may choose to use to write more easily or better, in the same way that some authors chose typewriters over fountain pens and, later, word processors over typewriters.

Does anybody want to get rid of SpellCheck or Grammarly because they’re a form of computerized writing assistance?

If AI works for you as a part of your writing toolkit, PG says you should use it. If you don’t like AI for any reason, don’t use it.

Microsoft is adding an AI button to PC keyboards, their first major change in 30 years

From Yahoo!Finance:

Microsoft is doubling down on its commitment to AI with the introduction of a new keyboard key dedicated specifically to the company’s Windows Copilot. The first update to the Windows PC keyboard in almost 30 years, the Copilot key will be available on laptops and desktop keyboards from Microsoft’s OEM partners beginning in February.

According to Microsoft corporate vice president and consumer chief marketing officer Yusuf Mehdi, pressing the key will bring up Copilot, allowing you to search for content on the web, as well as for your own content and for general PC features via the app’s generative AI functionality. The idea is for Copilot to function as a kind of smart assistant that you can pull up at any time.

While Microsoft didn’t announce any specific partners that will launch with the Copilot key, it said that companies should begin unveiling desktops and laptops equipped with the feature in the coming days. We’ll also get an up-close look at the capability at CES 2024, which kicks off Jan. 9.

By getting its hardware partners to add a Copilot key to their laptops and desktops, Microsoft is showing that it believes generative AI is inextricable from the future of personal computing. The last time Microsoft added a new keyboard key was when it debuted the Windows key in 1994.

Link to the rest at Yahoo!Finance

PG has never worked in the product design division of a hardware company. PG also doesn’t know whether the Justice Department Antitrust Division has fully recovered from its last big bash with Microsoft.

However, unless MS does something that prevents Windows users from running any AI program they choose, which would be an unlikely and extremely foolish decision, PG thinks a dedicated Copilot key is interesting but far from earth-shaking.

How Forbes Built Its AI-Powered Tool, Adelaide On Google Cloud Vertex

From Forbes:

Forbes recently announced the beta launch of its new AI-powered news search engine called Adelaide.

. . . .

What does Adelaide do?

Adelaide provides a more intuitive search and discovery experience by having natural conversations with users to understand their interests and needs.

As one of the first major publishers to leverage generative AI for news curation, the launch of Adelaide represents an exciting development in how media companies can enhance audience engagement.

. . . .

Exploring Generative AI for Enhanced User Experience on Forbes.com

Our first step in exploring generative AI tools was to examine all of the potential use cases that might enhance the experience on forbes.com. While our team was excited about many possible uses of gen AI, we quickly settled on a tool that could improve search for users who were interested in a more conversational approach to exploring the Forbes archive.

The Search for the Ideal AI Tool

Finding a tool that was able to combine a fast, reliable and accurate search experience with the flexibility and creativity of a large language model was our first requirement for our new product.

Our team prioritized finding a tool that would be able to surface supporting articles that were closely related to user queries while also being context aware.

If at all possible, we wanted a chatbot-like experience that also surfaced articles so users could dive deeper into their queries.

We believed it was essential for our AI products to be deeply integrated with Forbes data. To set apart our use of large language models, we aimed to leverage the writings from our hundreds of journalists and contributors.

This approach helps the model minimize inaccuracies and offer responses that are more contextually relevant in a broad range of scenarios. This combination of requirements led us to Google Cloud Platform’s new Vertex AI Search & Conversation product.

Seamless Integration with Vertex AI Search & Conversation

Getting set up with Vertex Search & Conversation turned out to be incredibly straightforward and fast. Since our team already has a lot of our data in BigQuery and Google Cloud Storage, which directly integrate with Vertex, building out the datastore that would power the large language model was a simple matter of determining criteria for inclusion, formatting and uploading.

We were also able to include article metadata in our datastore, which improved model performance and made the surfaced responses more relevant. From there, all we needed to do was make a call to the Vertex Search API and we were off and running. The simplicity of getting started with Vertex Search ended up being a huge plus for our team.

Development resources could be used on other projects while we focused product development on the user experience and usability.
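
For readers who want to see what “a call to the Vertex Search API” looks like in practice, here is a minimal sketch using Google’s google-cloud-discoveryengine Python client, the library behind Vertex AI Search. This is a sketch under stated assumptions, not Forbes’s implementation: the project, location and data store names are placeholders, and the details of Adelaide’s configuration are not public.

```python
# Minimal sketch of querying a Vertex AI Search (Discovery Engine) data store.
# Assumes the google-cloud-discoveryengine package and application-default
# credentials; the project, location and data store IDs are placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine

client = discoveryengine.SearchServiceClient()

# Identifies the data store (for example, one built from articles ingested
# via BigQuery or Cloud Storage) and the serving config to query.
serving_config = client.serving_config_path(
    project="my-project",          # placeholder
    location="global",             # placeholder
    data_store="news-archive",     # placeholder
    serving_config="default_config",
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="What did the CEO say about generative AI?",
    page_size=5,  # how many matching documents to return
)

# Each result carries the matched document plus any metadata uploaded with it.
for result in client.search(request):
    print(result.document.id)
```

The appeal of the managed product, as described above, is that ingestion, embedding and ranking are handled by the service; the application code largely reduces to building a data store and issuing queries like the one in this sketch.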

Refining and Testing the AI Tool

Once we had a functional proof of concept to use, we wanted to focus on testing, refining and structuring the responses from our new tool. We had a lot of productive back and forth with GCP Support and our account management team, which led to fast iterations on bugs and configurations on both sides. Our first version of the tool was an app in our internal messaging and communication platform, Slack.

Since many of our internal, non-technical stakeholders were already familiar with Slack, allowing them to test out Adelaide’s functionality there was a no-brainer. This allowed us to quickly iterate on the feature set for Adelaide without having to rebuild the entire front end each time.

We wanted to be absolutely sure that the responses coming from Adelaide were suitable for the millions of users who would have access to the tool, so thorough testing was a huge priority for our team.

Designing a User-Friendly Interface for Adelaide

Our design and front-end teams focused on building an experience for Adelaide that was clean and clear, showcasing both the generative summary produced by the large language model and the related articles from Forbes authors.

Since generative AI is such a new technology, we also wanted to provide some suggested prompts for users encountering this kind of search experience for the first time.

Another key feature of Adelaide is the ability for users to have long conversational journeys, asking as many follow-up questions as they would like while still retaining the context of their previous questions.

Launch and Reception of Adelaide

As we approached the launch date for the product, we were cautiously optimistic about how Adelaide would be received. As with every product launch, we worked through last-minute bug fixes, security patches and technical glitches, but were able to roll out the tool to a subset of users with great success!

Reception for the product was positive among senior leadership and in the press, and we were happy to see that the number of searches using Adelaide was steadily growing. Our team is continuously monitoring the tool and testing out improvements to make the experience the best possible one for our users.

Reflections and Future Directions

For many on our team, this was one of the more highly visible projects that we have worked on, with lots of attention being paid to our work. A key takeaway for us internally is that this project worked the same way as many others that were much more internal or backend, and that our process should be the same whether a project will be seen by dozens of people or by thousands. From the generative AI side, we also had a few learnings.

Link to the rest at Forbes

PG suggests that Forbes made a far more intelligent decision regarding its publication than The New York Times did with its retrograde copyright infringement lawsuit against Microsoft and OpenAI.

For one thing, Forbes is aggressively making money from an AI program by using its content while The New York Times will be spending a great deal of money on a multi-year lawsuit against Microsoft, an organization that has much deeper pockets than the newspaper and its investors do.

NY Times sues OpenAI, Microsoft for infringing copyrighted works

From Reuters:

The New York Times sued OpenAI and Microsoft on Wednesday, accusing them of using millions of the newspaper’s articles without permission to help train chatbots to provide information to readers.

The Times said it is the first major U.S. media organization to sue OpenAI, creator of the popular artificial-intelligence platform ChatGPT, and Microsoft, an OpenAI investor and creator of the AI platform now known as Copilot, over copyright issues associated with its works.

Writers and others have also sued to limit the scraping — or the automatic collection of data — by AI services of their online content without compensation.

The newspaper’s complaint, filed in Manhattan federal court, accused OpenAI and Microsoft of trying to “free-ride on The Times’s massive investment in its journalism” by using it to provide alternative means to deliver information to readers.

. . . .

“There is nothing ‘transformative’ about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it,” the Times said.

OpenAI and Microsoft did not immediately respond to requests for comment. They have said that using copyrighted works to train AI products amounts to “fair use.”

Fair use is a legal doctrine governing the unlicensed use of copyrighted material.

On its website, the U.S. Copyright Office says “transformative” uses add “something new, with a further purpose or character” and are “more likely to be considered fair.”

The Times is not seeking a specific amount of damages, but the 172-year-old newspaper estimated damages in the “billions of dollars.”

It also wants the companies to destroy chatbot models and training sets that incorporate its material. Talks this year to avert a lawsuit and allow “a mutually beneficial value exchange” with the defendants were unsuccessful, the newspaper said.

. . . .

The Times filed its lawsuit seven years after the U.S. Supreme Court refused to revive a challenge to Google’s digital library of millions of books.

A federal appeals court had found that the library, which gave readers access to snippets of text, amounted to fair use of authors’ works.

“OpenAI is giving the copyright industry a second bite at control,” said Deven Desai, a professor of business law and ethics at the Georgia Institute of Technology.

“It’s outputs that matter,” Desai said. “Part of the problem in assessing OpenAI’s liability is that the company has altered its products as copyright issues arose. A court could say its outputs at this moment in time are enough to find liability.”

Chatbots have compounded the struggle among major media organizations to attract and retain readers, though the Times has fared better than most.

. . . .

The Times’ lawsuit cited several instances in which OpenAI and Microsoft chatbots gave users near-verbatim excerpts of its articles.

These included a Pulitzer Prize-winning 2019 series on predatory lending in New York City’s taxi industry, and Pete Wells’ 2012 review of Guy Fieri’s since-closed Guy’s American Kitchen & Bar that became a viral sensation.

The Times said such infringements threaten high-quality journalism by reducing readers’ perceived need to visit its website, reducing traffic and potentially cutting into advertising and subscription revenue.

It also said the defendants’ chatbots make it harder for readers to distinguish fact from fiction, including when their technology falsely attributes information to the newspaper.

The Times said ChatGPT once falsely attributed two recommendations for office chairs to its Wirecutter product review website.

“In AI parlance, this is called a ‘hallucination,'” the Times said. “In plain English, it’s misinformation.”

Link to the rest at Reuters

As PG has opined for some time, he believes that the way AIs use materials protected by copyright from the Times and others is fair use.

A traditional definition of fair use is any copying of copyrighted material done for a limited and “transformative” purpose, such as to comment upon, criticize, or parody a copyrighted work.

PG finds it difficult to regard the way AI programs use copyright-protected material as anything but extraordinarily transformative. He doubts there is any way someone can prompt an AI program to reproduce an article that first appeared in The New York Times.

If PG still had free access to the huge NEXIS database of newspapers, periodicals, books, etc., etc., he speculates he could perform a search using a paragraph from an NYT article and find more than one identical or quite similar earlier use in another publication.

Apple is in talks with publishers to use their content to train GenAI models

From The New Publishing Standard:

Apple has been slow to jump on the AI bandwagon in any meaningful way, which has given rivals a head-start, but also let them make the mistakes and the public-relations fails so Apple can jump in now looking like the good guy.

“Apple has reached out to news entities like Condé Nast, the publisher of Vogue and the New Yorker and more.”

. . . .

For western businesses, Christmas is a good time to bury controversial news, so no surprise Apple waited until the publishing industry was in “Holidays” limbo before making this announcement.

Per Business Today, Apple has put $50 million on the table to entice news publishers to give permission to use their content to train Apple’s own gen-AI system.

If Apple can be the first to do a major deal with content providers it will certainly take the moral high ground, but beyond that there’s little in the reports emerging so far to suggest Apple has anything original to offer.

Link to the rest at The New Publishing Standard

PG wondered when he read the announcement how much of the $50 million will be going to the authors who wrote the news items.

While he has never seen a contract between a newspaper and one of its writers, he would bet quite a bit of money that the news writers didn’t sign any sort of contract that allowed the publisher to utilize their work to train an AI system.

If any visitors to TPV have a copy of an employment agreement between a newspaper and one of its writers, PG would be interested in reviewing it. Click the Contact PG button to initiate a discussion.

The state of AI in 2023: Generative AI’s breakout year

From McKinsey:

The latest annual McKinsey Global Survey on the current state of AI confirms the explosive growth of generative AI (gen AI) tools. Less than a year after many of these tools debuted, one-third of our survey respondents say their organizations are using gen AI regularly in at least one business function. Amid recent advances, AI has risen from a topic relegated to tech employees to a focus of company leaders: nearly one-quarter of surveyed C-suite executives say they are personally using gen AI tools for work, and more than one-quarter of respondents from companies using AI say gen AI is already on their boards’ agendas. What’s more, 40 percent of respondents say their organizations will increase their investment in AI overall because of advances in gen AI. The findings show that these are still early days for managing gen AI–related risks, with less than half of respondents saying their organizations are mitigating even the risk they consider most relevant: inaccuracy.

The organizations that have already embedded AI capabilities have been the first to explore gen AI’s potential, and those seeing the most value from more traditional AI capabilities—a group we call AI high performers—are already outpacing others in their adoption of gen AI tools.

The expected business disruption from gen AI is significant, and respondents predict meaningful changes to their workforces. They anticipate workforce cuts in certain areas and large reskilling efforts to address shifting talent needs. Yet while the use of gen AI might spur the adoption of other AI tools, we see few meaningful increases in organizations’ adoption of these technologies. The percent of organizations adopting any AI tools has held steady since 2022, and adoption remains concentrated within a small number of business functions.

. . . .

1. It’s early days still, but use of gen AI is already widespread

The findings from the survey—which was in the field in mid-April 2023—show that, despite gen AI’s nascent public availability, experimentation with the tools is already relatively common, and respondents expect the new capabilities to transform their industries. Gen AI has captured interest across the business population: individuals across regions, industries, and seniority levels are using gen AI for work and outside of work. Seventy-nine percent of all respondents say they’ve had at least some exposure to gen AI, either for work or outside of work, and 22 percent say they are regularly using it in their own work. While reported use is quite similar across seniority levels, it is highest among respondents working in the technology sector and those in North America.

. . . .

Organizations, too, are now commonly using gen AI. One-third of all respondents say their organizations are already regularly using generative AI in at least one function—meaning that 60 percent of organizations with reported AI adoption are using gen AI. What’s more, 40 percent of those reporting AI adoption at their organizations say their companies expect to invest more in AI overall thanks to generative AI, and 28 percent say generative AI use is already on their board’s agenda. The most commonly reported business functions using these newer tools are the same as those in which AI use is most common overall: marketing and sales, product and service development, and service operations, such as customer care and back-office support. This suggests that organizations are pursuing these new tools where the most value is. In our previous research, these three areas, along with software engineering, showed the potential to deliver about 75 percent of the total annual value from generative AI use cases.

. . . .

In these early days, expectations for gen AI’s impact are high: three-quarters of all respondents expect gen AI to cause significant or disruptive change in the nature of their industry’s competition in the next three years. Survey respondents working in the technology and financial-services industries are the most likely to expect disruptive change from gen AI. Our previous research shows that, while all industries are indeed likely to see some degree of disruption, the level of impact is likely to vary. Industries relying most heavily on knowledge work are likely to see more disruption—and potentially reap more value. While our estimates suggest that tech companies, unsurprisingly, are poised to see the highest impact from gen AI—adding value equivalent to as much as 9 percent of global industry revenue—knowledge-based industries such as banking (up to 5 percent), pharmaceuticals and medical products (also up to 5 percent), and education (up to 4 percent) could experience significant effects as well. By contrast, manufacturing-based industries, such as aerospace, automotives, and advanced electronics, could experience less disruptive effects. This stands in contrast to the impact of previous technology waves that affected manufacturing the most and is due to gen AI’s strengths in language-based activities, as opposed to those requiring physical labor.

. . . .

3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial

Our latest survey results show changes in the roles that organizations are filling to support their AI ambitions. In the past year, organizations using AI most often hired data engineers, machine learning engineers, and AI data scientists—all roles that respondents commonly reported hiring in the previous survey. But a much smaller share of respondents report hiring AI-related software engineers—the most-hired role last year—than in the previous survey (28 percent in the latest survey, down from 39 percent). Roles in prompt engineering have recently emerged, as the need for that skill set rises alongside gen AI adoption, with 7 percent of respondents whose organizations have adopted AI reporting those hires in the past year.

. . . .

Looking ahead to the next three years, respondents predict that the adoption of AI will reshape many roles in the workforce. Generally, they expect more employees to be reskilled than to be separated. Nearly four in ten respondents reporting AI adoption expect more than 20 percent of their companies’ workforces will be reskilled, whereas 8 percent of respondents say the size of their workforces will decrease by more than 20 percent.

. . . .

Looking specifically at gen AI’s predicted impact, service operations is the only function in which most respondents expect to see a decrease in workforce size at their organizations. This finding generally aligns with what our recent research suggests: while the emergence of gen AI increased our estimate of the percentage of worker activities that could be automated (60 to 70 percent, up from 50 percent), this doesn’t necessarily translate into the automation of an entire role.

AI high performers are expected to conduct much higher levels of reskilling than other companies are. Respondents at these organizations are over three times more likely than others to say their organizations will reskill more than 30 percent of their workforces over the next three years as a result of AI adoption.

Link to the rest at McKinsey

For those unfamiliar with the company, McKinsey is generally regarded as the most prestigious, powerful, and influential management consulting firm in the world.

How news publishers are using AI chatbots to boost engagement

From Press Gazette:

Chatbots based on publishers’ own content archives pose “a lot of revenue opportunities” according to the chief digital and information officer at Forbes.

Forbes is among the publishers trying out generative AI tools trained on their own archives, having launched AI-powered search tool Adelaide at the end of October.

Both Google and ChatGPT owner OpenAI have this year made technology available to allow people to create their own generative AI tools using their proprietary content and data, mitigating the copyright issues – although hallucinations and other inaccuracies may still pose a risk.

OpenAI began rolling out customisable versions of ChatGPT for paying ChatGPT Plus and Enterprise users in November, promising that no coding was required.

Google had already launched Enterprise Search on Generative AI App Builder, which lets organisations easily create custom chatbots and semantic search applications. It said in a blogpost in June: “Customers can combine their internal data with the power of Google’s search technologies and generative foundation models, delivering relevant, personalized search experiences for enterprise applications or consumer-facing websites.”
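
Both vendors’ offerings wrap the same general retrieval-augmented pattern: embed the publisher’s archive, retrieve the passages most relevant to a query, and have the model answer grounded only in those passages. Here is a minimal, purely illustrative sketch of that pattern using the OpenAI Python client; the toy archive, model choices and prompt wording are assumptions for illustration, not details of any publisher’s actual tool.

```python
# Illustrative sketch of a retrieval-augmented "archive chatbot".
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the
# environment; the archive, model names and prompt wording are invented.
import numpy as np
from openai import OpenAI

client = OpenAI()

# A toy "publisher archive": in practice this would be thousands of articles.
archive = [
    "Forbes launched Adelaide, an AI-powered news search tool, in late 2023.",
    "Audiobook production in the US grew sharply between 2007 and 2021.",
    "Microsoft is adding a dedicated Copilot key to Windows PC keyboards.",
]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

archive_vectors = embed(archive)

def answer(question, top_k=2):
    """Retrieve the most relevant articles, then answer grounded in them."""
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every archived article.
    sims = archive_vectors @ q_vec / (
        np.linalg.norm(archive_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(archive[i] for i in np.argsort(sims)[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided articles.\n"
                        f"Articles:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What is Adelaide?"))
```

Grounding answers in the publisher’s own retrieved text is also why such tools can reduce, though not eliminate, hallucinations: the model is asked to summarise supplied passages rather than recall whatever its training data contained.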

. . . .

Forbes used Google’s Enterprise Search feature to build Adelaide and chief digital and information officer Vadim Supitskiy told Press Gazette it took just two weeks to create a prototype and two months to release it.

Adelaide, named after the wife of Forbes founder B.C. Forbes, allows people to search for Forbes content. Answers to specific questions are provided in a more conversational way than a traditional website search bar, with relevant links signposted underneath. Currently, the tool is trained on only the past 12 months of Forbes content, but the plan is for it to eventually cover the whole archive.

. . . .

They then looked at which areas of the Forbes website could benefit from the addition of the tool, potentially driving engagement and loyalty.

“And one that came up kind of right away was search because traditional search on publishers’ websites has been the same for so many years,” he said. “We always kind of looked at it and thought okay, how can we innovate on that particular product… but in general, it’s pretty straightforward and not the most engaging experience. So we thought gen AI could be a good solution here.

“Not only does it become interactive for the user – they can engage, talk to that particular experience, create a conversation – but also get very relevant content to dive deeper into it.”

Since 26 October, Adelaide has been in beta testing, with 5% of the Forbes audience seeing its answers when they search for something on the website. In addition, anyone can go to the dedicated Adelaide page.

Speaking at the end of November, Supitskiy said: “So far what we’ve seen is good engagement when people come to that page, so actually people are engaging really well there and are coming back to it. They like the experience. What we’re working on right now is how to get more people into the experience…”

Link to the rest at Press Gazette

Putting Ourselves Back in the Equation

From The Wall Street Journal:

Popular science often feels like a kind of voyeurism. Those who can’t manage the real thing are given a thrilling glimpse of its intrigue and excitement but kept at a distance. So a book that tackles the mind-boggling triad of physics, consciousness and artificial intelligence might be expected to provide little more than intellectual titillation. The science journalist George Musser even says at its end that “many physicists and neuroscientists are just as perplexed as the rest of us.”

But Mr. Musser knows that the point of popular science is not for the reader to understand everything fully but to get a sense of what’s at stake, what kinds of answers are being offered to difficult questions, and why it all matters. One could not ask more of “Putting Ourselves Back in the Equation”—on all three counts it delivers.

The central puzzle of the book is what the contemporary philosopher William Seager has called a “ticking time bomb”: the intellectual division between the objective and the subjective. Natural science was able to make such great strides after the Middle Ages only because it left the analysis of thoughts and feelings to poets and philosophers, focusing instead on measurable observables. The strategy worked a treat until it hit two brick walls.

The first is the nature of consciousness. Modern neuroscience, at first, stuck to examining the brain events that corresponded to conscious experiences: the “neural correlates of consciousness.” But at a certain point it became clear that such a focus left out a good deal. How is it possible that mushy cells give rise to sensations, emotions and perceptions? The science of mind had to ignore precisely what it was supposed to explain because a purely objective account of consciousness cannot encompass its subjective character.

And then—a second and related problem—physicists discovered that they couldn’t leave conscious minds out of their equations. A central tenet of quantum theory is that observers change what they observe. This is embarrassing. Physics is meant to describe the mind-independent world. But its best description ended up having minds—with their particular points of view—at its center. So for physics to be anything like complete, it has to find a way to kick minds out again or account for what makes them conscious and why they should affect physical matter.

Mr. Musser provides a chatty and informal overview of the many ways in which physicists have been trying to rise to these challenges. He speaks to many of the leading scientists in the field, trying a bit too hard to make them seem like regular folks so that we don’t feel intimidated. A bigger challenge, for the reader, is that he introduces us to so many theories that it’s difficult to judge which should be taken most seriously and which lean toward the cranky. Given that even the most well-evidenced theories in physics sound crazy, our intuitions are no guide.

But by the end a number of general insights shine through. The central one is that we have to think of both physics and consciousness in terms of networks and relations. You can’t find consciousness in single neurons, no matter how hard you look. The reductive approach, which seeks to break down phenomena to their smallest parts, doesn’t work for everything. The clearest evidence of the limits of reductionism is quantum entanglement, or “spooky action at a distance,” the title-phrase of Mr. Musser’s previous book. This is the phenomenon by which two particles appear to affect each other even though they are too far apart for any information to pass between them without exceeding the speed of light, a physical impossibility. No explanation of this oddity is possible if we focus reductively on the particles as discrete entities. Instead we have to see them as interrelated.

Consciousness, too, seems to depend upon patterns of interconnectedness. For a while now researchers into artificial intelligence have realized that we can get nothing close to human reasoning if we have computers that follow only linear processes. AI took off when scientists started to create neural networks, in which processes are conducted in parallel, mimicking the brain’s capacity to run different processes at the same time in its many parts.

This insight led to the currently hottest theory in consciousness studies, integrated information theory, which holds that consciousness is essentially the result of information being kept in whole systems rather than in parts. Adherents even quantify the degree of this integration with the Greek letter phi, which, says Mr. Musser, “represents the amount of information that is held collectively in the network rather than stored in its individual elements.” The higher the value of phi, the more conscious the system is.

By the end of “Putting Ourselves Back in the Equation,” Carlo Rovelli emerges as the physicist who is on the hottest trail. For Mr. Rovelli, there are no independent, objective facts. Truth is always a matter of relations. We understand that New York is west of London but east of San Francisco. Mr. Rovelli argues that all physical properties are like this: Nothing is anything until it is related to something else. It’s an old idea, found in ancient Greek, Chinese and Indian philosophy, and scientists are discovering it anew.

Link to the rest at The Wall Street Journal

Seven Ways AI Will Impact Authors And The Publishing Industry

From Forbes:

For better or worse, artificial intelligence is poised to shake up the publishing industry. As a technologist who has authored over 50 books, I’ve been keen on its evolution in that space. From assisting in research to streamlining content creation and distribution, AI offers promising avenues for increased productivity and efficiency.

However, it also raises concerns about the preservation of human voice and original thought. In fact, with the increase in AI in content creation, we’ll soon see the pendulum swing the other way: Works may be labeled as “Non-AI,” just like we see “Non-GMO” in many food products. If you’re a thought leader looking to publish a book, you may be wondering how to leverage AI while avoiding its pitfalls.

. . . .

1. Assisted Research

It’s clear that generative AI tools like ChatGPT will have a significant impact on publishing. It won’t replace authors but rather make them more productive with aggregated research and synthesized content that can be tweaked to resemble the voice of the author. Of course, the risks associated with AI will introduce new fact-checking and authenticity processes. However, the benefits will outweigh the risks. – Mike Walker, Microsoft

2. AI-Powered Editing

Artificial intelligence is excellent at editing. I can see top authors leveraging AI to get their work output more efficiently and effectively by reducing the time needed to edit their original creations. Unfortunately, I also believe we will see the rise of a lot of junk authors who can now quickly create mediocre work. – Rashmi Rao, RCubed Ventures

. . . .

4. Streamlined Content Creation, Distribution And Consumption

AI will impact authors by streamlining content creation, distribution, and consumption. While this may increase efficiency, it also poses a risk of diminishing the uniqueness of the human voice and original thought. Authors can leverage this trend by incorporating AI tools to enhance their work while keeping their own voices and perspectives. – Miguel Llorca, Torrent Group

5. Automated Text Analysis, Formatting And Translation

Certain laborious manual functions such as text analysis, formatting, personalization and translation can be automated. This allows authors to focus on their research and content. Publishing agencies can use AI to recommend better topics for authors based on its quick and wide capability to parse consumer needs. – Kiran Palla, AVS Academy

6. Greater Emphasis On Having A ‘Voice’

It will put a greater emphasis on having a “voice.” AI is excellent at putting together cogent, but ultimately lifeless, writing. Authors who have good ideas and execution but lack the human touch will be most impacted by AI-generated writing. As a writer, you should always focus not just on your argument, but what you—the human behind it—bring to the process. – Lewis Wynne-Jones, ThinkData Works

Link to the rest at Forbes

The Future of AI in Publishing

From Johns Hopkins University Press:

“AI is one of the most exciting things to happen since the birth of the internet,” Kline Pope said. “I share the perspective [of Thad McIlroy’s article] of endless possibilities for publishing while recognizing there are issues.” 

AI has the potential to revolutionize all aspects of publishing, from content creation to marketing to production to data analytics and more. “The low-hanging fruit of AI is marketing,” Kline Pope said. She noted that generative AI is going to allow us to go from the mundane to the creative. “We have so much to do in publishing, particularly in marketing. Every idea you have you can’t implement, but if the mundane is taken care of by AI, [we can be] more creative,” Kline Pope said. 

They discussed the opportunities of using AI to reinvigorate backlist marketing across thousands of books and journals. Warren provided students’ perspectives on generative AI, recognizing their broad range of views. “It’s going to touch all aspects of publishing and really all aspects of society,” Warren said. He noted that he agrees with McIlroy in the Publishers Weekly article: “You can only understand the perils surrounding a new technology after you fully appreciate the opportunities that it affords.” Warren hopes that AI will not replace the entry-level positions that his students need in order to learn how to be successful in higher-level positions. 

In terms of the fraught relationship between AI and protecting intellectual property, Kline Pope quoted her former colleague Michael Jensen, a technologist who said, “Our concern is not piracy; it’s obscurity” in university press publishing. While the rapid expansion of AI raises concerns about piracy and authorship, it also represents the compelling possibility to make scholarly publishing more accessible and engaging than ever. 

Link to the rest at Johns Hopkins University Press

While PG agrees that AI will be useful for marketing, he suggests that drafting and content creation will end up being the most dramatic and disruptive uses of AI in publishing.

AI-generated news anchors to be part of new national news channel premiering next year

From Fox Business:

An upcoming news station will be broadcasting the first ever AI-generated news anchors to viewers across America next year.

New Los Angeles-based station Channel 1, which will launch in 2024, aims to be the first nationally syndicated news station to use AI avatars instead of human anchors.

DailyMail.com reported that the new station’s news segments will use a mix of AI-generated people, digital avatars compiled using doubles of real actors, and actual human anchors for the channel’s most important reports.

. . . .

Channel 1 founder Adam Mosam recently told DailyMail.com this is his chance “to get out in front of” the wave of artificial intelligence innovation and “create a responsible use of the technology.”

Mosam’s claims come amid growing public concern over the rise of AI technology in daily life. A recent Pew Research survey revealed 52% of Americans are more concerned than excited about AI’s growing prevalence.

The founder noted that Channel 1 will be transparent about what footage used in reports is real and what is AI-generated. 

Mosam also stated that humans will be involved in every step of his station’s reporting process to ensure the accuracy of the content.

“We do have people in the loop, they just end up being more efficient,” he told the outlet.

Link to the rest at Fox Business

Here is the Channel 1 video announcement about its upcoming AI news anchors. All the anchors in the announcement are AI creations.

PG expresses no opinion about whether the short portion of the Channel 1 video showing President Biden includes the actual president or an AI recreation. It occurred to him that “chairman for life” might mean something entirely different with AI.

Your Occasional ChatGPT Interaction

Prompt: give me a list of the worst pickup lines

ChatGPT4 Response:

Certainly, here’s a list of some cheesy and cringe-worthy pickup lines that are often considered among the worst:

  1. “Are you a magician? Because whenever I look at you, everyone else disappears.”
  2. “Do you have a name or can I call you mine?”
  3. “Is your name Google? Because you have everything I’ve been searching for.”
  4. “Did it hurt when you fell from heaven?”
  5. “Are you a parking ticket? Because you have ‘FINE’ written all over you.”
  6. “Do you have a map? Because I just got lost in your eyes.”
  7. “Are you made of copper and tellurium? Because you’re Cu-Te.”
  8. “If you were a vegetable, you’d be a cute-cumber!”
  9. “Can you lend me a kiss? I promise I’ll give it back.”
  10. “Do you believe in love at first sight, or should I walk by again?”
  11. “Is your dad a baker? Because you’re a cutie pie.”
  12. “Are you a campfire? Because you’re hot and I want s’more.”
  13. “Is there a rainbow today? I just found the treasure I’ve been searching for.”
  14. “Do you have a Band-Aid? Because I just scraped my knee falling for you.”
  15. “Is your name Wi-Fi? Because I’m really feeling a connection.”

Remember, while these pickup lines may be amusing in certain social situations, they are often considered corny and ineffective when used seriously. It’s generally better to approach someone you’re interested in with genuine conversation and respect rather than relying on cheesy pickup lines.

All the Jobs AI Is Coming for, According to a UK Study

From Lifehacker:

The question of whether AI will eventually take jobs away from us meatbags is nothing new. However, following ChatGPT’s launch late last year, the speed at which AI has caught on has surprised almost everybody, even those working in the space. And far from a question to consider in the far-off (or even near-term) future, jobs are already being affected: Some layoffs this year came due to companies believing AI could replace certain roles, while other companies froze hiring for similar reasons.

So, how do you know if your job is one of the ones at risk? A recent study could give you the answer (and you might not like it).

This UK study reveals the jobs “most exposed” to AI—and what that means

Assessing the random actions of various companies and getting lost in speculation do us no good. For a substantive and thoughtful discussion on the topic, there is already traditional research ongoing into how AI will affect the job market, including this recent study out of the U.K. The study, developed by the UK’s Department for Education, estimates that 10–30% of jobs are automatable with AI—which, depending on your general outlook on AI, may sound like a lot, or less than you’d expect.

The study investigated the job functions and qualifications for various sectors of the workforce, looking for whether the following ten AI applications could aid in those jobs:

  • Abstract strategy games
  • Real-time video games
  • Image recognition
  • Visual question answering
  • Image generation
  • Reading comprehension
  • Language modeling
  • Translation
  • Speech recognition
  • Instrumental track recognition

Depending on how relevant each of these 10 applications was to a particular role, the study generated an AI Occupational Exposure (AIOE) score for the role. The higher the score, the more “exposure” that role may have to artificial intelligence.
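
The study’s actual AIOE methodology is more involved, but as a purely illustrative sketch, a score of this general shape can be thought of as a relevance-weighted average across the ten applications listed above. The relevance weights below are invented for illustration and are not the study’s data:

```python
# Toy illustration of an "exposure score" as an average of how relevant each
# AI application is to a role. The application names come from the study's
# list above; the weights are invented and NOT the study's actual formula.
APPLICATIONS = [
    "abstract strategy games", "real-time video games", "image recognition",
    "visual question answering", "image generation", "reading comprehension",
    "language modeling", "translation", "speech recognition",
    "instrumental track recognition",
]

def exposure_score(relevance):
    """Average relevance (0-1) of each AI application to a given role."""
    return sum(relevance.get(app, 0.0) for app in APPLICATIONS) / len(APPLICATIONS)

# Hypothetical role: telephone salesperson, heavy on language-based tasks.
telephone_sales = {
    "reading comprehension": 0.8,
    "language modeling": 0.9,
    "speech recognition": 0.9,
    "translation": 0.3,
}
print(round(exposure_score(telephone_sales), 2))  # higher = more exposed
```

A role whose day-to-day tasks lean heavily on language modeling, reading comprehension and speech recognition would score high; a bricklayer’s relevance weights would sit near zero across the board, producing a low score.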

In the initial findings, the study determined that “professional occupations,” including sectors like finance, law, and business management, tended to be more exposed to AI. In fact, they specifically found that the finance and insurance sectors were the most exposed. Building off this discovery, it seems the more advanced the qualifications necessary for the role, the more AI exposure that role tends to have. In general, if your job requires more education and more advanced training, chances are it pairs well with AI.

The reverse is true, of course—except for security guards, interestingly enough. The study says security technology has advanced so much that, although the role requires relatively little education and work experience, it is more exposed to AI than other jobs of its kind.

None of this is necessarily a bad thing. As the study points out, the International Labor Organization has found most jobs are only partially exposed to AI, so the odds are decent that employees in these roles will benefit from AI exposure, rather than have their jobs fully replaced by the technology.

Which jobs are most exposed to AI

Taking all this into consideration, the study breaks down the top 20 occupation types most exposed to AI, as well as those most exposed to large language models (LLMs). It’s a long list, including roles such as consultants, telephone salespeople, psychologists, legal professionals, teachers, and payroll managers.

As stated above, the study finds that finance and insurance are the most exposed to AI of any job sector. The other most exposed sectors include information and communication; professional, scientific, and technical; property; public administration and defense; and education.

Just as interesting as the list of occupation types most exposed is the list of those least exposed. Many of these roles require manual labor that cannot be replicated by AI or technology in general, such as sports players, roofers, fork-lift truck drivers, painters, window cleaners, and bricklayers:

. . . .

Will AI truly replace any jobs, according to the study?

Interestingly enough, the study is almost exclusively focused on AI exposure, rather than on jobs threatened by the technology. That said, it does include a list of 16 job types considered “high automation occupations,” which a pessimist could take to mean jobs that could one day be replaced by automation:

  • Authors, writers and translators
  • Bank and post office clerks
  • Bookkeepers, payroll managers and wages clerks
  • Brokers
  • Call and contact centre occupations
  • Customer service occupations n.e.c.
  • Finance officers
  • Financial administrative occupations n.e.c
  • Human resources administrative occupations
  • Librarians
  • Market research interviewers
  • Other administrative occupations n.e.c.
  • Pensions and insurance clerks and assistants
  • Telephone salespersons
  • Travel agents
  • Typists and related keyboard occupations

You might notice some overlap between this list and the list of jobs most exposed to AI. That’s because the study notes that these jobs all have high AIOE scores, both for exposure to AI and LLMs.

Link to the rest at Lifehacker

Sweeping Regulation of AI Advances in European Union Deal

From The Wall Street Journal:

European lawmakers reached a political deal on regulating artificial intelligence, marking a step toward establishing a comprehensive AI law in Western countries.

The European Union’s Artificial Intelligence Act would be the latest in a series of regulations the bloc has pushed forward that is expected to reverberate through the global tech industry and affect some of its biggest players. Earlier legislation from the EU set out new competition and online content rules that affect large U.S. tech companies including Meta Platforms, Apple and Google’s parent, Alphabet.

“The EU is the first in the world to set in place robust regulation on AI,” said Dragos Tudorache, a member of the European Parliament from Romania who was one of the body’s lead negotiators of the AI Act.

The deal agreed to by lawmakers includes bans on several AI applications, such as untargeted scraping of images to create facial-recognition databases, and sets rules for systems that lawmakers consider to be high-risk, according to a statement from the European Parliament. It also includes transparency rules for general-purpose AI systems and the models that power them.

Penalties for breaking the rules could reach up to 7% of a company’s global revenue, depending on the size of the company and the rule that was broken, according to the statement.

Europe’s AI Act was first proposed in 2021, before chatbots such as OpenAI’s ChatGPT and Google’s Bard were publicly launched. The introduction of those chatbots and the popularization of a range of other AI applications in recent years put new scrutiny on the legislation and prompted some lawmakers to seek to introduce new provisions.

One of the most controversial aspects of the European legislation was determining whether to set blanket rules for general-purpose AI and so-called foundation models, which are trained on vast data sets and underpin more-specialized AI applications. 

The AI Act will require those systems to follow a set of transparency rules that include complying with EU copyright law and creating detailed summaries about the content used to train AI models, according to the statement from the European Parliament. High-impact models that are deemed to create systemic risk will face tougher rules, including a requirement to assess and mitigate such risk.

. . . .

The deal faced swift criticism from industry and consumer groups. DigitalEurope, a tech lobby group, said the new rules would be expensive for AI companies to comply with and risk putting Europe at a disadvantage. “The AI race is not one Europe can miss out on,” said Cecilia Bonefeld-Dahl, the group’s director-general.

. . . .

At The New York Times, Adam Satariano writes, “even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be.

“Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for AI development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.”

So while an agreement has been reached, its precise outlines—and how firm they may be—remain to be examined and understood.

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

PG says this is a mistake that will kill a great deal of AI research and development in the EU. He predicts that many bright AI researchers and developers will decide to head to employment opportunities outside of Europe.

Italian Publishers: Toughen Europe’s AI Act Regulations

From Publishing Perspectives:

A potentially pivotal moment occurs this week in the closely watched development of the European Union’s “AI Act.”

Markets in many parts of the world, not just in Europe, are following along for clues and cues in terms of how artificial intelligence can be developed and applied “safely”—and even that term safely can be hotly debated, of course.

On Wednesday (December 6), the AI Act is to have its fifth “trilogue.” That’s the term for a negotiating session among the European Parliament, the European Commission, and the Council of the European Union. Previous trilogue meetings on the Artificial Intelligence Act were held in June, July, September, and October. Originally, the idea was that this December trilogue would finalize the bill for the bloc this year, but there’s increasing concern that such progress will take longer. This, on legislation that saw its first draft in 2021 and was first proposed in 2019.

What has happened in the interim—you won’t be surprised to read—is the rise of “foundation models.” Sometimes called “general purpose,” these are the systems designed as large-language models built for “deep learning” that can be adapted for a wide array of scenarios. This contrasts, of course, with the concept of a traditional program designed to handle a specific and narrow task set, maybe speeding up a bit of office drudge work. Such less ambitious programs require nothing like some foundation models’ contentious free-range feeding on information—often copyrighted content—to build their algorithmic-response structures.

A foundation model is a form of what’s called “generative artificial intelligence,” meaning that it can generate output from a broad base of ingested data.

At the highest intentional level, the over-arching core of discussion around this legislation has been, to quote the EU’s material, to handle “concerns especially with regard to safety, security, and fundamental rights protection.” But if the devil is usually in the details, a construct of digital details presents such a major chance for devilry that many observers now are worried about this important legislation’s progress.

Needless to say, the upheaval around OpenAI last month when its board fired and then rehired Sam Altman seemed to confirm fears that a major corporate player in the AI space could be thrown into turmoil by inscrutable internal governance issues. As Kevin Chan at the Associated Press is writing today, once the Altman fiasco had played out, European Commissioner Thierry Breton said at an AI conference, “‘At least things are now clear’ that companies like OpenAI defend their businesses and not the public interest.”

And yet, much discussed in coverage on the run-up to Wednesday’s trilogue is an unexpected resistance that’s been mounted by France, Germany, and Italy, which presented a pitch for self-regulation among AI players.

At The Guardian, John Naughton wrote up this “Franco-German-Italian volte face,” as he calls it, as the result of everyone’s worst fears: “the power of the corporate lobbying that has been brought to bear on everyone in Brussels and European capitals generally.” More broadly, the assumption is that in each EU member state that seems to have made this about-face and begun talking of self-regulation as the way to go, industry advocates have promised something to the local national AI companies, in a divide-and-conquer effort by lobbyists.

Link to the rest at Publishing Perspectives

PG notes that the reaction of the European publishers sounds a lot like that of American publishers.

As far as regulation is concerned, the current AI programs/services he has tried deliver their AI capabilities online, so geographical fences like the European Union’s “AI Act” are unlikely to prevent individuals or organizations from using AI services offered over the internet by a provider located anywhere in the world.

As one cyberlaw website put it, “Some believe that the internet should be operated as if it were a land all its own, independent of national policy.”

Provide a fictional plot outline in Zadie Smith’s style and themes

PG asked ChatGPT4 to create a detailed plot outline for The Fraud by Zadie Smith, a book that is a current NY Times bestseller.

The AI responded as follows:

As of my last update in April 2023, “The Fraud” by Zadie Smith is not a recognized title among her works. It’s possible that it’s a new release after my last training data, or it might be a lesser-known or misremembered title.

However, if you’re interested in a fictional plot outline inspired by Zadie Smith’s style and themes, I can certainly create one for you. Smith’s works often explore themes of identity, race, class, and the complexities of contemporary life, often set against the backdrop of multicultural London. Please let me know if you’d like me to proceed with this approach, or if there are specific elements you’d like included in this fictional plot outline!

PG took the AI’s advice and posted the following prompt:

Provide a fictional plot outline in Zadie Smith’s style and themes.

Here’s the response he received from ChatGPT4:

Title: “The Tapestry of Us”

Plot Outline:

  1. Setting and Introduction:
    • London, in the vibrant and diverse neighborhood of Willesden.
    • Introduce the central character, Aisha, a young mixed-race woman grappling with her identity.
  2. Aisha’s Background:
    • Aisha’s parents: her mother is a Jamaican immigrant and her father a British academic.
    • Flashbacks to her childhood show a constant struggle between her Jamaican heritage and British upbringing.
  3. The Inciting Incident:
    • Aisha receives a mysterious letter from an unknown relative in Jamaica, sparking curiosity about her roots.
  4. Exploring Identity:
    • Aisha’s journey delves into themes of racial identity, class, and belonging.
    • She navigates the complexities of being biracial in modern London.
  5. Contrasting Perspectives:
    • Introduce a cast of diverse characters, each offering different views on identity and culture.
    • Include a love interest, perhaps someone who challenges Aisha’s perceptions.
  6. Journey to Jamaica:
    • Aisha decides to visit Jamaica to explore her heritage.
    • The trip is eye-opening; she experiences a culture both familiar and alien.
  7. Revelations:
    • In Jamaica, Aisha uncovers family secrets and histories that reshape her understanding of her identity.
  8. Return to London:
    • Aisha returns with a new sense of self.
    • Her experiences in Jamaica influence her life in London, from relationships to career choices.
  9. Resolution:
    • Aisha embraces a multifaceted identity, understanding that she can be both Jamaican and British.
    • The story concludes with Aisha starting a community project that celebrates multicultural identities.
  10. Themes:
    • The narrative weaves through themes of race, class, the immigrant experience, and the search for personal identity.
    • It explores the idea that identity is not fixed but a tapestry of experiences and cultures.

This fictional outline borrows from Zadie Smith’s style in exploring deep societal issues through personal narratives, set against the backdrop of a multicultural London, much like her acclaimed works such as “White Teeth” and “Swing Time”.

End of ChatGPT4

Since PG has not read any of Ms. Smith’s books, he’ll leave an assessment of the outline to any who have.

Help Me Get Out of a Speeding Ticket

Prompt

Prepare a written argument that I wasn’t driving recklessly when I was pulled over by a highway patrol car on the 405 freeway in Long Beach at 3:00 AM for driving 8 miles over the speed limit when the freeway was almost completely empty and the highway patrol officer ignored another car that had just passed me traveling at least 20 miles per hour over the speed limit.

ChatGPT4

When preparing a written argument to contest a traffic citation for allegedly driving recklessly on the 405 freeway in Long Beach, it’s crucial to focus on specific legal and factual points. Here is an outline to structure your argument:

  1. Introduction:
    • State the date, time, and location of the incident.
    • Briefly describe the circumstances leading to the traffic stop.
  2. Clarification of Charges:
    • Clearly state the specific violation you were cited for (e.g., driving 8 miles over the speed limit).
  3. Road Conditions and Context:
    • Emphasize that the freeway was nearly empty at the time of the incident.
    • Explain that the conditions allowed for a slightly higher speed without compromising safety.
  4. Comparison with Other Vehicles:
    • Mention that another vehicle passed you traveling significantly faster (approximately 20 miles over the speed limit).
    • Argue that this indicates your speed was not excessively high compared to the flow of traffic.
  5. Definition of Reckless Driving:
    • Cite the legal definition of reckless driving.
    • Argue that driving 8 miles over the speed limit, in this context, does not meet the criteria for reckless driving.
  6. Safety Record and Driving Behavior:
    • If applicable, mention your clean driving record or lack of prior traffic violations.
    • Emphasize your commitment to safe driving and respect for traffic laws.
  7. Request for Consideration:
    • Request leniency or dismissal based on the specific circumstances of the case.
    • If appropriate, express willingness to attend a defensive driving course as an alternative to penalties.
  8. Conclusion:
    • Summarize your key points.
    • State that you believe the evidence supports a finding that you were not driving recklessly.

Remember, the goal is to provide a clear, factual, and concise argument that challenges the notion of reckless driving under the specific conditions of your case. It’s also important to remain respectful and professional in your tone throughout the document.

End of ChatGPT4

PG doesn’t see much genius in the solution that ChatGPT generated. It won’t put many lawyers out of business with this type of performance.

Diagram a Sentence

PG asked ChatGPT4 to do something with the first line from The Fortune Men by Nadifa Mohamed.

Prompt: Diagram the following sentences: “‘The King is dead. Long live the Queen.’ The announcer’s voice crackles from the wireless and winds around the rapt patrons of Berlin’s Milk Bar as sinuously as the fog curls around the mournful street lamps, their wan glow barely illuminating the cobblestones.”

ChatGPT4: To diagram the given sentence, we’ll break it down into its constituent parts, focusing on the main clauses, subordinate clauses, phrases, and key parts of speech. The sentence can be diagrammed as follows:

  1. Main Clause: “The announcer’s voice crackles from the wireless”
    • Subject: “The announcer’s voice”
    • Verb: “crackles”
    • Prepositional Phrase: “from the wireless”
  2. Subordinate Clause: “as sinuously as the fog curls around the mournful street lamps”
    • Conjunction: “as”
    • Adverb describing how the voice crackles: “sinuously”
    • Comparative Conjunction: “as”
    • Subject: “the fog”
    • Verb: “curls”
    • Prepositional Phrase: “around the mournful street lamps”
  3. Additional Phrase: “their wan glow barely illuminating the cobblestones”
    • Possessive Pronoun: “their”
    • Subject: “wan glow”
    • Verb: “illuminating”
    • Direct Object: “the cobblestones”
    • Adverb modifying the verb: “barely”
  4. Independent Sentences: “‘The King is dead. Long live the Queen.'”
    • These are two separate declarative sentences.
  5. Contextual Setting: “winds around the rapt patrons of Berlin’s Milk Bar”
    • Verb: “winds”
    • Prepositional Phrase: “around the rapt patrons”
    • Prepositional Phrase: “of Berlin’s Milk Bar”

In this diagram, each part of the sentence is broken down to show the structure and how different elements are connected. This visualization helps in understanding the grammatical structure and the relationships between different parts of the sentence.

END ChatGPT4

PG will leave the task of assessing how the AI did on this challenge to visitors who still remember what a “Direct Object” is. He doesn’t remember Mrs. Lascelles ever saying anything about “contextual settings.”

PG was expecting the prompt to produce a graphical diagram, like the following example:

PG speculates that the AI doesn’t have the ability to output graphic diagrams. (Yet)
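
For visitors who do want a picture, a parse diagram can be generated locally with open-source tools. The sketch below uses spaCy’s displacy visualizer, which draws a dependency parse rather than a traditional Reed-Kellogg diagram, and it assumes the small English model has been downloaded; treat it as one possible workaround rather than a fix for the chatbot’s limitation.

```python
# Minimal sketch: render a dependency-parse diagram of the sentence locally.
# Setup (assumed): pip install spacy && python -m spacy download en_core_web_sm
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")

sentence = (
    "The announcer's voice crackles from the wireless and winds around the rapt "
    "patrons of Berlin's Milk Bar as sinuously as the fog curls around the mournful "
    "street lamps, their wan glow barely illuminating the cobblestones."
)

doc = nlp(sentence)

# Write the diagram to an HTML file that can be opened in any browser.
with open("diagram.html", "w", encoding="utf-8") as f:
    f.write(displacy.render(doc, style="dep", page=True))
```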

How a Fervent Belief Split Silicon Valley—and Fueled the Blowup at OpenAI

From The Wall Street Journal:

Over the past few years, the social movement known as effective altruism has divided employees and executives at artificial-intelligence companies across Silicon Valley, pitting believers against nonbelievers.

The blowup at OpenAI showed its influence—and the triumphant return of chief executive Sam Altman revealed hard limits, capping a bruising year for the divisive philosophy.

Coming just weeks after effective altruism’s most prominent backer, Sam Bankman-Fried, was convicted of fraud, the OpenAI meltdown delivered another blow to the movement, which believes that carefully crafted artificial-intelligence systems, imbued with the correct human values, will yield a Golden Age—and failure to do so could have apocalyptic consequences.

OpenAI, which released ChatGPT a year ago, was formed in part on the principles of effective altruism, a broad social and moral philosophy that influences the AI research community in Silicon Valley and beyond. Some followers live in private group homes, where they can brainstorm ideas, engage in philosophical debates and relax playing a four-person variant of chess known as Bughouse. The movement includes people devoted to animal rights and climate change, drawing ideas from rationalist philosophers, mathematicians and forecasters of the future.

Supercharged by hundreds of millions of dollars in tech-titan donations, effective altruists believe a headlong rush into artificial intelligence could destroy mankind. They favor safety over speed for AI development. The movement, which includes people who helped shape the generative-AI boom, is insular and multifaceted but shares a belief in doing good in the world—even if that means simply making a lot of money and giving it to worthy recipients.

Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute.

Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University’s Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said. The board said he failed to be “consistently candid.”

The company announced Wednesday that Altman would return as chief executive and Sutskever, McCauley and Toner would be replaced. Emmett Shear, a tech executive favoring a slowdown in AI development and recruited as the interim CEO, was out.

Altman’s dismissal had triggered a company revolt that threatened OpenAI’s future. More than 700 of about 770 employees had called for Altman’s return and threatened to jump ship to Microsoft, OpenAI’s biggest investor. Sutskever said Monday he regretted his vote.

“OpenAI’s board members’ religion of ‘effective altruism’ and its misapplication could have set back the world’s path to the tremendous benefits of artificial intelligence,” venture capitalist and OpenAI investor Vinod Khosla wrote in an opinion piece for The Information.

Altman toured the world this spring warning that AI could cause serious harm. He also called effective altruism an “incredibly flawed movement” that showed “very weird emergent behavior.”

The effective-altruism community has spent vast sums promoting the idea that AI poses an existential risk. But it was the release of ChatGPT that drew broad attention to how quickly AI had advanced, said Scott Aaronson, a computer scientist at the University of Texas, Austin, who works on AI safety at OpenAI. The chatbot’s surprising capabilities worried people who had previously brushed off concerns, he said.

The movement has spread among the armies of tech-industry scientists, investors and executives racing to create AI systems to mimic and eventually surpass human ability. AI can bring global prosperity, but it first must be prevented from wreaking havoc, according to those in the movement.

. . . .

Google and other companies are trying to be the first to roll out AI systems that can match the human brain. They largely regard artificial intelligence as a tool to advance work and economies at great profit.

The movement’s high-profile supporters include Dustin Moskovitz, a co-founder of Facebook, and Jaan Tallinn, the billionaire co-founder of Skype, who have pledged billions of dollars to effective-altruism research. Before his fall, Bankman-Fried had also pledged billions. Elon Musk has called the writings of effective altruism’s co-founder William MacAskill “a close match for my philosophy.”

Marc Andreessen, the co-founder of venture-capital firm Andreessen Horowitz, and Garry Tan, chief executive of the startup incubator Y Combinator, have criticized the movement. Tan called it an insubstantial “virtue signal philosophy” that should be abandoned to “solve real problems that create human abundance.”

Urgent fear among effective-altruists that AI will destroy humanity “clouds their ability to take in critique from outside the culture,” said Shazeda Ahmed, a researcher who led a Princeton University team that studied the movement. “That is never good for any community trying to solve any trenchant problem.”

The turmoil at OpenAI exposes the behind-the-scenes contest in Silicon Valley between people who put their faith in markets and effective altruists who believe ethics, reason, mathematics and finely tuned machines should guide the future.

. . . .

One fall day last year, thousands of paper clips in the shape of OpenAI’s logo arrived at the company’s San Francisco office. No one seemed to know where they were from, but everybody knew what they meant.

The paper clip has become a symbol of doom in the AI community. The idea is that an artificial-intelligence system told to build as many paper clips as possible might destroy all of humanity in its drive to maximize production.

The prank was done by an employee at crosstown rival Anthropic, which itself sprang from divisions over AI safety.

Dario Amodei, OpenAI’s top research scientist, split from the company in early 2021, joined by several company executives. They started Anthropic, an AI research company friendly to effective altruists.

Bankman-Fried had been one of Anthropic’s largest investors and supported the company’s mission, which favored AI safety over growth and profits. 

. . . .

The fear of futuristic AI systems hasn’t stopped even those worried about safety from trying to build artificial general intelligence or AGI—advanced systems that match or outdo the human brain. 

At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Science in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions.

“Our goal is to make a mankind-loving AGI,” said Sutskever, the company’s chief scientist.

“Feel the AGI,” he said. “Repeat after me. Feel the AGI.”

Effective altruists say they can build safer AI systems because they are willing to invest in what they call alignment: making sure employees can control the technology they create and ensure it comports with a set of human values. So far, no AI company has said what those values should be.

At Google, the merging this year of its two artificial intelligence units—DeepMind and Google Brain—triggered a split over how effective-altruism principles are applied, according to current and former employees.

DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units.

Google Brain employees say they have largely ignored effective altruism and instead explore practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter.

One former employee compared the merger with DeepMind to a forced marriage, “making many people squirm at Brain.”

. . . .

Arjun Panickssery, a 21-year-old AI safety researcher, lives with other effective altruists at Andromeda House, a five-bedroom, three-story home a few blocks from the University of California, Berkeley campus.

They host dinners, and visitors are sometimes asked to reveal their P(doom)—estimates of the chances of an AI catastrophe. 

Berkeley, Calif., is an epicenter of effective altruism in the Bay Area, Panickssery said. Some houses designate “no-AI” zones to give people an escape from constant talk about artificial intelligence. 

Open Philanthropy’s then-CEO Holden Karnofsky had once lived with two senior OpenAI executives, according to Open Philanthropy’s website. Since 2015, Open Philanthropy, a nonprofit that supports effective-altruism causes, has given away $327 million to AI-related causes, including $30 million to OpenAI, its website shows.

When Karnofsky was engaged to Daniela Amodei, now Anthropic’s president, they were roommates with Amodei’s brother Dario, now Anthropic’s CEO.

In August 2017, Karnofsky and Daniela Amodei married in an effective-altruism-theme ceremony. Wedding guests were encouraged to donate to causes recommended by Karnofsky’s effective-altruism charity, GiveWell, and to read a 457-page tome by German philosopher Jürgen Habermas beforehand. 

“This is necessary context for understanding our wedding,” the couple wrote on a website for the event.

. . . .

The effective-altruism movement dates back roughly two decades, when a group of Oxford University philosophers and those they identified as “super-hardcore do-gooders” were looking for a marketing term to promote their utilitarian version of philanthropy.

Adherents believe in maximizing the amount of good they do with their time. They can earn as much money as possible, then give much of it away to attack problems that government and traditional nonprofits are ignoring or haven’t solved. They focus on ideas that deliver the biggest impact or help the largest number of people per dollar spent. 

Bankman-Fried, who was convicted this month, said he was building his fortune only to give most of it away.

. . . .

The gatherings and events, held around the world, are often closed to outsiders. Organizers of a recent effective-altruism conference in New York declined the request of a Wall Street Journal reporter to attend, saying in an email that there was “a high bar for admissions.”

Link to the rest at The Wall Street Journal (Sorry if you encounter a paywall)

Intelligence is the ability to learn from your mistakes. Wisdom is the ability to learn from the mistakes of others.

Author Unknown

He’s an intelligent man, but it takes something more than intelligence to act intelligently.

Fyodor Dostoevsky, Crime and Punishment

Hubris is interesting, because you get people who are often very clever, very powerful, have achieved great things, and then something goes wrong – they just don’t know when to stop.

Margaret MacMillan

Hubris and science are incompatible.

Douglas Preston

Hubris calls for nemesis, and in one form or another it’s going to get it, not as a punishment from outside but as the completion of a pattern already started.

Mary Midgley

Hemingway vs. Fitzgerald

Prompt

Compare and contrast the novels of Ernest Hemingway with the novels of F. Scott Fitzgerald, including excerpts from the writings of each.

ChatGPT4

Ernest Hemingway and F. Scott Fitzgerald are two of the most acclaimed American authors of the 20th century, known for their distinct literary styles and thematic explorations. Their works reflect the complexities and changes in American society during the early to mid-20th century, but their approaches and thematic focuses have notable differences.

Ernest Hemingway

  1. Writing Style: Hemingway is famous for his concise, straightforward, and unadorned prose, often referred to as the “Iceberg Theory” or “Theory of Omission”. He believed in saying as little as possible and leaving much to the reader’s interpretation.
  2. Themes: Hemingway’s works often revolve around themes of stoicism, masculinity, and the human struggle against the natural world. His characters typically face physical or emotional trials that test their courage and moral integrity.
  3. Notable Works:
    • “The Old Man and the Sea”
    • “A Farewell to Arms”
    • “For Whom the Bell Tolls”
  4. Excerpt: From “The Old Man and the Sea”: “He was an old man who fished alone in a skiff in the Gulf Stream and he had gone eighty-four days now without taking a fish.”

F. Scott Fitzgerald

  1. Writing Style: Fitzgerald’s style is often more elaborate and richly descriptive compared to Hemingway’s. He is known for his lyrical, flowing prose and his ability to capture the mood of the Jazz Age.
  2. Themes: Fitzgerald’s novels commonly address themes of decadence, idealism, resistance to change, and the disillusionment of the American Dream. His works often depict the high life of the 1920s and the moral emptiness that can accompany wealth and status.
  3. Notable Works:
    • “The Great Gatsby”
    • “Tender is the Night”
    • “This Side of Paradise”
  4. Excerpt: From “The Great Gatsby”: “So we beat on, boats against the current, borne back ceaselessly into the past.”

Comparison

  • Writing Style: Hemingway’s minimalist, understated style contrasts sharply with Fitzgerald’s ornate and poetic approach. While Hemingway focuses on directness and simplicity, Fitzgerald employs a more decorative language to create mood and tone.
  • Themes: Both explore themes related to the human condition, but their focal points differ. Hemingway often delves into the raw aspects of human nature and survival, whereas Fitzgerald tends to focus on societal issues, particularly the moral implications of the American Dream.
  • Contextual Influence: The era they lived in greatly influenced both writers. The aftermath of World War I, the Roaring Twenties, and the Great Depression are reflected distinctly in their works. Hemingway’s experiences as a war correspondent and ambulance driver in World War I influenced his themes of heroism and human endurance, while Fitzgerald’s portrayal of the Jazz Age reflects his fascination with wealth and celebrity.

Conclusion

Ernest Hemingway and F. Scott Fitzgerald offer two contrasting windows into American literature. Hemingway’s terse prose and focus on existential struggles stand in contrast to Fitzgerald’s lyrical exploration of social mores and disillusionment. Both, however, remain quintessential in their portrayal of their respective eras, capturing the spirit and challenges of early 20th-century America.

New ways generative AI can help you find holiday gifts

From Google The Key Word:

Technology has improved so many aspects of the way we shop, like how we research, find inspiration, try out products and look for deals. And now, as we approach the busiest shopping weeks of the year, technology — especially generative AI — is making it much easier to find gifts for others (and ourselves).

Read on for more about our new shopping experiences and opt into Search Labs at goo.gle/sge-gifts.

Get AI-generated gift ideas

Search Generative Experience (SGE), which brings generative AI capabilities into Search, is designed to help you understand topics in faster, easier ways and uncover new insights — which can be especially useful for gift-giving. And today we’re rolling out a new capability within SGE to help holiday shoppers more easily get inspired and find that perfect holiday gift.

Let’s say you’re looking for a present for the budding chef in your life and don’t know where to start. Search “great gifts for home cooks”, and you’ll now see a wide variety of subcategories to explore — like specialty tools, artisanal ingredients, culinary subscriptions and even experiences like cooking classes — with options from brands large and small. Just tap into a category to browse shoppable gift options and select a product to buy.

. . . .

Shop what’s (literally) on your mind

‘Tis the season for giving, but about half of people actually shop for themselves over the holidays. Shoppers, especially apparel shoppers, also often have a very specific vision in mind: In fact, 20% of apparel queries are five words or longer. So we’re introducing a new way to put the item that’s on your mind right into your shopping cart with AI-powered image generation for shopping.

We’re connecting our image generation technology with more than 35 billion listings in our Shopping Graph to help you better visualize and find the style you’re looking for. Soon when you use SGE to look for apparel, you’ll have the option to generate photorealistic images inspired by the words you search — and shop products that match those images.

Let’s say you want a new winter coat — a colorful, patterned puffer jacket — but you haven’t found quite what you’re looking for. With generative AI in Search, just look for that style and tap “Generate images” to see photorealistic images that match your vision. And if you want to make an adjustment, like if you’d prefer a colorful metallic jacket instead of a patterned one, just enter that refinement in the text box to see a new set of images. Once you’ve found one you like, scroll down to see similar shoppable products.

Link to the rest at Google The Key Word

PG wonders what’s going to happen when this capability is integrated with online dating apps.
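
Google’s post doesn’t say how the “shop products that match those images” step works under the hood, but a common pattern for this kind of matching is embedding-based nearest-neighbor search: embed the generated image, embed the catalog images with the same model, and rank by similarity. The sketch below is purely illustrative, with random vectors standing in for real embeddings and a hypothetical catalog; it is not Google’s implementation.

```python
# Illustrative sketch of embedding-based product matching; not Google's system.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_products(query_embedding: np.ndarray,
                   catalog: dict[str, np.ndarray],
                   top_k: int = 3) -> list[tuple[str, float]]:
    """Rank catalog items by similarity to the embedding of a generated image.

    `catalog` maps product names to precomputed image embeddings; in a real
    system both sides would come from the same image-embedding model.
    """
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in catalog.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Random vectors standing in for embeddings of a generated jacket image
# and ten hypothetical product listings.
rng = np.random.default_rng(0)
generated_image = rng.normal(size=128)
catalog = {f"jacket_{i}": rng.normal(size=128) for i in range(10)}

for name, score in match_products(generated_image, catalog):
    print(f"{name}: similarity {score:.3f}")
```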

Raskolnikov Updated

Prompt:

Provide a detailed character sketch of Rodion Raskolnikov updated for 2023

From GPT4:

Updating the character of Rodion Raskolnikov from Fyodor Dostoevsky’s “Crime and Punishment” for a 2023 context involves reimagining his background, motivations, and conflicts to resonate with contemporary themes and issues. Here’s a detailed character sketch for a modern-day Raskolnikov:

Name: Rodion “Rodya” Raskolnikov

Background

  • Age: Late 20s.
  • Education: A brilliant former student of psychology, who dropped out of a prestigious university due to financial constraints and disillusionment with the academic system.
  • Family: Comes from a lower-middle-class family. His mother is a school teacher, and he has a younger sister who is struggling to find her footing in a competitive job market.
  • Location: Lives in a modest apartment in a large, culturally diverse city.

Personality Traits

  • Intellectually Gifted: Possesses a sharp intellect and often engages in philosophical musings about society and morality.
  • Isolated: Tends to isolate himself from others, preferring his own company and thoughts.
  • Altruistic Yet Egotistical: Displays moments of genuine compassion, especially towards his family, but is also deeply egocentric, believing himself to be morally and intellectually superior.
  • Anxious and Paranoid: Exhibits signs of anxiety and paranoia, particularly after committing his crime.

Modern Adaptation

  • Occupation: Works part-time in a mundane job, possibly in data entry or as a delivery driver, which he finds unfulfilling. In his spare time, he writes blogs and essays on social and ethical issues.
  • Financial Struggles: His financial difficulties are a significant source of stress, driving him to question the social structures and disparities in wealth and power.
  • Crime: Raskolnikov’s crime in 2023 could involve hacking into a corrupt corporation’s system or committing a cybercrime against a morally dubious business tycoon, believing his actions are justified to expose or dismantle corrupt power structures.
  • Mental Health: Struggles with his mental health, experiencing guilt and internal conflict post-crime, which leads him into a psychological downward spiral.

Relationships

  • Family: Feels a deep sense of responsibility and guilt over his inability to financially support his mother and sister.
  • Friends: Has few close friends but maintains some online connections with individuals who share his ideological views.
  • Romantic Interest: Could have a complicated relationship with a character akin to Sonya, who in this adaptation might be a social worker or counselor, representing empathy and moral clarity.

Conflicts

  • Internal: Grapples with guilt, moral justification of his actions, and a sense of isolation from society.
  • External: Faces the consequences of his crime in the digital age, dealing with law enforcement and the societal fallout of his actions.

Development

  • Journey: Raskolnikov’s journey involves confronting the reality of his actions, grappling with the moral and ethical implications, and ultimately seeking redemption.
  • Resolution: His resolution could involve turning himself in and accepting the legal consequences of his actions, symbolizing his acceptance of societal laws and moral boundaries.

In this 2023 adaptation, Raskolnikov is a complex character who embodies the struggles of modern society, including financial instability, the ethical dilemmas posed by technology, and the quest for moral grounding in an increasingly complex world. His story reflects contemporary issues of social justice, mental health, and the search for meaning in a digital age.

End of GPT4

As he reviewed the AI’s response, PG had to admit that his detailed recollection of Raskolnikov’s character traits had faded into the mists of time.

He’ll need to rely on the colorfast recollections of visitors to TPV to assess how GPT4 did with this challenge. He understands several movies based on the novel have been released, but hasn’t seen any of those either, so his mind is tabula rasa on the plot.

Gone with the Wind – 2023

Prompt: Prepare a detailed outline of an adaptation of Gone with the Wind set in 2023

From ChatGPT4:

Adapting “Gone with the Wind” to a contemporary setting of 2023 requires thoughtful reimagining while maintaining the essence of the original story. Here’s a detailed outline for such an adaptation:

Title: “Gone with the Wind: 2023”

Setting

  • Location: Atlanta, Georgia, and surrounding areas, with a modern urban and suburban backdrop.
  • Time: Present day, focusing on the social, economic, and political dynamics of 2023.

Characters

  • Scarlett O’Hara: A young, ambitious entrepreneur in the tech industry, known for her beauty, intelligence, and relentless drive.
  • Rhett Butler: A charismatic venture capitalist with a mysterious past and a knack for risk-taking in both business and personal life.
  • Ashley Wilkes: An idealistic environmental activist and Scarlett’s long-time crush, who struggles to balance his ideals with practicality.
  • Melanie Hamilton: A compassionate, understanding social worker, married to Ashley, and a close friend to Scarlett.

Plot Overview

  1. Introduction:
    • Scarlett’s World: Introduce Scarlett’s life in modern Atlanta, her tech startup, and her unrequited love for Ashley.
    • Social Context: Touch on contemporary issues such as climate change, economic inequality, and technological advancements.
  2. Rising Tensions:
    • Business and Romance: Scarlett’s company faces financial difficulties. She meets Rhett at a fundraising event, sparking a complicated relationship.
    • Ashley’s Dilemma: Ashley is torn between his environmental activism and the pressures to conform to a more traditional career path.
  3. Climax:
    • Business Crisis: Scarlett’s company is on the brink of collapse. She turns to Rhett for help, leading to a controversial business deal.
    • Romantic Conflicts: Scarlett’s feelings for Ashley resurface, causing tension with Rhett and Melanie.
  4. Falling Action:
    • Personal Growth: Scarlett starts to realize the importance of balancing ambition with ethics and compassion.
    • Relationship Strains: Rhett feels neglected by Scarlett’s obsession with Ashley and her career.
  5. Conclusion:
    • Resolution: Scarlett makes a significant decision about her professional and personal life, choosing her own path over her longstanding pursuit of Ashley.
    • Rhett’s Departure: Rhett decides to leave Scarlett, leading to her famous realization of self-reliance and determination to rebuild her life.

Themes

  • Modern Feminism: Exploration of a woman’s role in a high-powered, male-dominated business world.
  • Technology vs. Environment: The clash between technological advancement and environmental conservation.
  • Personal vs. Professional Life: The struggle to maintain personal relationships while pursuing ambitious career goals.
  • Adaptation and Resilience: How characters adapt to the rapidly changing world around them.

Visual Style

  • Cinematography: A mix of sleek, modern cityscapes and lush, natural landscapes to contrast the worlds of technology and environmentalism.
  • Costume Design: Contemporary fashion, reflecting the status and personalities of characters in a modern setting.

Soundtrack

  • A blend of modern and classical music, capturing the emotional depth and complexities of the story in a contemporary context.

This adaptation maintains the core narrative of “Gone with the Wind” while updating its setting and themes to reflect modern societal and cultural issues.

End of ChatGPT4

PG will allow visitors to determine if the outline shows a useful amount of creativity or not.

If he had tasked himself to produce a similar document, PG admits he would not have considered the Soundtrack.

Sarah Silverman Hits Stumbling Block in AI Copyright Infringement Lawsuit Against Meta

From The Hollywood Reporter:

A federal judge has dismissed most of Sarah Silverman‘s lawsuit against Meta over the unauthorized use of authors’ copyrighted books to train its generative artificial intelligence model, marking the second ruling from a court siding with AI firms on novel intellectual property questions presented in the legal battle.

U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors’ core theories that Meta’s AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. “This is nonsensical,” he wrote in the order. “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.”

Another of Silverman’s arguments that every result produced by Meta’s AI tools constitutes copyright infringement was dismissed because she didn’t offer evidence that any of the outputs “could be understood as recasting, transforming, or adapting the plaintiffs’ books.” Chhabria gave her lawyers a chance to replead the claim, along with five others that weren’t allowed to advance.

Notably, Meta didn’t move to dismiss the allegation that the copying of books for purposes of training its AI model rises to the level of copyright infringement.

The ruling builds upon findings from another federal judge overseeing a lawsuit from artists suing AI art generators over the use of billions of images downloaded from the Internet as training data. In that case, U.S. District Judge William Orrick similarly delivered a blow to fundamental contentions in the lawsuit by questioning whether artists can substantiate copyright infringement in the absence of identical material created by the AI tools. He called the allegations “defective in numerous respects.”

Some of the issues presented in the litigation could decide whether creators are compensated for the use of their material to train human-mimicking chatbots that have the potential to undercut their labor. AI companies maintain that they don’t have to secure licenses because they’re protected by the fair use defense to copyright infringement.

According to the complaint filed in July, Meta’s AI model “copies each piece of text in the training dataset” and then “progressively adjusts its output to more closely resemble” expression extracted from the training dataset. The lawsuit revolved around the claim that the entire purpose of LLaMA is to imitate copyrighted expression and that the entire model should be considered an infringing derivative work.

But Chhabria called the argument “not viable” in the absence of allegations or evidence suggesting that LLaMA, short for Large Language Model Meta AI, has been “recast, transformed, or adapted” based on a preexisting, copyrighted work.

Another of Silverman’s main theories — along with other creators suing AI firms — was that every output produced by AI models is an infringing derivative, with the companies benefiting from every answer initiated by third-party users allegedly constituting an act of vicarious infringement. The judge concluded that her lawyers, who also represent the artists suing StabilityAI, DeviantArt and Midjourney, are “wrong to say that” — because their books were duplicated in full as part of the LLaMA training process — evidence of substantially similar outputs isn’t necessary.

Link to the rest at The Hollywood Reporter

Stability AI exec leaves amid concerns over ‘fair use’ disagreement with $101m startup

From Yahoo Finance – UK:

A top exec at a British artificial intelligence (AI) startup has resigned because he is opposed to the company’s method of using copyrighted work to train its model without payment or permission from the owner.

Stability AI head of audio, Ed Newton-Rex said in a lengthy post on X he is stepping down because he disagrees “with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’”.

The London-headquartered startup uses generative AI to create text, images, music and video from descriptive prompts typed in by a human.

However, it has sparked controversy because it sweeps data indiscriminately to train its open-source model.

Newton-Rex, who says he is also a music composer, continued: “For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment.

“This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.”

He is not the only one to disagree. Getty Images has accused Stable Diffusion, a generative AI model developed by Stability AI, of unlawfully scraping over 12m images owned by Getty to train its models.

The media company has asked Delaware’s federal court to order Stability to stop using its pictures and wants what could amount to $1.8 trillion in damages, according to Fortune.

Stability AI is the brainchild of Oxford graduate Emad Mostaque and has raised $101m (£82.3m) in a funding round led by Coatue, Lightspeed Venture Partners and O’Shaughnessy Asset Management.

In reply to Newton-Rex on X, Mostaque wrote: “Was great working with you & this is an important discussion. The considered input we gave to the Copyright Office on why we believe fair use supports creative development is here.”

Link to the rest at Yahoo Finance – UK

Google Arts & Culture

PG discovered the Google Arts & Culture site earlier this morning. It’s quite an extensive site and a definite threat to suck away a lot of time, but it is nicely done.

One of the site’s interesting features is listed as experimental. It’s called Poem Postcards. It lets you select a postcard based on a classic painting, have an AI write a poem about it, and then email the painting and poem to a friend.

Here’s a link to a poem postcard PG just created.

Grammarly

PG watched a Grammarly for Business Webinar earlier today and was very impressed by what the company showed with respect to AI writing.

Per the demos, Grammarly has taken AI to a higher plane than PG has seen before. Features that impressed him were:

  • Analysis of the user’s writing voice to help Grammarly be more helpful in shaping future creations for a variety of purposes.
  • The ability to have Grammarly go through a document you or someone else has written, summarize it, and use it as a sort of electronic style guide in the future.
  • The ability to assist in quick creation of documents for a wide variety of purposes, blog posts, text messages, emails, etc. In at least some uses, Grammarly will create the appropriate format for the written message, for example by creating and inserting bullet points from a longer text.

For the moment, it looks like the super AI will be limited to Grammarly for Business, which is an enterprise-scale program.

PG looked for an online link to a recorded version of the webinar but was unable to find it. If visitors to The Passive Voice can locate one, feel free to include a link in the comments to this post.

A Google AI model developed a skill it wasn’t expected to have

From Yahoo Finance:

Concerns about AI developing skills independently of its programmers’ wishes have long absorbed scientists, ethicists, and science fiction writers. A recent interview with Google’s executives may be adding to those worries.

In an interview on CBS’s 60 Minutes on April 16, James Manyika, Google’s SVP for technology and society, discussed how one of the company’s AI systems taught itself Bengali, even though it wasn’t trained to know the language. “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” he said.

Google chief executive Sundar Pichai confirmed that there are still elements of how AI systems learn and behave that surprise experts: “There is an aspect of this which we call— all of us in the field call it as a ‘black box’. You don’t fully understand. And you can’t quite tell why it said this.” The CEO said the company has “some ideas” why this could be the case, but it needs more research to fully comprehend how it works.

CBS’s Scott Pelley then questioned the reasoning for opening to the public a system that its own developers don’t fully understand, but Pichai responded: “I don’t think we fully understand how a human mind works either.”

AI’s development has also come with glaring flaws that lead to fake news, deepfakes, and weaponization, with systems sometimes presenting false information with complete confidence, in what the industry calls “hallucinations.”

Asked if Google’s Bard is getting a lot of “hallucinations,” Pichai responded: “Yes, you know, which is expected. No one in the, in the field has yet solved the hallucination problems. All models do have this as an issue.” The cure, Pichai said, is around developing “more robust safety layers before we build, before we deploy more capable models.”

Link to the rest at Yahoo Finance

AI Prompt and Response of the Day

PG submitted a question to ChatGPT 3.5 today:

What sort of problems can be expected if Microsoft’s company computer systems are hacked by a sophisticated group of people who wish to harm Microsoft?

Here’s the AI’s Response:

If Microsoft’s company computer systems are hacked by a sophisticated group with malicious intent, several serious problems can be expected, including:

  1. Data Breach: The hackers may gain unauthorized access to sensitive information, compromising customer data, intellectual property, and business-critical data.
  2. Service Disruption: The attack could disrupt Microsoft’s services and products, leading to downtime, affecting user experience, and potentially causing financial losses.
  3. Reputation Damage: A successful hack can damage Microsoft’s reputation, eroding trust among customers, partners, and investors. This can have long-term consequences for the company’s brand.
  4. Financial Losses: Beyond immediate financial losses due to service disruption, the company may face legal and regulatory consequences, fines, and costs associated with addressing the breach and implementing enhanced cybersecurity measures.
  5. Intellectual Property Theft: Hackers may target Microsoft’s proprietary software and technologies, leading to the theft of valuable intellectual property that could be exploited or sold to competitors.
  6. Supply Chain Risks: If the attack extends to Microsoft’s supply chain, it could affect the security of products and services provided to customers, potentially leading to broader industry impacts.
  7. Legal and Regulatory Consequences: Microsoft may face legal actions and regulatory scrutiny for failing to protect user data and for any potential violations of data protection and privacy laws.
  8. Ongoing Threats: Once a sophisticated group gains access, they may establish persistent threats, making it challenging for Microsoft to completely eradicate the threat and requiring ongoing cybersecurity efforts.

Microsoft, like any major technology company, invests heavily in cybersecurity measures to prevent and respond to such threats. However, the landscape is constantly evolving, and the potential impact of a sophisticated cyberattack remains a significant concern for large corporations.

Did a computer write this? Book industry grapples with AI

From Today Online:

From low-quality computer-written books flooding the market to potential copyright violations, publishing is the latest industry to feel the threat from rapid developments in artificial intelligence (AI).

Since the launch last year of ChatGPT, an easy-to-use AI chatbot that can deliver an essay upon request within seconds, there have been growing worries about the impact of generative AI on a range of sectors.

Among book industry players there is “a deep sense of insecurity”, said Juergen Boos, director of the Frankfurt Book Fair, the world’s biggest, where the topic was in focus last week.

They are asking, “What happens to authors’ intellectual property? Who does new content actually belong to? How do we bring this into value chains?” he said.

The threat is plain to see — AI writing programmes allow budding authors to produce in a matter of days novels that could in the past have taken months or years to write.

A flood of titles that list ChatGPT as a co-author has been offered for sale through Amazon’s e-book self-publishing unit.

Still, critics say the works are of low quality and sense little threat from AI for now.

British author Salman Rushdie told a press conference at the fair that recently someone asked an AI writing tool to produce 300 words in his style.

“And what came out was pure garbage,” said the Midnight’s Children writer, to laughter from the audience.

“Anybody who has ever read 300 words of mine would immediately recognise that it could not possibly be by me.” “So far I’m not that alarmed,” he added, during a rare public appearance since a near-fatal stabbing attack last year in the United States.

Jennifer Becker, a German author and academic, echoed his sentiments, telling a panel discussion that the results when it comes to AI writing fiction “are still not that great”. 

“There is a lot of potential to use it — to use it collaboratively. 

“But I still don’t see the point where we really hand over the writing work to AI completely autonomously. That wouldn’t make for an interesting book.”

. . . .

Industry players stress however that in some areas there is more openness to dealing with artificial intelligence.

“It depends a bit on the genre,” said Susanne Barwick, deputy legal adviser of the German Publishers and Booksellers Association, who has been in discussion about AI with publishers. 

“The field of science and specialist books is already further along and has already dealt with it more.”

These areas were “easier than the field of fiction, where I think at the moment people still tend to look a bit more at the risks”, she added. 

Artificial intelligence’s relationship with publishing threatens to throw up a host of legal problems, with one major “grey area” being who owns the copyright of AI-generated content, said fair director Boos. 

“Then you get into a real mess, and it is a huge theme. There is also really a lot of money involved,” he said.

. . . .

Last month, Game of Thrones author George RR Martin, John Grisham and Jodi Picoult were among several writers who filed a class-action lawsuit against ChatGPT creator OpenAI over alleged violation of copyrights.

Along with the Authors Guild, an organisation representing writers, they accused the California-based company of using their books “without permission” to train ChatGPT’s “large language models”, algorithms capable of producing human-sounding text responses based on simple queries, according to the lawsuit.

Translation is another thorny area, with some industry players feeling artificial intelligence would miss the nuances and subtleties needed to render complex literature into other languages.

Link to the rest at Today Online

The best AI tools to make you a better writer

From Fast Company:

You’ll soon see AI in most writing tools. Canva, Notion, Craft, Coda, and other popular writing services have been racing to add new AI features. Google Docs added a new AI-driven summarization feature. Read on for what AI can do for you, where the hype goes too far, and a few recommended tools to try.

. . . .

AI TOOLS TO TRY

  • Lex. I love its simplicity. It’s designed specifically for individual writers, not marketers, technologists or corporate teams. It doesn’t make braggadocious claims, just helps writers in subtle ways.

    Lex draws on the same OpenAI engine as Canva Docs but lacks its visual features. It’s useful if you write often but don’t need the 150 menu options available in Microsoft Word. For now, Lex is free, with a waitlist for access.

    Still in private beta, it already has great features, including:
    • A title generator to suggest headlines for whatever you’re writing.
    • A question answerer to respond concisely to factual queries.
    • A paragraph writer that assesses what you’ve written and suggests a relevant next paragraph or bullet points that build on your argument. It can also help remind you of blind spots in your writing by listing topics, facts or issues you may have accidentally ignored.
    • Writing stats to track productivity.
  • ChatGPT isn’t a writing service per se, but it’s a helpful way to challenge your own thinking by seeking out AI explanations of complex issues. More than a million people have already signed up to chat with this remarkable chatbot that answers questions with an eerily human touch.
  • Craft.do has the best-designed implementation of any of the AI writing tools I’ve seen so far. You hit / and it gives you some options for adding directly into whatever doc you’re working on. It works on the Mac and Windows apps, on the Web and on iOS.

    You can use the AI feature to summarize or explain something; create an outline; write a pros and cons list; generate keywords or hashtags; suggest a title; add to something you’ve written; or translate your text into English, Spanish, French, Italian, German, Dutch, Portuguese, Korean or Japanese. It can stray beyond those features. I asked it to generate a limerick about AI, which you’ll see in this AI-generated document it spun out of my testing.
  • Canva Docs calls its AI feature Magic Write, which I mentioned in my piece last week. It joins another Canva AI tool that lets you generate images by typing in a text prompt, and a magic resizing tool that will adjust your image to fit whatever dimensions you need. Those first two AI features are free.
  • Copy.ai is useful for generating marketing materials.

    To test it I typed in some keywords about the Wonder Tools newsletter. It generated surprisingly decent marketing copy employing the Pain/Problem-Agitate-Solution framework.

    It didn’t, of course, generate the time or expertise required to figure out a marketing strategy. For non-marketers, though, AI-generated text can provide a helpful starting point.

    You can also use it to generate YouTube titles and descriptions, Instagram captions, TikTok video ideas, Facebook ad text, and various other formats.

Link to the rest at Fast Company

Artists Suing Stability AI Forced to Go Back to The Drawing Board on Copyright Claims

From MSN:

A group of artists suing generative AI companies for allegedly using their copyrighted works are down, but not out, following a recent federal judge’s order. On Monday, the judge presiding over a case brought by three visual artists dismissed the majority of the claims levied against Stability AI, Midjourney, and art social network DeviantArt after determining the artists’ accusations were “defective in numerous respects.”

All of the allegations against Midjourney and DeviantArt were dismissed, though the artists and their legal teams will have a chance to amend their complaint to state their argument more clearly. The core question of whether or not training generative AI models on artists’ work amounts to a copyright infringement, however, remains totally unresolved.

The case stems from a January lawsuit filed by artists Sarah Andersen, Kelly McKernan, and Karla Ortiz, who accused the tech companies of downloading billions of copyrighted images to train models without the artists’ consent and without compensating them. The artists claim the supposedly “new” creations generated by Stability AI’s Stable Diffusion generator are derivative of their own works, which were allegedly swept up into a dataset used to train the models. Generative AI image generators, the artists argued in their lawsuits, aren’t creating completely original art, but are instead “merely a complex collage tool.” The artists sought a permanent injunction from the court barring Stability AI and the other defendants from using artwork without artists’ permission.

U.S. District Judge William Orrick seemed unconvinced about whether the actual images generated by the AI models amount to a copyright infringement. In their complaint, the artists particularly took issue with AI images generated via prompts asking the models to create images “in the style of” a known professional. The images the AI models spit out, they argue, end up competing in the marketplace against the original work of the human artist they were based on. But many of the works generated by these models, even if they are trained on an artist’s original work, may not look similar enough to the original artist’s work to run afoul of copyright protection. In other words, those “inspired by” images generated by AI models likely do not violate the artist’s copyright.

“I am not convinced that copyright claims based on a derivative theory can survive absent ‘substantial similarity’ type allegations,” Orrick wrote in the order. “The cases plaintiffs rely on appear to recognize that the alleged infringer’s derivative work must still bear some similarity to the original work or contain the protected elements of the original work.”

The judge also expressed skepticism towards the artists’ theory that the billions of supposedly scraped works were “compressed” into Stable Diffusion’s program. Stability AI has previously denied accusations that training its AI model requires complete copies of copyrighted works. Instead, Stability claims it trains its models using complex parameters that are associated with certain subjects.

“Plaintiffs will be required to amend to clarify their theory with respect to compressed copies of Training Images and to state facts in support of how Stable Diffusion—a program that is open source, at least in part—operates with respect to the Training Images,” Orrick’s order stated.

Stability AI did not immediately respond to Gizmodo’s request for comment. Matthew Butterick and Joseph Saveri, two of the attorneys representing the artists, told Gizmodo they believed the order issued by Judge Orrick sustained their clients’ “core claim” regarding the alleged direct copyright infringement by Stability AI. That core claim, they said, was now on a path to trial.

Link to the rest at MSN and thanks to F. for the tip

PG suggests that the comments from the artists’ counsel in the last paragraph may be hopeful happy talk.

Much Ado About AI: Why I Built a Tool to Modernize Shakespeare’s Verse

From School Library Journal:

There’s a good argument that Shakespeare is the world’s most popular author. About 90 percent of American schools assign Shakespeare to students. His work has been translated into more than 100 languages. Declare “To be or not to be,” and most will answer, “That is the question.” The Bard’s work is widely integrated across culture, education, and the modern English language. Despite this, people find Shakespeare hard. Some might even say too hard.

In a recent survey of 500 teachers, 56 percent said their students found Shakespeare difficult to read. Of these teachers, 60 percent said the Elizabethan language was the biggest obstacle for students reading the plays. The themes of love, betrayal, and ambition are timeless—but maybe Elizabethan English isn’t. For many first-time readers, Shakespeare’s plays are full of unfamiliar words, phrasing, and grammatical constructions.

This reported difficulty with the language shouldn’t be viewed as a problem with Shakespeare. Elizabethan English didn’t suddenly become dated in 2023. It’s been unfamiliar and antiquated to readers for many decades. But increasingly, the language is a barrier to new readers starting a love affair with the material.

Here, in my view, artificial intelligence (AI) offers a unique benefit: facilitating the reading experience of Shakespeare’s works. Large language models (LLMs: the AI systems that power popular products like ChatGPT) have exciting potential to help people read older texts with relative ease.

If you provide AI models with text, they can instantaneously synthesize, explain, and contextualize it. They offer definitions of words, historical context, and other details that might escape a modern reader. If you’re reading War and Peace and have a foggy sense of Russian history, running a passage through an AI model quickly provides bullet points on Napoleon’s invasion of Russia as well as definitions of period-specific terms.

AI can also accurately paraphrase Elizabethan language into modern English so readers can understand any line of Shakespeare. This strategy isn’t intended as a substitute for reading the original text, but as a “reading copilot” on hand to help.

Bard-themed AI tools are gaining popularity. One I created, Shakespeare Translator, has been used by over 50,000 readers. These AI models aren’t deterministic systems with pre-written translations mapped to certain lines. Rather, the tools use LLMs to analyze the context and language patterns, providing modern interpretations.
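
To make the idea concrete, here is a minimal sketch of how an LLM-backed translator of this kind might be wired up. This is not the Shakespeare Translator’s actual code, which has not been published; the model name, prompt wording, and function name are illustrative assumptions, and it presumes the OpenAI Python client (v1+) is installed with an API key configured.

    # Minimal sketch of an LLM-backed "Shakespeare translator" (illustrative only).
    # Assumptions: openai>=1.0 is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def modernize(passage: str) -> str:
        """Paraphrase a passage of Elizabethan English into plain modern English."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You paraphrase Elizabethan English into clear, modern "
                            "English while preserving the meaning of every line."},
                {"role": "user", "content": passage},
            ],
            temperature=0.3,  # keep the paraphrase close to the source text
        )
        return response.choices[0].message.content

    print(modernize("To be, or not to be, that is the question."))

Because the model interprets context rather than looking up a canned translation, the same line can come back phrased a little differently on each run — the non-deterministic behavior described above.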

Many are quick to critique AI-powered reading tools. The arguments essentially center on the idea that using AI waters down the joy and rewards of reading. But using AI isn’t about replacing reading. It’s about helping more people appreciate difficult material, more readily, and with fewer barriers to entry.

. . . .

How useful are these translations? Are they accurate? Do they actually help students grasp the meaning of lines? Judge for yourself.

Here’s the original text of the famous Hamlet speech
“To be, or not to be, that is the question: Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles and by opposing end them. To die; to sleep; No more; and by a sleep to say we end the heart-ache and the thousand natural shocks that flesh is heir to.”

Run through a Shakespeare translation tool, this is paraphrased to
“The big question is whether it’s better to keep on living or to end it all. Is it nobler to endure the hardships and misfortunes that life throws at us, or to fight against all the problems that come our way and put an end to them? Death would mean eternal rest and therefore an escape from the pain and suffering that life brings us.”

Or consider this from Richard III
“Now is the winter of our discontent made glorious summer by this sun of York; And all the clouds that lour’d upon our house in the deep bosom of the ocean buried.”

Paraphrase
“Our unhappiness has now turned into joy due to the presence of the ruler from York; and all of our worries and troubles are now in the past.”

Link to the rest at School Library Journal

Here’s a link to Shakespeare Translator

PG’s Borrowed Thoughts:

The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30 second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.

Carl Sagan

Court Offers First Glimpse Into Whether AI Machine Learning Is Copyright Infringement Or Fair Use

From Mondaq:

As we previously blogged, multiple generative AI platforms are facing lawsuits alleging that the unauthorized use of copyright-protected material to train artificial intelligence constitutes copyright infringement.  A key defense in those cases is fair use.  Specifically, AI platforms contend that they don’t need a license to use copyright-protected content—whether scraped from the Internet or obtained from a pirate trove of books—for the purpose of developing and improving large language models (LLMs) under the theory that such use is transformative and fair use under the Copyright Act.  Whether fair use prevails in this battle is one of the biggest copyright questions of the day.

While many of the generative AI actions are pending in the U.S. District Court for the Northern District of California, a federal court in Delaware recently had the opportunity to opine on the merits of this important fair use question.  In Thomson Reuters v. Ross Intelligence, 2023 WL 6210901 (D. Del. Sept. 25, 2023), the owner of Westlaw (Thomson Reuters) claims, among other things, that an AI startup (Ross Intelligence) infringed Thomson Reuters’ copyright by using Westlaw’s headnotes to train Ross’s legal AI model.  The parties cross moved for summary judgment on various grounds, including on Ross’s fair use defense.  

Though the decision explores multiple interesting questions of copyright law, including the copyrightability of Westlaw headnotes (maybe) and whether the Copyright Act preempts Thomson Reuters’ claim for tortious interference (yes), its analysis of Ross’s fair use defense—in particular, the court’s assessment of whether Ross’s alleged use of Westlaw’s headnotes (assuming they are protected by copyright) is “transformative”—is where the court appears to have broken new ground.

The court begins its fair use analysis by discussing two cases from the Ninth Circuit that deal with so-called “intermediate copying.”  In Sega Enterprises v. Accolade, 977 F.2d 1510 (9th Cir. 1992), the court held that it was fair use for a company to copy Sega’s copyright-protected console code for the purpose of learning the software’s functional components and making new games that were compatible with Sega’s console.  Similarly, in Sony Computer Entertainment v. Connectix, 203 F.3d 596 (9th Cir. 2000), the Ninth Circuit held it was fair use for a company to create a copy of Sony’s software in order to create a new gaming platform that was compatible with Sony’s games.  The Thomson Reuters court noted that the Supreme Court “has cited these intermediate copying cases favorably, particularly in the context of ‘adapting the doctrine of fair use in light of rapid technological change.’”  2023 WL 6210901, at *8 (quoting Google v. Oracle, 141 S. Ct. 1183, 1198 (2021)) (cleaned up).

Thomson Reuters attempted to distinguish the intermediate-copying cases by arguing that, unlike the companies in Sega and Sony that merely sought to “study functionality or create compatibility,” Ross sought to train its AI with Westlaw’s “creative decisions” specifically to “replicate them” in the AI’s output.  Ross, on the other hand, contended that “its AI studied the headnotes and opinion quotes only to analyze language patterns, not to replicate Westlaw’s expression,” and thus was lawful “intermediate copying.”  The court held that whether Ross’s use was transformative would turn on the “precise nature of Ross’s actions.”  

Here’s the key text:

It was transformative intermediate copying if Ross’s AI only studied the language patterns in the headnotes to learn how to produce judicial opinion quotes.  But if Thomson Reuters is right that Ross used the untransformed text of headnotes to get its AI to replicate and reproduce the creative drafting done by Westlaw’s attorney-editors, then Ross’s comparisons to cases like Sega and Sony are not apt.

. . . .

To the extent that LLMs are ingesting copyright-protected material solely to understand language patterns and not to replicate their creative expression (which may very well be the case for many LLMs), this opinion suggests that using such material to train AI is transformative.  But if the material is being used to train AI to output the “creative drafting” discerned from the original, then the use is likely not transformative.  Thus, as the Thomson Reuters court observes, the fair use question in these cases may turn on the exact nature of the AI training process.

Link to the rest at Mondaq

PG apologizes if the rest of this post is boring for anyone who isn’t a law geek, but the following may help clarify PG’s interest.

The OP intrigued PG because he got into a bit of trouble a long time ago when he suggested, in an article he wrote for The Journal of the American Bar Association, that West Publishing didn’t have a legitimate copyright to the books it published that consisted of the opinions of a large number of courts across the country.

West was a venerable professional publisher, founded in 1872 to print law books for the use of attorneys and judges.

West evolved to publish the statutes for the United States government and every state.

West also published the court opinions written by judges in the federal court system and all states.

Because the statutes and case opinions are public documents, anyone who desires to publish them is free to do so.

West contended that the improvements it made in these public documents it published were protected by copyright laws.

West built up a large business based upon the changes it made to improve the quality of the federal and state court opinions. These included:

  1. West employees proofread the opinion and corrected grammatical errors.
  2. West employees checked all of the statutory and case citations included in the opinion and corrected them to reflect generally used conventions of legal citations. (Judges, like any other human beings, sometimes make mistakes when they write their opinions. The conventions used in creating such citations can make correctly creating the citations to statutes and cases an error-prone activity.)
  3. For example, “Stearns v. Ticketmaster Corp., 655 F.3d 1013 (9th Cir. 2011),” is West’s citation for the court opinion in the case of Stephen Stearns v. Ticketmaster Corp., et al. (“et al.” is an abbreviation of the Latin term “et alia,” which means “and others”) that was published in volume 655 of the Federal Reporter, Third Series (identified by the abbreviation “F.3d”), beginning on page 1013. The citation also shows the decision was issued by the United States Court of Appeals for the Ninth Circuit (abbreviated as 9th Cir.) in 2011. (A short sketch of how this citation format can be parsed follows this list.)
  4. It was and is considered bad form for an attorney to cite a case other than in the form prescribed by “Blue Book Citations” in legal documents submitted to a court. West citations were the basis for Blue Book Citations. As mentioned earlier, most judges were happy to have West correct their citation errors. That service helped a judge avoid snide remarks from other judges in the judicial cafeteria.
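
As an aside for readers who like to see the mechanics, the reporter-citation format described in item 3 above is regular enough to pull apart programmatically. The sketch below is purely illustrative — real citation tooling handles hundreds of reporters and many edge cases this toy regex ignores.

    # Illustrative sketch: parse a reporter citation such as
    # "655 F.3d 1013 (9th Cir. 2011)" into volume, reporter, first page, court, and year.
    import re

    CITATION_RE = re.compile(
        r"(?P<volume>\d+)\s+"
        r"(?P<reporter>[A-Za-z0-9.]+(?:\s[A-Za-z0-9.]+)*?)\s+"
        r"(?P<page>\d+)\s+"
        r"\((?P<court>[^)]*?)\s*(?P<year>\d{4})\)"
    )

    def parse_citation(cite: str) -> dict:
        """Pull the structural parts out of a single reporter citation."""
        match = CITATION_RE.search(cite)
        if not match:
            raise ValueError(f"not a recognizable reporter citation: {cite!r}")
        return match.groupdict()

    print(parse_citation("Stearns v. Ticketmaster Corp., 655 F.3d 1013 (9th Cir. 2011)"))
    # {'volume': '655', 'reporter': 'F.3d', 'page': '1013', 'court': '9th Cir.', 'year': '2011'}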

West also categorized cases according to a West-created “Key Number System.” This is a classification system that organizes cases by topic, allowing legal researchers to quickly find cases related to a particular issue. This system was created in the 19th century, starting with seven categories: persons, property, contracts, torts, crime, remedies, and government.

The Key Number System could be quite helpful before the digitization of cases and statutes.

In 1967, the Ohio State Bar Association entered into a $7,000 agreement with Data Corporation of Beavercreek, Ohio, to create a full-text, interactive research service of the Ohio statutes.

In 1973, Mead Data Central, the successor of Data Corporation, introduced LEXIS, an online computer research service that consisted of the full text of Ohio and New York codes and cases, the U.S. code, and some federal case law. The LEXIS search engine was clunky by today’s standards, but it allowed attorneys to search the statutes and case opinions much faster and at a more granular level than could be done with West’s printed books.

West and LEXIS (Mead Data Central)

Why I let an AI chatbot train on my book

From Vox:

Four years ago, I published my first book: End Times: A Brief Guide to the End of the World.

It did … okay? I earned a Q&A with the site you’re reading now — thanks, Dylan! — and the book eventually helped get me the job of running Future Perfect. I had one day where I went from radio hit to radio hit, trying to explain in five-minute segments to morning DJs from Philadelphia to Phoenix why we should all be more worried about the threat of human extinction and what we could do to prevent it.

But a bestseller it was not. Let’s put it this way — about every six months, I receive a letter from my publisher containing a “non-paying royalty statement,” which is sort of like getting a Christmas card from your parents, only instead of money, it just contains a note telling you how much they’ve spent raising you.

So I’ll admit that I was a bit chuffed when I received an email a couple of months ago from people at aisafety.info, who are aiming to create a centralized hub for explaining questions about AI safety and AI alignment — how to make AI accountable to human goals — to a general audience. To that end, they were building a large language model — with the delightful name “Stampy” — that could act as a chatbot, answering questions people might have about the subject. (The website was just soft launched, while Stampy is still in the prototype stage.) And they were asking permission to use my book End Times, which contains a long chapter on existential risks from AI, as part of the data Stampy would be trained on.

My first thought, like any author’s: Someone has actually read (or at least is aware of the existence of) my book! But then I had a second thought: As a writer, what does it mean to allow a chatbot to be trained on your own work? (And for free, no less.) Was I contributing to a project that could help people better understand a complex and important subject like AI safety? Or was I just speeding along the process of my own obsolescence?

Training days

These are live questions right now, with large language models like ChatGPT becoming more widespread and more capable. As my colleague Sara Morrison reported this summer, a number of class action lawsuits have already been filed against big tech firms like Google and OpenAI on behalf of writers and artists who claim that their work, including entire books, had been used to train chatbots without their permission and without remuneration. In August, a group of prominent novelists — including Game of Thrones author George R.R. Martin, who really has some other deadlines he should attend to — filed suit against ChatGPT maker OpenAI for “systematic theft on a massive scale.”

Such concerns aren’t entirely new — tech companies have long come under fire for harnessing people’s data to improve and perfect their products, often in ways that are far from transparent for the average user. But AI feels different, as attorney Ryan Clarkson, whose law firm is behind some of the class action lawsuits, told Sara. “Up until this point, tech companies have not done what they’re doing now with generative AI, which is to take everyone’s information and feed it into a product that can then contribute to people’s professional obsolescence and totally decimate their privacy in ways previously unimaginable.”

I should note here that what aisafety.info is doing is fundamentally different from the work of companies like Meta or Microsoft. For one thing, they asked me, the author, for permission before using my work. Which was very polite!

Beyond that, aisafety.info is a nonprofit research group, meaning that no one will be making money off the training data provided by my work. (A fact which, I suspect, will not surprise my publisher.) Stampy the chatbot will be an educational tool, and as someone who runs a section at Vox that cares deeply about the risk of powerful AI, I’m largely glad that my work can play some small role in making that bot smarter.

And we desperately need more reliable sources of information about AI risk. “I think the general understanding of AI alignment and safety is very poor,” Robert Miles of aisafety.info told me. “I would say that people care a lot more than they used to, but they don’t know a lot more.”

Chatbots, trained on the right source materials, can be excellent educational tools. An AI tutor can scale itself to the educational level of its student and can be kept up to date with the latest information about the subject. Plus, there’s the pleasant irony of using some of the latest breakthroughs in language models to create an educational tool designed to help people understand the potential danger of the very technology they’re using.

What’s “fair use” for AI?

I think that training a chatbot for nonprofit, educational purposes, with the express permission of the authors of the works on which it’s trained, seems okay. But do novelists like George R.R. Martin or John Grisham have a case against for-profit companies that take their work without that express permission?

The law, unfortunately, is far from clear on this question. As Harvard Law professor and First Amendment expert Rebecca Tushnet explained in an interview published in the Harvard Gazette, digital companies have generally been able to employ concepts of fair use to defend harvesting existing intellectual property. “The internet as we know it today, with Google and image search and Google Books, wouldn’t exist if it weren’t fair use to use these words for an output that was not copying” the original, she said.

One way to consider this is to think about how humans, like myself, write books. When I was researching and writing End Times, I was drawing upon and synthesizing the existing work of hundreds of different authors. Sometimes I would quote them directly, though there are specific rules about how much of an individual work another author can directly quote from under fair use. (The rough rule is 300 words when quoting from a published book, or around 200 words for a briefer article or paper.)

More often, though, what I read and processed in my research rattled around in my brain, combined with other reporting and reasoning, and came out as my own work — my work informed by my own sources. Or, in other words, informed by my own personal training dataset.

Link to the rest at Vox

The New York Times built a robot to help make article tagging easier

From NiemanLab:

If you write online, you know that a final, tedious part of the process is adding tags to your story before sending it out to the wider world.

Tags and keywords in articles help readers dig deeper into related stories and topics, and give search audiences another way to discover stories. A Nieman Lab reader could go down a rabbit hole of tags, finding all our stories mentioning Snapchat, Nick Denton, or Mystery Science Theater 3000.

Those tags can also help newsrooms create new products and find inventive ways of collecting content. That’s one reason The New York Times Research and Development lab is experimenting with a new tool that automates the tagging process using machine learning — and does it in real time.

The Times R&D Editor tool analyzes text as it’s written and suggests tags along the way, in much the way that spell-check tools highlight misspelled words:

Editor is an experimental text editing interface that explores how collaboration between machine learning systems and journalists could afford fine-grained annotation and tagging of news articles. Our approach applies machine learning techniques interactively, as part of the writing process, rather than retroactively. This approach can offload the burden of work to the computational processes, and can create affordances for journalists to augment, edit and correct those processes with their knowledge.

It’s similar to Thomson Reuters’ Open Calais system, which extracts metadata from text files of any kind. Editor works by connecting the corpus of tags housed at the Times with an artificial neural network designed to read over a writer’s shoulder in a text editing system. They explain:

As the journalist is writing in the text editor, every word, phrase and sentence is emitted on to the network so that any microservice can process that text and send relevant metadata back to the editor interface. Annotated phrases are highlighted in the text as it is written. When journalists finish writing, they can simply review the suggested annotations with as little effort as is required to perform a spell check, correcting, verifying or removing tags where needed. Editor also has a contextual menu that allows the journalist to make annotations that only a person would be able to judge, like identifying a pull quote, a fact, a key point, etc.
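
The R&D Lab has not released Editor’s source, but the workflow the passage describes — draft text goes out as it is typed, candidate tags come back for the journalist to confirm — can be sketched in a few lines. The tag index and keyword matching below are toy stand-ins for the Times’ tag corpus and neural network; every name here is invented for illustration.

    # Toy sketch of "suggest tags while you type" (a stand-in for Editor's neural network).
    from typing import Dict, List, Set

    TAG_INDEX: Dict[str, Set[str]] = {  # invented example tag corpus
        "artificial intelligence": {"artificial intelligence", "machine learning", "chatbot"},
        "publishing": {"publisher", "book fair", "e-book"},
        "copyright": {"copyright", "fair use", "infringement"},
    }

    def suggest_tags(draft: str) -> List[str]:
        """Return candidate tags for the draft, for a human editor to confirm or reject."""
        text = draft.lower()
        return [tag for tag, cues in TAG_INDEX.items() if any(cue in text for cue in cues)]

    # Called on each keystroke (or on a short debounce) by the editing interface,
    # so suggestions surface as the journalist writes, much like a spell check.
    print(suggest_tags("The book fair panel debated fair use and AI training."))
    # ['publishing', 'copyright']

In the real system, a trained network scores candidates against the paper’s tag vocabulary, and the round trip has to keep pace with typing speed — the latency problem described later in the piece.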

“We started looking at what we could do if we started tagging smaller entities in the articles. [We thought] it might afford greater capabilities for reuses and other types of presentation,” said Alexis Lloyd, creative director at the Times R&D Lab.

Tags are a big deal at the Times; the paper has a system of article tags that goes back over 100 years. That metadata makes things like Times Topics pages possible. It’s an important process that is entirely manual, relying on reporters and editors to provide a context layer around every story. And in some cases, that process can lag: The Times’ innovation report cited many gaps in the paper’s metadata system as a strategic weakness:

“Everyone forgets about metadata,” said John O’Donovan, the chief technology officer for The Financial Times. “They think they can just make stuff and then forget about how it is organized in terms of how you describe your content. But all your assets are useless to you unless you have metadata — your archive is full of stuff that is of no value because you can’t find it and don’t know what it’s about.”

Lloyd said the idea behind Editor was not just to make the metadata process more efficient, but also to make it more granular. By using a system that combs through articles at a word-by-word level, the amount of data associated with people, places, companies, and events becomes that much richer.

And that much more data opens new doors for potential products, Lloyd told me. “Having that underlying metadata helps to scale to all kinds of new platforms as they emerge,” she said. “It’s part of our broader thinking about the future of news and how that will become more complex, in terms of forms and formats.”

. . . .

The key feature of the automatic tagging system relies on bringing machines into the mix, an idea that inspires conflicting ideas of progress and dread in some journalists. For Editor to work, the lab needed to build a way for machines and humans to supplement each other’s strengths. Humans are great at seeing context and connections and understanding language, while machines can do computations at enormous scale and have perfect memory. Mike Dewar, a data scientist at the Times R&D lab, said the artificial neural network makes connections between the text and an index of terms pulled from every article in the Times archive.

It took around four months to build Editor, and part of that time was spent training the neural network in how a reporter might tag certain stories. Dewar said that teaching the network the way tags are associated with certain phrases or words gives it a benchmark to use when checking text in the future.

The biggest challenge was latency, as Editor works to make connections between what’s being written and the index of tags. In order for Editor to be really effective, it has to operate at the speed of typing, Dewar said: “It needs to respond very quickly.”

. . . .

Robots continue to expand their foothold in the world of journalism. In March, the AP said it planned to use its automated reporting services to increase college sports coverage. Lloyd has experimented with how bots can work more cooperatively with people, or at least learn from them and their Slack conversations.

Link to the rest at NiemanLab

AI is making its way into the courtroom and legal process

From CNBC:

Is the U.S. headed towards an AI-driven “smart court,” as the Center for Strategic and International Studies calls China’s frequent use of automated, digitized court proceedings? Not quite, experts say. However, these predictions aren’t entirely off the mark.

“AI is really reaching all aspects of the law,” said Wayne Cohen, managing partner at Cohen & Cohen and a law professor at the George Washington University School of Law.

While the current use of AI in the U.S. legal industry operates intensely behind the scenes, it’s inching further into the front lines of the courtroom.

Cohen said AI plays a role in most of the research, writing and jury exhibit creation that goes into trial preparation, as well as office administration, trial summaries and translations.

It also helps kick the can down the road when processing lawsuits. “The movement of the cases from when a party files a lawsuit until the case is resolved is going to get much shorter,” Cohen said.

From the bench, judges can generate searchable PDF transcriptions from audio recordings and make informed judgments that day. And with AI’s ability to flag contradictions, it can bolster or hinder the credibility of the prosecution or defense. When judges make rulings, “they can do it with a lot of accuracy, and it’s supported by the evidence that they heard in their courtroom,” said Jackie Schafer, a former assistant attorney general for the state of Washington.

In 2020, Schafer founded Clearbrief, which runs on AI that’s designed to scan documents and identify citations, in addition to creating hyperlinked chronological timelines of all of the dates mentioned in documents for swift reference.
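
Clearbrief’s pipeline is proprietary, but the timeline feature described above — find every date mentioned in a document and lay the mentions out in chronological order — can be approximated in a short sketch. The regex, date format, and sample text below are illustrative assumptions, not the product’s actual logic.

    # Illustrative sketch: extract "Month D, YYYY" dates and build a chronological timeline.
    import re
    from datetime import datetime

    MONTHS = ("January|February|March|April|May|June|July|August|"
              "September|October|November|December")
    DATE_RE = re.compile(rf"\b(?:{MONTHS}) \d{{1,2}}, \d{{4}}\b")

    def timeline(text: str) -> list:
        """Return (date, surrounding sentence) pairs, oldest first."""
        events = []
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            for match in DATE_RE.finditer(sentence):
                when = datetime.strptime(match.group(0), "%B %d, %Y")
                events.append((when, sentence.strip()))
        return sorted(events)

    sample = ("The contract was signed on March 3, 2021. "
              "Plaintiff gave notice on January 15, 2022, and filed suit later.")
    for when, context in timeline(sample):
        print(when.date(), "-", context)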

Jason Boehmig, CEO and co-founder of digital contract company Ironclad and who has experience as a corporate attorney, said AI can review a company’s legal contracts, learning its preferred language and drafting and negotiating contracts in the organization’s historic legal voice. 

Business contracts are at the forefront of legal innovation, Boehmig said. “It’s an area where we can afford to experiment,” he said. On the spectrum of the legal system, the businesses on either end of the contract arguably have less to lose than, say, an individual whose basic freedoms are at stake. 

In all of these applications, experts say the ideal situation is for humans to review AI’s work. The notion of keeping the human in the loop is far from unique to the legal industry, but the significant ramifications coming out of the justice system make human oversight all the more critical.

Link to the rest at CNBC

AI Doomers Take Center Stage at the UK’s AI Summit

From Bloomberg via Yahoo Finance:

A fierce debate over how much to focus on the supposed existential risks of artificial intelligence defined the kickoff of the UK’s AI Safety Summit on Wednesday, highlighting broader tensions in the tech community as lawmakers propose regulations and safeguards.

Tech leaders and academics attending the Summit at Bletchley Park, the former home of secret World War II code-breakers, disagreed over whether to prioritize immediate risks from AI — such as fueling discrimination and misinformation — versus concerns that it could lead to the end of human civilization.

Some attendees openly worried so-called AI doomers would dominate the proceedings — a fear compounded by news that Elon Musk would appear alongside British Prime Minister Rishi Sunak shortly after the billionaire raised the specter of AI leading to “the extinction of humanity” on a podcast. On Wednesday, the UK government also unveiled the Bletchley Declaration, a communique signed by 28 countries warning of the potential for AI to cause “catastrophic harm.”

“I hope that it doesn’t get dominated by the doomer, X-risk, ‘Terminator’-scenario discourse, and I’ll certainly push the conversation towards practical, near-term harms,” said Aidan Gomez, co-founder and chief executive officer of AI company Cohere Inc., ahead of the summit.

Top tech executives spent the week trading rhetorical blows over the subject. Meta Platforms Inc.’s chief AI scientist Yann LeCun accused rivals, including DeepMind co-founder Demis Hassabis, of playing up existential risks of the technology in an attempt “to perform a regulatory capture” of the industry. Hassabis then hit back in an interview with Bloomberg on Wednesday, calling the criticisms preposterous.

On the summit’s fringes, Ciaran Martin, the former head of the UK’s National Cyber Security Center, said there’s “genuine debate between those who take a potentially catastrophic view of AI and those who take the view that it’s a series of individual, sometimes-serious problems, that need to be managed.”

“While the undertones of that debate are running through all of the discussions,” Martin said, “I think there’s an acceptance from virtually everybody that the international, public and private communities need to do both. It’s a question of degree.”

In closed-door sessions at the summit, there were discussions about whether to pause the development of next-generation “frontier” AI models and the “existential threat” this technology may pose “to democracy, human rights, civil rights, fairness, and equality,” according to summaries published by the British government late Wednesday.

Between seminars, Musk was “mobbed” and “held court” with delegates from tech companies and civil society, according to a diplomat. But during a session about the risks of losing control of AI, he quietly listened, according to another attendee, who said the seminar was nicknamed the “Group of Death.”

Matt Clifford, a representative of the UK Prime Minister who helped organize the summit, tried to square the circle and suggest the disagreement over AI risks wasn’t such a dichotomy.

“This summit’s not focused on long-term risk; this summit’s focused on next year’s models,” he told reporters on Wednesday. “How do we address potentially catastrophic risks — as it says in the Bletchley Declaration — from those models?” he said. “The ‘short term, long term’ distinction is very often overblown.”

By the end of the summit’s first day, there were some signs of a rapprochement between the two camps. Max Tegmark, a professor at the Massachusetts Institute of Technology who previously called to pause the development of powerful AI systems, said “this debate is starting to melt away.”

Link to the rest at Yahoo Finance

When Successful, Art Exceeds Its Creator’s Plans – Especially In the Days of AI

From LexBlog:

When successful, art exceeds its creator’s plans. So true in these days of AI.

No one could have envisioned the “travelings” of a book, an article or a legal blog post. To the “Books3 database” for Meta’s AI?

Ian Bogost, a contributing writer for The Atlantic, takes a contra – and refreshing – view to that of most authors, reporters and publishers when it comes to the scraping of their work for the training of AI large language models such as ChatGPT.

A searchable database revealed that thousands of books were used “without permission,” causing some authors to express outrage and even launch lawsuits against tech giant Meta.

Bogost’s response:

Whether or not Meta’s behavior amounts to infringement is a matter for the courts to decide. Permission is a different matter. One of the facts (and pleasures) of authorship is that one’s work will be used in unpredictable ways. The philosopher Jacques Derrida liked to talk about “dissemination,” which I take to mean that, like a plant releasing its seed, an author separates from their published work. Their readers (or viewers, or listeners) not only can but must make sense of that work in different contexts. A retiree cracks a Haruki Murakami novel recommended by a grandchild. A high-school kid skims Shakespeare for a class. My mother’s tree trimmer reads my book on play at her suggestion. A lack of permission underlies all of these uses, as it underlies influence in general: When successful, art exceeds its creator’s plans.

Sitting with a group of law firm leaders in January, I was told they were going to sign a demand letter, along with other large law firms, demanding that the companies behind large LLMs – OpenAI, Google, etc. – stop scraping the open legal publishing of law firms.

I thought: lots of luck – and why would you want to stop the advancement of the law, which the use of AI in legal publishing represents?

That silliness by law firms, as best I can tell, has subsided.

Books, articles and legal publishing – and AI itself – are vessels for ideas, per Bogost.

Once bound and published, boxed and shipped, my books find their way to places I might never have anticipated. As vessels for ideas, I hope, but also as doorstops or insect-execution devices or as the last inch of a stack that holds up a laptop for an important Zoom. Or even—even!—as a litany of tokens, chunked apart to be reassembled by the alien mind of a weird machine. Why not? I am an author, sure, but I am also a man who put some words in order amid the uncountable others who have done the same. If authorship is nothing more than vanity, then let the machines put us out of our misery.

I tend to agree with Bogost that authors, rather than feeling violated, should consider the unexpected ways their works contribute to the collective human—and increasingly machine—understanding.

Link to the rest at LexBlog

What Will AI Do to Your Job? Take a Look at What It’s Already Doing to Coders

From The Wall Street Journal:

Want to know if artificial intelligence is going to eliminate millions of jobs? The first place to look is the industry that birthed the technology.

AI seems set to do to computer programming—and possibly other kinds of so-called knowledge work—what automation has done to other jobs, from the factory floor and the warehouse, to the checkout aisle and the call center. In those industries, the end result of widespread automation has been the elimination of countless roles—and their replacement with ones that require either relatively little skill and knowledge, or a great deal more, with workers at either end of this spectrum being rewarded accordingly.

In other words, software is eating the software industry.

Economists call this “skills-biased technological change.” It’s what happens when technology makes skilled workers more productive, while taking over the complex and difficult parts of more repetitive jobs, making workers who do them easier to train and more interchangeable.

Now, AI is automating knowledge work, and the implications for the half of the U.S. workforce who are employed in such jobs are profound. It’s true that these white-collar jobs have been evolving for decades as technology has improved, but the elimination of middle-skilled jobs seems set to accelerate as AI is institutionalized in the workplace. This new technology has the potential to reshuffle the deck of winners and losers in America’s increasingly economically polarized economy.

Coding was early to the generative AI boom that has captured the world’s attention since the release of OpenAI’s ChatGPT in November. While generative AI is typically thought of as a tool for creating text, images and even video that look as though humans created them, it’s also being used by programmers to generate code, and to automatically test it. Microsoft made GitHub Copilot—a programming tool that uses technology from OpenAI—widely available in June 2022, five months before OpenAI made public its ChatGPT bot.

Tech layoffs over the past year, driven by macroeconomic trends, happen to have come just as genuinely useful AI for coding has arrived. For many young coders, the timing is unfortunate. Data from workforce-analytics company Revelio Labs indicate that companies have tended to fire their newest employees, and that in 2023, software engineers represent the largest share of people laid off by tech companies. The few tech job openings that remain, meanwhile, are being snapped up by still-in-demand, more-experienced software engineers.

Link to the rest at The Wall Street Journal