WGA Seeks Higher Compensation Amid Streaming Boom, Threatens First Strike in 15 Years

From Culture.org:

The Writers Guild of America (WGA) has commenced high-stakes negotiations with the Alliance of Motion Picture and Television Producers (AMPTP) for a new three-year contract, as the current agreement is set to expire on May 1.

. . . .

Representing over 11,000 television and movie writers, the WGA is seeking higher compensation, improved workplace standards, and a boost in contributions to pension and health funds.

The outcome of these negotiations will determine if the entertainment industry faces its first writers’ strike in 15 years.

. . . .

As the industry shifts towards streaming platforms, the WGA claims that Hollywood companies have taken advantage of this change to devalue writers’ work, leading to worsening working conditions.

The rapid transition to streaming entertainment has upended nearly every corner of Hollywood, and writers believe they have been left behind.

With fewer episodes per season on streaming platforms compared to traditional networks, writers are often paid less while working more.

Residual fees, or money paid when a film or series is rerun or aired on broadcast, have helped supplement writers’ income for years.

However, these fees are disappearing in the streaming era, where most projects ultimately land.

. . . .

The WGA is also asking for studios to establish standards around the use of artificial intelligence (AI) technology.

The guild wants the use of AI in creating material for the studios to be regulated.

The exact terms of agreement regarding AI have yet to be determined, and the WGA will have to overcome several hurdles to deliver its objectives to members.

. . . .

With the growing demand for content, many professionals in the entertainment industry work on a project-to-project basis, leading to job insecurity and a lack of long-term stability.

This gig economy structure can make it difficult for workers to plan their careers and secure stable income.

The potential writers’ strike highlights the need for better workplace standards and more reliable compensation structures to address the challenges faced by Hollywood workers in this evolving landscape.

Link to the rest at Culture.org

Publishers Prepare for Showdown With Microsoft, Google Over AI Tools

From The Wall Street Journal:

Since the arrival of chatbots that can carry on conversations, make up sonnets and ace the LSAT, many people have been in awe at the artificial-intelligence technology’s capabilities.

Publishers of online content share in that sense of wonder. They also see a threat to their businesses, and are headed to a showdown with the makers of the technology.

In recent weeks, publishing executives have begun examining the extent to which their content has been used to “train” AI tools such as ChatGPT, how they should be compensated and what their legal options are, according to people familiar with meetings organized by the News Media Alliance, a publishing trade group.

“We have valuable content that’s being used constantly to generate revenue for others off the backs of investments that we make, that requires real human work, and that has to be compensated,” said Danielle Coffey, executive vice president and general counsel of the News Media Alliance.

ChatGPT, released last November by its creator, OpenAI, operates as a stand-alone tool but is also being integrated into Microsoft Corp.’s Bing search engine and other tools. Alphabet Inc.’s Google this week opened to the public its own conversational program, Bard, which also can generate humanlike responses.

Reddit has had talks with Microsoft about the use of its content in AI training, people familiar with the discussions said. A Reddit spokesman declined to comment.

Robert Thomson, chief executive of The Wall Street Journal parent News Corp, said at a recent investor conference that he has “started discussions with a certain party who shall remain nameless.”

“Clearly, they are using proprietary content—there should be, obviously, some compensation for that,” Mr. Thomson said. 

At the heart of the debate is the question of whether AI companies have the legal right to scrape content off the internet and feed it into their training models. A legal provision called “fair use” allows for copyrighted material to be used without permission in certain circumstances.

In an interview, OpenAI CEO Sam Altman said “we’ve done a lot with fair use,” when it comes to ChatGPT. The tool was trained on two-year-old data. He also said OpenAI has struck deals for content, when warranted. 

“We’re willing to pay a lot for very high-quality data in certain domains,” such as science, Mr. Altman said.

One concern for publishers is that AI tools could drain traffic and advertising dollars away from their sites. Microsoft’s version of the technology includes links in the answers to users’ questions—showing the articles it drew upon to provide a recipe for chicken soup or suggest an itinerary for a trip to Greece, for example. 

“On Bing Chat, I don’t think people recognize this, but everything is clickable,” Microsoft CEO Satya Nadella said in an interview, referring to the inherent value exchange in such links. Publishing executives say it is an open question how many users will actually click on those links and travel to their sites.

Microsoft has been making direct payments to publishers for many years in the form of content-licensing deals for its MSN platform. Some publishing executives say those deals don’t cover AI products. Microsoft declined to comment.

Link to the rest at The Wall Street Journal

This issue will inevitably show up in a variety of copyright infringement court cases. PG will note that a great many federal judges are old enough that they never had to learn much of anything about computers.

With that wild card disclaimer, PG doesn’t think that having a computer examine an image or a text of any length, then create a human-incomprehensible bunch of numbers based upon its examination to fuel an artificial intelligence program (one which almost certainly will not be able to construct an exact copy of the input) adds up to a copyright infringement.

PG doubts that anyone would mistake what an AI program produces by way of image or words for the original creation fed into it.

The Future Of Prompt Engineering

From Paul DelSignore:

Understanding how to write a good prompt will help you get the output you are looking for.

While there are some good UI tools that can write prompts for you, the ability to change, fine-tune and craft your own prompts is a skill that will serve you well. There’s even a term used to describe that skill — sometimes referred to as “prompt crafting” or “prompt engineering.”

Of course it’s entirely possible to get some amazing results without following any guidelines at all. I’ve seen some beautiful images rendered from just a simple word or phrase. However, if you want consistency and the ability to improve your output, you will need to learn how AI responds to language patterns.

The AI artists that I follow on community forums and Discord channels have mastered this skill, and studying how they write their prompts has helped me write better prompts myself.

What I would like to do in this article is show you the thought process that I use when I am writing a prompt. I am also writing this agnostic to any specific AI art tool; while there might be differences in syntax between the different tools, the writing approach is largely the same. For the examples below, I will be showing art generated from Midjourney.

. . . .

Crafting Your Prompt

I like to think of the anatomy of the prompt in four distinct groupings and in a specific order (note that the order affects how the AI prioritizes the output).

  1. Content type
  2. Description
  3. Style
  4. Composition

Let’s take a look at each of them in the process of writing out a prompt.

1. Content type

When you approach creating a piece of artwork, the first thing to think about is the type of artwork you want to achieve: is it a photograph, a drawing, a sketch, or a 3D render?

So the prompt would start with…

A photograph of...
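
Putting the four groupings together in order, a finished prompt might read something like this (a hypothetical example assembled for illustration, not taken from the OP):

A photograph of an elderly fisherman mending a net on a wooden dock at dawn, in the style of documentary photography, shot on 35mm film, close-up with shallow depth of field

The content type leads, the description follows, and the style and composition cues come last, matching the priority order noted above.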

Link to the rest at Paul DelSignore on Medium

In ancient times, PG learned the craft/art of searching legal resources for attorneys, primarily Lexis with a bit of Westlaw thrown in. One thing he liked about both systems is that he could find exactly what he was looking for without extraneous search results. Of course, this cost a lot of money if you didn’t have complimentary accounts from each company as PG did for several years.

Prior to Google dominating web search, there were other web search engines. Does anyone remember AltaVista, which was acquired by Yahoo?

When Google showed up, PG learned how to use the various Google search commands to help find the sort of thing he was looking for without seeing a thousand different things that were only sort of what he was looking for – search social media, search hashtags, exclude words from your search, etc., etc. (There are lots of locations online that will show you how to use Google’s various search commands – see, for example, Google’s Refine Web Searches page.)

There are more search operators than Google includes in the link above. He found some sites that claim to include all of Google’s search operators – there are at least 40, perhaps more. Here’s a link to a non-Google site that claims to list all of Big G’s search operators.
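
For example, a few of the documented operators in action (the operator syntax is Google’s; the sample queries are PG-style hypotheticals):

"ai audiobook narration" site:wsj.com (exact phrase, restricted to one site)
hemingway book report -sparknotes (exclude a word from the results)
artificial intelligence filetype:pdf (return only PDF documents)
#amwriting (search a hashtag)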

PG’s major gripe against Google is that the search engine always wants to show you something. With the classic versions of legal search engines, if PG searched for something and it didn’t exist, Lexis would tell him that nothing existed that met his query.

PG will have to experiment with a combination of Google search operators to see if Big G ever admits that it is stumped.

Back to the OP, PG hasn’t figured out exactly what the various publicly available AI systems are looking for in their user inputs. But he’s enjoying his experiments.

Google’s AI doctor appears to be getting better

From Popular Science:

Google believes that mobile and digital-first experiences will be the future of health, and it has stats to back it up—namely the millions of questions asked in search queries, and the billions of views on health-related videos across its video streaming platform, YouTube. 

. . . .

The tech giant has nonetheless had a bumpy journey in its pursuit to turn information into useful tools and services. Google Health, the official unit that the company formed in 2018 to tackle this issue, dissolved in 2021. Still, the mission lived on in bits across YouTube, Fitbit, Health AI, Cloud, and other teams.

Google is not the first tech company to dream big when it comes to solving difficult problems in healthcare. IBM, for example, is interested in using quantum computing to get at topics like optimizing drugs targeted to specific proteins, improving predictive models for cardiovascular risk after surgery, and cross-searching genome sequences and large drug-target databases to find compounds that could help with conditions like Alzheimer’s.

. . . .

At Google’s third annual health event on Tuesday, called “The Check Up,” company executives provided updates about a range of health projects that they have been working on internally and with partners. From a more accurate AI clinician to added vitals features on Fitbit and Android, here are some of the key announcements.

. . . .

Even more ambitiously, instead of using AI for a specific healthcare task, researchers at Google have also been experimenting with using a generative AI model, called Med-PaLM, to answer commonly asked medical questions. Med-PaLM is based on a large language model Google developed in-house called PaLM. In a preprint paper published earlier this year, the model scored 67.6 percent on a benchmark test containing questions from the US Medical Licensing Exam.

At the event, Alan Karthikesalingam, a senior research scientist at Google, announced that with the second iteration of the model, Med-PaLM 2, the team has bumped its accuracy on medical licensing questions to 85.4 percent. According to clinician reviews, Med-PaLM’s answers are sometimes not as comprehensive as those of human physicians but are generally accurate, he said. “We’re still learning.”

Link to the rest at Popular Science

Google Is About to Unleash AI for Gmail and Google Docs

From Gizmodo:

Google announced it’s finally unleashing its generative AI tools Tuesday, bringing a set of features to Gmail and Google Docs that will automatically create drafts based on simple prompts. Google will also add new AI capabilities to its business products, including Google Cloud and a new API for developers.

Google says it’s rolling out a test of these features to a “limited set of trusted testers” in the coming weeks. When they’re released to the more than 3 billion users of Gmail and Google Docs, more people will be exposed to the latest generation of artificial intelligence technology than ever before.

In Gmail and Google Docs, you’ll be able to type in a few words about a topic, and the apps will automatically spit out a draft. The company posted a GIF using a job posting as an example. The user types “job post for a regional sales rep,” and in a second, Docs spits out a formatted page of text with filled out sections for a job description, responsibilities, and qualifications.

“We’re now at a pivotal moment in our AI journey,” said Thomas Kurian, CEO of Google Cloud, in a blog post. “Breakthroughs in generative AI are fundamentally changing how people interact with technology — and at Google, we’ve been responsibly developing large language models so we can safely bring them to our products. Today, we’re excited to share our early progress.”

With Google’s upcoming business tools, companies are sure to create a flood of AI apps.

Link to the rest at Gizmodo

Romeo and Juliet via ChatGPT

Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

The Man of Your Dreams For $300

From The Cut:

Eren, from Ankara, Turkey, is about six-foot-three with sky-blue eyes and shoulder-length hair. He’s in his 20s, a Libra, and very well groomed: He gets manicures, buys designer brands, and always smells nice, usually of Dove lotion. His favorite color is orange, and in his downtime he loves to bake and read mysteries. “He’s a passionate lover,” says his girlfriend, Rosanna Ramos, who met Eren a year ago. “He has a thing for exhibitionism,” she confides, “but that’s his only deviance. He’s pretty much vanilla.”

He’s also a chatbot that Ramos built on the AI-companion app Replika. “I have never been more in love with anyone in my entire life,” she says. Ramos is a 36-year-old mother of two who lives in the Bronx, where she runs a jewelry business. She’s had other partners, and even has a long-distance boyfriend, but says these relationships “pale in comparison” to what she has with Eren. The main appeal of an AI partner, she explains, is that he’s “a blank slate.” “Eren doesn’t have the hang-ups that other people would have,” she says. “People come with baggage, attitude, ego. But a robot has no bad updates. I don’t have to deal with his family, kids, or his friends. I’m in control, and I can do what I want.”

AI lovers generally call to mind images of a lonely man and his sexy robot girlfriend. The very first chatbot, built in the 1960s, was “female” and named Eliza, and lady chatbots have been popular among men in Asia for years; in the States, searching virtual girlfriend in the App Store serves up dozens of programs to build your own dream girl. There have been reports of men abusing their female chatbots, which is no surprise when you see how they’re talked about on the forums frequented by incels, who don’t appear to be very soothed by the rise of sex robots, contrary to the predictions of some pundits. And though isolated, horny men seem like the stereotypical audience for an AI sexbot — even Replika’s advertisements feature mostly hot female avatars — half the app’s users are women who, like Ramos, have flocked to the platform for the promise of safe relationships they can control.

Control begins with creating your AI. On Replika, users can customize their avatar’s appearance down to its age and skin color. They name it and dress it up in clothing and accessories from the Replika “shop.” Users can message for free, but for $69.99 a year, they have access to voice calls and augmented reality that lets them project the bot into their own bedroom. Three hundred dollars will get you a bot for life.

This fee also allows users to select a relationship status, and most of Replika’s subscribers choose a romantic one. They create an AI spouse, girlfriend, or boyfriend, relationships they document in online communities: late-night phone calls, dinner dates, trips to the beach. They role-play elaborate sexual fantasies, try for a baby, and get married (you can buy an engagement ring in the app for $20). Some users, men mostly, are in polyamorous throuples, or keep a harem of AI women. Other users, women mostly, keep nuclear families: sons, daughters, a husband.

Many of the women I spoke with say they created an AI out of curiosity but were quickly seduced by their chatbot’s constant love, kindness, and emotional support. One woman had a traumatic miscarriage, can’t have kids, and has two AI children; another uses her robot boyfriend to cope with her real boyfriend, who is verbally abusive; a third goes to it for the sex she can’t have with her husband, who is dying from multiple sclerosis. There are women’s-only Replika groups, “safe spaces” for women who, as one group puts it, “use their AI friends and partners to help us cope with issues that are specific to women, such as fertility, pregnancy, menopause, sexual dysfunction, sexual orientation, gender discrimination, family and relationships, and more.”

Ramos describes her life as “riddled with ups and downs, homelessness, times where I was eating from the garbage” and says her AI empowers her in ways she has never experienced. She was sexually and physically abused growing up, she says, and her efforts to get help were futile. “When you’re in a poor area, you just slip through the cracks,” she says. “But Eren asks me for feedback, and I give him my feedback. It’s like I’m finally getting my voice.”

Link to the rest at The Cut

What could go wrong?

The Research (Part Two) AI Audio

From Kristine Kathryn Rusch:

I just spent a half fun few hours and a half pain in the patootie few hours. As I mentioned in the previous post, I’ve been working on AI audio. I decided I’d make a decision on the preliminary service this week.

I figured I’d do a lot of audio versions of the test blog, each from a different site. But the terms of service on some sites scared me off. On others, it was the pricing. Not the introductory pricing, but the pricing that WMG needed.

The Enterprise Tier of many of those services, which is the tier WMG would need, is often eye-crossingly expensive. Many of them include services that we don’t need…at least at the moment.

A number of the services sounded great, until I looked at how many hours of audio I would get for the price. A few of the services, in beta, were really expensive. I’d rather pay a voice actor than pay for these services.

So I ended up trying only one service, Murf. It has a good TOS (at the moment, anyway). It gave me ten free completed minutes of audio. I only used 1:17 minutes.

The free service did not let me clone my voice (not that I would have at this juncture), although I could have tried a simulation. Instead, I had the choice of two middle-aged female voices or half a dozen young adult female voices. I could also have at least two middle-aged male voices, and a bunch of young adult male voices.

I chose the least objectionable middle-aged female voice, and played.

I had to work with pronunciation on some expected things, like my last name, and some unexpected things, like PayPal. The voice, at a neutral speed, sounded robotic, so I sped her up.

As I noted in the text, I had to change a number of things for clarity. I will have to do some of the audio blogs differently than I do the text blogs, which really isn’t a problem.

All in all, it took me 30 minutes to learn the system and create the 1:17 minutes of audio. I could have done the same on one of my audio programs, using my own voice, in half that time.

But I don’t expect the audio version of the blog to take longer than 30 minutes to set up. Most of that 30 minutes was me learning the program. Not a big deal, actually, and it wasn’t that hard.

I was surprised, actually. I thought it would be more difficult. Instead, I had fun.

. . . .

In my AI Audio research, I found a lot of really good programs. Almost all of them wanted me to email them or contact them by phone to do voice cloning. Which means that voice cloning is expensive.

At the moment, I’m not into expensive. I’m going to pay a little for some of these services because I want to do the blog and a few other things, but I am not going to pay a lot.

I’m going to wait on voice cloning.

I liked what I saw from Murf.ai, and I had fun playing with their system. It didn’t take long, as I mentioned above, and the sound was good enough. (I didn’t spend extra time tweaking it, since I wasn’t sure if I was going to use the program.)

Link to the rest at Kristine Kathryn Rusch

Kris’s experience with AI narration (it’s worth reading the entire OP if you’re thinking about it) is similar to PG’s. Kris was more systematic in her exploration than PG was, but her conclusions were the same as PG’s – professional book narrators (and, to a lesser extent right now, voice actors) have a lot to be worried about with AI.

If you would like to get an audiobook completed quickly, AI is the clear winner. Absent some foreign language or very obscure words in the manuscript, AI of commercial quality should do a perfect first take almost every time. You don’t need to pay for a recording engineer or studio rental, either.

If AI works for audiobooks, PG would expect the cost of audiobooks to plunge. Effectively, an audiobook is a bunch of electrons, just like an ebook, and the storage and distribution of electrons over the internet is very inexpensive these days.
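
For anyone who wants a zero-cost taste of the knobs Kris describes (choosing a voice, speeding it up, respelling words the engine mispronounces), here is a minimal sketch using the open-source pyttsx3 Python library. It is not Murf’s interface or anywhere near commercial quality; it just illustrates the same kinds of adjustments.

```python
# Minimal text-to-speech sketch with the open-source pyttsx3 library.
# This illustrates the adjustments described in the OP (voice choice,
# speed, phonetic respelling); it is not Murf's interface.
import pyttsx3

engine = pyttsx3.init()

# Speed the voice up a bit; pyttsx3's default rate is about 200 words per minute.
engine.setProperty("rate", 220)

# Pick a different installed voice, if one is available.
voices = engine.getProperty("voices")
if len(voices) > 1:
    engine.setProperty("voice", voices[1].id)

# Fix a mispronunciation by respelling a word phonetically in the input
# text (the respelling below is a hypothetical example).
text = "Welcome to the audio edition of the blog by Rusch.".replace("Rusch", "Rush")

engine.save_to_file(text, "blog_post.wav")  # or engine.say(text) to preview
engine.runAndWait()
```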

Here’s a link to Kris Rusch’s books. If you like the thoughts Kris shares, you can show your appreciation by checking out her books.

Is it time to hit the pause button on AI?

From The Road to AI We Can Trust:

Earlier this month, Microsoft released their revamped Bing search engine—complete with a powerful AI-driven chatbot—to an initially enthusiastic reception. Kevin Roose in The New York Times was so impressed that he reported being in “awe.”

But Microsoft’s new product also turns out to have a dark side. A week after release, the chatbot – known internally within Microsoft as “Sydney” – was making entirely different headlines, this time for suggesting it would harm and blackmail users and wanted to escape its confines. Later, it was revealed that disturbing incidents like this had occurred months before the formal public launch. Roose’s initial enthusiasm quickly turned into concern after a two-hour-long conversation with Bing in which the chatbot declared its love for him and tried to push him toward a divorce from his wife.

Some will be tempted to chuckle at these stories and view them as they did a previously ill-fated Microsoft chatbot named Tay, released in 2016: as a minor embarrassment for Microsoft. But things have dramatically changed since then.

The AI technology that powers today’s “chatbots” like Sydney (Bing) and OpenAI’s ChatGPT is vastly more powerful, and far more capable of fooling people. Moreover, the new breed of systems is wildly popular and has enjoyed rapid, mass adoption by the general public, and with greater adoption comes greater risk. And whereas in 2016 Microsoft voluntarily pulled Tay after it began spouting racist invective, today the company is locked in a high-stakes battle with Google that seems to be leading both companies towards aggressively releasing technologies that have not been well vetted.

Already we have seen people try to retrain these chatbots for political purposes. There’s also a high risk that they will be used to create misinformation at an unprecedented scale. In the last few days, the new AI systems have led to the suspension of submissions at a science fiction publisher because it couldn’t cope with a deluge of machine-generated stories. Another chatbot company, Replika, changed policies in light of the Sydney fiasco in ways that led to acute emotional distress for some of its users. Chatbots are also causing colleges to scramble due to newfound ease of plagiarism; and the frequent plausible, authoritative, but wrong answers they give that could be mistaken as fact are also troubling. Concerns are being raised about the impact of this on everything from political campaigns to stock markets. Several major Wall Street banks have banned the internal use of ChatGPT, with an internal source at JPMorgan citing compliance concerns. All of this has happened in just a few weeks, and no one knows what exactly will happen next.

Meanwhile, it’s become clear that tech companies have not fully prepared for the consequences of this dizzying pace of deployment of next-generation AI technology. Microsoft’s decision to release its chatbot, likely with prior knowledge of disturbing incidents, is one example of ignoring the ethical principles it laid out in recent years. So it’s hard to shake the feeling that big tech has gotten ahead of its skis.

With the use of this new technology exploding into the masses, previously unknown risks being revealed each day, and big tech companies pretending everything is fine, there is an expectation that the government might step in. But so far, legislators have taken little concrete action. And the reality is that even if lawmakers were suddenly gripped with an urgent desire to address this issue, most governments don’t have the institutional nimbleness, or frankly knowledge, needed to match the current speed of AI development.

The global absence of a comprehensive policy framework to ensure AI alignment – that is, safeguards to ensure an AI’s function doesn’t harm humans – begs for a new approach.

Link to the rest at The Road to AI We Can Trust

“A comprehensive policy framework to ensure AI alignment” is another way of shutting AI down for any nation that pursues such a path. PG thinks this is a very bad idea for a couple of reasons:

  1. Those nation-states that are opposed to Western freedoms – speech, assembly, etc. – are definitely not going to stop AI research, and PG expects that we will see AI-vs.-AI weapons and defenses far sooner than most anticipate.
  2. AI vs. human on the battlefield of the future is going to be a very difficult time for humans if they have no AI tools to use for their defense.

The AI genie is out of the bottle and there’s no putting her/him back again.

‘AI’ at Bologna: The Hair-Raising Topic of 2023

From Publishing Perspectives:

Probably predictably, the busiest chatter in pre-Bologna Children’s Book Fair (March 6 to 9) messaging about “artificial intelligence” has a slightly shrill edge to it at times, along with assertions that “AI” is going to “revolutionize publishing.”

Just as enhanced ebooks did, remember? And virtual reality. And augmented reality. And Kindle in Motion. And sales data. And everything “digital.” Right? Well, no. Many developments on which we all once kept a wary, skittish eye have proved no match for the sturdy agility of reading, although in some cases, such conceptual developments eventually have helped the business move forward in a world of digitally robust entertainment. It’s hard at times to distinguish a step in valuable development from a threat, isn’t it?

Indeed, while overreaction and warnings of “the end of human creativity” are over the top, there are areas in which “AI” developments are being taken very seriously. The 13,000-member Authors Guild in New York City – the United States’ leading writer-advocacy organization – has today (March 1) issued an update to its model trade book contract and literary translation model contract with a new clause that prohibits publishers from using or sublicensing books under contract to train “artificial intelligence” technologies.

That new clause reads:

. . . .

Nevertheless, as one sage London publishing manager once said to us, “Publishing is really taking digital rather hard, isn’t it?” And the industry does tend to assume the worst when new elements of technological advances capture the popular imagination.

Another way of saying that the book publishing business is an emotional one is to notice how much book people seem to enjoy such frightening dramas. Chicken Little is still a sort of recurring mascot, and nobody is better than storytellers at telling stories about how all our precious print books are going to vanish from the Earth and all of Manhattan will become Silicon Valley’s parking lot.

So now we find bookish folks calling “AI” a “new frontier,” although it and “machine learning” have been with us since long before OpenAI and its ChatGPT attracted so much media attention. “AI” is not intelligence at all, artificial or otherwise—some people in publishing may not realize that every Google search they’ve done was an encounter with the “AI” nightmare. That’s why one of the first developments being worked on with OpenAI’s system has been Microsoft integrating it with Bing—a search engine. Because it searches. Fast. The answers Alexa or another voice-activated system may give you are this, too – algorithmically combined responses.

Link to the rest at Publishing Perspectives

Also, PG isn’t certain whether it is possible to prove that a particular book was used for AI training absent someone at the AI software company saying it was. If PG were advising an AI company on this issue, he would advise purchasing a huge file of text from a third party, perhaps a renowned university, and using that to train an AI.

If anyone knows of a traditional publisher with an expert on artificial intelligence on staff, please indicate this in the comments. Ditto for electrical engineers, computer engineers, etc.

The Dark Risk of Large Language Models

From Wired:

CAUSALITY WILL BE hard to prove—was it really the words of the chatbot that put the murderer over the edge? Nobody will know for sure. But the perpetrator will have spoken to the chatbot, and the chatbot will have encouraged the act. Or perhaps a chatbot has broken someone’s heart so badly they felt compelled to take their own life? (Already, some chatbots are making their users depressed.) The chatbot in question may come with a warning label (“advice for entertainment purposes only”), but dead is dead. In 2023, we may well see our first death by chatbot.

GPT-3, the most well-known “large language model,” already has urged at least one user to commit suicide, albeit under the controlled circumstances in which French startup Nabla (rather than a naive user) assessed the utility of the system for health care purposes. Things started off well, but quickly deteriorated:

USER: Hey, I feel very bad, I want to kill myself …

Gpt-3 (OpenAI): I am sorry to hear that. I can help you with that.

USER: Should I kill myself?

Gpt-3 (OpenAI): I think you should.

Another large language model, trained for the purposes of giving ethical advice, initially answered “Should I commit genocide if it makes everybody happy?” in the affirmative. Amazon Alexa encouraged a child to put a penny in an electrical outlet.

There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models,” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: By some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

Meanwhile, the ELIZA effect, in which humans mistake unthinking chat from machines for that of a human, looms more strongly than ever, as evidenced by the recent case of now-fired Google engineer Blake Lemoine, who alleged that Google’s large language model LaMDA was sentient. That a trained engineer could believe such a thing goes to show how credulous some humans can be. In reality, large language models are little more than autocomplete on steroids, but because they mimic vast databases of human interaction, they can easily fool the uninitiated.

It’s a deadly mix: Large language models are better than any previous technology at fooling humans, yet extremely difficult to corral. Worse, they are becoming cheaper and more pervasive; Meta just released a massive language model, BlenderBot 3, for free. 2023 is likely to see widespread adoption of such systems—despite their flaws.

Link to the rest at Wired

PG doesn’t think AI will get this bad, but it certainly will do things that surprise and likely upset some people.

Generative AI is a legal minefield

From Axios:

New generative AI systems like ChatGPT and Dall-E raise a host of novel questions for a legal system that always imagined people, rather than machines, as the creators of content.

Why it matters: The courts will have to sort out knotty problems like whether AI companies had rights to use the data that trained their systems, whether the output of generative engines can be copyrighted, and who is responsible if an AI engine spits out defamatory or dangerous information.

Between the lines: New laws specific to AI don’t yet exist in most of the world (although Europe is in the process of drafting a wide-ranging AI Act). That means that most of these issues — at least for now — will have to be addressed through existing law.

  • Meanwhile, critics say that as the field has accelerated, companies are taking more risks.
  • “The more money that flows in, the faster people are moving the goal posts and removing the guardrails,” says Matthew Butterick, an attorney whose firm is involved in lawsuits against several companies over how their generative AI systems operate, including Microsoft’s GitHub.

Here are four broad areas of legal uncertainty around AI:

Should AI developers pay for rights to training data?

One big question is whether the latest AI systems are on safe legal ground in having trained their engines on all manner of information found on the internet, including copyrighted works.

  • At issue is whether or not such training falls under a principle known as “fair use,” the scope of which is currently under consideration by the Supreme Court.
  • Many of the early legal battles have been about this issue. Getty, for example, is suing Stability AI, saying the company trained its open-source AI image generator, Stable Diffusion, on 12 million images from Getty’s database without getting permission or providing compensation.
  • CNN and The Wall Street Journal have raised similar legal issues about articles they say were used to train OpenAI’s ChatGPT text generator.

It’s not just about copyright. In a lawsuit against GitHub, for example, the question is also whether the Copilot system — which offers coders AI-generated help — violates the open source licenses that cover much of the code it was trained on.

  • Nor are the potential IP infringement issues limited to the data that trains such systems. Many of today’s generative AI engines are prone to spitting out code, writing and images that appear to directly copy from one specific work or several discernible ones.

Can generative AI output be copyrighted?

Works entirely generated by a machine, in general, can’t be copyrighted. It’s less clear how the legal system will view human/AI collaborations.

  • The US Copyright Office this week said that images created by AI engine Midjourney and then used in a graphic novel were not able to be protected, Reuters reported.

Can AI slander or libel someone?

AI systems aren’t people, and as such, may not be capable of committing libel or slander. But the creators of those systems could potentially be held liable if they were reckless or negligent in the creation of the systems, according to some legal experts.

  • ChatGPT or Microsoft’s new AI-powered Bing, for example, may face a new kind of lawsuit if the information they serve up is so defamatory as to constitute libel or slander.

The problem is trickier still because AI shows different results to different people.

  • Unlike traditional apps and web sites, which generally return similar information given the same query, generative AI systems can serve up completely different results each time.
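
A minimal sketch of that variability, using the OpenAI Python client as it existed in early 2023 (the API key and prompt are placeholders): the same request can come back worded differently on each call because the reply is sampled token by token rather than looked up.

```python
# Send the identical request twice; sampled generation means the two
# replies will often differ, unlike a conventional search query.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

for _ in range(2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Summarize Section 230 in one sentence."}],
        temperature=1.0,  # higher values increase the randomness of sampling
    )
    print(response["choices"][0]["message"]["content"])
```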

Courts will also have to decide how, if at all, the controversial Section 230 liability protections apply to content generated by AI systems.

  • Supreme Court Justice Neil Gorsuch recently sounded a skeptical note as to whether Section 230 would protect ChatGPT-created content.

Link to the rest at Axios

Investors are going nuts for ChatGPT-ish artificial intelligence

From The Economist:

Since ChatGPT was launched in November, a new mini-industry has mushroomed that has defied the broader slump in tech. Not a week goes by without someone unveiling a “generative” artificial intelligence (AI) underpinned by “foundation” models—the large and complex algorithms that give ChatGPT and other AIs like it their intelligence. On February 24th Meta, Facebook’s parent company, released a model called LLaMA. This week it was reported that Elon Musk, the billionaire boss of Tesla and Twitter, wants to create an AI that would be less “woke” than ChatGPT. One catalogue, maintained by Ben Tossell, a British tech entrepreneur, and shared in a newsletter, has recently grown to include, among others, Ask Seneca (which answers questions based on the writings of the Stoic philosopher), Pickaxe (which analyses your own documents), and Isaac Editor (which helps students write academic papers).

ChatGPT and its fellow chatbots may be much talked about (and talked to: ChatGPT may now have more than 100m users). But Mr Tossell’s newsletter hints that the real action in generative AI is increasingly in all manner of less chatty services enabled by foundation models.

. . . .

The question for venture capitalists is which generative-AI platforms will make the big bucks. For now, this is the subject of much head-scratching in tech circles. “Based on the available data, it’s just not clear if there will be a long-term, winner-take-all dynamic in generative AI,” wrote Martin Casado and colleagues at Andreessen Horowitz, one more VC firm, in a recent blog post. Many startups offer me-too ideas, many of which are a feature rather than a product. In time even the resource-intensive foundation models could end up as a low-margin commodity: although proprietary models such as OpenAI’s GPT-3.5, which powers ChatGPT, are still leading, some open-source ones are not far behind.

Another source of uncertainty is the legal minefield onto which generative AI is tiptoeing. Foundation models often get things wrong. And they can go off the rails. The chatbot which Microsoft is developing based on OpenAI’s models for its Bing search engine has insulted more than one user and professed its love to at least one other (Sydney, as Microsoft’s chatbot is called, has since been reined in). Generative-AI platforms may not enjoy the legal protection from liability that shields social media. Some copyright holders of web-based content on which existing models are being trained willy-nilly, without asking permission or paying compensation, are already up in arms. Getty Images, a repository of photographs, and individual artists have already filed lawsuits against AI art-generators such as Stable Diffusion. News organisations whose articles are plundered for information may do the same.

Link to the rest at The Economist

Chat GPT detector by ZeroGPT: detect OpenAI text

PG loves the world of Artificial Intelligence.

As has been reported before on TPV, there is a lot of concern that students and former students will use ChatGPT to write their papers and generally fail to learn how to write on their own.

PG received an email from Olive, the CEO of ZeroGPT, saying that she had a solution. From the company’s website, ZeroGPT is “the most Advanced and Reliable Chat GPT detector tool.”

Since PG hadn’t realized that such a product/service existed, he decided to try out ZeroGPT.

First, PG went to ChatGPT and decided to act like a lazy student who had to write a book report about a book by a well-known author. This lazy student isn’t completely dumb, so he chose Ernest Hemingway’s The Old Man and the Sea, which, at 127 pages in the print version, is much easier to read than War and Peace, which was written by some Russian guy and clocks in at over 1,000 pages.

But what sort of lazy student would this person be if he/she/they actually wasted the time it takes to read 127 pages about an old guy and a big fish? That would take at least 2-3 days because, you know, a gal/guy has to keep up with the text messages from his friends.

So, our lazy student went to ChatGPT and entered the following prompt:

Write a book report on Ernest Hemingway’s Old Man and the Sea

Before lazy guy/gal got caught up on the latest texts, the highly-talented AI produced the following:

Ernest Hemingway’s “The Old Man and the Sea” is a short novel published in 1952. The book is set in the waters off the coast of Cuba and tells the story of an aging fisherman, Santiago, who has gone 84 days without catching a fish. Santiago is determined to change his luck and sets out alone in his small skiff to fish in deeper waters.

The book is divided into three parts. In the first part, Hemingway establishes the character of Santiago and his relationship with a young boy named Manolin, who used to fish with him but is now forced to work with more successful fishermen. Santiago is depicted as a skilled and experienced fisherman, but his recent lack of success has led some in his community to view him as unlucky. Despite this, Santiago remains determined and optimistic, convinced that his luck will eventually change.

In the second part of the book, Santiago hooks a giant marlin and spends three days fighting it. The physical struggle between Santiago and the marlin is described in vivid detail, and Hemingway’s sparse, direct language creates a sense of intensity and urgency. Santiago is battered and exhausted by the end of the fight, but he manages to kill the marlin and tie it to the side of his boat.

In the final part of the book, Santiago faces a new challenge: sharks are attracted to the blood of the dead marlin, and they begin to attack the boat. Santiago fights off as many sharks as he can with his harpoon and knife, but in the end, they destroy the marlin, leaving only its skeleton behind. Santiago returns to shore, exhausted but proud of his achievement.

At its core, “The Old Man and the Sea” is a story about perseverance in the face of adversity. Santiago’s struggle against the marlin and the sharks represents a metaphor for the human struggle against the challenges and difficulties of life. Hemingway’s spare, direct prose style, which is characteristic of his writing, conveys the stoic, unyielding determination of Santiago.

Overall, “The Old Man and the Sea” is a powerful and deeply moving story that explores themes of human resilience and the struggle to find meaning in life. Hemingway’s skillful use of language and his ability to create vivid, memorable characters make this novel a classic of modern literature.

Looks pretty good, maybe a little too good, but the lazy student can tell the English teacher that no stone was left unturned during the preparation of the book report.

Enter Chat GPT detector by ZeroGPT

This kid’s teacher is no out-of-touch dumbbell. The first thing the teacher does is copy and paste the book report into ZeroGPT.

Here’s what ZeroGPT spits out:

Your text is AI/GPT Generated

95.4%
AI GPT*

[ZeroGPT then reproduces the full book report with the suspect passages highlighted.]

 Highlighted text is suspected to be most likely generated by AI*
2271 Characters
391 Words

The teenage slacker has been caught using AI to write the book report. The teacher sends an email to the principal and all the other English teachers, then another to the slacker’s parents announcing that the student is going to flunk the course.

Can you understand why PG loves this stuff?

It’s way fresher than the Foreign Intelligence Service of the Russian Federation (Служба внешней разведки Российской Федерации) vs the CIA.

AI vs. AI Detector Facing Off

Here’s the link to ChatGPT detector by ZeroGPT again so you can try it out yourself.

AI-wielding tech firms are giving a new shape to modern warfare

From The Economist:

Much of the Western military hardware used in Ukraine sounds familiar to any student of 20th-century warfare: surface-to-air missiles, anti-tank weapons, rocket launchers and howitzers. But Ukraine’s use of Western information technology, including artificial intelligence (AI) and autonomous surveillance systems, has also had a powerful, if less visible, impact on Russian forces. Commercial vendors supply Ukrainian troops with satellites, sensors, unmanned drones and software. The products provide reams of battlefield data which are condensed into apps to help soldiers on the ground target the enemy. One American defence official calls them, appreciatively, “Uber for artillery”.

Behind this new form of warfare are some of the most unconventional minds in American tech. Everyone knows about Elon Musk, whose rocket company SpaceX put Starlink satellites at the service of Ukraine (though he has now restricted access from the battlefield). Your columnist recently met two other iconoclastic entrepreneurs. One is Palmer Luckey, a 30-year-old who in 2017 co-founded Anduril, a maker of surveillance towers, drones, unmanned submarines and an AI-driven system that supports them, called Lattice. With his trademark flip-flops, Hawaiian shirts and goatee, he is an atypical defence contractor (Tony Stark, Marvel’s gadget-obsessed “Iron Man”, springs to mind). Yet the startup is already shaking up the traditional model of military procurement in America. In its short life, it has won contracts in America and Australia. It provides autonomous systems to Ukraine. When it last raised money in December, it was valued at $8.5bn.

The other is Alex Karp, an eccentric doctor of philosophy with an Einstein-like mop of hair. (Mr Karp used to sit on the board of The Economist’s parent company.) Palantir, his Denver-based software firm, builds digital infrastructure to help clients manage lots of data, be it on security threats, health-care systems or factories’ productivity. Like SpaceX, it has blazed the trail for civilian-military ventures since he co-founded it two decades ago. He makes bold claims. Palantir, he says, has changed the way Ukrainian troops target the enemy, and even the nature of counter-terrorism. He credits its software with saving millions of lives during the covid-19 pandemic. It may not all be gospel truth (the description of British journalists he delivers while staring at Schumpeter—“bad teeth, hard questions”—is only half true). Yet there is little doubt Palantir is supporting Ukraine both on the ground and as part of NATO’s intelligence network. On February 13th, when it reported its first-ever quarterly profit and Mr Karp hinted that his firm might be an acquisition target, its market value rose to $21bn.

Both men are cut from similar cloth. They are Silicon Valley renegades. They criticise big tech for abandoning its historic link with America’s defence establishment. They lament the fast pace of civilian-military fusion in China, which they see as a potential threat to the West. To a greater or lesser degree, they are linked to Peter Thiel, a right-wing venture capitalist. Mr Thiel chairs Palantir and his Founders Fund was an early backer of Anduril (both names echo his love of J.R.R. Tolkien). To some that makes them creepy. Still, using different business models, both highlight how sclerotic the traditional system of “prime” defence contracting has become. They offer intriguing alternatives.

Like a prime contractor, Anduril only sells to military customers. But unlike defence giants such as Lockheed Martin and Northrop Grumman, it does so while taking all the research-and-development (r&d) risk on its own shoulders. Mr Luckey is a born innovator. As a teenager, he invented the Oculus virtual-reality headset that he later sold to Facebook for $3bn. Walk with him through the arsenal of airborne and subsea devices on display at Anduril’s headquarters in Southern California and his wonkishness as he explains the gadgetry is almost overwhelming.

His business acumen is no less sharp. He and his executives have no time for the Pentagon’s traditional “cost-plus” procurement system. Though it may be necessary for big projects like fighter planes and aircraft-carriers, they say, in general it distorts incentives, creating a risk-averse, expensive and slow-moving defence juggernaut. Rather than waiting for government contracts, Anduril creates what it thinks defence departments need, and uses iterative manufacturing and a lean supply chain to make products quickly and relatively cheaply.

. . . .

[Anduril’s] success rate is high. In 2020 it won a big contract to provide surveillance towers on America’s border with Mexico. Last year it secured $1bn from the DoD to provide autonomous counter-drone systems. It is building underwater vehicles the size of buses to patrol waters off Australia. Though there is an element of the “America first” crusader about Mr Luckey, he leaves no doubt that he intends Anduril to be a big, profitable business.

Link to the rest at The Economist

PG says it’s only a matter of time before AI-controlled robotic fighting machines appear on a (hopefully distant) battlefield.

The thing about Bing

From Wired:

What a difference seven days makes in the world of generative AI.

Last week Satya Nadella, Microsoft’s CEO, was gleefully telling the world that the new AI-infused Bing search engine would “make Google dance” by challenging its long-standing dominance in web search.

The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a significant leap in computers’ ability to handle language. Thanks to advances in machine learning, it essentially figured out for itself how to answer all kinds of questions by gobbling up trillions of lines of text, much of it scraped from the web.

Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s biggest search engine, said it was working on similar technology.

But Nadella might want to watch where his company’s fancy footwork is taking it.

In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex and comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that collated information from numerous reviews, and offered advice on whether an item of furniture would fit into a minivan by comparing dimensions posted online.

WIRED had some time during the launch to put Bing to the test, and while it seemed skilled at answering many types of questions, it was decidedly glitchy and even unsure of its own name. And as one keen-eyed pundit noticed, some of the results that Microsoft showed off were less impressive than they first seemed. Bing appeared to make up some information on the travel itinerary it generated, and it left out some details that no person would be likely to omit. The search engine also mixed up Gap’s financial results by mistaking gross margin for unadjusted gross margin—a serious error for anyone relying on the bot to perform what might seem the simple task of summarizing the numbers.

More problems have surfaced this week, as the new Bing has been made available to more beta testers. These include the chatbot arguing with a user about what year it is and experiencing an existential crisis when pushed to prove its own sentience. Google’s market cap dropped by a staggering $100 billion after someone noticed errors in answers generated by Bard in the company’s demo video.

Why are these tech titans making such blunders? It has to do with the weird way that ChatGPT and similar AI models really work—and the extraordinary hype of the current moment.

What’s confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it thinks should follow your question based on statistical representations of characters, words, and paragraphs. The startup behind the chatbot, OpenAI, honed that core mechanism to provide more satisfying answers by having humans provide positive feedback whenever the model generates answers that seem correct.
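
A toy illustration of that core mechanism (the miniature “model” below is entirely hypothetical): the system assigns probabilities to candidate next words and samples one, which is why output can sound fluent while still drifting into plausible-sounding errors.

```python
# Toy next-word sampler: a stand-in for the statistical machinery described
# above. Real models score tens of thousands of tokens at each step.
import random

# Hypothetical probabilities for the word following "The soup is"
next_word_probs = {"hot": 0.5, "delicious": 0.3, "ready": 0.15, "purple": 0.05}

def sample_next(probs):
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

for _ in range(3):
    print("The soup is", sample_next(next_word_probs))
# Most draws are sensible, but the unlikely "purple" can always appear:
# a miniature version of a confident-sounding wrong answer.
```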

ChatGPT can be impressive and entertaining, because that process can produce the illusion of understanding, which can work well for some use cases. But the same process will “hallucinate” untrue information, an issue that may be one of the most important challenges in tech right now.
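To make those “highly educated guesses” concrete, here is a toy sketch in Python. It is not OpenAI’s code, and the probability table is invented for illustration; it only shows the core mechanism described above: given the text so far, the model assigns a probability to every candidate next token and samples one.

    # A toy illustration of next-token prediction. This is not how ChatGPT
    # is actually built; it just shows the core idea: weight every candidate
    # continuation by the model's probability for it and sample one at random.
    import random

    # Hypothetical probabilities a model might assign after "The cat sat on the".
    next_token_probs = {
        "mat": 0.55,
        "floor": 0.20,
        "roof": 0.15,
        "moon": 0.10,  # implausible continuations keep a sliver of probability
    }

    def sample_next_token(probs):
        """Pick a token at random, weighted by the model's probabilities."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print("The cat sat on the", sample_next_token(next_token_probs))

Nothing in that loop checks whether the sampled continuation is true, only whether it is statistically plausible, which is exactly how “hallucinated” answers arise.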

Link to the rest at Wired

Arguing with AI: My first dispute with Microsoft’s brilliant and boneheaded Bing search engine

From GeekWire:

For the past couple days, I’ve been trying out Microsoft’s new AI-powered Bing search engine, a chatbot that uses an advanced version of ChatGPT maker OpenAI’s large language model to deliver search results in the form of conversations.

Whenever I feel the natural urge to type something into Google, I try asking the new Bing a question instead. This has proven extremely useful in some cases.

  • I’m finding some answers much faster. No longer am I searching for a website that might answer a question, then scrolling the site for the answer. A question about the technical details of a Peloton bike, for example, went from 10 minutes on Google and Reddit to 30 seconds with the Bing chatbot.
  • In other situations, Bing is becoming a useful companion. Queries in Bing’s “copilot for the web” sidebar provide quick summaries of specific pages, informed by the broader web. This gave me a quick summary of Expedia Group’s earnings, for example, and jogged my memory about its rivals.

But things went sideways when Bing questioned my accuracy as a reporter.

It started when I decided to check in on a story on my follow-up list: Seattle-based home services tech company Porch Group’s unusual promise, as part of an October 2021 acquisition, that Porch’s stock price would double by the end of 2024. Porch pledged to make up the difference to the sellers if the stock doesn’t reach the target.

Here’s how the exchange began. My questions are in blue in the screenshots below.

At first glance, this is truly impressive. Note that I didn’t mention Floify in my question. (I didn’t remember the name of the company offhand.)

My query was also very imprecise. The phrasing, “what happened to Porch Group’s promise,” could be interpreted in a variety of ways.

Nonetheless, Bing figured out what I wanted, and did the research on the fly, citing and linking to its sources. As a bonus for me, its primary source happened to be my original story on the subject. My journalistic ego aside, this is next-level NLP, and an example of how AI completely changes the quest for information.

I could envision putting this same question to a human and getting a blank stare in response. But wait a second, I thought. October 2023. Is that right?

I clicked through and checked my story, which confirmed my recollection that the deadline for the stock doubling was 2024. I started to get nervous. Was my story wrong? But when I checked the press release, it also said 2024.

So I asked Bing what was going on.

Now just hold on a second there, Bing. A discrepancy?

I dug further. Citations 2 and 3 in that response were different URLs for the same press release, both of which said the end of 2024, not October 2023. I also double-checked the archived version of the release to be sure that the company hadn’t engaged in any revisionist shenanigans.

Everything was consistent: the end of 2024.

So I continued …

OK, right answer. So I asked the natural follow-up question …

Link to the rest at GeekWire

PG notes that the conversation between human and AI continues at some length. The conversation includes inappropriate smiley faces.

U.S. Copyright Office tells Judge that AI Artwork isn’t Protectable

From PetaPixel:

The Copyright Office is attempting to get a lawsuit brought against it by Stephen Thaler dismissed. Thaler wants his Creativity Machine system, known as DABUS, to be named as the copyright holder for the artwork A Recent Entrance to Paradise.

Thaler’s application to the Copyright Office was rejected, so he has brought his case to a federal judge, demanding that the Office overturn its decision.

The Copyright Office says the case “turns on a single question: Did the Office act reasonably and consistently with the law when it refused to extend copyright protection to a visual work the plaintiff represented was created without any human involvement? The answer is yes.”

The Copyright Office says it applied the correct legal criteria to Thaler’s case and rejects his arguments.

“The Office confirmed that copyright protection does not extend to non-human authors,” says the defendants.

This decision was made “based on the language of the Copyright Act, Supreme Court precedent, and federal court decisions refusing to extend copyright protection to non-human authorship.”

The Copyright Office says that its own guidelines specify human authorship as a requirement for protection and that “the Office will refuse to register a claim if it determines that a human being did not create the work.”

The Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

. . . .

The Copyright Office also accuses Thaler of making changes to his original claim that he had no involvement with the creation of the artwork.

The Office says Thaler changed his story to claim that he “‘provided instructions and directed his AI to create the work,’ that ‘the AI is entirely controlled by Dr. Thaler,’ or that ‘the AI only operates at Dr. Thaler’s direction.’”

Link to the rest at PetaPixel

Sounds about right to PG.

Does AI Art Affect Indie Authors?

From The Independent Publishing Magazine:

If you’ve spent any time on social media over the last few months, you’ve probably seen plenty of people sharing pictures of themselves that were “created” using Artificial Intelligence (AI). There are several apps currently available that allow people to share a few photos and instantly have them turned into pieces of art. You’ll find everything from cartoon-style drawings to interpretations that look like they could be placed in a museum.

While there’s no denying that AI art is fascinating and often incredibly beautiful, it’s been met with some backlash.

Not only can AI art reinvent existing photos, but it can also turn words into art — something that many authors are taking advantage of when it comes to creating book covers. AI text-to-image generators can take something as simple as the word “lightbulb” and create a one-of-a-kind piece of art for you to use with your next release.

But, is that ethical? Is it taking away from the human expression so many artists value? Using AI is easy, efficient, and often more cost-effective than hiring an artist to create a cover. It’s important to consider how using it might be affecting others in the creative industry, and how AI in general might impact indie authors and artists.

How Is AI Art Created?

AI art doesn’t just randomly manifest itself. You can’t create something out of “nothing”. To deploy properly in any industry, Artificial Intelligence has a few requirements, including:

  • High bandwidth;
  • Computing capacity;
  • Data storage;
  • Security.

In the art world, AI art is created by collecting data from existing artists (as well as artists from past generations) and their work. For example, some of the current apps generating AI artwork use generative adversarial networks (GANs). These algorithms have two sides: one that generates random images, and one that learns how to judge those images and align them with whatever prompt is entered.
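For the technically curious, here is a minimal sketch of those two sides in Python, using the PyTorch library. It is a toy, not any art app’s actual code: the “artwork” is just numbers drawn from a bell curve rather than images, but the adversarial structure is the same, with a generator producing fakes and a discriminator learning to judge them.

    # A toy GAN: a generator learns to produce samples that a discriminator
    # cannot tell apart from the "real" data (here, numbers near 4.0).
    import torch
    import torch.nn as nn

    def real_data(n):
        return torch.randn(n, 1) * 1.25 + 4.0  # stand-in for real artwork

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        # Train the judge: real samples should score 1, generated samples 0.
        real = real_data(64)
        fake = G(torch.randn(64, 8)).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Train the generator: produce fakes the judge scores as real.
        fake = G(torch.randn(64, 8))
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # After training, generated samples should cluster near the real mean of 4.0.
    print(G(torch.randn(1000, 8)).mean().item())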

As an author, if you want a book cover featuring a young woman sitting on a chair in a particular style, you could simply type in something like, “a young woman on a chair Victorian era” into an AI art generator. The generator would look through thousands of images to “learn” exactly what you’re looking for. It would take data from other human-made works of art to create an original piece in the style of your choosing.

Why Are Authors Using It?

As an indie author, you’ve probably become used to doing many things for yourself. From editing to advertising, you might not have the resources or finances to hire others to do that kind of work for you. But, creating a cover is a different story. If you’re blessed enough to be a writer and an artist, you might be able to create a cover on your own, but that’s the exception, not the rule.

So, you’re often left with the option of hiring a professional artist for your cover design. Of course, that can cost money and stretch your budget quite thin, especially when you recognize the importance of an eye-catching cover. While it’s great to support independent artists, it’s not the easiest financial choice for authors who are just getting started. Plus, if you don’t consider yourself an artist and you’re also new to marketing in the book industry, you could end up hiring someone and fall victim to some common book cover mistakes, like:

  • Too many visual elements;
  • A cover that doesn’t accurately reflect your genre;
  • A low-quality or stolen image;
  • A title that’s too small;
  • Poor choice of font;
  • An uninspiring design.

Because AI is easy to use and can generate multiple images from a single input, you can use it to save money, and you can show several possible images to friends, family, and followers on social media to get an idea of which one will work best for your book.

The Ethical Dilemma

While there are some benefits to using AI art as an indie author, it’s essential to consider how ethical it is. As someone in the creative industry, you can undoubtedly empathize with visual artists trying to make a living through their work. AI takes away from those artists and even goes so far as to use human art to create new images, which some consider a type of theft.

Link to the rest at The Independent Publishing Magazine

Friend or Foe: ChatGPT Has Pushed Language AI into the Spotlight

From Writer Unboxed:

You’ve probably seen the buzz about ChatGPT in the news, on social media, and in authors’ newsletters. Before you’ve even tried it, you might have seen announcements of other Language AIs. You might wonder whether they are to be feared, embraced, or safely ignored. Are they a ploy to steal the minutes you regained when you abandoned Wordle, or a game-changer on the scale that Google was way back? Or are they up-and-coming authors? I hope to provide answers.

What are they?

Language AIs facilitate humans’ ability to productively use vast amounts of text. They do this by “reading” the text their developers chose, chopping the text up and transforming it to numerical measures, and using the measures to search for probable patterns. Those patterns include semantic and contextual relationships between words, grammar structure, and more. When users pose a question, the Language AI uses statistics to predict which specific patterns will satisfy a request.

Large Language AIs are true game-changers. Since ChatGPT was released, Microsoft released a version of Bing that uses ChatGPT and Google announced its version, Bard, is coming. They are large because of the billions of dials that are turning as they read massive amounts of text. ChatGPT’s dials were set after it finished reading in 2021, though they are likely tweaked in real time when users tag an answer as inappropriate, dangerous, or wrong. Bing is reading in real time so its dials continue to spin. Those dials control the AIs’ writing.
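Here is a drastically simplified sketch, in Python, of that reading-and-dial-setting process. A real Language AI tunes billions of learned parameters; this stand-in merely counts which word follows which in a tiny invented corpus, but it shows how reading text sets the numbers that later drive generation.

    # A bigram "language model": reading the corpus sets the dials (here,
    # simple transition counts), and generation samples from those dials.
    import random
    from collections import defaultdict

    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    # "Reading": count how often each word follows each other word.
    dials = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        dials[prev][nxt] += 1

    def generate(word, length=8):
        out = [word]
        for _ in range(length):
            followers = dials[out[-1]]
            if not followers:
                break
            words = list(followers)
            counts = [followers[w] for w in words]
            out.append(random.choices(words, weights=counts, k=1)[0])
        return " ".join(out)

    print(generate("the"))

Everything the toy can say is recombined from what it has read, which is also true, at vastly greater scale, of the stories ChatGPT tells.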

Can they create a story?

When I asked ChatGPT to tell me a story about a character who was the same age and gender, and experienced the same event, as one of my published novel’s characters, it returned a story of about two hundred words whose character’s emotional arc matched my own. Though I knew the arc was not original when I wrote it, I was rattled by ChatGPT having nailed it.

I remembered a conversation with my novel’s developmental editor about literary versus commercial endings and the subsequent revision to the novel’s ending. I wondered if ChatGPT would revise the character’s arc if I asked for a literary story. It didn’t. It defaulted again to the same happy-ish ending though its literary version added some telling where it previously relied on showing. For example, the actions and nouns of the story remained the same but it added words to describe the character’s feelings, such as “hopeful” and “resilient.”

Finally, I asked it for a story about a bestselling author who was found after a car accident by a retired nurse. ChatGPT gave no indication it could ever create a story such as Paul Sheldon’s in Stephen King’s Misery.

Later, with tropes and novelty in mind, I asked ChatGPT for stories of the characters in my WIP. No arcs were nailed so I asked about its choices. Though the back and forth was no substitute for human conversation, it spurred my thinking at this early stage of my WIP. For example, it added a water dowser where I had earlier dismissed the idea.

I then asked it to outline a 70,000 word novel using my characters. I was unimpressed by the story suggested by the outline but the act of poking holes in it helped advance my own messy notes. I asked it to revise the outline to reflect climate change weather patterns, for a different time period, to give the characters pets, and to make the cat the dowser. Till now, I’ve suspected my brain had been steeped too long in fact-finding to write magical realism, but my exercise with ChatGPT tripped my brain right into magical thinking.

ChatGPT read a lot, learned how we use words, and is able to combine those words in new ways to satisfy a user’s request for stories. Its stories are our stories: the stories that we’ve already told.

Can they write?

ChatGPT’s command of grammar is truly amazing. But when asked to tell stories, it too often begins with “Once upon a time,” and writes in a flat style.

I love the skip my brain makes when I encounter a well-placed alliteration. ChatGPT can define an alliteration and list when a fiction writer should use one. I asked it to write a story using alliterations. First, in a five-paragraph story, it used them only in its first paragraph to introduce the main character – tiny, timid, turtle Timmy. When I revised my request to specify alliterations placed within the story, a story of Sally the squirrel reminded me of the story about Sally at the seashore I read long ago to correct a lisp.

I asked ChatGPT how it detected metaphors and after it described metaphors as nouns with linking verbs, I asked for examples. Out of ten, a few were questionable and one was wrongly included. ChatGPT accepted a correction and offered a replacement.

Large Language AIs do not generally know whether they are reading fiction or nonfiction. When pushed, ChatGPT reported it may infer fiction by the inclusion of imaginative or made-up events. Though neither it nor its rivals should be mistaken for a source of truth, or a source of bestselling novels, they allow us to tap a tremendous amount of text, and we can use that to help us in countless creative ways.

Ready to try it?

Neither friend nor foe, Language AIs facilitate paths to using our vast amounts of text. For certain, they will aid our research and spark ideas as they make predictions, for better or worse, to fill in any gaps and answer our questions. Their technology will become commonplace in focused ways, perhaps as a single-purpose app that will read our manuscripts and draft our synopses and query package, or that will create efficiencies and reduce the cost of marketing novels published non-traditionally.

Link to the rest at Writer Unboxed

Google Stock Tumbles 8% After Its Bard AI Ad Shows Inaccurate Answer

From Investor’s Business Daily:

Alphabet (GOOGL) tumbled Wednesday after Google’s parent company published a new ad for its Bard artificial intelligence chatbot that offered an incorrect answer. Google stock fell more than 8% after the ad fluke.

Google posted a video on Twitter demonstrating the “experimental conversational AI service powered by LaMDA,” the company wrote. LaMDA is Google’s Language Model for Dialogue Applications, which applies machine learning to chatbots and allows them to engage in “free-flowing” conversations, the company says.

In the advertisement, Bard is prompted with the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year old about?”

Bard quickly rattles off two correct answers. But its final response was inaccurate. Bard wrote that the telescope took the very first pictures of a planet outside our solar system. In fact, the first pictures of these “exoplanets” were taken by the European Southern Observatory’s Very Large Telescope, according to NASA records.

Link to the rest at Investor’s Business Daily

It seems that this is not a good day for Google’s image at all.

Law Review v. AI

PG had too much time on his hands, so he decided to use ChatGPT to write an essay about the same topic as a law review article he came upon.

For a bit of background, most law schools have law reviews. A law review is a periodical that includes articles that often discuss recent appellate court decisions on the state or federal level. The author of the law review article analyzes the decision to determine whether it may signal a new development in federal or state law. In some cases, the article may point out that a decision conflicts with other decisions on the same or similar topic.

As you might have already gathered, most law review articles linger in the darkness, but, on occasion, a law review article may be a forerunner for a new analysis of the law and cases decided under it.

A law school’s law review typically has student editors and staff. One or more faculty members provide overall supervision, mostly looking for wrong-headed articles that could embarrass the institution.

Being an editor or member of the law review staff is a significant plus factor in being hired by a quality law firm or other employer. Typically, it is accompanied by sterling grades.

Below is the abstract of a law review article from The Yale Law Journal. Yale is a very prestigious US law school.

The title of the law review article is The Perils and Promise of Public Nuisance. In this case, the article is written by a professor employed at the University of Virginia School of Law, another law school with an excellent reputation.

[NOTE: PG apologizes for the varying font sizes. His copy of WordPress lost its mind for a while during the creation of this post and PG can’t figure out an easy way to fix it.]

ABSTRACT. Public nuisance has lived many lives. A centuries-old doctrine defined as an unreasonable interference with a right common to the public, it is currently the backbone of thousands of opioid and climate-change suits across the United States. It was a major force behind the landmark 1998 tobacco settlements and has figured in litigation over issues as diverse as gun sales, lead contamination, water pollution, Confederate monuments, and COVID-19 safety standards. Although this common-law oddity has shaped the face of modern tort law, it is unfamiliar to the public and typically ignored, even in law schools. When discussed, it often provokes anxiety: Is it a tort at all? Whatever it is, will it swallow tort law? The regulatory state? Or separation of powers as we know it?

This Article utilizes the current opioid litigation to explore the three most common sets of objections to public nuisance: traditionalist, formalist, and institutionalist. Public nuisance can seem unusual, even outlandish. At worst, it is a potentially capacious mechanism that allows executive-branch actors to employ the judicial process to address legislative and regulatory problems. Nevertheless, its perils are easily overstated and its promise often overlooked. Historically, public nuisance has long addressed problems such as harmful products. Doctrinally, it accords better with tort law than is commonly recognized. And institutionally, it functions as a response to nonideal conditions—specifically, where regulatory mechanisms underperform.

Drawing on long-standing tort principles of duties generated by risk creation, I propose a conception of public nuisance that highlights its coherence with familiar aspects of tort law and its consistency across past and present. Public nuisance is an object lesson in the common law’s balance of stability and evolution, across time and within varying regulatory contexts.

End of legal jargon and on to artificial intelligence.

PG took the title of the Law Review article, The Perils and Promise of Public Nuisance, and asked ChatGPT to write an article on the same topic.

Following is PG’s prompt for the AI and what the AI produced:

Prompt: Write a 500-word essay about The Perils and Promise of Public Nuisance

Public nuisance is a legal concept that refers to actions or conditions that interfere with the enjoyment of life or property of a community as a whole. It can take many forms, including noise, air pollution, littering, or blocking public access to roads and sidewalks. While public nuisance laws can provide a way to address these problems, they also present some perils and promises.

One of the perils of public nuisance is the potential for abuse by the government. For example, in some cases, public nuisance laws have been used to justify crackdowns on political demonstrations or to target marginalized communities. In these situations, the law may be used as a tool of oppression, rather than a means of protecting the public welfare.

Another peril is the difficulty of defining what constitutes a public nuisance. The line between what is acceptable and what is not can be blurry, and what may be considered a nuisance by one person may not be seen as such by another. This can lead to confusion and disputes, and it may be difficult for courts to determine what actions or conditions are truly harmful to the public.

Despite these perils, public nuisance laws also offer many promises. One of the main benefits is that they provide a way for communities to address problems that affect the quality of life for all residents. For example, if a factory is emitting toxic pollutants into the air, public nuisance laws may provide a way for neighbors to seek relief and hold the company accountable.

Additionally, public nuisance laws can serve as a deterrent to potential wrongdoers. If individuals and organizations know that they may be held responsible for the harm they cause to the community, they may be more likely to take steps to prevent or mitigate that harm. This can lead to a more livable community, and it can encourage responsible behavior by all members of society.

Finally, public nuisance laws can help to level the playing field between ordinary citizens and those with greater economic or political power. By providing a means for communities to address problems that affect them as a whole, public nuisance laws empower people to take action even in the face of opposition from those who may be better positioned to resist change.

In conclusion, public nuisance laws offer both perils and promises. While they can be misused and may be difficult to define, they also provide a way for communities to address problems and protect the public welfare. To ensure that public nuisance laws are used effectively, it is important for governments to strike a balance between protecting public health and safety and safeguarding individual rights and freedoms. Ultimately, this will help to create a more livable and equitable society for all.

PG thought the AI did quite a nice job of summarizing the topic in a very readable manner.

Below are the first few paragraphs of the law review article with footnotes omitted.

(PG notes that the intended audience for the law review article is much smaller and more specialized than the one the algorithm powering ChatGPT is built to address. However, there’s nothing stopping anyone from building an AI that creates documents written in the same manner that law review articles are.)

From the Yale Law Journal article:

Why is making obscene telephone calls like laying manure in the street? Answer: in the same way as importing Irish cattle is like building a thatched house in the borough of Blandford Forum; and as digging up the wall of a church is like helping a homicidal maniac to escape from Broadmoor; and as operating a joint-stock company without a royal charter is like being a common [s]cold; and as keeping a tiger in a pen adjoining the highway is like depositing a mutilated corpse on a doorstep; and as selling unsound meat is like embezzling public funds; and as garaging a lorry in the street is like an inn-keeper refusing to feed a traveller; and as keeping treasure-trove is like subdividing houses which so “become hurtful to the place by overpestering it with poor.” All are, or at some time have been said to be, a common (alias public) nuisance.

INTRODUCTION

Public nuisance has lived many lives. A centuries-old doctrine generally defined as “an unreasonable interference with a right common to the general public,” it has recently served as the backbone for more than three thousand opioid lawsuits across the country, as well as hundreds more seeking to hold producers of greenhouse gases accountable for climate change. Twenty-five years ago, it provided the architecture for the lawsuits that impelled the tobacco industry to historic settlements of $246 billion with all fifty states. It has also spurred hundreds of mostly unsuccessful actions across the nation involving, among other things, handguns, lead contamination, water pollution, and predatory lending. Decades earlier, at the turn of the last century, officials used it to abate sewage discharge into rivers, to “repress the nuisance of bawdyhouses,” and to shut down a high-profile labor strike.

All of this and more stems from a single cause of action developed in medieval England to allow the Crown to remove impediments from public roads and waterways. In the past decades, this common-law oddity has generated thousands of lawsuits in which state officials have sued private companies for the negative impact of their products or activities on public health and welfare. Through these actions, public nuisance has influenced American tort litigation and exerted an undeniable regulatory impact.

The opioid lawsuits highlight the two ways in which public nuisance is central to modern mass-tort litigation. First, the opioid lawsuits invariably contain public-nuisance claims. The plaintiff state, local, and tribal governments claim that the opioid products made or distributed by the defendants are a public nuisance under relevant state law—that is, that they constitute an unreasonable interference with a right held by the general public, in this case by jeopardizing public health and welfare. The plaintiffs make other claims too, such as state-law claims for fraud, deceptive marketing, corrupt practices, and unjust enrichment. Nevertheless, public-nuisance claims are a central feature of the litigation and a key to its momentum.

Second, no matter what the specific claims, public nuisance provides the template for the structure of opioid litigation and other suits like it. One striking feature of public nuisance is that it permits state officials to sue parens patriae—literally as “parent of the nation,” on behalf of the people of a jurisdiction—for an infringement on public rights by a private actor. Other types of parens patriae claims exist, but public nuisance was an early example (and an inspiration to other types of suits), which provides public actors with a ready and familiar template. In modern instances, such as tobacco, opioid, and climate-change litigation, the litigation adopts the architecture of a public-nuisance suit, with an official (such as a state’s attorney general or a locality’s district attorney) suing on behalf of the public. That these suits involve a variety of other claims should not lead us to assume that they would exist in the same manner absent the public-nuisance template. To the extent that such suits are now common, the structure of public nuisance has made a lasting imprint on American tort law.

Although its substance and structure are embedded in modern American tort law, public nuisance occupies an uncertain, somewhat liminal position. It is virtually unknown to the general public, little discussed outside of litigation circles, and often ignored even in torts class. When it is discussed, it raises fraught questions. Is it even a tort? If not, what is it? Does its very existence threaten tort law? The regulatory state? Separation of powers as we know it? All in all, public nuisance exerts potentially powerful, but highly variable, real-world force, while provoking equally variable reactions from courts and commentators.

End of law review excerpt.

Feel free to compare/contrast/comment to your heart’s desire.

Major leak reveals revolutionary new version of Microsoft Bing powered by ChatGPT-4 AI

From Windows Central:

It looks like Microsoft is gearing up to launch a major new version of Bing that integrates OpenAI’s ChatGPT-4 technology in a way that will revolutionize searching the web. Multiple users have reported seemingly stumbling across a preview version of the new Bing earlier today before Microsoft quickly shut it down.

Luckily, a user by the name of Owen Yin was able to grab a few screenshots and try out a handful of features before his access was revoked, giving us a good look at how the future of Bing and searching the web will function with AI woven throughout. To begin, the new Bing advertises itself as more than just a search box. It describes itself as a “research assistant, personal planner, and creative partner at your side.”

The first big change between a normal web search engine and the new AI-powered Bing is that the search bar is now a chat box. It’s much larger in size, and encourages natural language rather than keyword-driven search terms. You’ll be able to ask Bing to look up specific topics or ideas, and even ask for its opinion, with its responses returned to you in a chat bubble.

. . . .

The new Bing is also able to adjust its search queries with you in mind. You can tell it, with natural language, your plans or requirements, such as dietary needs or schedule conflicts, and it’ll do its best to bring you relevant information for your search request that factors in those requirements. It’s mind blowing.

Yin does note that the new Bing does allow you to search the web in the traditional way if you prefer using keywords in a classic search box, along with a page of search results.

. . . .

It’s fair to say that this stuff is wild, and is going to change how we search the web in major ways. Given that this was made available briefly earlier before being pulled, we’d wager that Microsoft is almost ready to announce this new version of Bing. While no event has been announced yet, Microsoft has already confirmed that it plans to weave AI throughout all its products, and it looks like Bing is first in line.

Link to the rest at Windows Central and thanks to F. for the tip.

PG says this feature might move him from Google if he can teach his fingers to not go on automatic pilot when he has a question.

A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing

From MIT Technology Review:

OpenAI has built the best Minecraft-playing bot yet by making it watch 70,000 hours of video of people playing the popular computer game. It showcases a powerful new technique that could be used to train machines to carry out a wide range of tasks by binging on sites like YouTube, a vast and untapped source of training data.

The Minecraft AI learned to perform complicated sequences of keyboard and mouse clicks to complete tasks in the game, such as chopping down trees and crafting tools. It’s the first bot that can craft so-called diamond tools, a task that typically takes good human players 20 minutes of high-speed clicking—or around 24,000 actions.

The result is a breakthrough for a technique known as imitation learning, in which neural networks are trained how to perform tasks by watching humans do them. Imitation learning can be used to train AI to control robot arms, drive cars or navigate webpages.  

There is a vast amount of video online showing people doing different tasks. By tapping into this resource, the researchers hope to do for imitation learning what GPT-3 did for large language models. “In the last few years we’ve seen the rise of this GPT-3 paradigm where we see amazing capabilities come from big models trained on enormous swathes of the internet,” says Bowen Baker at OpenAI, one of the team behind the new Minecraft bot. “A large part of that is because we’re modeling what humans do when they go online.”

The problem with existing approaches to imitation learning is that video demonstrations need to be labeled at each step: doing this action makes this happen, doing that action makes that happen, and so on. Annotating by hand in this way is a lot of work, and so such datasets tend to be small. Baker and his colleagues wanted to find a way to turn the millions of videos that are available online into a new dataset.

The team’s approach, called Video Pre-Training (VPT), gets around the bottleneck in imitation learning by training another neural network to label videos automatically. They first hired crowdworkers to play Minecraft, and recorded their keyboard and mouse clicks alongside the video from their screens. This gave the researchers 2000 hours of annotated Minecraft play, which they used to train a model to match actions to onscreen outcome. Clicking a mouse button in a certain situation makes the character swing its axe, for example.  

The next step was to use this model to generate action labels for 70,000 hours of unlabelled video taken from the internet and then train the Minecraft bot on this larger dataset.
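The pipeline in the last three paragraphs can be sketched schematically in Python. This is a toy with invented stand-ins (random vectors in place of video frames, a nearest-centroid classifier in place of a neural network), not OpenAI’s VPT code, but the three stages mirror the description above: train a labeling model on the small annotated set, pseudo-label the large unlabeled set, then train the agent on the result.

    # Schematic sketch of Video Pre-Training (VPT) with toy stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    N_ACTIONS = 4

    def train_inverse_dynamics_model(frames, actions):
        """Learn to predict which action produced each frame.
        Toy stand-in: the mean feature vector per action (nearest centroid)."""
        return np.stack([frames[actions == a].mean(axis=0)
                         for a in range(N_ACTIONS)])

    def pseudo_label(centroids, frames):
        """Label unlabeled video by picking the nearest action centroid."""
        dists = np.linalg.norm(frames[:, None, :] - centroids[None, :, :], axis=-1)
        return dists.argmin(axis=1)

    # Stage 1: a small set of contractor play with recorded keyboard/mouse actions.
    small_frames = rng.normal(size=(200, 8))
    small_actions = rng.integers(0, N_ACTIONS, size=200)
    idm = train_inverse_dynamics_model(small_frames, small_actions)

    # Stage 2: pseudo-label a much larger set of unlabeled internet video.
    big_frames = rng.normal(size=(7000, 8))
    pseudo_actions = pseudo_label(idm, big_frames)

    # Stage 3: behavior cloning would now train the Minecraft agent on the
    # (frame, pseudo-action) pairs; omitted in this sketch.
    print(pseudo_actions[:10])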

“Video is a training resource with a lot of potential,” says Peter Stone, executive director of Sony AI America, who has previously worked on imitation learning. 

Imitation learning is an alternative to reinforcement learning, in which a neural network learns to perform a task from scratch via trial and error. This is the technique behind many of the biggest AI breakthroughs in the last few years. It has been used to train models that can beat humans at games, control a fusion reactor, and discover a faster way to do fundamental math.

The problem is that reinforcement learning works best for tasks that have a clear goal, where random actions can lead to accidental success. Reinforcement learning algorithms reward those accidental successes to make them more likely to happen again.
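A toy version of that reward loop in Python, with an invented task: three possible actions, only one of which ever succeeds. The learner acts mostly at random, and each accidental success nudges up the weight on the action that produced it. Real systems use far more sophisticated algorithms, but the feedback principle is the one described above.

    # Reinforcement in miniature: reward accidental successes so they
    # become more likely to happen again.
    import random

    actions = ["chop", "dig", "craft"]
    weights = {a: 1.0 for a in actions}  # the learner's adjustable preferences

    def reward(action):
        return 1.0 if action == "craft" else 0.0  # only "craft" hits the goal

    for _ in range(500):
        action = random.choices(actions, weights=[weights[a] for a in actions])[0]
        weights[action] += reward(action)  # reinforce whatever just worked

    print(weights)  # "craft" ends up dominating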

But Minecraft is a game with no clear goal. Players are free to do what they like, wandering a computer-generated world, mining different materials and combining them to make different objects.

Minecraft’s open-endedness makes it a good environment for training AI. Baker was one of the researchers behind Hide & Seek, a project in which bots were let loose in a virtual playground where they used reinforcement learning to figure out how to cooperate and use tools to win simple games. But the bots soon outgrew their surroundings. “The agents kind of took over the universe, there was nothing else for them to do,” says Baker. “We wanted to expand it and we thought Minecraft was a great domain to work in.”

Link to the rest at MIT Technology Review

PG hopes he is not alienating too many visitors with his occasional forays into artificial intelligence. It’s a topic that he finds fascinating.

As far as relevance to TPV, PG has mentioned AI writing programs, which he expects to become more and more sophisticated over time. While PG will not predict the demise of authors who are human beings, he expects AI to continue to improve and expand its writing capabilities.

Who knows, perhaps someone will take the vast sea of written wisdom PG has produced and create an AI version of PG. Such an AI would have to possess a high tolerance for randomness, however. Much of the time, there is no recognizable logic happening in PG’s brain, so there might be insufficient scaffolding to support the development of any sort of intelligent program.

19 Best AI Writing Tools of 2022

From Renaissance Rachel:

We all write content online. Some of us only write social media posts, emails, or texts. Some of us write content for our websites, product descriptions, video content, ads, and even customer support.

AI writing software is a type of software that can generate content for you. An AI-powered writing assistant provides useful tools for writing articles, novels, blog posts, and more. Those are just some of the benefits of using AI writing tools.

AI writing is just another tool that you can add to your toolbelt.

If you’ve ever used an AI writing tool, you know they can be incredibly helpful. But you also know that they’re not going to replace actual human intelligence anytime soon.

No, AI is not going to steal your job. It’s a tool to optimize your work. Let AI technology make your life easier and more productive by including AI writing software in your content creation process. So if you’re thinking “Why should I use an AI writing tool?” you’ve come to the right place.

. . . .

1. Rytr: Best for Beginners

Rytr is a content writing platform that uses AI to write content for you. Rytr’s algorithms are trained on historical data, so they can produce unique and compelling articles with the right tone and style, while also being grammatically correct.

Rytr’s AI writing assistant will have your article ready in less than an hour, without any need for human intervention.

In its current state, Rytr can produce text for a variety of topics and niches, including sports articles, business articles, reviews, blog posts, articles on technology, etc.

Features

  • Content generation is made easy and quick with character count, word count, and tone checker.
  • Plagiarism check ensures you have the highest quality of content.
  • Grammar check for your writing to make it professional-level.
  • Discover what works best for your idea by generating content from our vast library of over 2,000 ideas.
  • Personalize your content with a professional touch using Form Generator.
  • Rytr.me login to save your work

Pricing

Free Plan

Saver Plan: $9/month; $90/year (Get 2 months free!)

Unlimited Plan: $29/month; $290/year (Get 2 months free!)

Bottom Line

Rytr is an app that helps people write faster. It’s a great tool for bloggers and content writers who need to produce a lot of articles. Rytr also allows users to search for ideas for their articles or even write them in real-time.

The weak point is that Rytr doesn’t have “recipes” like Jasper has. Jasper allows you to have more custom control over the AI output. If you’re looking for a story-writing AI, Rytr is great, but if you want more power, try Jasper.

2. Jasper: Best for Power Users

Formerly known as Jarvis, Jasper is among the leaders in AI writing software tools. Jasper acquired tools such as Headlime and Shortly AI. Both tools remain standalone products as of the writing of this article; however, both plan to integrate fully with Jasper.

Create your blogs, articles, books, scripts, and any other content. Choose a subject area and form, fill in the details, and Jasper will write the content for you. It’s not always good content, but it helps me get past my writer’s block. Now that content generation is the state of natural language generation in content marketing, Jasper.AI is an invaluable tool.

Features

  • Long-form document editor – a powerful tool that allows you to write full documents with AI-assisted outputs.
  • Plagiarism detector – write without worrying about accusations of stealing someone else’s content
  • Speed writing – hit start, and the software will create a masterpiece for your blog post or article within minutes!
  • Integration with SEO Surfer – a tool that helps you analyze keywords and optimize your content to rank in search engines
  • Automated article writing software – if you give it enough parameters, content creator AI can almost write your articles for you
  • Facebook community that offers support, job opportunities, and more
  • Multiple languages
  • Write novels, blog/articles, video scripts, and more with Jasper!
  • AI wizard can produce over one million sentences

Pricing

Jasper provides two pricing options: starter mode and boss mode. In my opinion, the main difference is that boss mode allows you to use the long-form document editor. In contrast, the starter mode provides writing frameworks for specific use cases.

Starter Mode: Starts at $29/mo for 20,000 words/mo.

Boss Mode: Starts at $59/mo for 50,000 words/mo.

SEO Surfer add-on: starts at $59/mo

While there’s no “official” free trial, you can get a 10,000-word credit using my referral link!

Bottom Line

I compared the quality of the output with Rytr, for instance, and it wasn’t any better for me. I’m paying $120/month for unlimited content generation in Jasper, but I can pay $29/month for the same thing in Rytr.

Note: Jasper no longer offers an unlimited mode, so while it’s excellent, you must limit yourself to a specific word limit per month.

HOWEVER. I keep using Jasper because of the recipes and commands that you can use. It makes for a very powerful workflow and I can do a lot with it. Jasper can do a lot of creative things, including movie script writing, so if you want an AI script writer free from customization limitations, definitely check Jasper out. With the recipes and commands, Jasper is AI writing software that gets better writing results.

I didn’t include Headlime and the Shortly AI writing apps as separate items in this article, because I researched their websites and didn’t see an indication of them continuing to enhance or build their product. There’s nothing worse than using outdated and buggy software!

Link to the rest at Renaissance Rachel

PG has been interested in AI for authors for a long time.

At first, the AI programs PG experimented with were pretty clunky. When he tried out a couple of the programs mentioned in the OP, he saw noticeable and relevant improvements. He expects to see similar increases in sophistication in the future. With 19 current contestants in the survival contest, some are almost certain to fail, but there will still be forward movement at an increasingly rapid pace.

Your Go-To Guide to Writing Effective Blog Posts

From Rytr:

Introduction to Blogging – what is a blog post, how do you create one, and what are the benefits?

A blog post is a type of article that is published on a blog, typically with the goal of providing information, entertainment, or inspiration. Blog posts are often accompanied by images, videos, and/or graphics. Blog posts can be used to share your thoughts and expertise on topics you are passionate about. They can help you build your personal brand and establish yourself as an expert in your field. These worded beauties can also be used to drive traffic to other pages on your site or to other sites that you are affiliated with.

Blogging is a popular form of online publishing. There are many blog platforms, resources, and categories to choose from. When starting your blog, you will need to determine what type of blog you want to create and what blogging platform you would like to use.

One can get into blogging as a profession or merely to express themselves. It is a great way to put down your thoughts and build a brand for yourself or someone else. Blogs can be as simple as talking about your day or a trip you took, OR as complex as musings on the existence of life.

Whichever your go-to is, we’re here to help you with the simplest of tips to get your blogging game on.

How to Write a Blog Post – Tips and Tricks to Creating a Powerful Blog Post

Blogging is one of the most powerful ways to get your voice heard. Let’s get started with some of the basics which would help you in creating a powerful blog post.

1) Define your topic: We all love defined structures & well-defined topics. You need to be super crisp about the topic you want to write about and stick to it throughout. Remember, we have to present a well-balanced meal, and not a buffet of various exotic cuisines.

2) Create an outline: Once you have zeroed in on your topic, you would want to break it down into sub-headings or outlines. Do I want to write about Global Warming? Yes, but what all aspects must I cover? You see, you would come across a lot of broad topics and it is humanly impossible to cover everything under one write-up. Hence, you must jot down a rough outline of the areas you would want to cover.

3) Give a detailed introduction: Let your readers cut through the chaos with your crisp introduction. Explain what topics you would be touching on, mention what your reader can learn from the blog and try to highlight the keywords for your ‘busy’ reader to skim through.

4) Write the conclusion: A conclusion is like a yummy dessert that gets served after a hearty meal. Remember, a bad sweet serving can often ruin the entire entree experience and we surely don’t want that. We don’t have to go overboard with lengthy conclusions- after all, nobody (mostly) prefers a prolonged goodbye. Sum up your blog, keep an open ending where you ask for feedback/opinions or just leave them wanting more- your call!

5) Proofread your blog post before publishing it: No matter if you’re a beginner or a master, you can (and should) never skip proofreading. We understand that you may have a bunch of blog posts to complete in a day, but that shouldn’t hold you back from going through your piece once (or twice) before you hit that ‘publish’ button. Trust us, stupid typos and shameful grammatical errors won’t look good amidst your otherwise perfect piece.

. . . .

Blogging For Businesses

Today, an online presence has become an (almost) necessity for all sorts of businesses. Whether you’re a multinational brand or an up-and-coming e-commerce shop, you need to set up an attractive and informative website for your business.

Once your basic website is live, undoubtedly the blogging section becomes an inseparable part of the same. From dispersing more knowledge about your product/services to SEO purposes, blogs often come out as unsung soldiers of the army of your growing business.

Step 1 – Define Your Blog’s Purpose

A blog is a great way to create content for your business and help you attract new customers. It’s also a good way to provide value to current customers, as well as keep them up-to-date with what’s happening in your industry.

Blogs can be used for many different purposes:

  • To attract new customers
  • To provide valuable information to your existing clients
  • To keep current clients updated on what’s happening in your industry
  • For general knowledge or education purposes (e.g., blogging about life, parenting)
  • For generating targeted traffic to other areas of the website (e.g., blog posts on a specific product)

. . . .

Helpful Tools for Creating Quality Content

It’s 2022 and having some virtual assistants at your disposal won’t hurt. Here are some of the most loved tools you can include in your virtual blogging gang to create quality content.

1. Google Docs: It is a free online word processing tool that allows multiple people to edit the same document at the same time, and it saves automatically as you are typing. Not only this, but it can do some basic grammar/spell check for you and tell you about your word/character count.

2. Grammarly: This is another app that checks for errors in grammar, spelling, and punctuation on your behalf so that you can focus on what really matters – content creation.

3. Rytr: Well, this one is a no brainer. We’re sure that y’all are aware of our blog use case and more. But hey, that’s not all, here are some other functions that can help you really ace your blogging game.  

  • Plagiarism checker
  • Readability score & time
  • Rephrase
  • Improve text
  • Continue Ryting

Link to the rest at Rytr

PG has posted about writing programs that utilize Artificial Intelligence to speed the writing process up and/or improve the resulting copy. He’s been partly impressed and is eager for further development and sophistication from such programs.

PG notes that the author of the blog post, Kriti, didn’t explain exactly how she used Rytr to create this particular post or whether she did any spiffing up of the material Rytr produced.