I was fired by a client for using AI. I’m not going to stop because it’s doubled my output, but I’m more clear about how I use it.


From Insider:

I work a full-time job in marketing and do freelance writing on the side. 

I was juggling a lot when a longtime client commissioned me for a three-month project. It entailed writing a series of how-to guides and 56 articles for their site.

Since I couldn’t clone myself, I tried what I thought would be the next best thing: I used AI.

Convinced that I could use the new technology to meet extremely tight deadlines, I started using Jasper.ai to produce up to 20 pieces in a month for this client.

. . . .

I was using AI to clone myself as a writer

I essentially used Jasper.ai as an extension of myself.

I’d let Jasper write articles of up to 2,500 words. I used it more than alternatives such as ChatGPT or Bard because it has pre-built templates that function as prompts. 

If I needed to expand on a sentence, I’d use Jasper’s “paragraph generator” or “commands” tool. If I needed to rewrite a sentence, I’d click on “content improver.” These features helped me overcome writer’s block and quickly build out long-form articles.

Jasper did most of the work and I did minimal editing.

After working together for months, my client started using one of the first AI-content detectors. Upon discovering the content I gave them was AI-generated, they terminated our agreement and paid me less than 40% of the original fee after I’d delivered all the articles we’d agreed on.

While this was not the outcome I intended, it shifted my mindset on how to use AI to keep clients rather than lose them.

I learned a valuable lesson the hard way: AI is a tool, not a replacement for the writer.

Looking back, I know things weren't right: I was letting AI do the work without telling my client.

. . . .

Here’s how I use AI differently now:

AI is now a crucial part of my initial discussions with new clients

I ask whether the client is OK with me using AI-writing tools. If not, that's fine; I won't use them. If they see the value, or don't care either way, then I'll use them to enhance the quality and depth of what I write.

I use AI to enhance my draft

Some writers use AI to write a draft, then edit it to sound more human. I use it the other way around.

I draft the article first, then use an AI tool to enhance it and ensure I’ve maintained the client’s tone of voice. 

I’d typically beef a draft up with some of Jasper’s templates — using the paragraph generator to expand a sentence into a paragraph, or using the content improver to rewrite text based on tone of voice or type of content. 

Sometimes, Jasper will tell me additional things I can cover, so I’ll include them and support them with expert insights and examples.

I use AI to give me ideas on sources and statistics

Like ChatGPT, Jasper is prone to making mistakes with sources and research; its developers remind users to fact-check any statistics the tool provides. I treat the information it gives as a placeholder that suggests the kinds of sources, statistics, or websites I can seek out myself. 

The key is always treating statistics and other hard evidence that AI produces as a suggestion.

AI helps with the tone of voice and brand voice

I’ll use Jasper to help me rewrite or add flair to a sentence using the “tone of voice” or “brand voice” features. I could even type in “Ryan Reynolds” and Jasper will rewrite a plain paragraph to sound like the actor.

AI helps with condensing large volumes of text

AI helps me summarize my research findings and insights from relevant subject-matter experts. I’ll upload snippets of a transcript, and the tool will return a condensed paragraph that still includes the salient points.

AI has cut my writing time in half

Link to the rest at Insider

5 thoughts on “I was fired by a client for using AI. I’m not going to stop because it’s doubled my output, but I’m more clear about how I use it.”

  1. I suspect this guy is showing us why his specific services will no longer have value in the future.

    • That’s what I’m thinking. I don’t want any whining from him about companies using AI in lieu of his services. He / she apparently brings no actual skills to the table.

  2. “I’d typically beef a draft up with some of Jasper’s templates — using the paragraph generator to expand a sentence into a paragraph…”

    There is a word for expanding a sentence into a paragraph: “padding.” This is only the most egregious part, but little in this piece would give me confidence, were I looking to hire a writer.

    • Why pay him instead of whoever runs JASPER?
      Might be quicker.
      It doesn’t sound like he uses it to do better work, just to crank out more, and if the AI output is interchangeable with his…

  3. Sigh…

    I don’t have a problem with various forms of “auto-assist” tools. But how can you act as even an editor if you don’t yourself know enough to catch errors of fact?

    As a fiction writer making created worlds, I have to be very alive to a “sense of falsity”, an internal understanding of when I don’t know enough (without checking) about exactly how something works. In some sense, all of my life has been (among other things) a filling of my internal inventory for reality.

    I know how horses are harnessed and driven through experience. When I write about oxen, however, and think about it, I realize I’m not entirely sure how that works, so I do some research. But the world is full of bad fiction writers who think people danced the waltz in the 15th century, that large rivers can be found on the tops of mountains, that people in the Regency saw nothing wrong with a wealthy unwed 20-year-old living by herself in a house without servants, etc., etc.

    These writers do not have a sense of falsity because they know too little about their settings and that lack is never triggered. And, so, they never think to do the research to fix the problem, or to adequately visualize what they are declaring. This level of failure is still worse for AI, because the “I” part of that, the Intelligence, is an illusion.

    AI can no doubt help with tonal expression, on the assumption that the writer knows enough himself for his sense of falsity to be triggered by bad grammar. But the errors of fact and causality cannot be adjudicated by current AI products, and if the human isn’t responsible for them, only garbage can result. Not only are the sources of fact or analysis sometimes in disagreement with each other, the larger context may not be in the sources themselves, and thus be inappropriate for the intended uses. “Name a dance used in the 15th century” fails as a technique for error avoidance if the user is unaware that the waltz hasn’t been around that long… the prompt is never used.

    The easiest way to summarize the problem might be to point out that MidJourney still has a hard time with the notion that humans have at most four fingers and a thumb on each hand. Mimicking reality with BigData is no substitute for actual knowledge digested into intelligence. A useful tool, but not when wielded by fools.
