CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors


From Futurism:

Last week, we reported that the prominent technology news site CNET had been quietly publishing articles generated by an unspecified “AI engine.”

The news sparked outrage. Critics pointed out that the experiment felt like an attempt to eliminate work for entry-level writers, and that the accuracy of current-generation AI text generators is notoriously poor. The fact that CNET never publicly announced the program, and that the disclosure that the posts were bot-written was hidden away behind a human-sounding byline — “CNET Money Staff” — made it feel as though the outlet was trying to camouflage the provocative initiative from scrutiny.

After the outcry, CNET editor-in-chief Connie Guglielmo acknowledged the AI-written articles in a post that celebrated CNET's reputation for "being transparent."

Without acknowledging the criticism, Guglielmo wrote that the publication was changing the byline on its AI-generated articles from “CNET Money Staff” to simply “CNET Money,” as well as making the disclosure more prominent.

Furthermore, she promised, every story published under the program had been “reviewed, fact-checked and edited by an editor with topical expertise before we hit publish.”

That may well be the case. But we couldn’t help but notice that one of the very same AI-generated articles that Guglielmo highlighted in her post makes a series of boneheaded errors that drag the concept of replacing human writers with AI down to earth.

Take this section in the article, which is a basic explainer about compound interest (emphasis ours):

“To calculate compound interest, use the following formula:

Initial balance (1+ interest rate / number of compounding periods) ^ number of compoundings per period x number of periods 

For example, if you deposit $10,000 into a savings account that earns 3% interest compounding annually, you'll earn $10,300 at the end of the first year."

It sounds authoritative, but it’s wrong. In reality, of course, the person the AI is describing would earn only $300 over the first year. It’s true that the total value of their principal plus their interest would total $10,300, but that’s very different from earnings — the principal is money that the investor had already accumulated prior to putting it in an interest-bearing account.
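The arithmetic is easy to check. Here's a minimal sketch in Python (the language choice is ours; the CNET article gives only the prose formula) applying the standard compound-interest formula to the example above:

```python
def compound(principal, rate, periods_per_year, years):
    """Final balance after compound interest:
    principal * (1 + rate/n) ** (n * years)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

balance = compound(10_000, 0.03, 1, 1)  # $10,000 at 3%, compounded annually, 1 year
interest_earned = balance - 10_000      # earnings exclude the principal

print(round(balance, 2))          # 10300.0 -- total balance
print(round(interest_earned, 2))  # 300.0  -- what the saver actually *earned*
```

The distinction the AI blew past is the last line: the balance is $10,300, but the earnings are only the $300 of interest.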

"It is simply not correct, or common practice, to say that you have 'earned' both the principal sum and the interest," Michael Dowling, an associate dean and professor of finance at Dublin City University Business School, told us of the AI-generated article.

It's a dumb error, and one that many financially literate people would have the common sense not to take at face value. But the article is written at a level so basic that it would only really interest readers who know very little about personal finance in the first place. In other words, it risks feeding wildly unrealistic expectations (that you could earn $10,300 in a year on a $10,000 deposit) to the exact readers who don't know enough to be skeptical.

Another error in the article involves the AI’s description of how loans work. Here’s what it wrote (again, emphasis ours):

“With mortgages, car loans and personal loans, interest is usually calculated in simple terms.

For example, if you take out a car loan for $25,000, and your interest rate is 4%, you’ll pay a flat $1,000 in interest per year.”

Again, the AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.

What it's bungling this time is how mortgages and auto loans are typically structured: the borrower doesn't pay a flat amount of interest per year, or even per monthly payment. Instead, each successive payment charges interest only on the remaining balance. That means that toward the beginning of the loan, the borrower pays more interest and less principal, a ratio that gradually reverses as the payments continue.

It’s easy to illustrate the error by entering the details from the CNET AI’s hypothetical scenario — a $25,000 loan with an interest rate of 4 percent — into an auto loan amortization calculator. The result? Contrary to what the AI claimed, there’s never a year when the borrower will pay a full $1,000, since they start chipping away at the balance on their first payment.
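The same check can be sketched in a few lines of Python. This is an illustration under an assumed 60-month term (neither the AI's example nor the article specifies one), not a reproduction of any particular amortization calculator:

```python
def first_year_interest(principal, annual_rate, months):
    """Simulate a standard amortizing loan and total the interest
    paid across the first 12 monthly payments."""
    r = annual_rate / 12
    # fixed monthly payment from the standard amortization formula
    payment = principal * r / (1 - (1 + r) ** -months)
    balance, interest_paid = principal, 0.0
    for _ in range(12):
        interest = balance * r           # interest accrues only on the remaining balance
        interest_paid += interest
        balance -= payment - interest    # the rest of the payment reduces principal
    return interest_paid

# $25,000 at 4% over an assumed 60 months
print(round(first_year_interest(25_000, 0.04, 60), 2))  # well under the claimed flat $1,000
```

Because the balance shrinks with the very first payment, the first year's interest already comes in below $1,000, and every later year pays less still.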

CNET's AI is "absolutely" wrong in how it described loan payments, Dowling said.

“That’s just simply not the case that it would be $1,000 per year in interest,” he said, “as the loan balance is being reduced every year and you only pay interest on the outstanding balance.”

The problem with this description isn't just that it's wrong. It's that the AI is eliding an important reality about many loans: if you pay them down faster, you pay less interest overall. In other words, it's feeding terrible financial advice directly to people trying to improve their grasp of personal finance.

Link to the rest at Futurism

PG says somebody (not PG) is going to start a website that features errors made by AI systems.

PG also says that AI is roughly where airplanes were when, on December 17, 1903, the Wright Flyer traveled 120 feet in 12 seconds at a speed of 6.8 miles per hour at Kitty Hawk, North Carolina.

Fifteen years later, a British Sopwith Dragon flew at a speed of 149 miles per hour. Twenty-two years after that, the Lockheed P-38 flew at 400 mph. Late in World War II, a Messerschmitt Me 262 reached a sustained top speed of 540 mph.

PG says AI development isn’t like airplane development. It’s going to be much, much faster.

10 thoughts on “CNET’s Article-Writing AI Is Already Publishing Very Dumb Errors”

  1. Much much faster.
    Check this:

    https://www.offgridweb.com/preparation/infographic-the-growth-of-computer-processing-power/

    Bear in mind that it only goes to 2017.
    The PS4 they list was replaced by the XBOX Series X in 2020 which is over six times the power (12TF vs 1.8TF). And since the new processors have a newer, better architecture, the boost is more like 10 times.

    And not only is the power greater for the newer tech, prices are much lower after factoring in inflation. If you're not looking for top power, you can find full Windows PCs for under $100.

    Give "AI" a decade to adjust to what real people outside laboratories need and watch US productivity boom like the '90s. Minimum.

    But first we have to get past the 20’s.

      • ‘On the final exam of Operations Management, a core course in the Wharton MBA program, ChatGPT did “an amazing job” and gave answers that were correct and “excellent” in their explanations.

        ‘“ChatGPT3 is remarkably good at modifying its answers in response to human hints. In other words, in the instances where it initially failed to match the problem with the right solution method, Chat GPT3 was able to correct itself after receiving an appropriate hint from a human expert. Considering this performance, Chat GPT3 would have received a B to B- grade on the exam,” the research concluded.’

        In other words, if a human knows the answer, they can tweak the AI into writing it up. This strikes me as mostly a good thing. Many people understand stuff but lack strong writing skills. This is not the same as feeding the question in and its spitting out a good answer.

        I also am bemused that an excellent explanation that correctly answers the question rates a B or B-. What are they looking for in an A?

        • No need for human cues.
          That’s still a ways off.

          One thing to remember is that a lot of human jobs are based on rote memorization and require neither insight, initiative, nor creativity, nor much agency or decision-making.

          FAUX AI is just a new level of information management.
          Useful and (given current demographics) necessary but no threat for humans.

  2. Those financial errors are on the same level as what we have seen from humans on the front page of the New York Times. It was a struggle, but someone on the staff now knows the difference between earnings and revenue.

  3. I tested the OpenAI chatbot by discussing a topic I know extremely well, both the primary and secondary sources: the history of early baseball. What struck me is not that it got stuff wrong, but that it flagrantly bullshits. I started with a general question about the origin of baseball. Its answer was wrong, but wrong in a conventional way. I was able with follow-ups to work it around to a reasonably correct and coherent statement on the subject. So far, so good.

    Then I changed tack with a related question about the involvement of Alexander Cartwright, whose actual involvement with early baseball was much more limited than is often claimed. Once again, the AI started with a conventionally wrong answer. This is where it got interesting. I pressed it to cite its sources. It went straight into full bullshit mode, both mischaracterizing secondary sources and simply making stuff up. My favorite was the claim about a newspaper article from 1846, giving the name of the paper and the date of publication. It was a Sunday. The paper, as was typical of the era, did not publish a Sunday edition. I pressed it on why it believed this article existed, and it went into vague hand-waving mode.

    All in all, it was like countless online debates involving internet idiots. The only difference is that the AI would immediately back off from a specific claim once it was pointed out to be wrong, where a real person would likely dig in his heels and resort to abusive language. But it would back off one BS claim and switch to another equally BS claim.

    I can see someone who understands a subject but has weak writing skills making productive use of this technology. What I foresee is students who don’t know the subject going with the first thing that comes up. Wackiness will ensue.

    • Artificial stupidity is now a real thing?

      That’s an interesting writing prompt. What if AIs were not omnipotent as they are currently presented in SF writing – what if they could be just as stupid and short-sighted as real people…

  4. It’s official: Microsoft is dumping a truckload of cash into OpenAI:

    https://www.windowscentral.com/microsoft/microsoft-to-invest-billions-of-dollars-into-openai

    CNET has a similar report but this one was posted by a human.
    (Artisanal news!) 😀

    TL;DR – All OpenAI services (the engine and API) will run on the AZURE CLOUD. All MS software (games too, presumably) will tap into the tech, over time. Maybe future NPCs will be conversational instead of barking about "taking an arrow in the knee".

    Neither Amazon nor Google is jumping for joy.

  5. PG also says that AI is roughly where airplanes were when, on December 17, 1903, the Wright Flyer traveled 120 feet in 12 seconds at a speed of 6.8 miles per hour at Kitty Hawk, North Carolina.

    Related but barely… I ran down that same 120 feet at the same 6.8mph at Kitty Hawk a couple years ago. I do not run any faster today. But I write faster!

Comments are closed.