CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections

Big surprise: CNET's writing robot doesn't know what it's talking about.

Artificial intelligence can generate text, but the technology hasn't yet learned to be accurate.
Image: kung_tom (Shutterstock)

Aside from stringing together human-like, fluid English sentences, one of ChatGPT's biggest skill sets seems to be getting things wrong. In the pursuit of generating passable paragraphs, the AI program fabricates information and bungles facts like nobody's business. Unfortunately, tech outlet CNET decided to make AI's mistakes its business.

The tech media site has been forced to issue multiple major corrections to a post published on CNET and created via AI, as first reported by Futurism. In a single AI-written explainer on compounding interest, there were at least five significant inaccuracies, which have now been amended. The errors were as follows, according to CNET's hefty correction:

  • The article implied a savings account initially containing $10,000 with a 3% interest rate, compounding annually, would accrue $10,300 in interest after a year. The real earned interest would amount to $300.
  • An error similar to the above showed up in a second example, based on the first.
  • The post incorrectly stated that one-year CD accounts' interest compounds only annually. In reality, CD accounts compound at varying frequencies.
  • The article misreported how much a person would have to pay on a car loan with a 4% interest rate over five years.
  • The original post incorrectly conflated APR and APY, and offered bad advice accordingly.
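The arithmetic CNET's explainer fumbled is a one-line formula. Here is a minimal sketch in Python using the figures from the corrected example ($10,000 at 3%, compounded annually); the function name and the monthly-compounding APY illustration are my own assumptions, not anything from CNET's post:

```python
def compound_interest(principal, annual_rate, periods_per_year, years):
    """Return the interest earned (not the final balance)."""
    balance = principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)
    return balance - principal

# $10,000 at 3% compounded annually earns $300 in the first year -- not $10,300.
print(round(compound_interest(10_000, 0.03, 1, 1), 2))  # → 300.0

# APR vs. APY: a 3% APR compounded monthly works out to a slightly higher
# effective annual yield, which is why conflating the two gives bad advice.
apr = 0.03
apy = (1 + apr / 12) ** 12 - 1
print(round(apy * 100, 4))  # → 3.0416
```

The APY line also illustrates the compounding-frequency point: the same nominal rate yields different totals depending on how often interest is credited.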

For more than two months, CNET has been pumping out posts generated by an artificial intelligence program. The site has published 78 of these articles total, and up to 12 in a single day, originally under the byline "CNET Money Staff," and now just "CNET Money." Initially, the outlet seemed eager to have its AI authorship fly under the radar, disclosing the lack of a human writer only in an obscure byline description on the robot's "author" page. Then, Futurism and other media outlets caught on. Critique followed. CNET's editor in chief, Connie Guglielmo, wrote a statement about it.

And just as the outlet's public acknowledgement of its use of AI came only after widespread criticism, CNET did not identify or set out to fix the inaccuracies noted on Tuesday on its own. The correction came only after Futurism directly alerted CNET to some of the errors, Futurism reported.

CNET has claimed that all of its AI-generated articles are "reviewed, fact-checked and edited" by real, human staff. And each post has an editor's name attached to it in the byline. But clearly, that alleged oversight isn't enough to stop artificial intelligence's many generated mistakes from slipping through the cracks.

Usually, when an editor approaches an article (particularly an explainer as basic as "What Is Compound Interest?"), it's safe to assume that the writer has done their best to provide accurate information. But with AI, there is no intent, only the product. An editor evaluating an AI-generated text cannot assume anything, and instead has to take an exacting, critical eye to every phrase, word, and punctuation mark. It's a different type of task from editing a person, and one people might not be well equipped for, considering the complete, unfailing attention it must take and the high volume CNET seems to be aiming for with its AI-produced stories.

It's easy to understand (though not excusable) that when sifting through piles of AI-generated posts, an editor could miss an error about the nature of interest rates among the authoritative-sounding string of statements. When writing gets outsourced to AI, editors end up bearing the burden, and their failure seems inevitable.

And the failures are almost certainly not limited to the one article. Nearly all of CNET's AI-written articles now come with an "Editors' note" at the top that says, "We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections," indicating the outlet has realized the inadequacy of its initial editing process.

Gizmodo reached out to CNET via email for more clarification about what this secondary review process means. (Will each story be re-read for accuracy by the same editor? A different editor? An AI fact-checker?) However, CNET didn't directly respond to my questions. Instead, Ivey Oneal, the outlet's PR manager, referred Gizmodo to Guglielmo's earlier statement and wrote, "We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process. We will continue to issue any necessary corrections according to CNET's correction policy."

Given the apparent high likelihood of AI-generated errors, one might ask why CNET is pivoting away from people to robots. Other journalistic outlets, like the Associated Press, also use artificial intelligence, but only in very limited contexts, like filling information into pre-set templates. And in these narrower settings, the use of AI seems intended to free up journalists for other work more worthy of their time. But CNET's application of the technology is clearly different in both scope and intent.

All of the articles published under the "CNET Money" byline are very general explainers with plain-language questions as headlines. They are clearly optimized to take advantage of Google's search algorithms and to end up at the top of people's results pages, drowning out existing content and capturing clicks. CNET, like Gizmodo and many other digital media sites, earns revenue from ads on its pages. The more clicks, the more money an advertiser pays for their miniature digital billboard(s).

From a financial perspective, you can't beat AI: there's no overhead cost and no human limit to how much can be produced in a day. But from a journalistic viewpoint, AI generation is a looming crisis, wherein accuracy becomes entirely secondary to SEO and volume. Click-based revenue doesn't incentivize thorough reporting or well-crafted explanation. And in a world where AI posts become an accepted norm, the computer will only know how to reward itself.

Update 1/17/2023, 5:05 p.m. ET: This post has been updated with comment from CNET.
