The CNET AI writing conundrum has reinforced the need for fact-checking
The publisher’s recent stories that were generated by artificial intelligence contained some errors…
CNET, a venerable tech and business news website founded in 1994, has come under fire for using artificial intelligence software to generate personal finance content over the past few months.
AI in journalism is nothing new. In 2014, AP collaborated with Automated Insights to start automating quarterly earnings reporting via the Wordsmith software. The following year, Reuters developed a tool called Lynx Insights to assist journalists in analyzing data, suggesting story ideas, and even writing sentences.
The bottom line is that some of CNET's AI-written stories contained errors. While reviewing a post titled “What Is Compound Interest?”, journalists at Futurism uncovered a handful of serious inaccuracies. The article was subsequently revised, but an earlier version claimed that if you put $10,000 into a bank account that earns 3% interest compounded yearly, “you’ll earn $10,300 at the end of the first year”… rather than only $300. The AI also made mistakes when discussing loan interest payments and certificates of deposit. Clearly, there was a lack of proofreading and fact-checking, two things that are paramount in quality journalism.
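The arithmetic behind that correction is straightforward: after one year, a $10,000 deposit at 3% grows to $10,300, but the *earnings* are only the $300 difference. A minimal sketch in Python:

```python
# Illustrating the compound-interest figure the AI-written article got wrong.
principal = 10_000   # initial deposit, in dollars
rate = 0.03          # 3% annual interest, compounded yearly

balance = principal * (1 + rate)       # total balance after one year
interest_earned = balance - principal  # what you actually earn

print(balance)          # 10300.0 — the account's total balance
print(interest_earned)  # 300.0  — the interest earned in year one
```

The AI conflated the ending balance ($10,300) with the interest earned ($300), exactly the kind of slip a human fact-checker would catch.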
On January 20, CNET leadership said that all AI-generated content would be paused for the time being. The pause also covers similar pieces published on sister websites Bankrate and CreditCards.com.
Of late, the AI-driven language model ChatGPT has been touted as a potential game-changer across several industries, one that “could replace 20% of the workforce”. Yet the CNET controversy is a reminder that humans still have a major role to play in the knowledge economy.
This story was first published on The PhilaVerse (my Substack newsletter).