This ChatGPT feature has huge potential—but really needs work
Another ride on the ChatGPT emotional roller coaster reveals how far its AI has to go.
It started with a simple question: “What can you do?”
I’ve been playing around with ChatGPT for a while now and I find it to be almost magical in its ability to cobble together code on the fly, a great tool for generating ideas, and a powerful text generator—as long as you’re not too hung up on factual accuracy.
But when I asked OpenAI’s chatbot to list its capabilities for me, one thing in its answer stuck out more than any other: summarizing.
HIGH HOPES AND TUMMY ACHES
I have to read a fair number of articles each week for work. It would be really helpful to be able to conjure up concise summaries on occasion, even if only to help me decide whether an article is worth the time to read in full.
Getting ChatGPT to summarize articles is simple enough: type “summarize this” and paste a URL into its text field. It can even handle links to PDFs, which got me even more excited, since I need to read long white papers from time to time.
So I started feeding articles to ChatGPT. As is customary with the chatbot, I felt a quick burst of adrenaline when its first summary came back, looking believable enough. I began giddily calculating in my head how much time I’d be saving thanks to this wizardry.
Then—as is also customary with certain ChatGPT features—I actually read its response, cocked my head sideways like a confused labradoodle, stared at the generated text, and thought out loud, “Is this accurate?”
Several summaries later, I got the ChatGPT pit in my stomach that signifies No, I won’t be saving time with this feature at all.
SUMMARY BUMMERS
To give you an idea of how off the summaries are, here’s what was generated for my five most recent articles here at Fast Company.
Article 1: 3 free ChatGPT Chrome extensions to make you more productive
Almost nothing in the summary is accurate, beginning with ChatGPT claiming that the article is about five extensions instead of three. It then lists two extensions I didn’t write about at all and doesn’t list any of the ones I actually did write about.
Article 2: Phishing scams are getting more devious. Here’s how to outwit them
Here, ChatGPT spins a yarn about an email-based malware attack, when the actual crux of the article is an analysis of a phony invoice for anti-virus software that tries to lure the victim into calling a fake customer service number. It gets two out of three pieces of advice correct, which is good, but whiffs on most of the rest.
Article 3: YouTube TV is great—and these 4 hidden features make it even better
Like the first article, this summary surfaces more tips than are actually in the article, gets most of them wrong, and leaves a bunch of stuff out.
Article 4: These 4 sites offer the best free Zoom video backgrounds
This summary had promise, pretty much nailing the first three sentences. It then goes off the rails by claiming I urged people to use appropriate backgrounds for professional video calls, which I suppose is decent advice, even though I mentioned nothing of the sort in the actual article.
Article 5: The iPhone’s white noise machine and other fun, little-known features
This one starts off reasonably strong as well, correctly identifying two features I actually wrote about, but it also serves up a whopping four tips that aren’t mentioned at all and leaves out three that are.
IN PURSUIT OF CAUTIOUS OPTIMISM
Okay, so: ChatGPT is not great for summarizing online articles. However, I believe this to be a challenge with content freshness, AI tuning, and basic reading more than anything.
ChatGPT has a big brain full of lots of information that it has gleaned by continually tuning its AI model. Perhaps it’s unrealistic to assume it’ll be able to ingest a recent article and summarize it accurately, especially since it often begs off talking about recent news by explaining that it’s based on a 2-year-old database.
The chatbot seems to be better at summarizing stuff that’s been around a bit longer, like books. This makes sense since there are likely hundreds of sources for book summaries that ChatGPT has been able to ingest over the years.
When I asked it to summarize a bunch of books—which didn’t entail feeding it web links—it fared better, although it’s still not perfect.
For instance, I asked it to summarize the book First Blood by David Morrell, which is the story of John Rambo and the basis of the first movie in the series of Rambo flicks.
ChatGPT flawlessly summarizes the book until the very last sentence, where it says, “The novel ends with Rambo in a jail cell, reflecting on the violence that has defined his life.”
Spoiler: The movie and the book each end differently. While it’s kinda-sorta implied in the movie that Rambo heads off to jail, it doesn’t end with him in a jail cell. And in the book . . . well, he dies.
So I get the impression that when asking ChatGPT to summarize text, instead of actually reading it, analyzing it, and creating a summary, ChatGPT is more like, “Okay, what do I know about Rambo?” Then it kludges together a bunch of its Rambo-based knowledge to come up with a plausible answer.
This would explain the article summaries as well. “What do I know about lesser-known iPhone features? Or Zoom backgrounds? Or YouTube TV? I’ll toss some of my knowledge of those things—knowledge I’ve collected from all over the web—on the wall and see what sticks.”
ChatGPT has spent a lot of time learning, and it’s great at writing. The next step is teaching it how to read.