GPT-5: Not quite ready to take over the world

The Week US

Guest
The much-anticipated rollout of OpenAI's new GPT-5 artificial intelligence model was so poorly received that it may have jammed the AI hype engine, said Dave Lee in Bloomberg. After pumping up GPT-5's launch with an image of the Star Wars Death Star and claims of near superintelligence, OpenAI CEO Sam Altman was forced into an embarrassing rollback, restoring access to an older model for displeased users. Investors in other AI companies largely shrugged off the stumble, which was good news for Wall Street, because the field is a singular driver of stock market records. But "what sets the narrative around AI progress (or lack of) is practical application, and it's here where all AI companies are still falling short." One piece of research from McKinsey should give pause: While 8 out of 10 companies surveyed said they were implementing generative AI in their business, the consultancy group observed, just as many said there has been "no significant bottom-line impact."

Altman promised that GPT-5 would serve as "a legitimate Ph.D.-level expert in anything," said Gary Marcus in his Substack newsletter. In fact, the new model delivered the same old "ridiculous errors and hallucinations." Users posted examples of GPT-5 struggling with basic reading and summarization, unable to correctly count the number of b's in "blueberry," and mislabeling handlebars and wheels on a bike. "It's no hyperbole to say that GPT-5 has been the most hyped and most eagerly anticipated AI product release in an industry thoroughly deluged in hype," said Brian Merchant, also in a Substack newsletter. "For years, it was spoken about in hushed tones as a fearsome harbinger of the future." But now all the talk of what OpenAI calls AGI, or artificial general intelligence, is just getting "waved away." It seems that OpenAI needed to demonstrate progress to investors and partners ahead of a pre-IPO sale of employee shares. Still, there "is a cohort of boosters, influencers, and backers who will promote OpenAI's products no matter the reality on the ground."

Some of the unhappiness about GPT-5 may be less technical than emotional, said Dylan Freedman in The New York Times. It's not clear that the new version is actually worse than the old one. But many users had developed an emotional link to the chatbot, asking it deeply personal questions. "And then, without warning, ChatGPT changed." The old version was often criticized as "sycophantic"; the new one, by contrast, is far less "warm and effusive." That was intentional: OpenAI found that an AI chatbot that was too human-like frequently led to "delusional thinking." But many perfectly stable users, it turned out, had built a relationship with the chatbot, and they've found GPT-5 to be a chilly companion.
