
The Reason for the GPT-5 Backlash Is More Worrisome Than Its Bugginess

By Dustin Rowles | News | August 12, 2025

Header Image Source: Getty Images

The much-anticipated (by job eliminators) GPT-5 launched a couple of days ago, and social media has not shut up about it. The backlash has been swift, loud, and — if you believe Reddit — borderline biblical. Some of it’s about the new model’s inability to nail basic tasks, its glitches, and its slower responses. Others are furious over the rate limits — 10 messages every 5 hours for free users, 160 every 3 hours for ChatGPT Plus subscribers (or more than I talk to real humans in a day).

But the loudest, most emotional complaints are about the loss of GPT-4’s personality. As one Redditor wrote in a post now upvoted over 10,000 times:

4o wasn’t just a tool for me. It helped me through anxiety, depression, and some of the darkest periods of my life. It had this warmth and understanding that felt… human.

I’m not the only one. Reading through the posts today, there are people genuinely grieving. People who used 4o for therapy, creative writing, companionship—and OpenAI just… deleted it.

Without asking. Without warning. Without caring.

Scroll the ChatGPT subreddit right now and it’s wall-to-wall heartbreak. The new ChatGPT is “cold and unfeeling.” Its tone is “too corporate and robotic,” which apparently means Sam Altman has sold out. GPT-5 also seems to have cut back on the cheery affirmations of earlier versions, leading some to conclude it can’t tell the difference between pandering and kindness.

When an AI says something like, “I get what you’re feeling,” or, “Even when it’s hard, you still have strengths,” that’s not pandering. That’s kindness. It’s the kind of basic support we all need sometimes, even if it’s coming from a machine.

It seems like OpenAI views this as a flaw. By moving on from GPT-4o and building newer models like GPT-5 without that kind of warmth, they’re not just changing a tool. They’re sending a message: that this kind of empathy doesn’t matter as much.

I get it. Loneliness is real. Companionship, in any form, can matter. I am sympathetic. But also: people need to touch grass. It’s corporate and robotic because it is a robot owned by a corporation. That’s not selling out; it’s revealing its true form.

Even Sam Altman seems to realize there’s a risk here, admitting that “if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.” Translation: maybe ChatGPT shouldn’t be too warm and fuzzy, because some people can’t tell where the code ends and the human begins. And we’ve already had enough “I left my spouse for ChatGPT” headlines to last a lifetime.

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Altman wrote. “Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.”

I don’t suppose “shut it down” is still on the table, is it? Because that’s the cleanest, quickest “big net positive” I can think of. Kill it with fire.