Has ChatGPT gotten worse? Why it sometimes seems less intelligent than before
Thousands of users complain that the AI's answers have gotten worse. Some experts tried to get to the bottom of it. Here's what they found.
Since the launch of GPT-5 in August, thousands of users have been asking themselves the same question: has ChatGPT gotten worse? The feeling is widespread on social networks, in technical forums, and even among professionals who use it for work. Answers are short, imprecise, and sometimes just feel wrong. "My ChatGPT got seriously damaged and forgot how to read," someone wrote. Others talk about a "lobotomy," an AI that has lost its ability to reason. But is that really the case? The short answer is... no. But something did go wrong, and to understand what, we need to look at how the "smartest and fastest" system that OpenAI's Sam Altman introduced actually works.
It's called the router
The main problem has a name: the router, and it is the heart of the new system. GPT-5 is technically not a single model. It's a network of different models, some more powerful and some cheaper, coordinated by an automated system that decides on the fly which one to use. When we ask a question, the router evaluates how difficult it is and routes it accordingly: if the question is easy, it goes to a fast, cheap model; if it's hard, to a more powerful model that "thinks" longer before answering. In theory, it's a smart idea: it saves resources and gives everyone access to the best models when they really need them. In practice, when the router makes a bad choice, or worse, when it goes haywire, the result is chaos.
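To make the mechanism concrete, here is a minimal sketch in Python of how a difficulty-based router might work. Everything in it (the model names, the scoring heuristic, the threshold) is invented for illustration; OpenAI has not published how its router actually classifies requests.

```python
# A toy model router: estimate how hard a prompt is, then pick a tier.
# Hypothetical model names and heuristic; not OpenAI's implementation.

def estimate_difficulty(prompt: str) -> float:
    """Score 0..1: longer prompts and 'reasoning' keywords score higher."""
    keywords = ("prove", "analyze", "step by step", "why", "compare")
    score = min(len(prompt) / 500, 1.0)  # length component
    score += 0.2 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str) -> str:
    """Send easy questions to a cheap model, hard ones to a reasoning model."""
    if estimate_difficulty(prompt) < 0.4:
        return "fast-cheap-model"      # quick answer, low compute cost
    return "slow-reasoning-model"      # "thinks" longer before answering

print(route("What time is it in Tokyo?"))                 # fast-cheap-model
print(route("Prove step by step that the halting "
            "problem is undecidable."))                   # slow-reasoning-model
```

The failure mode the article describes follows directly from this design: if the scoring function misjudges a hard question as easy, the user silently gets the weaker model's answer.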
Ethan Mollick, author of Co-Intelligence (Luiss University Press), explained on his blog: "If you don't pay (for a subscription, ed.) and can't manually change the model, when you ask GPT-5 for something you sometimes get the best model available, sometimes the worst. It can even change in the middle of a conversation." Jiaxuan You told Fortune: "The router sometimes sends parts of the same question to different models. A cheap model gives one answer, a more powerful one gives another, and the system contradicts itself." Perfecting such a system is as complex as building Amazon's recommendation engine: it takes years and dozens of specialists. In short, ChatGPT hasn't gotten dumber. We just never know which version of ChatGPT we're getting.
The weight of the numbers
Then there is another aspect that explains many of the problems. ChatGPT reached 800 million weekly users in September, four times as many as a year earlier. That's about 3 billion messages per day. Few online services can handle numbers like these. The paradox is that ChatGPT is so popular a product that it has become difficult to manage.
Every conversation with GPT-5, especially when it "thinks" before responding, consumes a great deal of computing power. The company has signed multi-billion-dollar deals to expand its capacity, including new data centers and a reported five-year, $11.9 billion agreement with the cloud provider CoreWeave. But building that infrastructure takes time, and demand keeps growing faster than capacity. And here we come back to the router: one of OpenAI's stated goals is precisely to protect those resources. If it can send simple questions to a fast, cheap model, it preserves computing power for the complex ones. But if the router makes bad decisions, the system fails. And when users end up with inconsistent answers, they conclude that the AI has gotten worse.
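A back-of-the-envelope calculation shows why routing matters at this scale. The per-query costs and the traffic split below are hypothetical, chosen only to suggest the order of magnitude of the savings:

```python
# Hypothetical figures, for illustration only: real per-query costs
# and OpenAI's traffic mix are not public.
queries_per_day = 3_000_000_000   # ~3 billion messages a day, as cited above
cheap_cost = 0.0002               # assumed cost per query, fast model ($)
reasoning_cost = 0.002            # assumed cost per query, reasoning model ($)
easy_share = 0.8                  # assume 80% of questions are "easy"

all_reasoning = queries_per_day * reasoning_cost
routed = queries_per_day * (easy_share * cheap_cost
                            + (1 - easy_share) * reasoning_cost)

print(f"everything on the reasoning model: ${all_reasoning:,.0f}/day")  # $6,000,000/day
print(f"with routing:                      ${routed:,.0f}/day")         # $1,680,000/day
```

Under these invented numbers, routing cuts the daily bill by more than 70 percent, which is why a misfiring router can still be cheaper for the operator than always serving the strongest model.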
A (partial) defense of the router
One caveat: not everyone blames the router. The idea of using several models together is not new; it has been around since at least 2018. And we don't even know whether GPT-4 already used a similar system, because OpenAI's models are a black box. The point is that with GPT-5 the mechanism is more explicit, and perhaps more visible. In any case, Jiaxuan You believes the routing approach is here to stay: "Pure scaling is running out of steam. We can't make models infinitely better just by adding more data and more computing power. Besides, the improvement from GPT-4 to GPT-5 is smaller than the one from GPT-3 to GPT-4. Scaling laws are economists' beliefs, not laws of physics."
The AGI matter
The GPT-5 flop, or rather the perception of a flop, made more noise than it deserved because expectations were so high. Altman had talked about a great leap, something that would bring us closer to AGI, artificial general intelligence. Instead it is "an evolution, not a revolutionary step," as Gary Marcus, one of the best-known skeptics in the field, wrote in his newsletter. Not bad, but not the revolution that had been announced. For a company that doesn't turn a profit and is valued at $500 billion, that's a wake-up call.
The truth is that OpenAI no longer treats AGI as an end point but as a continuous process. In its charter, OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." But Altman now says his perspective has changed, and today he prefers to focus on the ability of artificial intelligence to accelerate scientific discovery. As for skeptics like Marcus, the man behind ChatGPT dismisses them by saying: "What I can say is that GPT-6 will be significantly better than GPT-5, and GPT-7 will be significantly better than GPT-6." Promises that will have to translate into visible results. Because unlike other industries, where you can afford a misstep or two, the AI sector is fiercely competitive: Google, Anthropic, Meta, and China are hot on OpenAI's heels, and every mistake costs dearly.
