OpenAI made waves last week when it demoed its new language model GPT-5 during a live-streamed event that attracted over 100,000 simultaneous viewers. But one chart shown during the presentation contained a glaring error, which immediately triggered a flood of tweets and memes calling the company out for a “chart crime.” Sam Altman, OpenAI’s CEO, acknowledged the mistake, referring to it as a “mega chart screwup” on X, and promised prompt fixes to the issues that concerned users.
In the wake of the demo, Altman acknowledged that OpenAI needs to be much clearer about which model is responding to which query. The clarification is part of an effort to build user confidence in what they can expect from GPT-5. The model employs a real-time router that analyzes every incoming prompt and decides on the most appropriate response strategy: answering simple queries quickly, or taking more time to reason through complex ones.
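OpenAI has not published how its router works, but the idea of routing each prompt to a fast or a slower, more deliberate strategy can be sketched in a few lines. Everything below, including the function name, the word-count threshold, and the keyword heuristic, is a hypothetical illustration, not OpenAI's implementation.

```python
# Hypothetical sketch of a prompt router. The heuristics here
# (keyword cues, prompt length) are illustrative assumptions only.

def route_prompt(prompt: str) -> str:
    """Pick a response strategy for an incoming prompt.

    Returns "fast" for simple queries and "reasoning" for prompts
    that look like they need multi-step thinking.
    """
    reasoning_markers = ("prove", "step by step", "debug", "compare", "why")
    text = prompt.lower()
    # Long prompts, or prompts containing reasoning cues, go to the
    # slower deliberate path; everything else gets a quick answer.
    if len(text.split()) > 40 or any(m in text for m in reasoning_markers):
        return "reasoning"
    return "fast"

print(route_prompt("What time is it in Tokyo?"))                       # fast
print(route_prompt("Prove that the sum of two odd numbers is even."))  # reasoning
```

A real router would likely use a trained classifier rather than keywords, but the interface is the same: one cheap decision made before any model is invoked, which is why an outage in that component degrades every response behind it.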
Early GPT-5 reviewer Simon Willison had many positive things to say about the model’s capabilities, but noted one glaring flaw. Emphasizing the importance of data visualization, he called the model’s attempt at turning data into a table a “good example of a GPT-5 failure.” This kind of feedback points to a larger issue at play, one that affects the model’s performance and flexibility.
In response to user requests and concerns, Altman announced changes on the way. He tweeted that GPT-5 would seem a lot more intelligent starting today; ironically, just the day before, a service outage had crippled the autoswitcher for most of the day, which would have made GPT-5 feel a whole lot less smart. He said OpenAI is improving the decision boundary so the router more often picks the best-fitting model, and that the company will improve transparency around which model is responding to a particular question.
The rollout itself had some serious hiccups, including a severity incident in which the autoswitching feature failed for a portion of the day. AI Dungeon users complained that GPT-5-powered responses during that window were failing to cut the mustard. Altman admitted the oversight, promising change and focusing efforts on stabilization.
To make the experience even better, OpenAI will be increasing rate limits for Plus users as GPT-5 finishes rolling out. Altman explained, “We are going to double rate limits for Plus users as we finish rollout. This should give people a chance to play and learn the new model, adopt it to their use cases without worry of running out of monthly prompts.”
Additionally, Altman announced that OpenAI is planning to issue refunds to Plus users, though the company would rather keep access open to the outgoing model, GPT-4o, while it collects data on the trade-offs of the new one. He underscored the company’s dedication to stability and responsiveness to user feedback, stating, “We will continue to work to get things stable and will keep listening to feedback.”