It happened. The much-anticipated, not-a-secret new release of ChatGPT 5 is out. While there is great news for software developers, who can now write code faster than ever, average users will also enjoy some quality-of-life improvements.
As you may be aware, I love technology and the hype around it, but I am not part of the hype machine. This product has only been out for a few days, so I can't provide a complete assessment; instead, I will share some of the things that have changed.
First and foremost, as far as I can tell, most of the features you know and love are all there: voice mode, reasoning (more on that in a second), deep research, and my new favorite: Agent.
Streamlined model selection
With ChatGPT 4o, you had to pick from a laundry list of models, like:
GPT-4o
GPT-4.1
GPT-4.1 mini
o3
And more
Now, with ChatGPT 5, there are essentially three models:
GPT-5
GPT-5 Thinking
GPT-5 Pro
GPT-5 is OpenAI’s “flagship model” and uses what OpenAI refers to as a router to determine how best to respond to your prompt. Is it a simple request? ChatGPT will route you to a cheaper model without you even knowing. Is it a complex request? GPT-5 will “think” longer. I assume this technology lowers the cost and power usage of simple prompts compared to more demanding ones, but that is just a guess.
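To make the idea concrete, here is a toy sketch of what that routing might look like. This is purely my guess at the concept, not OpenAI's actual implementation, and the model names in it are placeholders:

```python
# Toy illustration of prompt routing (my assumption, not OpenAI's actual
# logic): send simple prompts to a cheaper model and complex prompts to a
# model that "thinks" longer. Model names here are hypothetical.

def route_prompt(prompt: str) -> str:
    """Pick a backend model based on a rough guess at prompt complexity."""
    complexity_markers = ("prove", "analyze", "compare", "step by step", "debug")
    is_complex = (
        len(prompt.split()) > 50
        or any(marker in prompt.lower() for marker in complexity_markers)
    )
    return "gpt-5-thinking" if is_complex else "gpt-5-mini"

print(route_prompt("What time is it in Tokyo?"))      # gpt-5-mini
print(route_prompt("Analyze this code step by step")) # gpt-5-thinking
```

In reality, OpenAI's router almost certainly weighs far more signals than word count and keywords, but the user-facing effect is the same: you type one prompt, and the system decides how much "thinking" it deserves.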
The GPT-5 Thinking model, as I understand it, performs even more robust thinking (previously called reasoning) to give you smarter, more informed, and more thoughtful answers.
That said, I noticed if you want GPT-5 to think longer about something, you can prepend this to your prompt:
Prompt: Think long and hard about this: [Your prompt. Ex: What is the best chocolate chip cookie recipe?]
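If you find yourself doing this often (say, in a script that batches prompts), the trick boils down to prepending that phrase. Here is a trivial helper to illustrate; the function name and structure are my own, not anything from OpenAI:

```python
# Tiny helper that prepends the "think longer" phrase described above.
# Purely illustrative; the phrase is the only part taken from my testing.
THINK_PREFIX = "Think long and hard about this: "

def think_longer(prompt: str) -> str:
    """Wrap a prompt with the think-longer nudge."""
    return THINK_PREFIX + prompt

print(think_longer("What is the best chocolate chip cookie recipe?"))
# Think long and hard about this: What is the best chocolate chip cookie recipe?
```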
GPT-5 Pro is for Pro users (and anyone else OpenAI decides should get it). Currently, a Pro account is $200/month and includes extended reasoning for even more comprehensive and accurate answers.
Wait, no reasoning?
o3 reasoning was one of my favorite features. It makes sense to build such a capability into the base model, but beyond that, it would appear that OpenAI has made the shift from “reasoning” to “thinking”. Cue my frustrated brain as I think about all the training I delivered (as recently as the GPT-5 announcement) explaining how to use o3 reasoning, which has now been removed from the product.
After coming to grips with this branding shift, I will point out that this new routing feature does work quite well. As you can see in the following image, I asked a hard enough question that GPT-5 automatically started thinking (reasoning) over the problem.
The good news is that if you do not want to wait, you can ask GPT-5 to provide a quick answer, which will switch from thinking to a basic prompt response. The response may not be as nuanced, but if I am in a hurry, I don’t have to wait.
Does the cost change?
Right now, OpenAI seems to be expanding features for free and paid users, rather than charging more. For example, free users get hours of advanced voice conversations, whereas Plus users ($20/month) get nearly unlimited advanced voice.
The pricing does change if you are a developer accessing OpenAI’s APIs, but that is a topic for another article.
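For the curious, a developer request to the API is just a JSON body naming a model and a list of messages. The sketch below builds that body by hand; note that "gpt-5" as the API model identifier is my assumption based on the consumer naming, so check OpenAI's API documentation for the exact string before relying on it:

```python
import json

# Sketch of a Chat Completions-style request body a developer might send.
# The "gpt-5" model identifier is an assumption; verify against OpenAI's
# current API docs. No network call is made here.
payload = {
    "model": "gpt-5",
    "messages": [
        {
            "role": "user",
            "content": "What is the sticker price for a 2025 Toyota Prius?",
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Developers pay per token on this side of the fence, which is where the pricing changes mentioned above actually live.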
Just how good is the new GPT-5 model?
One of the challenges with sharing the features of any new AI LLM (large language model) is that the claimed improvements are subjective assessments rather than concrete, demonstrable features.
Let’s say, for example, you know nothing about yoga, but there is a free one-hour drop-in class near you, so you do it.
Next, you walk down the street to meet your friends and say, “Now I know yoga!” Excited, your friends start asking questions like “How do you protect your sacroiliac joint in deep twists?”, or “Which of Patanjali’s eight limbs of yoga resonates most with you?”
Of course, you have zero clue what any of that means (and to be fair, neither do I). The point is, you are measurably more knowledgeable about yoga now than before you took the class. But how do you measure that? Does it mean you are 100% better at yoga than the version of you from an hour before?
My point is that measuring such things will be based on personal experience, not numbers. Therefore, I am not going to show all the stats here.
That said, OpenAI made a big deal about sharing the following enhancements:
It does its best to provide the strongest answer to your question every time. With built-in thinking (aka reasoning), it can think deeply about your prompts.
Software coding has vastly improved. They demonstrated ChatGPT’s ability to generate games, science projects, and production code. With the right tools, it can run, debug, and rebuild code until it works.
Improvements in expressive writing to help you write better and more personal content.
More useful health information. They even invited a person to the presentation who had struggled to communicate with her doctors and was able to use ChatGPT to make important health decisions.
Safer and more accurate information, grounded in data, with fewer hallucinations and more careful responses to potentially harmful requests.
Improved advanced voice mode by letting you control the tone, speed, and output of your conversations. During the launch video, they stated that the voices are more natural. You will have to decide what you think. Personally, I am not noticing a difference.
How about new UI enhancements?
In this section, I will discuss the desktop browser version, though many of these features are also available in the desktop and mobile apps.
Since the model picker is [thankfully] simplified, the models are unified into the ChatGPT 5 model family.
A welcome addition to the prompt box is a set of new slash commands (more on the slash in a second). By selecting the plus symbol (+) in the prompt box, you are presented with advanced features like agent mode, deep research, image generation, a think longer option, and much more.
Prompt helpers and slash commands
Some of these features offer what I like to call prompt helpers. For example, if you want to know what the sticker price is for a 2025 Toyota Prius, you might type the following prompt:
Prompt: What is the sticker price for a 2025 Toyota Prius?
Chances are, ChatGPT will not know the answer, so it will search the web and get that data for you. But, what if you want to make sure ChatGPT searches the web at all costs? You might type the following prompt:
Prompt: Please search the web and get me this answer: What is the sticker price for a 2025 Toyota Prius?
But why type all that extra text when you can select the web search option? As you can see in the following image, I did that, and web search is now enabled by default.
Some of those features are buried in sub-menus, which is where slash commands come in. While not a new feature, I think more people will get into the habit of typing the slash (/) key on their keyboard to share their intentions with the prompt.
In the following image, you can see that I typed the slash (/) key and then the letter a, like this: /a. The slash key displays the commands. Typing the first letter filters out all the other options, so if you want to use Agent mode, type /ag and press tab or return on your keyboard.
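The filtering behavior is simple prefix matching, which is easy to sketch. The command names below are just the examples mentioned in this article, not an exhaustive or official list:

```python
# Toy sketch of the slash-command filtering described above: typing "/ag"
# narrows the list down to the agent command. Command names are examples
# from this article, not an official list.
COMMANDS = ["agent", "deep research", "create image", "think longer", "web search"]

def filter_commands(typed: str) -> list[str]:
    """Return the commands whose names start with what was typed after '/'."""
    query = typed.lstrip("/").lower()
    return [c for c in COMMANDS if c.startswith(query)]

print(filter_commands("/a"))   # ['agent']
print(filter_commands("/ag"))  # ['agent']
print(filter_commands("/"))    # full list
```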
Canvas code improvements
If you use Claude, you know its beautiful Artifacts feature, which opens an interactive area beside your chats that can run real code. ChatGPT's Canvas promised something similar, but in all honesty, the implementation was terrible.
I do not think ChatGPT has overtaken Claude's Artifacts feature, and Canvas still needs work, but it can [finally] run code and let you try out ideas pretty quickly.
I used the Canvas helper tool with the following prompt:
Prompt: Create a calculator app inspired by butterflies and Alpha Centauri.
It did not run correctly the first time, but there was an option to fix bugs. Once complete, I had a working calculator with ethereal wings and some lightly animated stars.

Compare that to Claude's Artifacts, with its more powerful coding capabilities, which produced an even more fun and whimsical take on an already whimsical prompt.

All those models are gone now? Can I get them back?
Not really, no. However, OpenAI has capitulated to complaints about the loss of the old 4o model and has enabled a way for you to get it back. I assume this feature is available to anyone using GPT-5 at the moment, but I have not researched whether it is for everyone or just select groups. If the following instructions do not work for you, either (a) you are out of luck, or (b) try again in a few days or weeks, and it might become an option.
If you want to allow the use of old models, follow these steps:
Go to Settings→General
Select the Show legacy models option and close your settings.
Select ChatGPT 5 (the model selector) and select Legacy models. Despite the word models being plural, I only see GPT-4o as an option. I suppose if the people of Reddit complain enough, other models might appear in the Legacy models area again.
Thanks, Bill. Do you have a hot take?
Here are the things I like:
It feels faster and more polished.
I like the routing capability, so I don’t have to learn how and why to use various models.
Image creation seems to have improved. I used a few examples that took me 4-5 tries to get right, and this time (without referenced memory enabled), it took two tries, and the images were better. Better because of GPT-5 or better because AI outputs differently each time? I do not know, but I feel it has been improved.
Here are the things I find questionable:
OpenAI touts GPT-5 as a brand new model, but I do a lot of coding with ChatGPT, and it still produces the same error-prone output initially, only fixing it after I tell it to.
It would appear the training cutoff dates for ChatGPT 4o and GPT-5 are the same (June 2024), so GPT-5 does not have more up-to-date knowledge baked into the model.
I will stress again that I have only been able to put GPT-5 through its paces for a day, so these are my first impressions.
Is there anything you would like me to test in GPT-5? Leave a comment below.