Just days after unveiling GPT-4.1, OpenAI has taken the AI race a step further by launching two new models — o3 and o4-mini — designed to tackle complex reasoning tasks across coding, mathematics, and image-based problem solving. Both models are now publicly available and aim to push the boundaries of what generative AI can achieve.
According to OpenAI, o3 is their most advanced reasoning model to date, while o4-mini serves as a cost-effective alternative that delivers solid performance in academic and real-world applications.
“The combined power of state-of-the-art reasoning with full tool access translates into significantly stronger performance across academic benchmarks and real-world tasks, setting a new standard in both intelligence and usefulness,” says OpenAI.
OpenAI’s newly introduced o3 and o4-mini AI models are engineered to handle complex, multi-step reasoning tasks, especially in technical fields like software development, data science, and STEM education. Of the two, o3 is highlighted as OpenAI’s highest-performing reasoning model yet, offering enhanced problem-solving capabilities.
In contrast, o4-mini is designed for users who want a more affordable option without giving up much performance. Both models can work with the full ChatGPT toolset, including web browsing, code execution, image generation, and image interpretation.
This is the first time an OpenAI model can access all of these tools at once, letting it tackle multimodal, real-world problems on its own: the model can plan, browse the web, analyze images, execute code, and interpret data in a single cohesive workflow.
“This tool integration allows the models to tackle tougher, multi-step problems and take real steps toward acting independently,” OpenAI explains.
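Inside ChatGPT these tools are built in, but the same models also support tool calling through OpenAI's API, which gives a sense of how such multi-step workflows are wired together. Below is a minimal sketch using the openai Python SDK; the model name, the example function, and its schema are illustrative assumptions rather than anything OpenAI prescribes.

```python
# Minimal sketch of tool calling with one of the new reasoning models via the
# openai Python SDK. The example function and its JSON schema are illustrative
# assumptions, not an official recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical local tool the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_sql",
            "description": "Run a read-only SQL query against the analytics database.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="o4-mini",
    messages=[{"role": "user", "content": "How many signups did we get last week?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model asked to use the tool; in a real application you would run the
    # query and send the result back in a follow-up "tool" message so the model
    # can continue the workflow.
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```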
One of the most impressive upgrades lies in visual reasoning. Users can upload messy handwritten notes, whiteboard diagrams, or rough sketches, and the model will not only interpret the content but also analyze and solve problems based on it, something earlier GPT models struggled to do.
This leap in image comprehension could benefit professionals in design, education, architecture, science communication, and collaborative engineering.
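Developers can reach the same image-understanding capability through the API by attaching an image to a message. The sketch below, again using the openai Python SDK, asks the model to reason about a whiteboard photo; the model name, the prompt, and the image URL are placeholders, not an official example.

```python
# Minimal sketch: ask one of the new reasoning models to interpret a whiteboard
# photo. The prompt and image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Read the equations on this whiteboard photo and solve for x.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/whiteboard.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```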
To complement the release of o3 and o4-mini, OpenAI has also launched a developer-centric tool called Codex CLI. This stripped-down coding assistant allows seamless integration between AI models and local codebases. It works out of the box with o3 and o4-mini, with support for GPT-4.1 coming soon.
This move is seen as a direct response to Claude Code, a developer tool launched earlier by OpenAI rival Anthropic.
Interestingly, OpenAI CEO Sam Altman had previously stated that o3 would not launch as a standalone product. However, the company changed course earlier this month. In a post on X, Altman said:
“There are a bunch of reasons for this, but the most exciting one is that we are going to be able to make GPT-5 much better than we originally thought. We also found it harder than we thought it was going to be to smoothly integrate everything, and we want to make sure we have enough capacity to support what we expect to be unprecedented demand.”
As a result, OpenAI has temporarily delayed its previously promised streamlining of model offerings, choosing instead to develop and release robust models and tools ahead of GPT-5, which is expected in the coming months.
The new AI models are currently available to subscribers of the following ChatGPT plans:
ChatGPT Plus
ChatGPT Pro
ChatGPT Team
Additionally, OpenAI is preparing to release o3-pro, a more powerful version of o3, exclusively for Pro-tier users. Until its rollout, those on the Pro plan will continue to access the current o1-pro model.
With the launch of o3 and o4-mini, OpenAI is once again demonstrating its commitment to pushing the boundaries of artificial intelligence. These models represent a significant leap in performance, especially in reasoning, coding, mathematical logic, and image-based problem-solving.
The addition of Codex CLI strengthens OpenAI's ecosystem for developers, offering better integration and real-time coding support. By enabling full tool access in one model, OpenAI has set a new benchmark in multimodal AI capability.
While the arrival of GPT-5 is on the horizon, the current rollout gives users and developers a powerful toolkit to experiment, create, and innovate in ways previous generations of AI could not support. Whether you're a student, researcher, coder, or content creator, o3 and o4-mini stand to change how AI assists with your everyday problem-solving.