June 10, 2025 11:08 AM
Credit: VentureBeat made with ChatGPT
Good news, AI developers!
OpenAI has announced a substantial price cut on o3, its flagship reasoning large language model (LLM), slashing costs by a whopping 80% for both input and output tokens.
(Recall tokens are the individual numeric strings that LLMs use to represent words, phrases, mathematical and coding strings, and other content. They are representations of the semantic constructions the model has learned through training, and in essence are the LLMs’ native language. Most LLM providers offer their models through application programming interfaces, or APIs, that developers can build apps on top of or plug their existing apps into, and most charge for the privilege at a rate per million tokens.)
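For developers new to token-based billing, the sketch below shows where those token counts come from: every API call reports how many input and output tokens it consumed, and billing is applied against those numbers. It assumes the OpenAI Python SDK with an API key set in the environment; whether the o3 model is available depends on the individual account.

```python
# Sketch: inspecting the billable token counts returned by a single API call.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set
# in the environment; access to the "o3" model depends on your account.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3",  # assumption: your account has access to o3
    messages=[
        {"role": "user", "content": "Summarize the rules of chess in one paragraph."}
    ],
)

usage = response.usage
print(f"Input (prompt) tokens:      {usage.prompt_tokens}")
print(f"Output (completion) tokens: {usage.completion_tokens}")
print(f"Total billable tokens:      {usage.total_tokens}")
```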
The update positions the model as a more accessible option for developers seeking advanced reasoning capabilities, and places OpenAI in more direct pricing competition with rival models such as Gemini 2.5 Pro from Google DeepMind, Claude Opus 4 from Anthropic, and DeepSeek’s reasoning suite.
Announced by Altman himself on X
Sam Altman, CEO of OpenAI, confirmed the change in a post on X highlighting that the new pricing is intended to encourage broader experimentation, writing: “we dropped the price of o3 by 80%!! excited to see what people will do with it now. think you’ll also be happy with o3-pro pricing for the performance :)”
The cost of using o3 is now $2 per million input tokens and $8 per million output tokens, with a further discounted rate of $0.50 per million tokens when the user submits information the model has recently processed and cached (cached input).
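As a rough illustration of what the new rates mean in practice, here is a minimal back-of-the-envelope calculation. The token counts and cache split are hypothetical, chosen only to show the arithmetic at the prices quoted above.

```python
# Back-of-the-envelope cost estimate at the reduced o3 rates.
# Token counts below are hypothetical, chosen only to illustrate the arithmetic.
INPUT_RATE = 2.00 / 1_000_000         # dollars per regular input token
CACHED_INPUT_RATE = 0.50 / 1_000_000  # dollars per cached input token
OUTPUT_RATE = 8.00 / 1_000_000        # dollars per output token


def estimate_cost(input_tokens: int, output_tokens: int, cached_input_tokens: int = 0) -> float:
    """Estimate the dollar cost of one o3 call at the new prices."""
    uncached_input = input_tokens - cached_input_tokens
    return (
        uncached_input * INPUT_RATE
        + cached_input_tokens * CACHED_INPUT_RATE
        + output_tokens * OUTPUT_RATE
    )


# Example: a 10,000-token prompt (half of it cached) producing a 2,000-token answer.
print(f"${estimate_cost(10_000, 2_000, cached_input_tokens=5_000):.4f}")  # -> $0.0285
```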