
OpenAI GPT-5.5: Prices Double Compared to Predecessor Despite Improved Token Efficiency

OpenAI's latest model, GPT-5.5, boasts improved token efficiency, but analyses show its usage costs have as much as doubled compared to GPT-5.4.

Reviewed & edited by the SINGULISM Editorial Team

Photo by Hitesh Choudhary on Unsplash

Rising Costs of Using the Latest Frontier Models

While surging gasoline prices dominate headlines, the cost of using cutting-edge AI models is also steadily increasing. OpenAI recently updated its GPT model family to version 5.5, raising the cost per token. Analyses reveal that, in some cases, the price has doubled compared to its predecessor, GPT-5.4.

Specifically, the cost for one million tokens with GPT-5.5 is now set at $5 for input, $0.50 for cached input, and $30 for output. By comparison, GPT-5.4 charged $2.50 for input, $0.25 for cached input, and $15 for output. OpenAI has defended the price hike, stating, “While GPT-5.5 is more expensive than GPT-5.4, it is also significantly higher-performing and offers much greater token efficiency.” The company claims that its improved efficiency allows users to achieve better results with fewer tokens, which should offset the increased cost.
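At identical token counts, the new price sheet works out to exactly twice the bill. A minimal sketch illustrates this, using the published per-million-token rates and a hypothetical request size (the 8,000/2,000/1,000 token split is an assumption for illustration, not a figure from the article):

```python
# Published list prices per one million tokens (from the article).
PRICES = {
    "gpt-5.4": {"input": 2.50, "cached": 0.25, "output": 15.00},
    "gpt-5.5": {"input": 5.00, "cached": 0.50, "output": 30.00},
}

def request_cost(model, input_tok, cached_tok, output_tok):
    """Cost in dollars for one request; cached_tok is the cached part of the input."""
    p = PRICES[model]
    fresh = input_tok - cached_tok  # non-cached input tokens billed at full rate
    return (fresh * p["input"]
            + cached_tok * p["cached"]
            + output_tok * p["output"]) / 1_000_000

# Hypothetical request: 8,000 input tokens (2,000 cached), 1,000 output tokens.
old = request_cost("gpt-5.4", 8_000, 2_000, 1_000)
new = request_cost("gpt-5.5", 8_000, 2_000, 1_000)
print(f"GPT-5.4: ${old:.4f}  GPT-5.5: ${new:.4f}  ratio: {new/old:.2f}x")
# → GPT-5.4: $0.0305  GPT-5.5: $0.0610  ratio: 2.00x
```

Since every rate doubled, the ratio is 2.00x regardless of the token mix; any real saving has to come from GPT-5.5 emitting fewer tokens.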

Analysis Reveals Higher “Effective Costs”

Despite these claims, an analysis conducted by OpenRouter, an AI routing platform, indicates that actual costs have risen significantly even when accounting for improved efficiency. According to OpenRouter, depending on the length of the prompt, the effective cost of GPT-5.5 has increased by 49% to 92%.

“For longer prompts exceeding 10,000 tokens, the reduction in completion tokens partially offset the cost increase. However, for shorter prompts under 10,000 tokens, the completion tokens were not reduced as significantly, resulting in higher cost increases,” OpenRouter reported. The company’s measurements indicate that GPT-5.5 generates 19% to 34% fewer completion tokens for longer prompts. However, even this reduction has not fully compensated for the higher prices.
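A back-of-the-envelope calculation shows why even the largest observed reduction in completion tokens cannot cancel a doubled price. This sketch applies the list prices from above to illustrative token counts (the 15,000/2,000 and 1,000/2,000 splits are assumptions, not OpenRouter's data); only the output side is assumed to shrink:

```python
def effective_change(input_tok, output_tok, output_reduction):
    """Relative cost change moving from GPT-5.4 to GPT-5.5 list prices,
    when GPT-5.5 emits (1 - output_reduction) times as many completion tokens."""
    old = input_tok * 2.50 + output_tok * 15.00
    new = input_tok * 5.00 + output_tok * (1 - output_reduction) * 30.00
    return new / old - 1

# Long prompt (15k in, 2k out) with the largest reported 34% token reduction:
print(f"{effective_change(15_000, 2_000, 0.34):+.0%}")  # → +70%
# Short prompt (1k in, 2k out) with no reduction in completion tokens:
print(f"{effective_change(1_000, 2_000, 0.0):+.0%}")    # → +100%
```

Even with a third fewer completion tokens, the bill still rises by roughly 70% in this toy scenario, consistent in direction with OpenRouter's reported 49% to 92% range.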

Mounting Cost Pressures Across the Industry

The rising costs are believed to stem from the enormous expenses associated with developing and operating cutting-edge AI models. If predictions that OpenAI will incur losses of $14 billion by 2026 prove accurate, further price hikes may be necessary to balance the books. Competitor Anthropic faces similar challenges, with reports suggesting the company may record losses of $11 billion by 2026.

Anthropic’s Claude Opus 4.7 has not seen an official price change, but the introduction of its improved tokenizer has had an impact on costs. According to OpenRouter’s analysis, accounting for cache savings, effective costs for prompts exceeding 2,000 tokens increased by 12% to 27%. For shorter prompts, significant reductions in completion tokens helped offset the cost increase, but for longer prompts, the total charges still rose.

Moving forward, further price increases for premium cutting-edge AI models seem increasingly unavoidable. Users will need to weigh not only the performance of these models but also their cost-effectiveness when choosing between them.

Frequently Asked Questions

How much have GPT-5.5 prices increased?
For one million tokens, GPT-5.5 charges $5 for input (compared to $2.50 for GPT-5.4) and $30 for output (up from $15). OpenAI argues that its improved token efficiency helps offset the higher prices.
Why are AI model usage costs increasing?
Developing and operating cutting-edge AI models requires massive computational resources and significant R&D investment. Companies like OpenAI and Anthropic are reportedly incurring substantial losses, necessitating price increases to cover costs.
What about pricing for other major AI models?
The article mentions Anthropic’s Claude Opus 4.7, which has not seen an official price change. However, OpenRouter’s analysis found that effective costs increased by 12% to 27% for longer prompts, while shorter prompts saw cost increases partially offset by reduced completion tokens. The entire industry appears to be grappling with rising costs for high-performance models.
Source: The Register

