prodigycorp an hour ago

This is great.

In the past month, OpenAI has released for codex users:

- subagents support

- a better multi-agent interface (codex app)

- 40% faster inference

No joke, with the first two my productivity is already up like 3x. I am so stoked to try this out.

  • brianwawok an hour ago

    Try Claude and you can get x^2 performance. OpenAI is sweating

    • viraptor 12 minutes ago

      May be a bit different depending on what kind of work you're doing, but for me 5.2-codex finally reached a higher level than Opus.

    • klipklop 35 minutes ago

      5.2-codex is pretty solid, and you get dramatically higher usage limits on the cheap plans. I would assume API use is much cheaper as well.

simianwords an hour ago

It’s interesting that they kept the price the same, given that doing inference on Cerebras is much more expensive.

  • diwank an hour ago

    I don't think this is Cerebras. Running on Cerebras would change model behavior a bit, could potentially yield a ~10x speedup, and would be more expensive. So most likely this is them writing new, more optimized kernels for the Blackwell series, maybe?

    • simianwords an hour ago

      Fair point, but one question remains: why is this speedup available only in the API and not in ChatGPT?

  • chillee an hour ago

    This is almost certainly not being done on Cerebras.

OutOfHere an hour ago

In my estimation, OpenAI has a habit of dropping a model's quality after its introduction. I definitely recall ChatGPT 5.2 being a lot better when it was introduced; a week or two later, its quality suddenly dropped. The initial high looked designed to throw off journalists and benchmarks. As such, nothing OpenAI says about model speed can be trusted: all they have to do is lower the average reasoning effort, and boom, it becomes 40% faster (see the sketch below). I hope I am wrong, because if I am right, it's a con game.
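
To make that concrete: the effort knob is exposed right in the API, and turning it down cuts hidden reasoning tokens (and thus wall-clock latency) without touching per-token speed. A rough sketch; the model name here is just an illustrative example:

    from openai import OpenAI

    client = OpenAI()

    # Same prompt at each effort level; watch reasoning tokens (and latency) shrink.
    for effort in ("low", "medium", "high"):
        resp = client.chat.completions.create(
            model="o3-mini",  # any reasoning model; name is illustrative
            messages=[{"role": "user", "content": "Plan a refactor of a 10k-line module."}],
            reasoning_effort=effort,
        )
        usage = resp.usage
        print(effort, usage.completion_tokens,
              usage.completion_tokens_details.reasoning_tokens)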

  • tedsanders 24 minutes ago

    It's good to be skeptical, but I'm happy to share that we don't pull shenanigans like this. We actually take quite a bit of care to report evals fairly, keep API model behavior constant, and track down reports of degraded performance in case we've accidentally introduced bugs. If we were degrading model behavior, it would be pretty easy to catch us with evals against our API.

    In this particular case, I'm happy to report that the speedup is in time per token, so it's not a gimmick from outputting fewer tokens at lower reasoning effort. Model weights and quality remain the same.
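
    If anyone wants to verify independently, the robust metric is seconds per output token rather than total latency, so a drop in token count can't masquerade as a per-token speedup. A rough sketch (the model name is just an example from this thread; wall clock includes time-to-first-token, so prefer long completions):

        import time
        from openai import OpenAI

        client = OpenAI()
        PROMPT = [{"role": "user", "content": "Summarize Hamlet in ~500 words."}]

        def seconds_per_token(model: str) -> float:
            t0 = time.perf_counter()
            resp = client.chat.completions.create(model=model, messages=PROMPT)
            elapsed = time.perf_counter() - t0
            # A genuine inference speedup lowers this ratio; merely emitting
            # fewer (reasoning) tokens leaves it roughly flat.
            return elapsed / resp.usage.completion_tokens

        print(seconds_per_token("gpt-5.2-codex"))  # model name illustrative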

    • zamadatix 6 minutes ago

      Hey Ted, can you confirm whether this 40% improvement is specific to API customers, or is that just a wording thing because this was posted from the OpenAI Developers account?

    • wahnfrieden 15 minutes ago

      You're confirming you don't alter "juice" levels..?