
I just cancelled my OpenAI $200 sub yesterday because of all this, but sadly I can't agree.

Codex 5.3 Xhigh > Opus 4.6 in my work to this point.

Hoping for Opus 4.7 or whatever comes next to rectify this as I'm a bit annoyed over having to drop to a lower quality model.



Weirdly enough, I agree with both sides. Opus beats every version of GPT 5 as a chat interface, hands down. ChatGPT, at this point, is mostly me correcting its output style, cadence, behavior, etc, and consistently remaining dissatisfied, meanwhile Opus one-shots things I didn’t even think it could (Typst code). All that said, I do my programming in OpenAI’s Codex app for Mac. It has completely dominated Claude Code for me. I’ll only ever use Opus to check 5.3-Codex’s work. Very weird world we’re living in. I hope it gets even weirder once Deepseek does whatever they’ve been cooking.


Whatever they've been cooking at DeepSeek, I don't think I'm going to let their coding agent run shell stuff on my computer unless they make it free or something.


For coding, I agree, Codex-5.3 is the best out there.

But for the chat, I feel like ChatGPT got worse and worse.


Something very weird is going on; I just tried a free trial of Codex-5.3, and a significant fraction of what it gives me doesn't even compile (or, in the case of Python, run without crashing).

Unless I specifically say "use git", it won't bother using git; apparently saying "configure AGENTS.md to use best practices" isn't enough for it (at least in this case) to actually use git. If this were isolated I might put it down to bad luck, given the nature of LLMs, but I have been finding Codex uses the wrong approaches all over the place. It also stops in the middle of tasks and skips some tasks entirely (sometimes while marking them as done, other times it just never gets around to them).
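For what it's worth, the vague "best practices" wording may itself be the problem; spelling things out explicitly has worked better for me. A minimal sketch of the kind of AGENTS.md I mean (the wording is illustrative, not any official schema):

    # AGENTS.md

    ## Version control
    - Initialize a git repository before writing any code (git init).
    - Commit after each completed step with a descriptive message.

    ## Workflow
    - Do not mark a task as done until its code compiles and its tests pass.
    - If you skip a task, say so explicitly rather than silently omitting it.

No guarantee any given model follows this reliably, but concrete directives seem to get ignored less often than appeals to "best practices".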

I'd rank the output of Claude as similar to a junior with 1-3 years of experience. It's not great, but it's certainly serviceable, and with a bit of tweaking even shippable. Codex… what I see is more like a student project, or perhaps someone in the first month of their first job. Even the absolute worst human developers I've worked with after university weren't as bad as Codex, but several of them I'd rank worse than Claude.


Do you have instructions.md and docs.md files?

Also, I noticed it skips instructions if I steer it with prompts while it is doing stuff instead of queueing my instructions.


Are you using it on Xhigh?


I have not observed meaningful quality differences between the default (medium) and extra high. What does make a difference is to turn the metaphorical lights back on, and instead of vibe-coding (as in, don't even look at the code) actually examine what it did at each step (either at code level or QA) before allowing it to proceed to the next step.

OpenAI's 5.3 Codex model on xhigh still makes a huge number of mistakes, somewhere between 25-50% of commits, and it's still terrible at making its own plan, estimating how long tasks will take to complete, and recognising which tasks need to be subdivided*. Claude's model last November was better on both counts: even though it still wasn't, IMO, ready for true lights-off-no-code-check-needed vibe-coding, it made mistakes far less often and scoped task complexity appropriately.

That said, given xhigh seems to be going through my token allowance far, far slower than on medium, I wouldn't be surprised if it turns out the Codex app itself is vibe coded and has mis-mapped that setting in some weird way. Either that or they've suddenly got a lot more spare capacity because of the boycotts.

* given the METR study, in the planning phase I ask all these models (Codex and Claude) to break down tasks into things that would take a junior developer 1-2 hours, but Codex will estimate 60 minutes for everything from "write 19 lines including comments to stub 3 empty methods in new class" to tasks I'd expect to take a senior 2 days.


What were you using it for? Claude is really good at agentic stuff. For pure coding I can see Codex being better, but for the entire workflow, I'm not sure.


I use Codex purely for coding, and that's 90% of my use case for AI in general (10% using ChatGPT web for misc stuff). I pop out to Opus in Claude Code regularly to try to stay up on their relative performance, but so far the primary value I've been able to derive from CC is as a second set of eyes for code review / poking holes in plans. For primary planning / debugging / implementation Codex outclasses it atm sadly.


I use Opus 4.6 Fast-mode. It produces significantly better results in my work than any Codex 5.3 tier.


Me too. It's great that my employer pays for it and there's basically no budget limit, because this configuration is 10x more expensive than the regular default Sonnet.


Rapid iteration would possibly make up for the drop in quality, but I can't afford to use fast mode as I'm a contractor and pay for my own AI usage :(



