I wonder if a shared Claude Code instance has the same effect?
Doesn't CC sometimes take twenty, thirty minutes to return an attempt? I wouldn't know, because I'm not rich and my employer has decided CC is too expensive, but I wonder what you would do with your pair programming partner while you wait.
The bosses would like to think we'd start working on something else, maybe start up a different Claude instance, but can you really change contexts and back before the first one is done? You AND your partner?
Nah, just go play air hockey until your boss realizes Claude is what they need, not you.
This is a depressing comment.
I am apprehensive about the future of software development in this milieu. I've pumped out a ~15,000 line application heavily utilizing Claude Code over a few days that seems to work, but I don't know how much to trust it.
Certainly part of the fun of building something was missing during that project, but it was still fun to see something new come to life.
Maybe I should say I am cautiously optimistic but also concerned: I don't feel confident in the best ways to use these tools to build good software, and I'm not sure exactly what skills are useful in order to get them there.
Can I ask what you built?
There was a post recently where someone linked to: https://simplyexplained.com/blog/how-i-built-an-nfc-movie-li...
and I thought the project was amazing, but I didn't like how the IDs were managed in yml, so I built this to make it more dynamic. I plan to add support for other smart home automations with it as well as more streaming services.
One of the features I really like about it is it makes it easy to print and cut out stickers to slap on the NFC cards for playing media.
My toddler loves it so far, and one of his friends has asked me to make one for him as well.
In fact, Apple has historically made it harder for apps to take payments from its users than other platforms have.
https://gs.statcounter.com/os-market-share/mobile/worldwide
So maybe they'd rather please the home market, I guess.
[1]: https://www.techtarget.com/searchcio/definition/Instagram
Android is simply a much worse platform to make money on. Users spend <25% as much as iOS users. Why would they prioritize that?
It is like trying to make a living selling games to macOS users.
Why would they care about prioritizing users who spend much less? Android pays <25% per user. You need a LOT more than 70% to make that worth prioritizing. Those users are just going to eat up free tier resources without paying. It's borderline parasitic from a business perspective.
Android users are more likely to be useful for spreading word-of-mouth reputation to Apple platform users, than they are as direct spenders. Just another reason to ensure Apple platform features don't trail Android.
Globally in dollars spent, not human heads. iOS is over 2x larger than Android globally, and the gap is widening year over year.
iOS spending growth outpaces Android's, which even shrank during COVID while iOS spending continued to grow.
https://api.backlinko.com/app/uploads/2024/03/iphone-vs-andr...
Anthropic makes money off product sales, not ad revenue, so wallets count more than eyes here. Free users who are less than 25% as likely to spend are a burden, not a priority, for a product business with a free tier. They have to spend much more to acquire a paying user on Android.
If Android were the bigger market, they'd prioritize it
https://www.reddit.com/r/applesucks/comments/1k6m2fi/why_do_...
- Claude Code has been added to iOS
- Claude Code on the Web allows for seamless switching to Claude Code CLI
- They have open sourced an OS-native sandboxing system which limits file system and network access _without_ needing containers
However, I find the emphasis on limiting outbound network access somewhat puzzling, because the allowlists invariably include domains like gist.github.com and dozens of others which effectively act as public CMSes and would still permit exfiltration with just a bit of extra effort.
Using a proxy through the `HTTP_PROXY` or `HTTPS_PROXY` environment variables has its own issues: it relies on the application respecting those variables, and if it doesn't, the connection will simply fail. Sure, in this case, since all other network connection requests are dropped, you are somewhat protected, but an application that doesn't respect them will just not work.
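To make the distinction concrete, here's a minimal sketch in Python (assuming the `requests` package; the proxy address and URLs are illustrative): libraries that honor the environment route traffic through the proxy, while anything opening raw sockets bypasses it entirely.

    import os
    import socket
    import requests

    # Hypothetical local proxy that enforces the allow-list.
    os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

    # requests (like urllib) reads HTTP(S)_PROXY from the environment,
    # so this call is routed through the proxy:
    requests.get("https://example.com", timeout=10)

    # A raw socket ignores those variables entirely; if all other egress
    # is dropped, this just fails rather than being proxied:
    socket.create_connection(("example.com", 443), timeout=10)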
You can also have some fun with `DYLD_INSERT_LIBRARIES`, but that often requires creating shims to make it work with codesigned binaries
Just to be clear, I'm excited for the capability to use Claude Code entirely within the browser. However, I've heard reports of Max users experiencing throttled usage limits in recent months, and am concerned as to whether this will exacerbate that issue or not.
EDIT: I had meant "defer", which is the first time I've made a /r/boneappletea in a while.
I'm using Claude Code locally a lot, occasionally with a couple of parallel sessions.
I was very happy when they made the GitHub Action - I used it quite a bit, but in practice I got frustrated that I effectively only get a single back-and-forth out of it, I can't really "continue the conversation without losing context" - Sure, I can respond to it in the PR it makes, but that will be a fresh session with a fresh empty context.
So, as much as I don't like moving out of my standard development workflow with my tools, I think this could be quite useful. The ability to interrupt and/or continue a conversation should be very nice.
My main worry is - usually my unit tests and integration tests rely on a postgres database running on the machine, and it's not obvious to me if I can spin that up here?
With Happy, I managed to turn one of these Claude Code instances into a replacement for Claude that has all the MCP goodness I could ever want and more.
Without a 200% price increase in 3 years, there's no way any of these AI companies will survive.
It's really solid. It's effectively a web (and native mobile) UI over Claude Code CLI, more specifically "claude --dangerously-skip-permissions".
Anthropic have recognized that Claude Code where you don't have to approve every step is massively more productive and interesting than the default, so it's worth investing a lot of resources in sandboxing.
Here's an example from this morning, getting CUDA working on an NVIDIA Spark: https://simonwillison.net/2025/Oct/20/deepseek-ocr-claude-co...
I have a few more in https://github.com/simonw/research
I write code on my phone a lot using ChatGPT voice mode though!
1. Install Karabiner-Elements, a free macOS keyboard remapper[0]
2. Map F19 -> F5 (mic button) in Karabiner-Elements
3. Choose F19 as the voice hotkey in your voice app
And now you can use the handy F5 mic button on your Apple keyboard. WisprFlow automatically has it set for:

- press and hold to talk
- double tap for indeterminate listening until you press F5/Esc

That workflow alone, of using the F5 key and switching between the two modes of speaking (holding or double-tap), has freed up a not insignificant part of my working memory. Turning abstract thoughts into text is higher cost than turning them into voice. I predict individual offices[1] will be more popular as a choice for startups.
So increasingly I let it run, and then review when it stops, and then I give it a proper review, and let it run until it stops again. It wastes far less of my time, and finishes new code much faster. At least for the things I've made it do.
Their "restricted network access" setting looks questionable to me - it allow-lists a LOT of stuff: https://docs.claude.com/en/docs/claude-code/claude-code-on-t...
If you configure your own allow-list you can restrict to just domains that you trust - which is enforced by a separate HTTP/HTTPS proxy, described here: https://docs.claude.com/en/docs/claude-code/claude-code-on-t...
You use firewalls to prevent code running inside the container from opening network connections to anywhere else. The harness that surrounds it can still be made accessible via the network.
I tend to agree. There’s an opportunity to make it easy to have Claude be able to test out workflows/software within Debian, RPM, Windows, etc… container and VM sandboxes. This could be helpful for users that want to release code on multiple platforms and help their own training and testing, which they seem to be heavily invested in given all the “How Am I doing?” prompts we’re getting.
Pretty cool though! Will need to use it for some more isolated work/code edits. Claude Code is now my workhorse for a ton of stuff including non-coding work (esp. with the right MCPs)
I don't have any relationship with any AI company, and honestly I was rooting for Anthropic, but Codex CLI is just way way better.
Also Codex CLI is cheaper than Claude Code.
I think Anthropic are going to have to somehow leapfrog OpenAI to regain the position they were in around June of this year. But right now they're being handed their hat.
Kinda sick of Codex asking for approval to run tests for each test instance
Also while not quite as smart, it's a better pair programmer. If I'm feeling out a new feature and am not sure how exactly it should work yet, I prefer to work with Sonnet 4.5 on it. It typically gives me more practical and realistic suggestions for my codebase. I've noticed that GPT-5 can jump right into very sophisticated solutions that, while correct, are probably not appropriate.
Sonnet 4.5: "Why don't we just poll at an interval with exponential backoff?"
GPT-5: "The correct solution is to include the data in the event stream...let us begin by refactoring the event system to support this..."
That said, if I do want to refactor the event system, I definitely want to use Codex for that.
Moving a bunch of verbose templated HTML around while watching results on a devserver? Haiku all day. It's a bonus that it's cheaper, but the real treat is its speed.
Adding a feature whose planning will involve intake of several files? Sonnet.
Working specifically on 'copy' or taste issues? Still I tend to prefer Opus here.
Individual experiences may vary!
Sonnet 4 was a coding companion, I could see what it was doing and it did what I asked.
Sonnet 4.5 is like Opus: it generates massive amounts of "helper scripts" and "bootstrap scripts" and all kinds of useless markdown documentation files, even for the tiniest PoC scripts.
The generation of helper, markdown, and bootstrap scripts is very dependent on your harness.
It's as if there are two vendors saying they can give you incredible superpowers for an affordable price, and only one of them actually delivers the full package. The other vendor's powers only work on Tuesdays, and when you're lucky. With that situation, in an environment as competitive as things currently stand, and given the trajectory we're on, Claude is an absolute non-starter for me. Without question.
Codex says “This is a lot of work, let me plan really well.”
Claude says “This is a lot of work, let me step back and do something completely different that you didn’t ask for.”
Edit: I'd like to reply to this comment in particular but can't in a threaded reply, so will do that here: "Ah, super secret problem domains that have been thoroughly represented in the LLM training data. Nice."
This exhibits a fundamental misunderstanding of why coding agents powered by LLMs are such a game changer.
The assumption this poster is making is that LLMs are regurgitating whole cloth after being trained on whole cloth.
This is a common mistake among lay people and non-practitioners. The reality is that LLMs have gained the ability to program, by learning from the code of others. Much like a human would learn from the code of others, and then be able to create a completely novel application.
The difference between a human programmer and an agentic coder is that the agent has much broader and deeper expertise across more programming languages, and understands more design patterns, more operating systems, more about programming history, etc. etc., and it uses all this knowledge to fulfill the task you've set it to. That's not possible for any single human.
It's important for the poster to take two realities on board: Firstly, agentic coding agents are not regurgitating whole cloth from whole cloth. Instead they are weaving new creations because they have learned how to program. Secondly, agentic coding agents have broader and deeper knowledge than any human that will ever exist, and they never tire, and their mood and energy level never changes. In fact that improves on a continuous basis as the months go by and progress continues. This means we can, as individual practitioners or fast moving teams, create things that were never before possible for us without raising huge amounts of money and hiring large very expensive teams, and then having the overhead of lining everyone up behind a goal AND dealing with the human issues that arise, including communication overhead.
This is a very exciting time. Especially if you're curious, energetic, and are willing to suspend disbelief to go and take a look.
The easier threading-focused approach to the conversation might be to add the additional comment as an edit at the end of the original and reply to the child https://news.ycombinator.com/item?id=45649068 directly. Of course, I've broken the ability to do that by responding to you now about it ;).
Well, that, and it’s just a bit annoying to claim that you’ve found some amazing new secret but that you refuse to share what the secret is. It doesn’t contribute to an interesting discussion whatsoever.
What honesty? We're not at the point of "the Godfather was a good/bad movie", we're at "no, trust, there's a really good movie called the Godfather".
Your honesty means nothing for an issue that isn't about taste or mostly subjectivity. How useful AI is, and in what ways, is a technical discussion, and that's where the meat of the subject matter is. You've shared nothing on that front. I am not saying you have to, but obviously people are going to downvote you - not because they might agree/disagree, but because it contributes nothing different from every other AI hype man selling a course or something.
Talk is cheap, and we're tired of hearing people tell us how it's enabling them to make incredible software without actually demonstrating it. Your words might be true, or they might be just another over-exaggeration to throw on the pile. Without details we have no way of knowing, and so many of us make the empirically supported choice.
I recently vibe coded a video analysis pipeline with some related arduino-driven machine control. It was work to prototype an experience on some 3D printed hardware I’ve been skunking out.
By describing the pipeline and filters clearly, I had the analysis system generating useful JSON in an hour or so, including machine control simulation, all while watching TV and answering emails/Slacks. Notable misses were that the JSON fields were inconsistent, and the Python venvs were inconsistent with the piped way I wanted the system to operate.
Small fixes.
Then I wired up the hardware, and the thing absolutely crapped itself, swapping libraries, trying major structural changes, and creating two whole new copies of the machine control host code (asking me each time along the way). This went on for more than three hours, with me debugging the mess for about 20 minutes before resorting to 1) ChatGPT, which didn't help, followed by 2) a few minutes of good old-fashioned googling on serial port behavior on Mac. With an old Uno R3 that had been sitting on the shelf, that meant I needed to use the cu.* ports instead of tty.*, something that Claude Code had buried deeply in a tangle of files.
Curious about the failure, I told Claude Code to stop being an idiot and use a web browser to go research the problem of specifically locking up on the open operation. 30 seconds later, and with some reflective swearing from Opus 4.1, which I appreciate, I had the code I should have had 3 hours prior (along with other garbage code to clean up).
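For anyone who hits the same wall, this is roughly what the fix boils down to - a sketch assuming pyserial and a typical Uno-style USB device name (yours may differ): on macOS the tty.* device can block on open waiting for carrier detect, while the cu.* ("call-up") device opens immediately.

    import glob
    import serial  # pyserial

    # Prefer the cu.* device on macOS; tty.* can hang on open.
    ports = glob.glob("/dev/cu.usbmodem*")
    if not ports:
        raise RuntimeError("No Arduino-style serial device found")

    with serial.Serial(ports[0], 9600, timeout=2) as conn:
        conn.write(b"PING\n")  # whatever your host protocol expects
        print(conn.readline().decode(errors="replace"))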
For my areas of sensing, computer vision, machine learning, etc., these systems are amazingly helpful if the algorithms can be completely and clearly described (e.g., Kalman filter to IoU, box blur followed by subsampling followed by split exponential filtering, etc.).
Attempts to let the robots work complex pipelines out for themselves haven’t gone as well for me.
The original post states "I am seeing Codex do much better than Claude Code", and when asked for examples, you have replied with "I don't have time to give you examples, go do it yourself, it's obvious."
That is clearly going to rub folks (anyone) the wrong way. This refrain ("Where's the data?") pops up frequently on HN; if it's so obvious, giving one prompt where Codex does much better than Claude doesn't seem like a heavy lift.
In absence of such an example, or any data, folks have nothing to go on but skepticism. Replying with such a polarizing comment is bound to set folks off further.
Claude struggled for a long time and still didn't find it.
They can save you some time by doing some fairly complex basic tasks that you can write in plain language instead of coding. To get good results you really need a lot of underlying knowledge yourself and essentially, I think of it as a translator. I can write a program in very good detail using normal language and then the LLM can convert it to code with reasonable accuracy.
I haven't been able to depend on it to do anything remotely advanced. They all make up API endpoints or methods or fill in data with things that simply don't exist, but that's the nature of the model.
What I'm saying was to compare my experience with Claude code vs Codex with GPT-5. CC's better than codex in my experience, contrary to GP's comment.
Claude, especially 4.5 Sonnet, is a lot nicer to interact with, so it may be a better choice in cases where you are co-working with the agent. Its output is nicer, and it "improvises" really well even if you give it only vague prompts. That's valuable for interactive use.
But for delegating complete tasks, Codex is far better. The benchmarks indicate that, as do most practitioners I talk to (and it is indeed my own experience).
In my own work, I use Codex for complete end-to-end tasks, and Claude Sonnet for interactive sessions. They're actually quite different.
The output of codex is also not as great. Codex is great at the planning and investigation portion but sucks at execution and code quality.
Then I do a double take and re-read the summary message and realize that it pulled a "and then draw the rest of the owl", seemingly arbitrarily picking and choosing what it felt like doing in that session and what it punted over to "next steps to actually get it running".
Claude is more prone to occasional "cheating" with mocked data or "tbd: make this an actual conditional instead of hardcoded If True" stuff when it gets overwhelmed which is annoying and bad. But it at least has strong task adherence for the user's prompt and doesn't make me write a lawyer-esque contract to avoid any loopholes Codex will use to avoid doing work.
Sonnet 4.5 tends to randomly hallucinate odd/inappropriate decisions and goes on to make stupid changes that have to be patched up manually.
I find that Codex generally requires me to remove code to get to what I want, whereas Claude I tend to use what it gives me and I add to it. Whether this is from additional prompting or from manual typing, i just find that codex requires removal to get to desired state, and Claude requires adding to get to desired state. I prefer adding incrementally than removing.
Also CC tool usage is so much better! Many, many times I’ve seen Codex writing a python script to edit a file which seems to bypass the diff view so you don’t really know what’s going on.
100% of the time Codex has done a far better job according to both Codex and Claude Code when reviewing. Meeting all the requirements where Claude would leave things out, do them lazily or badly and lose track overall.
Codex high just feels much smarter and more capable than Claude currently and even though it's quite a bit slower, it's work that I don't have to go over again and again to get it to the standards I want.
Now, I will concede that for non-coding long-horizon tasks, GPT-5 is marginally worse than Sonnet 4.5 in my own scaffolds. But GPT-5 is cheaper, and Sonnet 4.5 is about 2 months newer. However, for coding in a CLI context, GPT-5-Codex is night-and-day better. I don't know how they did it.
4.0 would chug along for 40 mins. 4.5 refuses and straight up says the scope is too big sometimes.
My theory is Anthropic is super compute-constrained, and even though 4.5 is smarter, the usage limits and its obsession with rushing to finish were put in mainly to save compute on their servers.
Initially, I found Codex CLI with GPT-5 to be a substitute for Claude Code - now GPT-5 Codex materially surpasses it in my line of work, with a huge asterisk. I work in a niche industry, and Codex has generally poor domain understanding of many of the critical attributes and concepts. Claude happens to have better background knowledge for my tasks, so I've found that Sonnet 4.5 with Claude Code generally does a better job at scaffolding any given new feature. Then, I call in Codex to implement actual functionality since Codex does not have the "You're absolutely right" and mocked/placeholder implementation issues of CC, and just generally writes clean, maintainable, well-planned code. It's the first time I've ever really felt the whole "it's as good as a senior engineer" hype - I think, in most cases, GPT5-Codex finally is as good as a senior engineer for my specific use case.
I think Codex is a generally better product with better pricing, typically 40-50% cheaper for about the same level of daily usage for me compared to CC. I agree that it will take a genuinely novel and material advancement to dethrone Codex now. I think the next frontier for coding agents is speed. I would use CC over Codex if it was 2x or 3x as fast, even at the same quality level. Otherwise, Codex will remain my workhorse.
This is beyond bananas to me given that I regularly see codex high and Gpt-5-high both fail to create basic react code slightly off the normal distribution.
Today I was trying to get it to temporarily shim in for development and consume the value of a redux store via merely putting a default in the reducer. Depending on that value, the application would present different state.
It failed to accomplish this and added a disgusting amount of defensive nonsense code in my saga, reducer and component to ensure the value was there. It took me a very short time to correct it but just watching it completely fail at this task was borderline absurd.
Quality varies a lot based on what you're doing, how you prompt it, how you orchestrate it, and how you babysit and correct it. I haven't seen anything I'd call senior, but I have seen it, for some classes of tasks, turn this particular engineer into many seniors. I still have to supply all the heavy lifting (here's the concurrency model, how you'll ensure exactly-once-delivery, particular functions and classes you definitely want, a few common pitfalls to avoid, etc), but then it can flesh out the details extremely well.
This is a really neat way of describing the phenomenon I've been experiencing and trying to articulate, cheers!
Isn't that the same? Just because you recognize something someone else wrote and makes you go "ohh, I understand it conceptually" doesn't mean that you can apply that concept in a few days or weeks.
So when the person you responded to says:
>almost overnight *my abilities* and throughput were profoundly increased
I'd argue the throughput did but his abilities really weren't, because without the tool in question you're just as good as before the tool. To truly claim that his abilities were profoundly increased, he has to be able to internalize the pattern, recognize the pattern, and successfully reproduce it across variable contexts.
Another example would be claiming that my painting abilities and throughput were profoundly increased, because I used to draw stick figures and now I can draw Yu-Gi-Oh! cards by using the tool. My throughput was really increased, but my abilities as a painter really haven't.
They put all of their eggs in the coding basket, with the rest of their mission couched as "effective altruism," or "safetyism," or "solving alignment," (all terms they are more loudly attempting to distance themselves from[0], because it's venture kryptonite).
Meanwhile, all OpenAI had to do was point their training cannon at it for a run, and suddenly Anthropic is irrelevant. OpenAI's focus as a consumer company (and growing as a tool company) is a safe, venture-backable bet.
Frontier AI doesn't feel like a zero-sum game, but for now, if you're betting on AI at all, you can really only bet on OpenAI, like Tesla being a proxy for the entire EV industry.
[0] https://forum.effectivealtruism.org/posts/53Gc35vDLK2u5nBxP/...
Even with the additional Sora usage and other bells & whistles that ChatGPT @ $200 provides, Claude provides more value for my use cases.
Claude Code is just a lot more comfortable being in your workflow and being a companion or going full 'agent(s)' and running for 30 minutes on one ticket. It's also a lot happier playing with Agents from other APIs.
There's nothing wrong with Anthropic wanting to completely own that segment and not have aspirations of world domination like OpenAI. I don't see how that's a negative.
If anything, the more ChatGPT becomes a 'everything app' the less likely I am to hold on to my $20 account after cancelling the $200 account. I'm finding the more it knows about me the more creeped out and "I didn't ask for this" I become.
It's very clear by their actions (not words) that they are shooting for the moon in order to survive. There is no path to sustainability as a collection of dev tools.
- Good bash command permission system
- Rollbacks coupled with conversation and code
- Easy switching between approval modes (Claude had a keybind that makes this easy)
- Ability to send messages while it’s working (Codex just queues them up for after it’s done, Claude injects them into the current task)
- Codex is very frustrating when I have to keep allowing it to run the same commands over and over; in Claude this works well once I approve a command for the session
- Agents (these are very useful for controlling context)
- A real plan mode (crucial)
- Skills (these are basically just lazy loaded context and are amazing)
- The sandboxing in codex is so confusing, commands fail all the time because they try to log to some system directory or use internet access which is blocked by default and hard to figure out
- Codex prefers python snippets to bash commands which is very hard to permission and audit
When Codex gets to feature parity, I’ll seriously look at switching, but until then it’s just a really good model wrapped in an okay harness
But these CLI tools are still fairly thin wrappers around an LLM. Remember: they're "just an LLM in a while loop with access to tool calls." (I exaggerate, and I love Claude Code's more advanced features like "skills" as much as anyone, but at the core, that's what they are.) The real issue at stake is what is the better LLM behind the agent: is GPT-5 or Sonnet 4.5 better at coding. On that I think opinion is split.
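To make "an LLM in a while loop with tool calls" concrete, here's a deliberately stripped-down sketch using the Anthropic Python SDK - the model id and the single bash tool are illustrative, and a real harness adds permissions, context management, and much more:

    import subprocess
    from anthropic import Anthropic

    client = Anthropic()
    tools = [{"name": "bash", "description": "Run a shell command",
              "input_schema": {"type": "object",
                               "properties": {"command": {"type": "string"}},
                               "required": ["command"]}}]
    messages = [{"role": "user", "content": "Fix the failing test in ./src"}]

    while True:
        reply = client.messages.create(model="claude-sonnet-4-5",  # assumed model id
                                       max_tokens=4096, tools=tools, messages=messages)
        messages.append({"role": "assistant", "content": reply.content})
        if reply.stop_reason != "tool_use":
            break  # model finished without asking for a tool
        results = []
        for block in reply.content:
            if block.type == "tool_use":
                out = subprocess.run(block.input["command"], shell=True,
                                     capture_output=True, text=True)
                results.append({"type": "tool_result", "tool_use_id": block.id,
                                "content": out.stdout + out.stderr})
        messages.append({"role": "user", "content": results})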
Incidentally, you can run Claude Code with GPT-5 if you want a fair(er) comparison. You need a proxy like LiteLLM and you will have to use the OpenAI api and pay per-token, but it's not hard to do and quite interesting. I haven't used it enough to make a good comparison, however.
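If you want to try it, the setup is roughly this (a sketch, not gospel: the env var names are what I believe Claude Code reads for custom gateways, and 4000 is LiteLLM's default proxy port - double-check both sets of docs):

    import os
    import subprocess

    env = dict(os.environ,
               ANTHROPIC_BASE_URL="http://127.0.0.1:4000",  # local LiteLLM proxy routing to OpenAI, billed per-token
               ANTHROPIC_AUTH_TOKEN="sk-your-litellm-key")  # placeholder key
    subprocess.run(["claude"], env=env)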
I think this is because they see it as a checkbox whereas Anthropic sees it as a primary feature. OpenAI and Google just have to invest enough to kill Anthropic off and then decide what their own vision of coding agents looks like.
The Anthropic (ClaudeCode) tooling is best-in-class to me. You listed many features that I have become so reliant on now that I consider them the ante that other competitors need to even be considered.
I have been very impressed with the Anthropic agent for code generation and review. I have found the OpenAI agent to be significantly lacking by comparison. But to be fair, the last time I used OpenAI's agent for code was about a month ago, so maybe it has improved recently (not at all unreasonable in this space). But at least a month ago when using them side-by-side the codex CLI was VERY basic compared to the wealth of features and UI in the ClaudeCode CLI. The agents for Claude were also so much better than OpenAI, that it wasn't even close. OpenAI has always delivered me improper code (non-working or invalid) at a very high rate, whereas Claude is generally valid code, the debate is just whether it is the desired way to build something.
https://github.com/just-every/code
Fixed all of these in a heartbeat. This has been a game changer
This is especially the case in a fast moving field such as this. You would not want to get stuck in the same local minimum as your competitor.
I would rather we have competing products that try different things to arrive at a better solution overall.
CC on the other hand feels more creative and has mostly given better UI.
Of course, once the page is ready, I switch to Codex to build further.
Claude feels like a better fit for an experienced engineer. He's a positive, eager little fellow.
I set up an MCP tool to use gpt-5 with high reasoning with Claude Code (like tools with "personas" like architect, security reviewer, etc), and I feel that it SIGNIFICANTLY amplifies the performance of Claude alone. I don't see other people using LLMs as tools in these environments, and it's making me wonder if I'm either missing something or somehow ahead of the curve.
Basically instead of "do x (with details)" I say "ask the architect tool for how you should implement X" and it gets into this back and forth that's more productive because it's forcing some "introspection" on the plan.
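In case anyone wants to replicate it, here's a minimal sketch of that kind of persona tool, assuming the official Python MCP SDK (FastMCP) and the OpenAI Responses API - the model name, reasoning setting, and prompt are my assumptions, not anything Claude Code ships with:

    from mcp.server.fastmcp import FastMCP
    from openai import OpenAI

    mcp = FastMCP("personas")
    client = OpenAI()

    @mcp.tool()
    def architect(question: str) -> str:
        """Ask a high-reasoning 'architect' persona how to approach an implementation."""
        response = client.responses.create(
            model="gpt-5",                 # assumed model name
            reasoning={"effort": "high"},
            input=f"You are a software architect reviewing a plan. {question}",
        )
        return response.output_text

    if __name__ == "__main__":
        mcp.run()  # stdio transport; register it as an MCP server in Claude Code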
Sourcegraph Amp (https://sourcegraph.com/amp) has had this exact feature built in for quite a while: "ask the oracle" triggered an O1 Pro sub-agent (now, I believe, GPT-5 High), and searching can be delegated to cheaper, faster, longer-context sub-agents based on Gemini 2.5 Flash.
It's very odd, because I was hoping they were on par.
ClaudeCode is used by me almost daily, and it continues to blow me away. I don't use Codex often because every time I have used it, the output is next to worthless and generally invalid. Even if it does get me what I eventually want, it will take much more prompting for me to get the functioning result. ClaudeCode on the other hand gets me good code from the initial prompt. I'm continually surprised at exactly how little prompting it requires. I have given it challenges with very vague prompts where it really exceeds my expectations.
It also often fails to escalate a command, it'll even be like, oh well I'm in a sandbox so I guess I can't do this, and will just not do it and try to find a workaround instead of escalating permission to do the command.
For me, though, it's not remotely close. Codex has fucked up 95% of the 50-or-so tasks I asked it to do, while Claude Code fucks up only maybe 60%.
I'm big on asking LLMs to do the first major step of something, and then coming back later, and if it looks like it kinda sucks, just Ctrl-C and git revert that container/folder. And I also explicitly set up "here are the commands you need to run to self-check your work" every time. (Which Codex somewhat weirdly sometimes ignores with the explicit (false) claim that it skipped that step because it wasn't requested... hmm.)
So, those kinds of workflow preferences might be a factor, but I haven't seen Codex ever be good yet, and I regret the time I invested trying it too early.
YMMV, but this definitely doesn't track with everything I've been seeing and hearing, which is that Codex is inferior to Claude on almost every measure.
1. Codex takes longer to fail and with less helpful feedback, but tends to at least not produce as many compiler errors

2. Claude fails faster and with more interesting back-and-forth, though tends to fail a bit harder
Neither of them are fixing the problems I want them to fix, so I prefer the faster iteration and back-and-forth so I can guide it better
So it's a bit surprising to me when so many people are picking a "clear winner" that I prefer less atm
The best answer is each has its uses. Using codex to do bulk edits is dumb because it takes forever, etc etc
Let's call it the skeptical public? We've been listening to a group of people rave about how revolutionary these tools are, how they're able to perform senior level developer work, how good their code is, and how they're able to work autonomously through the use of sub-agents (i.e. vibe coding), without ever providing evidence that would support any of those grandiose claims.
But then I use these tools myself[1] and I speak to real developers who have used them and our evaluation centers around lukewarm, e.g. good at straightforward, junior level tasks, or good for prototyping, or good for initially generating tests, or good for answering certain types of questions, or good for one-off scripts, but approximately none of them would trust these LLMs to implement a more complex feature like a mid-level or senior developer would without very extensive guidance and hand-holding that takes longer than just doing it ourselves.
Given the overwhelming absence of evidence, the most charitable conclusion I can come to is that the vast majority of people making these claims have simply gone from being 0.2X developers to being 0.3X developers who happen to generate 5X more code per unit of time.
[1] e.g. my reply to https://news.ycombinator.com/item?id=45651948
To me, the tool inherently makes sense and vibes with my own personality. It allows me to write code that I would otherwise procrastinate on. It allows me to turn ideas into reality, so much faster.
Maybe you're just hyper focused on metrics? Productivity, especially when dealing with code, is hard to quantify. This is a new paradigm, and so it's also hard to compare apples to oranges. Does this help?
The people I talked to use a wide variety of environments and their experience is similar across the board, whether they're working in Nodejs, React, Vue, Ruby, PHP, Java, Elixir, or Python.
> Productivity, especially when dealing with code, is hard to quantify.
Indeed, that's why I think most people claiming these obscene benefits are really bad at evaluating their own performance and/or started from a really low baseline.
I always think back to a study I read a while ago where people without ADHD were given stimulant medication and reported massive improvements in productivity but objective measurements showed that their real-world performance was equal to, or slightly lower than their baseline.
I think it's very relevant to the psychology behind this AI worship. Some people are being elevated from a low baseline whilst others are imagining the benefits.
> there are also products being made that are actually versatile, complex, and completely vibe-coded.
Which ones? I'm looking for repositories that are at least partially video-documented to see the author's process in action.
Does that help you? I doubt it. But there you go.
Neither tool is worth paying even $20 a month for when it comes to Elixir, that's how little value I get out of them, and it's not because I can't afford it.
Both LLMs suck if you let them do everything without architecting the solution first. So I always specify the high-level architecture of how I want something, specifically around how the data should flow and be consumed and what I really want to avoid. With these constraints and a bit of prompt engineering, they are actually quite good.
I always do that. Last time I spent an hour planning, going through the requirements, having it ask questions, only for it to completely botch the implementation.
Sure, I can treat it like a junior and spend 2-3 hours planning everything down to the individual function level and it's going to implement it alright. The code will work but it won't be idiomatic. Or I can just do it myself in 3 hours total to a much higher standard of quality, without gambling on a successful outcome, while simultaneously improving my own knowledge, understanding, and abilities.
No matter how I try to use them, agentic coding is always a net negative on my productivity (disposable one-off scripts excluded).
That said, you're right that the broader internet (Reddit especially) is heavily astroturfed. It's not unusual to see "What's the best X?" threads seeded by marketers, followed by a horde of suspiciously aligned comments.
But without actual evidence, these kinds of meta comments like yours (and mine) are just cynical noise.
Fails to escalate permissions, gets derailed, loves changing too many things everywhere.
GPT5 is good, but codex is not.
I understand the 70K spend as a corporate expense, not an individual... right?
I've been seeing more of this lately despite initial excellent results. Not sure what's going on, but the value is certainly dropping for me. I'll have to check out codex. CLI integration is critical for me at this point. For me it is the only thing that actually helps realize the benefits of LLM models we have today. My last NixOS install was completely managed by Claude Code and it worked very well. This was the result of my latest frustrations:
https://i.imgur.com/C4nykhA.png
Though I know the statement it made isn't "true". I've had much better luck pursuing other implementation paths with CC in the same space. I could have prompted around this and should have reset the context much earlier but I was drunk "coding" at that point and drove it into a corner.
I thought Claude Code was still better at tool calling and things like that.
On the other hand, last time I tried GPT-5 from Cursor, it was so disappointing. It kept getting confused while we were iterating on a plan, and I had to explain to it multiple times that it was still thinking about the problem the same way. After a while I gave up, opened a new chat, and gave it my own summary of the conversation (with the wrong parts removed), and then it worked fine. Maybe my initial prompt was vague, but it continually seemed to forget course corrections in that chat.
I mostly tend to use them to save me from typing, rather than asking them to design things. Occasionally we do a more open-ended discussion, but those have great variance. It seems to do better with such discussions online than within the coding tool (I've bounced maths/implementation ideas off of it while writing shaders on a personal project).
when making boilerplatish changes in the product in areas I'm not familiar with (it's a large codebase) gpt-5-high is a monster.
Claude Code has only been generally available since May last year (a year and a half ago)... I'm surprised by the process that you are implying; within a year and a half, you both spent 70k on Claude Code and knew enough about it and its competition to switch away from it? I don't think I'd be able to do due diligence even if LLM evaluation were my full-time job, let alone the fact that the capabilities of each provider are changing dramatically every few weeks.
Which means it wasn’t true any of the previous times, so why would it be true this time? It feels like an endless loop of the “friendship ended” meme with AI companies.
https://knowyourmeme.com/editorials/guides/what-is-the-frien...
It’s much more likely commenters are still in the honeymoon hype phase and (again) haven’t found the problems because they’re hyper focused on what the new thing is good at that the previous one wasn’t, ignoring the other flaws. I see that a lot with human relationships as well, where people latch on to new partners because they obviously don’t have the big problem that was a strain on the previous relationship. But eventually something else arises. Rinse and repeat.
GPT-5 is not the final deal, but it's incredibly good as is at coding.
Anecdotal, but it's something else entirely in terms of capabilities. Ignore it at your own peril, but I think it will profoundly change software development.
I’m not arguing for ignoring it, my point is different.
> but I think it will profoundly change software development.
The point is that this is said every time, together with “the previous thing to which the exact same praise was given, wasn’t it”. So it’s several rounds of “yes yes, the previous time the criticisms were right, but this time it’s different, trust me”. So everyone else is justified in being skeptical.
No one wants AI to have the problems it has (technical, ethical, and others). If it didn't, that would be better for everyone. Criticism is a way of surfacing issues so they can be fixed.
And sure, I’ll grant that some people want to bash the other side more than they want to arrive at the truth, but those exist in all sides of the argument (probably in roughly equal measure?). So to have a productive conversation we need to go in with the mindset of “we’re on the same side in the goal of not having this suck”.
Maybe the productive thing is actually to ignore naysayers and goalpost movers and use the tools.
You aren’t enlightened for not liking a tool. “Oh, hammers? Absolutely a bubble, after all they never fixed the hit-your-thumb issue i blogged about, and nail guns just let you hurt your thumbs faster”
I’ll say it again:
> So to have a productive conversation we need to go in with the mindset of “we’re on the same side in the goal of not having this suck”.
If you’re unwilling to engage in those terms and steel man the argument, I don’t see the point in engaging in conversation. If what you want is to straw man and throw unsubstantiated jabs at someone, there are other communities better suited for that.
True, but even the boy who cried wolf too many times eventually got his sheep eaten by the wolf.
I have my own personal anecdotal benchmarks and I never hyped LLMs before GPT-5.
Things that simply did not work before GPT-5 no matter how many shots I gave them, GPT-5 breezed through.
For me, it would take at least 2 generations of no felt progress in the models to call for diminishing returns, and I'm not seeing them.
The commenter in question is the CTO of the company which makes Wordfence. My instinct says they're not on the OpenAI payroll and you're looking at a normal comment and not advertisement.
I think you should check your priors man; it's worth thinking critically before you toss out accusations like that.
I suspect the only way to prove that I’m legit to the doubters is to do something a paid shill or a bot would never ever do.
To the commenters who think I’m a shill or bot: fuck every single one of you and the various motherfucking horses you rode in on.
I suspect we may be entering a dystopia where vulgarity is proof of life.
Maybe you're an LLM trained on 4chan
If I posted on Reddit about how I just spent 70k on a watch and someone replied that they didn't trust me, maybe I would laugh or reply with "whatever", but never would I reply in anger.
Also, I was referring broadly to the phenomenon, not your post specifically; e.g. even if your post is from a real human, it's the replies and upvotes that push your post to the top.
I don't expect to convince you, but if there's anything I can do to un-upset you, I'd be happy to try. :)
...is what a reasonable argument against would sound like. But in truth, nobody really knows who is running that account. There's nothing stopping anybody from passing off their HN account to someone else, having it stolen from them, or even selling it. They could possibly even be who they say they are, but have an undisclosed vested interest in the thing they're promoting.
Internet communities aren't dead, but social media sure is, and Hacker News is ultimately a social media site.
Many people are complaining about this on HN and Reddit. I do not have any proof, but there is a pattern; I suppose Anthropic first attracts customers, then starts to optimize costs/margins.
Depending on the time of the year you can expect fresh updates from any given company on how their new models and tools perform and they'll generally blow the competition out of the water.
The trick is to either realize that your current tools will just become magically better in a few months OR lean in and switch companies as their tools and models update.
If we ship more for less because the new agent doesn't tap out, that's not a honeymoon, it's an upgrade.
If you ship more for less, but less maintainable or less correct, then it's not necessarily an upgrade. Always the same question: does it benefit the developer? The product? The company?
It was already possible, without AI, to look like one is doing a great job ("they are producing so much! Let's promote them!") but actually just building a bad codebase. The art being to get the promotion and move to the next step before the project implodes.
Not saying that AI necessarily ends up doing that, but it most certainly helps.
However, Anthropic severely restricted Opus use for Max plan users 10 days or so ago (12-fold, from 40h/week down to 5h/week) [1].
Sonnet is a vastly inferior model for my use cases (but still frequently writes better Rust code than Codex). So now I use Codex for planning and Sonnet for writing the code. However, I usually need about 3--5 loops of Codex reviewing, Sonnet fixing, rinse & repeat.
Before I could use one-shot Opus and review myself directly, and do one polish run following my review (also via Opus). That was possible from June--mid October but no more.
They all like to push synthetic benchmarks for marketing, but to me there's zero doubt that both Anthropic and OpenAI are well aware that they're not representative of logical thinking and creativity.
Costs are 6x cheaper, it's way faster, and it's good at test writing and tool calling. It can sometimes be a bit messy though, so use Gemini or Claude or Codex for the hard problems....
* Is really only magic on Linux or WSL. Mediocre on Windows
* Is quite mediocre at UI code but exceptional at backend, engineering, ops, etc. (I use Claude to spruce up everything user facing -- Codex _can_ mirror designs already in place fairly well).
* Exceptional at certain languages, OK at others.
* GPT-5 and GPT-5-Codex are not the same. Both are models used by the Codex CLI and the GPT-5-Codex model is recent and fantastically good.
* Codex CLI is not "conversational" in the way that Claude is. You kind of interact with it differently.
I often wonder about the impact of different prompting styles. I think the WOW moment for me is that I am no longer returning to code to find tangled messes, duplicate silo'd versions of the same solution (in a different project in the same codebase), or strangely novice style coding and error handling.
As a developer for 20yrs+, using Codex running the GPT-5-Codex model has felt like working with a peer or near-peer for the first time ever. I've been able to move beyond smaller efforts and also make quite a lot of progress that didn't have to be undone/redone. I've used it for a solid month, making phenomenal progress and offloading work as if I had another developer.
Honestly, my biggest concern is that OpenAI is teasing this capable model and then pulls the rug in a month with an "update".
As for the topic at hand, I think Claude Code has without a doubt the best "harness" and interface. It's faster, painless, and has a very clean and readable way of laying out findings when troubleshooting. If there were a cheap and usable version of Opus... perhaps that would keep Claude Code on the cutting edge.
Initially, I had great success with Codex medium - I could refactor with confidence, code generally ran on the first or second try, etc.
Then when that suddenly dumbed down to Claude Sonnet 3.5 quality I moved to GPT5 High to get back what had been lost. That was okay for a few days. Now GPT5 High has dropped to Claude Sonnet 3.5 quality.
There's nothing left to fall back to.
The only edge Claude has is context window, which we do sometimes hit, but I’m sure that gap will close.
Would love any suggestions if anyone is in a similar situation.
https://developer.microsoft.com/blog/azure-devops-with-githu...
(if they want employees to use more AI, ditch ADO, embrace GitHub)
I don't want to leak data either way by using some "let's throw SSO from a sketchy adtech company into the trust loop".
I don't want to wait a minute for Anthropic's login-by-email link, and have the process slam the brakes on my workflow and train of thought.
I don't want to wait a minute for OpenAI's MFA-by-email code (even though I disabled that in the account settings, it still did it).
I don't want to deal with desktop clients I don't trust, or that might not keep up with feature improvements. Nor have to kludge up a clumsy virtualization sandbox for an untrusted client, just to ask an LLM questions that could just be in a Web browser.
I wish the standard were for companies to check new passwords against leaked password lists, e.g. what https://haveibeenpwned.com uses.
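For what it's worth, that check can be done without sending the full password (or even its full hash) anywhere - a minimal sketch of the Pwned Passwords range API (k-anonymity: only the first 5 characters of the SHA-1 hash leave your machine), assuming the `requests` package:

    import hashlib
    import requests

    def is_pwned(password: str) -> bool:
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # The API returns lines of "HASH_SUFFIX:COUNT" for the given prefix.
        body = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10).text
        return any(line.split(":")[0] == suffix for line in body.splitlines())

    print(is_pwned("correct horse battery staple"))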
I use a similar workflow and have found that websites that allow passkey-based login can avoid the friction of waiting for TOTP codes or magic links.
The current claude.ai signin mechanism is rather annoying.
I have set up a little workflow where, given Linear tags, it sets up a worktree on my dev box, installs deps, and starts the implementation so I can take it over. I prefer this workflow to the fully managed cloud-based solutions.
This kind of fits in for issues where I’m basically sure I won’t have to take it over (and it can do it fully on its own). Which aren’t that many.
Very simple example: there was a warning popup on something where I thought there shouldn't be one; now it's done fully automatically from my phone in 5 mins. I quite like that these small changes become so easy.
I got my environment working well with Codex's Cloud Task. Trying the same repo with Claude Code Web (which started off with Claude Code CLI, mind you), the yarn install just hangs with no debuggable output.
I'll try this, but the grounding seems crucial for these LLMs to deliver results that are fewer shot than otherwise.
I then have to go in and advise it on factoring and things like that, but the functionality itself is present and working.
AI coding should be tightly in the inner dev loop! PRs are a bad way to review and iterate on code. They are a last line of defense, not the primary way to develop.
Give me an isolated environment that is one click hooked up to Cursor/VSCode Remote SSH. It should be the default. I can't think of a single time that Claude or any other AI tool nailed the request on the first try (other than trivial things). I always need to touch it up or at least navigate around and validate it in my IDE.
Also it isn't always about editing. It is about seeing the surrounding code, navigating around, and ensuring the AI did the right thing in all of the right places.
[1] https://ona.com/
I want to run a prompt that operates in an isolated environment that is open in my IDE where I can iterate with the AI. I think maybe it can do this?
I'd love to see a short animation of what it would actually look like to do the core flow. Prompt -> environment creation -> iterating -> popping open VSCode Web -> Popping open Cursor desktop.
Also, a lot of the links on that page you linked me to are broken:
* "manual edits and Ona Agents is very powerful."
* "Ona’s automations.yaml extends it with tasks and services"
* "devcontainer.json describes the tools" * Once in Cursor I can't click on modified files or lines and have my IDE jump to it. Very hard to review changes.
* I closed the Ona tab and couldn't figure out how to get it back so I could prompt it again.
* I can't pin the Ona tab to the right like Cursor does
* Is there a way to select lines and add them to context?
* Is there a way I can pick a model?

idk, we’ve (humans) gotten this far with them. I don’t think they are the right tool for AI generated code and coding agents though, and that these circles are being forced to fit into those squares. imho it’s time for an AI-native git or something.
AI is more akin to pair programming with another person sitting next to you. I don't want to ship a PR or even a branch off to someone sitting next to me. I want to discuss and type together in real time.
I like the idea of background agents running in the cloud but it has to be a more persistent environment. It also has to run on a GUI so it can develop web applications or run the programs we are developing, and run them properly with the GUI and requiring clicking around, typing things etc. Computer use, is what we need. But that would probably be too expensive to serve to the masses with the current models
I'm mainly aiming for a good experience with what we have today. Welding an AI agent onto my IDE turned out to be great. The next incremental step feels like being able to parallelize that. I want four concurrent IDEs with AI welded onto it.
That said, maybe this is the turning point where these companies work toward solving it in earnest, since it's a key differentiator of their larger PLATFORM and not just a cost. Heck, if they get something like that working well, I'd pay for it even without the AI!
Edit: that could end up being really slick too if it was able to learn from your teammates and offer guidance. Like when you're checking some e2e UI flows but you need a test item that has some specific detail, it maybe saw how your teammate changed the value or which item they used or created, and can copy it for you. "Hey it looks like you're trying to test this flow. Here's how Chen did it. Want me to guide you through that?" They can't really do that with just CLI, so the web interface could really be a game changer if they take full advantage of it.
It would be great to be able to check in on Claude on a walk or something, to make sure it hasn't gone off the rails, or send it a quick "LGTM" to keep it moving down a large PLAN.md file without being tethered to a keyboard and monitor. I can SSH from my phone, but the CLI ergonomics are ... not great with an on-screen keyboard, when all it really needs is a simple threaded chat UI.
I've seen a couple Github projects and "Happy Coder" on a Show HN which I haven't got around to setting up yet which seem in the ballpark of what I want, but a first party integration would always be cool.
Each agent has its own isolated container. With Pairing Mode, you can sync the agent's code and git state directly into your local Cursor/any IDE so you can instantly validate its work. The sync is bidirectional so your local changes flow back to the agent in realtime.
Happy to answer any questions - I think you'll really like the tight feedback loop :)
I’d like
- agent to consolidate simple non-conflicting PRs
- faster previews and CI tests (Render currently)
- detect and suggest solutions for merge conflicts
Codex web doesn’t update the PR, which is also something to change (maybe it's a setting), but for web code agents (?) I’d like the PR, once opened, to stay open.
Also PRs need an overhaul in general. I create lots of speculative agents, if I like the solution I merge, leading to lots of PRs
Plus they generate so much noise with all the extra commits and comments that go to everyone in slack and email rather than just me.
They want to turn everything into a bootstrap framework, which is probably the limit of their mental horizon. And many people maintain that the emperor is fully clothed and that the scam works.
Would you like me to print a chart of the growth of cloud businesses from 2010-2025?
Seeing comments like this all over the place. I switched to CC from Cursor in June / July because I saw the same types of comments. I switched from VSCode + Copilot about 8 months before that for the same reason. I remember being skeptical that this sort of thing was guerilla marketing, but CC was in fact better than Cursor. Guess I'll try Codex, and I guess that it's good that there are multiple competing products making big strides.
Never would have imagined myself ditching IDEs and workflows 3x in a few months. A little exhausting
I use CC and Codex somewhat interchangeably, but I have to agree with the comments. Codex is a complete monster, and there really isn't any competition right now.
Wouldn't work for my case since I need a lot of HDD space, GPUs etc. to run the thing I'm working on, but it would be great if I could run a Claude Code server in my server, expose the port and then connect via web or iOS interface.
Sure, I can use tmux/ssh, but it's very impractical, especially on mobile.
It can run in a front-end only mode (I'll put up a hosted version soon), and then you need to specify your OpenCode API server and it'll connect to it. Alternatively, it can spin up the API server itself and proxy it, and then you just need to expose (securely) the server to the internet.
The UI is responsive, and my main idea was that I can easily continue directing the AI from my phone, but it's also of course possible to just spin up new sessions. So often I have an idea while I'm away from my keyboard, and being able to just say "create an X" and let it do its thing while I'm on the go is quite exciting.
It doesn't spin up a special sandbox environment or anything like that, but you're really free to run it inside whatever sandboxing solution you want. And unlike Claude Code, you're of course free to choose whatever model you want.
It’s Sonnet 4.5 + GPT-5 working together.
Codex just isn’t as good as people make it out to be. OpenAI seems to train on a lot of JavaScript/Tailwind to make visuals look more impressive but when it comes to actual backend work it just fails more than it succeeds. Sonnet is much better at chewing through tasks and GPT 5 is great at consulting planning and analysis.
Using Amp and asking it to check everything with the oracle leads to superior results.
But no one on HN has heard of it. I’m guessing HN hates twitter?
Worth trying out. The free version doesn’t have the oracle so I use the paid version.
And you can think through with first principles to see why it won’t expand developer hiring. Since AI progress is jagged, some industries will be affected in outsized ways while others may thrive more. But the increase in demand from new industries won’t absorb the reduction in demand from disrupted industries.
Though I suspect the performance of Jules is worse than the Gemini CLI. I hope that this is as good as the Claude Code CLI.
Imagine if this would just be able to use your nix file in your repo to fetch all the dependencies needed to run your project. That'd be extremely sick
It’s interesting how all the LLMs slowly end up with the same feature set and picking one really ends up with personal preference.
Me as a dev am happy that I now have 4 autonomous engineers that I can delegate stuff to depending on task difficulty and rate limits. Even just Copilot + Codex has made me a lot more productive
Also rip to all the startups that tried to provide “Claude in the cloud”, though this was very predictable to happen
https://youtu.be/s-avRazvmLg?si=eQqY6w8kbxv3TFhQ
The dev just types in a prompt, scrolls down the bottom and makes a PR asking others to review without even looking at what they just did.
Lmao. Know their target market for sure.
How does Codex / Claude Code compare to working within Cursor with the chat and agents? Are they effectively the same thing?
Is one significantly better than the other? Please share your experiences around this; I'm trying to be as effective an engineer as I can be at our company. - Mike
I was curious how the 'Open in CLI' works - it copies a command to clipboard like 'claude --teleport session_XXXXX', which opens the same chat in the CLI, and checks out a new branch off origin/main which it's created for the thread, called 'claude/feature-name-XXXXX'.
I prefer not to use CC at the 'PR level' because it still needs too much hand-holding, so very happy to see that they've added this.
Update: Session titles are either being leaked between users or have a very bad LLM writing them. I'm seeing "Update Ton Blockchain Configuration" and "Retrieve Current PIN Code" for a project that has nothing to do with blockchain or PIN codes...
“Quick, Claude, what have I supposedly been WorkingOn all morning? … ‘Blockchain Token Configuration Update’, perfect!”
                  total        used        free      shared  buff/cache   available
    Mem:           13Gi       306Mi        12Gi          0B       126Mi        12Gi
    Swap:            0B          0B          0B

The sandbox has ~12G RAM, but no docker or podman allowed.

Unfortunately it doesn't work for me, as I need docker compose or equivalent to fire up some env for local testing.
Google’s version Jules has one.
Claude Code has better UX. Period. The permission system, rollbacks, plan mode - it's more polished. Iterative work feels natural. Quick fixes, exploratory coding, when I'm not sure exactly what I want yet - Claude wins.
Codex is more reliable when stakes are high. Hard problems. Multi-file refactors. Complex business logic. The model just grinds through it. Less hand-holding needed.
Here's the split I've landed on - Claude for fast iteration tasks where I'm actively involved. Codex for delegate-and-walk-away work that needs to be right first time.
Not about which is "better" - wrong question. It's about tooling vs model capability. Claude optimized the wrapper. OpenAI optimized the engine.
I'd much rather give it read permissions, have it work in its own clone, and then manually pull changes back through (either with a web review UI somehow, or just pulling the changes locally). Partly for security, partly just to provide a good review gate.
Would also allow using this with other people's repos, where I _can't_ give write permissions, which would be super helpful for exploring dependency repos, or doing more general research. I've found this super helpful with Claude Code locally but seems impossible on the web right now.
We're on GitLab for historic reasons. Where GitHub now has numerous opportunities to use AI as part of your workflow, there's nothing in GitLab (from what I can tell) unless you're paying big bucks.
I like using AI to boost my productivity. I'm surprised that that'll be the thing that makes me migrate to GitHub.
I am trying to stay vendor neutral with my own coding agent (1). To approach this, I created a desktop app that connects to coding agent running on either my own infra or yours (local or your cloud server). Desktop app and coding agent are separate binaries.
If you host on your own infra, then you can bring your own AI provider too. Similarly, I want to give choice for the git host. Right now I am targeting GitHub, but I want to add GitLab soon after the MVP. All this has made the path to my MVP longer, but I see a clear long-term aim for myself - we should have choices.