
uv is the best thing to happen to the Python ecosystem in a decade

https://emily.space/posts/251023-uv
1205•todsacerdoti•8h ago•691 comments

China has added forest the size of Texas since 1990

https://e360.yale.edu/digest/china-new-forest-report
385•Brajeshwar•1d ago•241 comments

Tell HN: Azure outage

662•tartieret•11h ago•633 comments

IRCd service written in awk

https://example.fi/blog/ircd.html
14•pabs3•31m ago•2 comments

Minecraft removing obfuscation in Java Edition

https://www.minecraft.net/en-us/article/removing-obfuscation-in-java-edition
575•SteveHawk27•10h ago•197 comments

Raspberry Pi Pico Bit-Bangs 100 Mbit/s Ethernet

https://www.elektormagazine.com/news/rp2350-bit-bangs-100-mbit-ethernet
70•chaosprint•3h ago•14 comments

OS/2 Warp, PowerPC Edition

https://www.os2museum.com/wp/os2-history/os2-warp-powerpc-edition/
30•TMWNN•3h ago•12 comments

AWS to bare metal two years later: Answering your questions about leaving AWS

https://oneuptime.com/blog/post/2025-10-29-aws-to-bare-metal-two-years-later/view
627•ndhandala•15h ago•431 comments

Dithering – Part 1

https://visualrambling.space/dithering-part-1/
225•Bogdanp•8h ago•48 comments

How the U.S. National Science Foundation Enabled Software-Defined Networking

https://cacm.acm.org/federal-funding-of-academic-research/how-the-u-s-national-science-foundation...
57•zdw•5h ago•15 comments

AOL to be sold to Bending Spoons for $1.5B

https://www.axios.com/2025/10/29/aol-bending-spoons-deal
192•jmsflknr•10h ago•170 comments

Kafka is Fast – I'll use Postgres

https://topicpartition.io/blog/postgres-pubsub-queue-benchmarks
312•enether•12h ago•250 comments

A century of reforestation helped keep the eastern US cool

https://news.agu.org/press-release/a-century-of-reforestation-helped-keep-the-eastern-us-cool/
89•softwaredoug•3h ago•10 comments

Tailscale Peer Relays

https://tailscale.com/blog/peer-relays-beta
259•seemaze•10h ago•71 comments

Crunchyroll is destroying its subtitles

https://daiz.moe/crunchyroll-is-destroying-its-subtitles-for-no-good-reason/
175•Daiz•3h ago•58 comments

OpenAI’s promise to stay in California helped clear the path for its IPO

https://www.wsj.com/tech/ai/openais-promise-to-stay-in-california-helped-clear-the-path-for-its-i...
156•badprobe•9h ago•210 comments

Board: New game console recognizes physical pieces, with an open SDK

https://board.fun/
147•nicoles•23h ago•56 comments

The Internet runs on free and open source software and so does the DNS

https://www.icann.org/en/blogs/details/the-internet-runs-on-free-and-open-source-softwareand-so-d...
111•ChrisArchitect•8h ago•7 comments

GLP-1 therapeutics: Their emerging role in alcohol and substance use disorders

https://academic.oup.com/jes/article/9/11/bvaf141/8277723?login=false
156•PaulHoule•2d ago•67 comments

How to Obsessively Tune WezTerm

https://rashil2000.me/blogs/tune-wezterm
79•todsacerdoti•7h ago•47 comments

Keep Android Open

http://keepandroidopen.org/
2342•LorenDB•22h ago•748 comments

Responses from LLMs are not facts

https://stopcitingai.com/
148•xd1936•5h ago•100 comments

Meta and TikTok are obstructing researchers' access to data, EU commission rules

https://www.science.org/content/article/meta-and-tiktok-are-obstructing-researchers-access-data-e...
147•anigbrowl•4h ago•68 comments

More than DNS: Learnings from the 14 hour AWS outage

https://thundergolfer.com/blog/aws-us-east-1-outage-oct20
79•birdculture•2d ago•25 comments

Using Atomic State to Improve React Performance in Deeply Nested Component Trees

https://runharbor.com/blog/2025-10-26-improving-deeply-nested-react-render-performance-with-jotai...
4•18nleung•3d ago•0 comments

Upwave (YC S12) is hiring software engineers

https://www.upwave.com/job/8228849002/
1•ckelly•10h ago

Composer: Building a fast frontier model with RL

https://cursor.com/blog/composer
179•leerob•10h ago•133 comments

How blocks are chained in a blockchain

https://www.johndcook.com/blog/2025/10/27/blockchain/
50•tapanjk•2d ago•21 comments

Extropic is building thermodynamic computing hardware

https://extropic.ai/
97•vyrotek•8h ago•70 comments

Tailscale Services

https://tailscale.com/blog/services-beta
126•xd1936•1d ago•28 comments

Composer: Building a fast frontier model with RL

https://cursor.com/blog/composer
179•leerob•10h ago

Comments

ibash•10h ago
Very cool, congrats!
romanovcode•10h ago
Where is the comparison with Sonnet 4.5? That would be the only thing that matters, really.
matheist•10h ago
> "Best Frontier" includes GPT-5 and Sonnet 4.5, which both outperform Composer.
yodon•10h ago
>> "Best Frontier" includes GPT-5 and Sonnet 4.5, which both outperform Composer.

Looking at the graph, it would appear there's an implicit "today" in that statement, as they do appear poised to equal or surpass Sonnet 4.5 on that same benchmark in the near future.

timcobb•7h ago
Does anyone code with GPT-5? I've never had it work in Cursor. I mean, like, at all.
srush•5h ago
A lot of people use it! It scores very well on our benchmarks, significantly better than Composer-1.
alyxya•10h ago
I wonder if this custom model is trained on Cursor users. There's a lot of potential for how much better a custom model could be the more closely it's integrated with the product. Having the model learn to adapt to different user preferences would make it stand out compared to memoryless frontier models.
Sammi•10h ago
The fact that you are wondering this is bad. You definitely should know this. _ALL_ the online AI providers are training on your data. They have more expensive enterprise plans if you want to opt out.
alyxya•10h ago
I’ve generally seen providers allow you to opt in or out. What may vary is what the default is and what they may offer in exchange for using your data (perhaps they could offer higher rate limits).
nu11ptr•10h ago
I love Cursor. I've tried Copilot/Claude/etc. but keep coming back to Cursor. I just want to work, and Cursor tab complete is dang accurate, esp. for refactoring tasks.
Sammi•10h ago
I tried going back to VS Code + Copilot a month ago. I only lasted 4 days because it was too bad. It was super slow and gave poor suggestions, but mostly it just flat out did not suggest anything. Cursor feels snappy in comparison and the suggestions are more often than not useful. The most annoying thing about Cursor tab complete is that it is so fast that when I am doing something unusual it will keep jumping in with useless suggestions. They have a snooze function for this though.
WanderPanda•9h ago
Damn, TIL. I always used > Cursor: disable completions and forgot to turn it on again. I need to try snooze then!
stared•10h ago
While I am excited to see a new model, I am skeptical when there is so much vagueness - charts with "frontier models" that never spell out which ones, and charts with no numbers (on the time axis, or in one chart, anywhere at all).
srush•9h ago
There is a footnote that should help with the models. Training is a harder thing to report on, but roughly our finding here is that RL scales.
solarkraft•10h ago
People on here love to be contrarian about Cursor, but I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline) and found Cursor’s overall experience to just be unmatched. A big part of that is its speed, another its reliability.

It’s the only coding agent I’m actually really motivated to use out of the box because it really does make me feel more productive while the others keep messing up the project, from way too large changes I didn’t ask for all the way to constant syntax and request errors.

It’s the only coding agent I’ve used that feels serious about being a product rather than a prototype. Their effort in improving their stack is totally paying off.

pqdbr•10h ago
I dropped cursor for the precise reason you mention: reliability.

Countless times my requests in the AI chat just hang there for 30+ seconds more until I can retry them.

When I decided to give Claude Code a try (I thought I didn't need it because I used Claude in Cursor), I couldn't believe how much faster it was, and literally 100% reliable.

EDIT: given today's release, decided to give it a go. The Composer1 model _is_ fast, but right at the second new agent I started, I got this:

> Connection failed. If the problem persists, please check your internet connection or VPN

davidgomes•10h ago
A lot of progress is being made here on the Cursor side; I encourage you to try it again.

(Cursor dev)

cleak•10h ago
This is the exact reason I left Cursor for Claude Code. Night and day difference in reliability. The Windows experience might be especially bad, but it would constantly hang or otherwise fail when trying to run commands. I also had to babysit Cursor and tell it to continue for mid-sized tasks.
jonasnelle•10h ago
They've improved performance dramatically in the last few weeks, might have fixed your issues.
hobs•9h ago
It's clear they've been shipping a lot of Windows updates.
chasebank•10h ago
I use cursor daily, my business partner uses CC. Without a doubt, CC is certainly better, I'm just not willing to let go of the flow I spent the last year fine tuning. I'll probably make the leap after we finish the latest release.
infecto•9h ago
Sounds like you have a network problem. Did you try checking the network diagnostic in settings? They default to http2 which can throw a wrench in some corporate networks.

I would be willing to bet money your issue is on your side. I am a daily user since the beginning and cannot recall when I have had issues like you describe unless it was related to my corp network.

ramon156•10h ago
You tried Claude and still prefer cursor?
solarkraft•9h ago
Absolutely. CC can be tuned to not do too much crap on its own, but even with the new extension its IDE integration and multi thread management are still significantly worse, as is its status reporting, which I find to be very important.

Also, somehow magically, I’ve found Cursor’s Auto mode to be significantly faster than the specific models I’ve tried, Claude being among them.

infecto•9h ago
Auto is pretty amazing and I think most folks that have issues or complain about cost are simply not using Auto.
enraged_camel•8h ago
Auto is only good for trivial stuff at this point. It is quite subpar at everything else. This is probably because it almost always defaults to Claude Sonnet 3.5 (which you can tell if you ask the agent to identify itself and tell you its version), and that is pretty outdated.
infecto•6h ago
Again, it goes back to what your workflow is. I don't think trivial is the right word. I use Auto to write fairly advanced code, but I do it in bite-size chunks, or relatively bite-size. So think function level, or a couple of interdependent functions being written.

I would agree it is not as good on doing lengthy work where it’s taking design all the way through implementing a feature in a single shot but trivial is not a good description.

I also don’t think you’re right. 3.5 was recently deprecated and even before then, Cursor has been hitting rate limits with Anthropic. Auto is as much a token cost optimization as it is a rate limit optimization.

lubujackson•8h ago
Auto had a big improvement a few weeks ago (around when pricing changed)
infecto•6h ago
If a few weeks is months, I would agree. I think the change to Auto was 2-3+ months ago, when they moved to charging for named models and higher limits on Auto.
infecto•9h ago
Absolutely. I actually don't understand the preference folks have for Claude Code. I don't find it that powerful. That said, I think some of it comes down to preference and work context.
psygn89•9h ago
Yep, it just works seamlessly. Sure, it hangs sometimes, but their UI allows you to retry or undo changes to an earlier point in the conversation easily. The autocompletion is nice as well and pretty satisfying to tab through the small and menial things when refactoring.
rtfeldman•9h ago
> I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline)

Can't help but notice you haven't tried Zed!

infecto•9h ago
I too have tried them all and have settled on Cursor being the best. That said, I see the current space split between folks like me, who generally know what they want built and appreciate a tool that helps them get to the goal quicker, and, on the other side of the spectrum, folks who want the tool to orchestrate most of the engineering. I have no opinion on which is better, but I sit in the first camp. In that camp Cursor is by far the best tool.
jasonjmcghee•10h ago
Maybe I'm an outlier but Sonnet 4.5 quality is about as low as I'm willing to go.

Its generation speed is not the problem or the time sink.

It's wrestling with it to get the right output.

---

And just to clarify, as maybe I misunderstood again, but people are comparing Cursor to Claude Code and Codex etc. here - isn't this whole article all Cursor, just using different models?

srush•10h ago
Agree that Sonnet 4.5 is an excellent model. Would be curious to hear your experience using Composer though, it's quite good.
jasonjmcghee•9h ago
I'll try it out! I haven't yet - just generally conveying my opinion that I personally weigh "better model" much more important than speed, assuming some "fast enough"

Also, didn't realize you worked at Cursor - I'm a fan of your work - they're lucky to have you!

srush•8h ago
Thanks! Yeah, been working here for 9 months now. Fascinated by agentic coding both as a researcher and a user.

Totally agree that "smart model" is the table stakes for usefulness these days.

swyx•10h ago
> Sonnet 4.5 quality is about as low as I'm willing to go.

literally a 30 day old model and you've moved the "low" goalpost all the way there haha. funny how humans work

vidarh•9h ago
Yes? Because why should we settle for less now that it is available?
swyx•9h ago
because engineering is the art of "good enough" and composer is clearly "good enough but a lot faster" which makes up for intelligence gaps in interesting ways
vidarh•9h ago
It's not good enough for a lot of us, though, clearly.
jasonjmcghee•9h ago
Yup - just like the sibling comment said - my "low bar" is going to be whatever the best model is that isn't unreasonably costly.

Speed of model just isn't the bottleneck for me.

Before it I used Opus 4.1, and before that Opus 4.0 and before that Sonnet 4.0 - which each have been getting slightly better. It's not like Sonnet 4.5 is some crazy step function improvement (but the speed over Opus is definitely nice)

solarkraft•9h ago
The reason I pulled out the comparison is to highlight how serious they are about all the important parts that make or break the AI coding experience - speed being very important to me. I’d rather catch my model doing the wrong thing quickly than having a higher chance of one-shotting it at the cost of having to do a lot of specification upfront.
alyxya•9h ago
There are two different kinds of users: on one side, people who are more hands-off and want the model to autonomously handle longer tasks with minimal guidance, and on the other, users who want to interactively collaborate with the model to produce the desired results. Speed matters much more in the second case, where you know what you want and just want the model to implement whatever you had in mind as quickly as possible. Intelligence/ability matters more in the first case, when you don't have full understanding of all the code. For me it's context dependent: more serious work tends to be more interactive. The intelligence of a model doesn't make up for issues due to lack of context, to me.
jasonjmcghee•9h ago
I'm very solidly in the second group - but I review all the code. If it writes faster than I can read, that's fast enough.
timcobb•7h ago
Same... I've found that using a non-Claude model just ends up being more expensive and not worth it. "Auto" tokens are hardly free, and I've had plenty of experiences putting "Auto" to work on a "simple" seeming task to have it consume like 1 USD of tokens quite quickly while producing nothing of value, when I'd replay with Claude 4.5 Sonnet non-thinking and it would provide a solid solution for 0.5 USD.
NaomiLehman•5h ago
gpt-5-high is as low as i can go :]
jonasnelle•10h ago
Cursor has the best Tab model, and I feel like their lead there has kept growing - they're doing some really cool things there. https://cursor.com/blog/tab-rl

I wonder how much of the methods/systems/data transfers; if they can pull off the same with their agentic coding model, that would be exciting.

srush•10h ago
We are also big Tab users here at Cursor. In the blog we talk about how the motivation for this project came from thinking about a Tab-like agent.
dagss•10h ago
It's great. BUT: Wish they had selected another shortcut like shift+tab.

Every time I write code myself I find myself racing the AI to get an indentation in before the AI is done... gets annoying

RosalieCodes•9h ago
You can change the key bind, I personally set it to ctrl+tab
vidarh•9h ago
I feel like that's like having a lead in producing better buggy whips.

I run Claude Code in the background near constantly for a variety of projects, with --dangerously-skip-permissions, and review progress periodically. Tabbing is only relevant when it's totally failing to make progress and I have to manually intervene, and that to me is a failure scenario that is happening less and less often.

lubujackson•8h ago
This is just a completely different use of LLMs and has little to do with working at a real business with a live site and users. Cursor is great when you want to gain understanding of an issue quickly, or resolve something clear and specific quickly.

I'm not against YOLO vibe coding, but being against tab completion is just insane to me. At the end of the day, LLMs help you achieve goals quicker. You still need to know what goal you want to achieve, and tab completion basically lets me complete a focused goal nearly as soon as I determine what my goal is.

vidarh•4h ago
Some of these projects are at a "real business with a live site and users". Two of the current ones are.

And it's not remotely "YOLO vibe coding". All the code gets reviewed, and tested thoroughly, and they are worked to specs, and gated by test suites.

What I don't do is babysit the LLM until its code passes both the test suite and automated review stages, because that's a waste of time.

Others of these projects are research tasks. While I wrote this comment, Claude unilaterally fixed a number of bugs in a compiler.

camdenreslink•7h ago
What are you building with this workflow? Is it an application live in production with users? It is such a foreign way of working to me.
vidarh•4h ago
A compiler (hobby project). A web application server (tooling for my consultancy). An agentic framework to part-automate end-to-end development of a large web app (customer project). An analytics platform to analyze infrastructure maturity (customer project).

Usually I'll have several Claude Code sessions running in parallel on different projects, and when one of them stops I will review the code for that project and start it again - either moving forwards or re-doing things that have issues.

enraged_camel•9h ago
Tab model is fantastic but I wish it was somehow aware of the conversation happening in the currently active AI chat session.
kilroy123•10h ago
What I can't stand about cursor is the constantly changing and confusing billing and usage.

I think competition in the space is a good thing, but I'm very skeptical their model will outperform Claude.

srush•10h ago
Hi everyone,

I am an ML researcher at Cursor, and worked on this project. Would love to hear any feedback you may have on the model, and I can answer questions about the blog post.

alyxya•9h ago
Is the new model trained from scratch? What training data went into it?
dfltr•9h ago
Is it true that Cheetah is Grok Code Fast 2? Does this mean that the new Cursor model is also based on Grok?
srush•9h ago
Cheetah was an earlier (and dumber) version of this model that we used to test production speed. They are both developed in-house. If you liked Cheetah, give this model a try.
dfltr•9h ago
Awesome, thanks for the clarification. So are the rumors around Cheetah being based on a Grok model just straight up untrue? I want to try Composer but have a pretty strict no X/Grok policy.
srush•8h ago
Straight up untrue.
carlosbaraza•9h ago
This is nice. I liked Cheetah for grunt work that I want to get out quickly and that is not too hard. The speed is really awesome. A model that would run at even higher speeds, like the OSS models at groq/cerebras, would really be workflow changing, because the slowness of SOTA models really breaks the flow. I find myself taking a ton of breaks and getting distracted while I wait for a model to complete a task (e.g. just now).
srush•8h ago
Let us know how you like it.
WanderPanda•9h ago
Why did you stop training shy of the frontier models? From the log plot it seems like you would only need ~50% more compute to reach frontier capability
srush•9h ago
We did a lot of internal testing and thought this model was already quite useful for release.
WanderPanda•9h ago
Makes sense! I like that you guys are more open about it. The other labs just drop stuff from the ivory tower. I think your style matches better with engineers who are used to datasheets etc. and usually don't like poking a black box
srush•9h ago
Thanks! I do like the labs blog posts as well though, OpenAI and Anthropic have some classics.
pdeva1•9h ago
is Composer a fine tune of an existing open source base model?
srush•9h ago
Our primary focus is on RL post-training. We think that is the best way to get the model to be a strong interactive agent.
comex•9h ago
So, yes, but you won’t say what the base model is? :)
chaidhat•9h ago
Which model did you distill it from? Great work! PS: getting a few scenarios where it doesn't follow rules as well as Sonnet 4.5.
srush•9h ago
The blog talks about the training process. Specifically we trained with RL post-training on coding examples.
chis•9h ago
Makes sense, but what model was used for the base? Is it some open-source model, and you're not at liberty to disclose?
chaidhat•8h ago
that's cool thanks!
MysticFear•9h ago
There is a youtube livestreamer building with it now, if you are looking for direct feedback: https://www.youtube.com/watch?v=1bDPMVq69ac
srush•7h ago
neat!
carlosbaraza•9h ago
How do you work with multiple agents?
srush•7h ago
We train with a single agent. Is that the question?
az226•8h ago
How many times have you needed to reset the optimizer during the RL training cycles?
juanma0216•7h ago
I prefer the approach of focusing on faster models despite their lower intelligence because I want my IDE to fly when I can see the code. I find this useful when I need to manually debug something that any model is able to do, so I know it's going to fail but at least it will fail fast. On the other hand, if I need more intelligence I have my other CLI that doesn't allow me to see the code but gets the planning and difficult code done.
srush•7h ago
Our view is that there is now a minimum amount of intelligence necessary to be productive, and that if you can pair that with speed, that is awesome.
smg•6h ago
Can you please tell us more about how you used Ray for setting up the RL infrastructure?
srush•6h ago
Oh good question. Actually speaking at the Ray Summit next week in SF so we will talk more about it. We used Ray throughout the pipeline for running evals, for the RL controller, for data collation, and for visualizations. One tool we found helpful was Ray Data which let us easily scale over data and run logs.
nvartolomei•6h ago
Please share more about Ray Data use case.
srush•6h ago
We use Ray Data for our map-style processing jobs. For example, one tool we have runs over all the rollouts from the RL system and collects qualitative statistics to understand which types of agent trajectories are being rewarded, and what types of searches and terminal commands are being made.
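For the curious, here is a minimal sketch of what such a map-style job over rollout logs could look like with Ray Data. The path and field names are hypothetical, not Cursor's actual pipeline:

    import ray

    # Hypothetical rollout logs stored as JSON, one record per trajectory.
    rollouts = ray.data.read_json("s3://example-bucket/rl-rollouts/")

    def summarize(row: dict) -> dict:
        # Reduce each rollout to a few qualitative statistics.
        return {
            "reward": row["reward"],
            "num_commands": len(row.get("commands", [])),
        }

    stats = rollouts.map(summarize)
    print("rollouts:", stats.count())
    print("mean reward:", stats.mean("reward"))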
embedding-shape•6h ago
Do you have any graphs handy that replicate the first one in the blog post but a bit less ambiguously, maybe without the model grouping? I feel like it would have been fairer to include proper names and show the models individually, rather than group everything together and then present your own model on its own.
coder543•5h ago
Impressive systems write-up. A question: if Composer is an RL finetune on an open model, why keep weights closed? The edge from a slightly better checkpoint erodes quickly in this market, it's not a durable advantage. Composer protects Cursor's margins from being squeezed by the big AI labs, but that is true whether the weights are open or closed, and I think Cursor would have more lasting benefit by generating developer goodwill than from a narrow, short-lived advantage. But, that's just my opinion. I personally find it hard to get excited about yet-another proprietary model. GPT-5 and Sonnet 4.5 are around when I need one of those, but I think the future is open.
ripped_britches•3h ago
Amazing work! The UX is great.

GPT-5-codex does more research before tackling a task; that is the biggest weakness keeping me from using Composer yet.

Could you provide any color on whether ACP (from zed) will be supported?

dlojudice•3h ago
Congratulations on your work. I spent the day working with a mix of the Composer/Sonnet 4.5/Gemini 2.5 Pro models. In terms of quality, the Composer seems to perform well compared to the others. I have no complaints so far. I'm still using Claude for planning/starting a task, but the Composer performed very well in execution. What I've really enjoyed is the speed. I had already tested other fast models, but with poor quality. Composer is the first one that combines speed and quality, and the experience has been very enjoyable to work with.
Agingcoder•2h ago
It's stunning.

I don't use these tools that much ( I tried and rejected Cursor a while ago, and decided not to use it ) but having played with GPT5 Codex ( as a paying customer) yesterday in regular VSCode , and having had Composer1 do the exact same things just now, it's night and day.

Composer did everything better, didn't stumble where Codex failed, and most importantly, the speed makes a huge difference. It's extremely comfortable to use, congrats.

Edit: I will therefore reconsider my previous rejection

srush•1h ago
Awesome to hear, I will share with the team.
swyx•10h ago
see also https://cursor.com/changelog/2-0 and https://cursor.com/blog/2-0

other links across the web:

https://x.com/amanrsanger/status/1983581288755032320?s=46

https://x.com/cursor_ai/status/1983567619946147967?s=46

swyx•9h ago
my very small nit is... why is the model called Composer?? of all things?? when there was already a Cursor Composer from 2024.

Cursor Cheetah would've been amazing. Reusing the Composer name feels like the reverse OpenAI Codex move haha

srush•9h ago
We like the name Composer and were sad to see it go. Excited to bring it back. (Agree Cheetah is a cool name too.)
OsrsNeedsf2P•10h ago
One thing no competitor is serious about is average response completion time. Cursor lapped everyone there.
srush•8h ago
There are lots of good models we like here. But we agree that getting the right point on the smart+fast graph can make agentic coding feel really good.

(Cursor researcher)

80hd•10h ago
Insane velocity from the Cursor team. I wonder how they move so fast?
srush•10h ago
We don't wear shoes [1].

[1] https://www.businessinsider.com/no-shoes-policy-in-office-cu...

timcobb•7h ago
I would have thought it's because you use Cursor...
numbers•9h ago
Please keep the naming of your models sane. I'd like to know that Composer 1 is the first model and Composer 2 is the second, but that Composer 1o is not yet another 1 variant that's actually newer and better than 2 - that's just dumb. Not that you're doing that; some other companies do.
srush•8h ago
We will do our best. Luckily I don't think there are major telecom companies called Composer-2.
carlosbaraza•9h ago
Could anyone explain how to use multiple agents and subagents in Cursor, Claude Code, or others? It is already challenging to me taming one model doing work, let alone synchronizing multiple parallel workers.

Do you have to split the plan in parallelizable tasks that could be worked in parallel in one codebase without breaking and confusing the other agents?

asdev•9h ago
You can use git worktrees and just have multiple Claude Code terminal instances working on each worktree. That way they don't clash; just delete the worktree when the task is done. A sketch of the flow is below.
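A minimal sketch of that flow (the paths and branch names are just examples):

    # one worktree per task, each on its own branch
    git worktree add ../myrepo-task-a -b task-a
    cd ../myrepo-task-a && claude     # run a separate Claude Code instance here

    # when the task is done and merged, clean up
    git worktree remove ../myrepo-task-a
    git branch -d task-a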
carlosbaraza•8h ago
I have never leveraged git worktrees... That is such a crazy useful tool that I am almost ashamed of not having researched it before. Git is such a beautiful piece of software.
asdev•8h ago
I built an open source project to make the whole workflow easier: https://github.com/built-by-as/FleetCode
asdev•9h ago
is Cursor Bench open? Would like to see an open benchmark for agentic coding
srush•9h ago
Unfortunately not, as we used our own internal code for the benchmark. We would also like to see more benchmarks that reflect the day-to-day agentic coding use.
gabriel666smith•7h ago
Is there any information at all available, anywhere, on what Cursor Bench is testing and how?

It's the most prominent part of the release post - but it's really hard to understand what exactly it's saying.

srush•5h ago
Roughly: we had Cursor software engineers record real questions they were asking models, and then record the PR they made that contained the result. We then cleaned these up. That is the benchmark.
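In other words, each entry pairs a real prompt with the PR that eventually resolved it. A hypothetical sketch of what one record might look like (the field names are guesses, not Cursor's actual schema):

    from dataclasses import dataclass

    @dataclass
    class BenchEntry:
        prompt: str              # the question an engineer actually asked
        repo_commit: str         # repo state at the time of the question
        reference_pr_diff: str   # the PR the engineer ultimately merged

    # An agent's proposed diff would then be scored against reference_pr_diff.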
ukblewis•5h ago
Which programming languages/tools/libraries did the team's questions/code involve?
carlosbaraza•9h ago
Cursor 2.0 keeps crashing on me while an agent is running and I open the IDE part of the application. I might have to roll back.
amilich•8h ago
Hey - really sorry to hear this - could you email me andrew@cursor.com? Here are 3 suggestions to try:
1. Reset your settings.json - if shared with VS Code, settings can sometimes cause perf regressions.
2. Try cmd-shift-p -> "capture and send debugging data" - it will send us some profiling data to debug.
3. As a last resort, clear your user data (this will delete chats): cmd-shift-p -> "reveal user data," close the app, then delete this folder and restart the app.
neuronexmachina•8h ago
For anyone else who was wondering, it looks like the within-Cursor model pricing for Cursor Composer is identical to gemini-2.5-pro, gpt-5, and gpt-5-codex: https://cursor.com/docs/models#model-pricing

($1.25 input, $1.25 cache write, $0.13 cache read, and $10 output per million tokens)
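As a rough worked example at those rates (the request size here is made up, not a typical Cursor request):

    # hypothetical request: 200k uncached input tokens, 5k output tokens
    input_cost = 200_000 / 1_000_000 * 1.25    # $0.25
    output_cost = 5_000 / 1_000_000 * 10.00    # $0.05
    print(input_cost + output_cost)            # ~$0.30 for the request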

lubujackson•7h ago
I'm curious whether their near-term expectation is that this will be better than those models, or whether this is a model they intend to use in Auto mode, or whether the focus is really for when you want speed...? I guess my question is: why would I actively choose this over Auto?
cwyers•8h ago
The lack of transparency here is wild. They aggregate the scores of the models they test against, which obscures the performance. They only release results on their own internal benchmark, which they won't release. They talk about RL training, but they don't discuss anything else about how the model was trained, including whether they did their own pre-training or fine-tuned an existing model. I'm skeptical of basically everything claimed here until either they share more details or someone is able to independently benchmark this.
criemen•3h ago
I understand where you're coming from, and I'd love to have learned about pre-training vs. off-the-shelf base model too. But

> their own internal benchmark that they won't release

If they'd release their internal benchmark suite, it'd make it into the training set of about every LLM, which, from a strictly scientific standpoint, invalidates all conclusions drawn from that benchmark from then on. On the other hand, not releasing the benchmark means they could have hand-picked the datapoints to favor them. It's a problem that can't be resolved, unfortunately.

timcobb•7h ago
I wish it was easy to find out how much it costs relative to Claude :)
skeptrune•7h ago
Facts. They really need to make pricing more clear across the entire product.
sebdufbeau•7h ago
As a stealth model, it was priced at $1.25 per million tokens in / $10 per million tokens out.

Right now, it seems free when you are a Cursor Pro user, but I'd love more clarity on how much it will cost (I can't believe it'll be unlimited usage for subscribers)

Jayakumark•7h ago
This looks like a model RLed on top of Qwen3-Coder or GLM 4.6 as per their graph and foot note.
netcraft•6h ago
I love Cursor - the tab completion and agent mode. But I really dislike VS Code after using IntelliJ for so many years. I really wish the underlying editor was better, or that I could get Cursor features in IntelliJ instead. The editing of files is mostly fine, but it's everything else around it that a full IDE provides that's just so much better. Right now it's IntelliJ + Claude Code for me, and it's fine, but I wish I could get the AI power of Cursor in a better package.
Jcampuzano2•6h ago
Building off of VS Code was probably Cursor's silver bullet and the best decision they could have ever made.

It made migrating as simple as install-and-import-settings for everyone using VS Code (probably the single most popular editor) or another VS Code fork (and at the time it was basically all VS Code).

I do not think Cursor would have done nearly as well as it has if it hadn't. So even though it can be subpar in some areas due to VS Code's baggage, it's probably staying that way for a while.

netcraft•6h ago
I don't disagree with anything you said. If I were in their shoes, I would have done exactly the same thing.

Maybe my complaint is that I wish VS Code had more features like IntelliJ, or that IntelliJ were the open source baseline a lot of other things could be built on.

IntelliJ is not without its cruft and problems, don't get me wrong. But its git integration, search, navigation, database tools - I could go on - all of these features are just so much nicer than what VS Code offers.

pbowyer•5h ago
IntelliJ's tab-complete is coming along; it's hit and miss whether it will work, but for similar edits I'm finding it picks up the pattern quickly and I can tab - tab - tab to make them happen.

Still not up to Cursor standards though :)

simonw•6h ago
Here's the Composer 1 pelican riding a bicycle: https://static.simonwillison.net/static/2025/cursor-1-pelica...
jeffnv•6h ago
honestly better than I expected
bn-l•3h ago
Nah. That ain’t a good pelican.
bn-l•3h ago
Same price as GPT-5
SafeDusk•2h ago
I think both Cursor and Cognition are going in the same direction as SWE-grep[0].

SWE-grep was able to hit ~700 tokens/s and Cursor ~300 tokens/s; it's hard to compare the precision/recall and cost effectiveness though, considering SWE-grep also adopted a "hack" of running on Cerebras.

I'm trying to kickstart a RL-based code search project called "op-grep" here[1], still pretty early, but looking for collaborators!

[0]: https://cognition.ai/blog/swe-grep [1]: https://github.com/aperoc/op-grep

koakuma-chan•1h ago
I just gave it a try and it's reaally fast. Didn't expect this from you, Cursor - good job.