
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
624•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
926•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
210•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
322•vecti•15h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
370•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
3•theblazehen•2d ago•0 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•188 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•63 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
132•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•8h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Reverse Engineering Cursor's LLM Client

https://www.tensorzero.com/blog/reverse-engineering-cursors-llm-client/
159•paulwarren•8mo ago

Comments

CafeRacer•8mo ago
Soooo.... wireshark is no longer available or something?
Maxious•8mo ago
The article literally says at the end that this was just the first post, about looking, before getting into actually changing the responses.

(that being said, mitmproxy has gotten pretty good for just looking lately https://docs.mitmproxy.org/stable/concepts/modes/#local-capt... )

spmurrayzzz•8mo ago
Yeah, the proxying/observability is without question the simplest part of this whole problem space. Once you get into the weeds of automating all the eval and prompt optimization, you realize how irrelevant Wireshark actually is in the feedback loop.

But like you, I also landed on mitmproxy after starting with tcpdump/Wireshark. I recently started building a tiny streaming textual-gradient optimizer (similar to what AdalFlow is doing) by parsing the mitmproxy outputs in real time. Having a turnkey solution for this sort of thing will definitely be valuable, at least in the near to mid term.
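The "just looking" half of that loop can stay very simple. A minimal sketch, assuming captured requests are logged as one OpenAI-style chat-completions JSON body per line (the JSONL log format here is an assumption about the capture setup, not Cursor's wire format):

```python
import json

def extract_system_prompts(jsonl_lines):
    """Pull system-role message contents out of captured
    OpenAI-style chat-completions request bodies (one JSON body per line)."""
    prompts = []
    for line in jsonl_lines:
        body = json.loads(line)
        for msg in body.get("messages", []):
            if msg.get("role") == "system":
                prompts.append(msg["content"])
    return prompts

captured = [
    '{"model": "gpt-4o", "messages": ['
    '{"role": "system", "content": "You are Cursor."}, '
    '{"role": "user", "content": "fix this bug"}]}',
]
print(extract_system_prompts(captured))  # ['You are Cursor.']
```

From there, diffing the extracted prompts across sessions is what reveals how the client varies them.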

vrm•8mo ago
if you haven't, check out our repo -- it's free, fully self-hosted, production-grade, and designed for precisely this application :)

https://github.com/TensorZero/tensorzero

spmurrayzzz•8mo ago
Looks very buttoned up. My local project has some features tuned for my specific agent flows, however (built directly into my inference engine), so I can't really jump ship just yet.

Looking great so far though!

vrm•8mo ago
wireshark would work for seeing the requests from the desktop app to Cursor’s servers (which make the actual LLM requests). But if you’re interested in what the actual requests to LLMs look like from Cursor’s servers you have to set something like this up. Plus, this lets us modify the request and A/B test variations!
stavros•8mo ago
Sorry, can you explain this a bit more? Either you're putting something between your desktop and the server (in which case Wireshark would work), or you're putting something between Cursor's infrastructure and their LLM provider, in which case, how?
vrm•8mo ago
we're doing the latter! Cursor lets you configure the OpenAI base URL, so we were able to have Cursor call Ngrok -> Nginx (for auth) -> TensorZero -> LLMs. We explain it in detail in the blog post.
stavros•8mo ago
Ah OK, I saw that, but I thought that was the desktop client hitting the endpoint, not the server. Thanks!
robkop•8mo ago
There is much missing from this prompt; tool call descriptors are the most obvious. See for yourself using even a year-old jailbreak [1]. There are some great ideas in how they've set up other pieces, such as Cursor rules.

[1]: https://gist.github.com/lucasmrdt/4215e483257e1d81e44842eddb...

ericrallen•8mo ago
Maybe there is some optimization logic that only appends tool details that are required for the user’s query?

I’m sure they are trying to slash tokens where they can, and removing potentially irrelevant tool descriptors seems like low-hanging fruit to reduce token consumption.

vrm•8mo ago
I definitely see different prompts based on what I'm doing in the app. As we mentioned, there are different prompts depending on whether you're asking questions, doing Cmd-K edits, working in the shell, etc. I'd also imagine that they customize the prompt by model (unobserved here, but we can also customize per-model using TensorZero and A/B test).
joshmlewis•8mo ago
Yes, this is one of the techniques apps can use. You vectorize the tool descriptions and then do a lookup based on the user's query to select the most relevant tools; this is called pre-computed semantic profiles. You can even hash the queries themselves, cache the tools that were used, and then do similarity lookups by query.
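A toy sketch of that lookup, with bag-of-words cosine similarity standing in for a real embedding model (the tool names and descriptions here are invented for illustration):

```python
import math
from collections import Counter

def vectorize(text):
    """Crude stand-in for an embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_tools(query, tool_descriptions, k=2):
    """Rank tool descriptions by similarity to the query and keep the top k."""
    qv = vectorize(query)
    scored = sorted(tool_descriptions.items(),
                    key=lambda kv: cosine(qv, vectorize(kv[1])),
                    reverse=True)
    return [name for name, _ in scored[:k]]

tools = {
    "read_file": "read the contents of a file in the workspace",
    "run_terminal": "execute a shell command in the terminal",
    "edit_file": "apply an edit to a file in the workspace",
}
print(select_tools("open the file and show its contents", tools, k=1))
# ['read_file']
```

A production version would precompute the description embeddings once (the "pre-computed semantic profiles" part) and only embed the query at request time.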
tough•8mo ago
cool stuff
GabrielBianconi•8mo ago
They use different prompts depending on the action you're taking. We provided just a sample because our ultimate goal here is to start A/B testing models, optimizing prompts + models, etc. We provide the code to reproduce our work so you can see other prompts!

The Gist you shared is a good resource too though!

cloudking•8mo ago
https://github.com/elder-plinius/CL4R1T4S/blob/main/CURSOR/C...
notpushkin•8mo ago
Hmm, now that we have the prompts, would it be possible to reimplement Cursor servers and have a fully local (ahem pirated) version?
deadbabe•8mo ago
Absolutely
handfuloflight•8mo ago
Were you really waiting for the prompts before embarking on this adventure?
tomr75•8mo ago
Presumably their apply model is run on their servers.

I wonder how hard it would be to build a local apply model; surely that would be faster on a MacBook?

notpushkin•8mo ago
It’s possible, but they allow you to specify your own API (that’s how they got the prompts in this article).
bhaktatejas922•8mo ago
It's hard to get a model that does it usefully on your laptop. There's an open-source 1.5B model from QuocDat, and Morph (https://morphllm.com), a fast apply model offered as an API (which I run).
smcleod•8mo ago
Or you could just use Cline / Roo Code which are better for agentic coding and open source anyway...
tmikaeld•8mo ago
But extremely expensive in comparison
bredren•8mo ago
Cursor and other IDE modality solutions are interesting but train sloppy use of context.

From the extracted prompting Cursor is using:

> Each time the USER sends a message, we may automatically attach some information about their current state…edit history in their session so far, linter errors, and more. This information may or may not be relevant to the coding task, it is up for you to decide.

This is the context bloat that limits effectiveness of LLMs in solving very hard problems.

This particular .env example illustrates the low-stakes type of problem Cursor is great at solving, but it also lacks the complexity that will keep SWEs employed.

Instead, I suggest folks working with AI start at the chat interface and work on editing conversations to keep contexts clean as they explore a truly challenging problem.

This often includes meeting and Slack transcripts, internal docs, external content, and code.

I've built a tool for surgical selection of code called FileKitty: https://github.com/banagale/FileKitty and, more recently, slackprep: https://github.com/banagale/slackprep

These let a person be more intentional about the problem they are trying to solve by including only the information relevant to it.

jacob019•8mo ago
I had this thought as well and find it a bit surprising. For my own agentic applications, I have found it necessary to carefully curate the context. Instead of including an instruction that we "may automatically attach", only include an instruction WHEN something is attached. Instead of "may or may not be relevant to the coding task, it is up for you to decide", provide explicit instruction to consider the relevance, and what to do when it is relevant and when it is not.

When the context is short, it doesn't matter as much, but when there is a difficult problem with a long context, fine-tuned instructions make all the difference. Cursor may be keeping instructions more generic to take advantage of cached-token pricing, but the phrasing does seem rather sloppy. This is all still relatively new; I'm sure both the models and the prompts will see a lot more change before things settle down.
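The "only include an instruction WHEN something is attached" approach can be sketched like this (the section wording and parameter names are hypothetical, not Cursor's actual prompt):

```python
def build_system_prompt(base, linter_errors=None, edit_history=None):
    """Append context sections, and the instructions for handling them,
    only when the corresponding data is actually present."""
    parts = [base]
    if linter_errors:
        parts.append("Linter errors from the current file; fix any that "
                     "relate to the task:\n" + "\n".join(linter_errors))
    if edit_history:
        parts.append("Recent edits in this session, most recent last:\n"
                     + "\n".join(edit_history))
    return "\n\n".join(parts)

# With no attachments, the prompt carries no dangling instructions about them.
print(build_system_prompt("You are a coding assistant."))
print(build_system_prompt("You are a coding assistant.",
                          linter_errors=["line 3: unused import"]))
```

The trade-off noted above is real: a prompt that varies with the attachments breaks prefix caching more often than a fixed generic preamble does.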
lyjackal•8mo ago
I've been curious to see the process for selecting relevant context from a long conversation. Has anyone reverse engineered what that looks like? How is the conversation history pruned, and how is the latest state of a file represented?
GabrielBianconi•8mo ago
We didn't look into that workflow closely, but you can reproduce our work (code in GitHub) and potentially find some insights!

We plan to continue investigating how it works (+ optimize the models and prompts using TensorZero).
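Nobody in the thread has reverse engineered Cursor's pruning, but a common baseline to compare captured traffic against is keeping the system message plus as many of the most recent messages as fit in a budget. A sketch with a crude whitespace token count (a real implementation would use the model's tokenizer):

```python
def prune_history(messages, budget):
    """Keep system messages plus as many of the newest
    non-system messages as fit in a whitespace-token budget."""
    def cost(m):
        return len(m["content"].split())

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    remaining = budget - sum(cost(m) for m in system)

    kept = []
    for m in reversed(rest):  # walk newest-first
        if cost(m) > remaining:
            break             # drop this message and everything older
        kept.append(m)
        remaining -= cost(m)
    return system + list(reversed(kept))

history = [
    {"role": "system", "content": "sys"},
    {"role": "user", "content": "a a a"},     # oldest turn, 3 "tokens"
    {"role": "assistant", "content": "b b"},  # 2 "tokens"
    {"role": "user", "content": "c"},         # newest turn, 1 "token"
]
print([m["content"] for m in prune_history(history, budget=5)])
# ['sys', 'b b', 'c']
```

Diffing consecutive captured requests against a baseline like this would show whether the client truncates, summarizes, or re-sends full file state each turn.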

serf•8mo ago
Cursor is the only product that I have cancelled in 20+ years due to a lack of customer service response.

Emailed them multiple times over weeks about billing questions -- not a single response. These weren't VS Code questions, either -- they needed Cursor staff intervention.

No problem getting promo emails though!

The quicker their 'value' can be spread to other services the better, imo. Maybe the next group will answer emails.

jjani•8mo ago
Similarly: https://github.com/getcursor/cursor/issues/1052
bomewish•8mo ago
Super weird. It never did this to my defaults. Why would it do this to some users and not others? Terrible behavior. I'm surprised macOS even allows a program to simply overwrite such settings.
jjani•8mo ago
No clue; it very much did it for me, but given the lack of new comments since March, they might have silently stopped doing that. That they never responded there, nor even bothered to close the thread, fits squarely with GP's experience with their non-existent customer service :)

Had no idea this was possible on Mac either; I've never seen any other app do this, though it's commonplace on Windows.

angst•8mo ago
Indeed.

Mail to <hi@cursor.com> is answered by “Sam from Cursor”, which is “Cursor's AI Support Assistant”, and after a few back-and-forths it says “I'm connecting you with a teammate who can better investigate”. Guess what? It's been a month with no further communication whatsoever.

I don't have high hopes for their customer service.

SbNn6uJO5wzPww•8mo ago
Doing the same with mitmproxy: https://news.ycombinator.com/item?id=44213073
Rastonbury•8mo ago
Another similar look at their prompting: https://news.ycombinator.com/item?id=44154962