
Windsurf SWE-1: Our First Frontier Models

https://windsurf.com/blog/windsurf-wave-9-swe-1
190•arittr•8mo ago

Comments

firejake308•8mo ago
I'm confused why they are working on their own frontier models if they are going to be bought by OpenAI anyway. I guess this is something they were working on before the announcement?
kristopolous•8mo ago
Must have been. These things take months.
anshumankmr•8mo ago
Perhaps also to get more money, if they believed their model to be good and had amassed some good training data OpenAI could leverage, apart from the user base.
allenleein•8mo ago
It seems OpenAI acquired Windsurf but is letting it operate independently, keeping its own brand and developing its own coding models. That way, if Windsurf runs into technical problems, the backlash lands on Windsurf—not OpenAI. It’s a smart way to innovate while keeping the main brand safe.
riffraff•8mo ago
But doesn't this mean they have twice the costs in training? I was under the impression that was still the most expensive part of these companies' balance sheets.
kcorbitt•8mo ago
It's very unlikely that they're doing their own pre-training, which is the longest and most expensive part of creating a frontier model (if they were, they'd likely brag about it).

Most likely they built this as a post-train of an open model that is already strong on coding like Qwen 2.5.
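To make the "post-train" idea concrete: post-training here usually means supervised fine-tuning on (editor context, accepted edit) pairs. A minimal sketch of assembling one such chat-format training record, where the schema and field names are illustrative guesses rather than Windsurf's actual format:

```python
import json

def make_sft_record(file_context: str, accepted_edit: str, accepted: bool) -> dict:
    """Convert one editor interaction into a chat-format SFT record.

    Accepted completions become positive training examples; rejected
    ones are zero-weighted (or could be routed to preference tuning).
    """
    return {
        "messages": [
            {"role": "system", "content": "You are a coding assistant. Complete the edit."},
            {"role": "user", "content": file_context},
            {"role": "assistant", "content": accepted_edit},
        ],
        # zero weight means the sample contributes nothing to the loss
        "weight": 1.0 if accepted else 0.0,
    }

record = make_sft_record("def add(a, b):\n    ", "return a + b", accepted=True)
print(json.dumps(record, indent=2))
```

A post-training run would then feed millions of such records through a standard SFT pipeline on top of an open base model.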

rfoo•8mo ago
mid/post training does not cost that much, except maybe large scale RL, but even this is more of an infra problem. If anything, the cost is mostly in running various experiments (i.e. the process of doing research).

It is very puzzling why "wrapper" companies don't (and religiously say they won't ever) do something on this front. The only barrier is talent.

anshumankmr•8mo ago
You might be underestimating the barrier to hiring the really smart people. OpenAI/Google etc. would be hiring and poaching people like crazy, offering cushy bonuses and TCs that would blow your mind (like, say, Noam Brown at OpenAI). And some of the more ambitious ones would start their own ventures (like, say, Ilya etc.).

That being said I am sure a lot of the so called wrapper companies are paying insanely well too, but competing with FAANGMULA might be trickier for them.

NitpickLawyer•8mo ago
FAANGMULA ... Microsoft, Uber?, L??, Anthropic? Who's the L?
Archonical•8mo ago
Lyft.
riffraff•8mo ago
A is Airbnb, afair.
whywhywhywhy•8mo ago
Any half decent and methodical software engineer can fine tune/repurpose a model if you have the data and the money to burn on compute and experiment runs, which they do.
anshumankmr•8mo ago
Fine tuning/distilling etc. is fine. I was speaking to the original commenter's question about research, which is where things are trickier. Fine tuning is something even I managed, and Unsloth has removed many of the barriers to training some of the more commonly used open-source models.
brookst•8mo ago
They can absolutely do it, but they will get poorer results than someone who really understands LLMs. There is still a huge amount of taste and art in the sourcing and curation of data for fine tuning.
OtherShrezzing•8mo ago
This is effectively how Microsoft is treating OpenAI.
ActionHank•8mo ago
Windsurf is a hedge against MS + VSCode and GH + copilot.

OAI is trying frantically to build a moat without doing any digging.

sunshinekitty•8mo ago
This is an incredibly premature statement to make. The acquisition announcement is days old.
dyl000•8mo ago
OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well rounded.

For coding you use Anthropic or Google models; I haven't found anyone who swears by OpenAI models for coding... Their reasoning models are either too expensive or hallucinate massively to the point of being useless... I would assume the gpt-4.1 family will be popular for SWEs.

Having a smaller-scope model (agentic coding only) allows for much cheaper inference and lets Windsurf build its own moat (so far agentic IDEs haven't had one)

jjani•8mo ago
> OpenAI models have an issue where they are pretty good at everything but not incredible at anything. They're too well rounded.

This suggests OpenAI models do have tasks they're better at than the "less rounded" competition, who have tasks they're weaker in. Could you name a single such task (except for image generation, which is an entirely different use case) that OpenAI models are better at than Gemini 2.5 and Claude 3.7 without costing at least 5x as much?

seunosewa•8mo ago
They were working on the model before the acquisition. It makes sense to test it and see how it does instead of throwing the work away. Their data will probably be used to improve gpt-4.1, o4-mini-high, and other OpenAI coding models.
jstummbillig•8mo ago
Why would OpenAI not let smart people work on models? That seems to be what they do. The point is: They are no longer "their own" models. They are now OpenAI models. If they suck, if they are redundant, if there is no idea there that makes sense, that effort will not continue indefinitely.
blixt•8mo ago
> Enabled from the insight from our heavily-used Windsurf Editor, we got to work building a completely new data model (the shared timeline) and a training recipe that encapsulates incomplete states, long-running tasks, and multiple surfaces.

This data is very valuable if you're trying to create fully automated SWEs, while most foundation model providers have probably been scraping together second hand data to simulate long horizon engineering work. Cursor probably has way more of this data, and I wonder how Microsoft's own Copilot is doing (and how they share this data with the foundation model providers)...

figassis•8mo ago
And that is probably why OpenAI paid $$$ to acquire them.
lemming•8mo ago
The company that is best placed to collect tons of high quality data of this type is undoubtedly Google. They’ve had publications talking about how they capture data from their in house SWE tools and use it to improve their tooling.
blixt•8mo ago
They certainly can automate their own SWE but I wonder if that’s as good as getting full computer use logs (terminal, web browsing, code acceptance/rejection, etc etc — as claimed in the linked article) from millions of individuals and thousands of companies all with their quirky technology setups.
throwaway314155•8mo ago
This summarizes Google's approach to software engineering well; just pretend the outside world doesn't exist and the "Google way" is the only way.
whywhywhywhy•8mo ago
There is a world where the wrapper makers surpass the current model makers in their area of focus. Cursor/Windsurf have all the data on when people got so frustrated with Claude they switched to Gemini/GPT and also all the data of when the problem was actually solved and when it wasn't.
dyl000•8mo ago
it was only a matter of time, they have too much good data to not train their own models, not to mention that claude API calls were probably killing their profitability.

open source alternative https://huggingface.co/SWE-bench/SWE-agent-LM-32B

though I haven't been able to find an MLX quant that wasn't completely broken.

aquir•8mo ago
It's a shame that my development work needs a specific VSCode extension (domain specific language for ERP systems) so my options are VSCode+Copilot or Cursor.
albertot•8mo ago
You can use the Codeium extension, I believe, no? Also, if the license of the extension you are using permits it, you could export that extension to the open source store.
DrBenCarson•8mo ago
You can try Cline in VSCode as well; many engineers swear by it
aitchnyu•8mo ago
Aider runs in your terminal; you can make comments against your code in any editor and it will execute your requests. It can use any model. Cline, mentioned in a sibling comment, is in the same space.
tintor•8mo ago
Aider wastes tokens like crazy.
aitchnyu•8mo ago
In which cases?
TiredOfLife•8mo ago
Windsurf is also a VS Code fork like Cursor
knes•8mo ago
Check augmentcode.com
bicepjai•8mo ago
My favorite tools are Cline and Roo. My experience says Cline eats tokens like crazy and Roo eats less. I haven't tried Aider since I do like to watch the mesmerizing diffs (fire verification) :)
antirez•8mo ago
So because they need a better business model, they will try to move users to weaker models compared to the best available? This "AI inside the editor" thing makes less sense every day, in many dimensions: it makes you not really capable of escaping the accept, accept, accept trap. It makes the design interaction with the LLM too much about code and too little about the design itself. And you can't do what many of us do: have three subscriptions for the top LLMs available (it's $60 for 3, after all) and use each for what it's best at. And by default write your stuff without help if LLMs are not needed in a given moment.
visarga•8mo ago
> it makes you not really capable of escaping the accept, accept, accept trap

The definition of vibe coding - trust the process, let it make errors and recover

conartist6•8mo ago
"press pay to think for me button" "press pay to think for me button" "press pay to think for me button" "press pay to think for me button" "press pay to think for me button" I love it
DrBenCarson•8mo ago
“Hmm seems we’re very far off course but we have thousands of lines…I can’t figure all that out rn…press magic thinking button”
ipnon•8mo ago
I don't think they are targeting software engineers as users. They are seeking those on the software engineering margins, users who know what Python and for-loops are but don't care to configure Aider and review each of the overwhelming number of models released daily. They want to tell the editor to add function foo to bar.py. I suspect this latter market segment is much larger than the former!
_hcuq•8mo ago
When I got my first job in 1986, the company had a tool that allowed non engineers to write code. Of course it didn't work. They could write code, but it ended up as a buggy, unreliable, unmaintainable mess. It turned out it was a good sales tool, get our technology into the company, then we would get paid to write the programs.

Then there were the MS Access and Excel amateur efforts. I worked at a company that for years had a very profitable business replacing in-house MS Access spaghetti with our well-designed application.

Aaaand..... here we go.... deja vu all over again....

bluelightning2k•8mo ago
I don't like or agree with this take. You're basically saying - "something good exists, so why try to improve upon it".

Their stated goal is to improve on the frontier models. It's ambitious, but on the other hand they were a model company before they were an IDE company (IIRC) and they have a lot of data, and the scope is to make a model which is specialized for their specific case.

At the very least I would expect they would succeed in specializing a frontier model for their use case by feeding in their pipeline of data (whether they should have that data to begin with is another question).

The blog post doesn't say much about the model itself, but there's a few candidates to fine tune from.

infecto•8mo ago
You’ve got a couple of ideas colliding here, let me try to unpack them.

First, most of the major players already have their own models or have been developing them for some time. Your take feels a bit reductive. Take Windsurf pre-acquisition, for example, their risk was being too tightly coupled to third-party vendors. It’s only logical to assume that building task- or language-specific models will ultimately help reduce costs and offer more control.

As for the other point: in my experience, trying to fully leverage LLMs actually makes me more prescriptive in my designs. I spend more time thinking through architecture and making my code modular, more so than when I wasn’t using an LLM. I’m sure others may design less or take shortcuts, but for me it’s pushed the opposite behavior. Is it the “right” way? I’m not sure, but I’m enjoying it and staying productive.

phillipcarter•8mo ago
I think the point is that the UX favors accepting code changes as the primary action, rather than using the chat interface as an ideation tool. It's quite valid, because as a user of all these tools, Windsurf and Cursor very much do try to make you slap the Accept button uncritically!
infecto•8mo ago
Does it though? I use the chat option quite a bit in these tools. The only UX that favors the accept pattern is tab completion, which makes sense.
phillipcarter•8mo ago
It does. Defaults matter, and the defaults for these tools are agent mode with code changes meant to be accepted, rather than forcing you to read the code and manually apply those changes.

Note: I'm not saying that's a bad thing! It's significantly more convenient for many use cases, so I can see why it's a default. But the incentive being created is to accept first, analyze later.

keeganpoppen•8mo ago
i think this comment is just a reflection of how the world has not caught up with the inevitable shift of “software engineering” up further into “idea space”. i completely agree that the tooling has not caught up with this new world order yet. personally, i think “true software engineering” is more valuable than ever in the AI era, but the tools for actually realizing this are woefully behind.
bhl•8mo ago
Slightly weaker but cheaper models are mostly good for Windsurf only. As a developer, I would rather have stronger models I can throw more money at.
vunderba•8mo ago
> they will try to move users to weaker models compared to the best available

> you can't do what many of us do: have three subscriptions and use each for its best

I don't think this has anything to do with whether or not AI is in the editor so much as it is the difference between a subscription (Cursor) vs. a BYOK approach (VS Codium + Cline, Zed, etc.). Most BYOK plug-ins will let you set up multiple profiles against various providers so that you can choose the most optimal LLM for the given problem you're trying to solve.

bluelightning2k•8mo ago
Two takes here. Cynical and optimistic.

Cynical take: describing yourself as a full stack AI IDE company sounds very invest-able in a "what if they're right" kind of way. They could plausibly ask for higher valuations, etc.

Optimistic take: fine-tuning a model for their use case (incomplete code snippets with a very specific data model of context) should work. Or even already has, going by their claims. It certainly sounds plausible that fine-tuning a frontier model would make it better for their needs. Whether it's reasonable to go beyond fine-tuning and consider pre-training etc., I don't know. If I remember correctly they were a model company before Windsurf, so they have the skillset.

Bonus take: doesn't this mean they're basically training on large-scale gathered user data?

heymijo•8mo ago
FYI, OpenAI acquired Windsurf so valuation is not an issue.

I don’t know Varun (their founder/CEO) personally but I get highly competent vibes from him. I’d let my skeptical self lean on your optimistic take.

OkGoDoIt•8mo ago
I don’t think the acquisition has closed yet, maybe this is still useful for a leverage/negotiating perspective. And it was almost certainly something they were working on before the acquisition anyway.

I do think that’s an overly cynical way to look at this though.

infecto•8mo ago
Can we get ARM Linux builds? Would be really nice!
resters•8mo ago
A few points that are getting overlooked:

- OpenAI is buying Windsurf and probably did diligence on these models before it decided to invest.

- Windsurf may have collected valuable data from its users that is helpful in training a coding-focused AI model. That data would give a 6-month lead to OpenAI, which is probably worth the $3B.

- Even if Windsurf's frontier models are not better than other models for coding, if they excel in a few key areas it would justify significant investment in their methodology (see points above).

- There are still areas of coding where even the top frontier models falter that would seemingly be ripe for improvement via more careful training. Notably, making the model better at working within a particular framework and version, programming language version, etc. Also better support for more obscure languages and libraries/versions and the ability to "lock in" on the versions that the developer is using. I've wasted a lot of time trying to convince OpenAI models to use OpenAI's latest Python API -- even when given docs and explicit constraints to use the new API, OpenAI frontier models routinely (incorrectly) update my code to use old API conventions and even methods that have been removed!

Consider that the basic competency of doing a frontier coding model well is likely one of the biggest opportunities in AI right now (second to reasoning and in my opinion tied with image analysis and production). An LLM that can both reason and code accurately could read a chapter in a textbook and code a 3D animation illustrating all of the concepts as a one-shot exercise. We are far from that at present even in OpenAI's best stuff.
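The API-drift frustration above is easy to illustrate with the OpenAI Python SDK, whose call shape changed at v1.0. One partial mitigation is pinning the allowed conventions in the prompt and linting the generated code afterwards; a rough sketch (the guard wording and helper names are mine, not a standard tool):

```python
# Legacy (pre-1.0) call shape that models often regress to:
#   openai.ChatCompletion.create(model=..., messages=[...])
# Current (>=1.0) call shape:
#   client = OpenAI(); client.chat.completions.create(model=..., messages=[...])

BANNED = ["openai.ChatCompletion", "openai.Completion.create"]

def pin_api_version(prompt: str) -> str:
    """Prepend explicit version constraints, steering the model away
    from deprecated call shapes (it helps, but as the comment above
    notes, it is not a guaranteed fix)."""
    guard = (
        "Use the openai Python SDK >= 1.0 only: instantiate `OpenAI()` and "
        "call `client.chat.completions.create(...)`. Never use "
        + ", ".join(BANNED) + "."
    )
    return guard + "\n\n" + prompt

def violates_pin(generated_code: str) -> bool:
    """Cheap post-generation lint: flag deprecated calls for a retry."""
    return any(name in generated_code for name in BANNED)

print(violates_pin("resp = openai.ChatCompletion.create(model='gpt-4')"))  # → True
```

Even explicit constraints like this only reduce, not eliminate, regressions to the deprecated conventions.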

keeganpoppen•8mo ago
this is clearly the right take… it’s fun to semi-dunk on “how on earth is that the valuation”, but this is one of those rare cases where the tech and platform are genuinely more valuable in the hands of the acquirer than they ever could be in the hands of the acquiree. because i think windsurf has executed as well as one possibly could in the space, but openai is the SOTA model king, and i don’t see that changing anytime soon.
dghlsakjg•8mo ago
Minor nit: OpenAI is in a three-way tie for SOTA models with Google and Anthropic. They are the king of marketing attention, Studio Ghibli imitation, and consumer subscriptions, though.
paulddraper•8mo ago
First mover too
libraryofbabel•8mo ago
Thanks - this does help contextualize the $3B acquisition. When the story first broke, all they seemed to be paying for was a coding agent (of which there are sooo many out there) and the large Windsurf user base (but with no moat). So a lot of us were rather skeptical. The valuation is still kinda insane, I think, but Windsurf's ability to train a frontier model - and with a much smaller team than the big AI shops - is the key differentiator from the Clines, Cursors, Aiders, etc.

It is a bit of a shame that we’ll never get to see what they could do on their own. But I hope their clearly very talented employees do very well out of this.

resters•8mo ago
> Thanks - this does help contextualize the $3B acquisition.

Agreed. My initial reaction to the $3B acquisition was similar to yours. Seeing this announcement made me rethink it a bit.

tianshuo•8mo ago
Sorry, but as a paid Windsurf user, I think Windsurf should stop chasing shiny frontier models and focus on building more predictable, manageable workflows for building real-life products.

- How about providing a Jira/Trello-style dashboard with subtasks for our AI, instead of copy-pasting "Cline Memory Bank" into .windsurfrules?
- How about supporting TDD and regression-fixing by default?
- How about using git with branches instead of the current undo-redo system?
- How about a better way of keeping documentation in sync with the real code?

We are paying for more "manageable" AI agents to get stuff done, not a chaotic "genius-hacker" to hack together quick prototypes.