
Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
58•theblazehen•2d ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
638•klaussilveira•13h ago•188 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
936•xnx•18h ago•549 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
35•helloplanets•4d ago•31 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
113•matheusalmeida•1d ago•28 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
13•kaonwarb•3d ago•12 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
45•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
222•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
214•dmpetrov•13h ago•106 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
324•vecti•15h ago•142 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
374•ostacke•19h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
479•todsacerdoti•21h ago•238 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•19h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
279•eljojo•16h ago•166 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
407•lstoll•19h ago•273 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
17•jesperordrup•3h ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•21 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
58•kmm•5d ago•4 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
27•romes•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
245•i5heu•16h ago•193 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
14•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
54•gfortaine•11h ago•22 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
143•vmatsiiako•18h ago•65 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1061•cdrnsf•22h ago•438 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
179•limoce•3d ago•96 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
284•surprisetalk•3d ago•38 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
137•SerCe•9h ago•125 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
29•gmays•8h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•21h ago•23 comments

Choosing learning over autopilot

https://anniecherkaev.com/choosing-learning-over-autopilot
58•evakhoury•3w ago

Comments

joe_mamba•3w ago
From the author:

>ai-generated code is throw-away code

Mate, most code I've ever written across my career has been throwaway code. The only exception is some embedded code that's most likely out on the streets to this day. But most of my desktop and web code has by now been thrown away by my previous employers or replaced by someone else's throwaway code.

Most of us aren't building DOOM, the Voyager probe or the Golden Gate Bridge here, epic feats of art and engineering designed to last 30-100+ years; we're just plumbers hacking something together quickly to hold things in place until the game of musical chairs stops, and I have no issue offloading that to a clanker if I can, so I can focus on the things I enjoy doing. There's no shame in that and no pride in it either; I'm just paid to "put the fries in the bag", that's it. Do you think I grew up dreaming about writing GitHub Actions YAML files for a living?

Oh and BTW, code being throwaway is the main reason demand and pay for web SW engineers have been so high. In industries where code is one-and-done, pay tends to scale down accordingly, since a customer is more than happy to keep using your C app on a Windows XP machine down in the warehouse instead of paying you to keep rewriting it every year in a fancier framework in the cloud.

vjerancrnjak•3w ago
The RAG / LLM-pipeline industry just continues in the same fashion, throwing on even more glue: insanely slow and expensive, but it works because companies somehow have money to waste, perpetually. Not that much different from the whole Apache stack or similar gluey, expensive and slow software.

There is similar mindless glue in all tech stacks. LLMs are trained on it, and successfully do more of it.

Even AI companies just wastefully do massive experiments with suboptimal data and compute bandwidth.

dgxyz•3w ago
Yeah this is what kills me. Most of the problems we solve are pretty simple. We just made the stacks really painful and now LLMs look sensible because they are trained to reproduce that same old crap mindlessly.

What the hell are we really doing?

What looked sensible to me was designing a table, form and report in Microsoft Access in 30 minutes, without requiring 5 engineers, 50k lines of React, and fucking around with Kubernetes and microservices to get there.

LLMs just paste over the pile of shit we build on.

spion•3w ago
cold take speculation: the architecture astronautics of the Java era probably destroyed a lot of the desire for better abstractions, for thinking over copy-pasting, for minimalism, and for open standards

hot take speculation: we base a lot of our work on open source software and libraries, but a lot of that software is cheaply made, or made for the needs of a company that happens to open-source it. the pull of the low-quality "standardized" open source foundations is preventing further progress.

califool•3w ago
“LLMs just paste over the pile of shit we build on.” This is the perfect description. Nice job.

Hamuko•3w ago
I feel like a lot of code is pretty sticky, actually. I spend two weeks working on a feature and most likely that code will live for a time period measured in years. Even the deprecation period for a piece of software might be measured in years.

m463•3w ago
It's kind of amazing that the really mainstream jobs create and pitch throwaway code, while a few key niche jobs, with little demand, can really create enduring products.

Kind of like designing a better social media interface probably pays 100x what a toilet designer would be paid, but a better toilet would benefit the world 1000x.

esafak•3w ago
The difference between economic value and social value.

joe_mamba•3w ago
Which is why I dislike GDP being thrown around in discussions as the ultimate dick-measuring metric. High economic value activities don't translate or trickle down into high social value environments.

For example, I went to visit SF as a young lad expecting to be blown away given the immense wealth that area generates, but I was severely disappointed with what I saw on the street. I used to think my home area of Eastern Europe was kind of a shithole, but SF beats that hands down. There are dozens of places on this planet that are way nicer to live in than SF despite being way poorer by comparison.

m463•3w ago
> kind of a shithole

literally supporting the toilet designer argument.

tangentially, japanese toilets are quite amazing.

ofalkaed•3w ago
The missing step seems to be identifying what is worth learning, given your goals. Will learning X actually benefit you? We already do this with libraries: they save us a great deal of time, partly by freeing us from having to learn everything required to implement them, and we use them despite those libraries often being less than ideal for the task.

jrm4•3w ago
I respect this choice, but also I feel like one might need to respect that it may end up not being particularly "externally" valuable.

Which is to say, if it's a thing you love spending your time on and it tickles your brain in that way, go for it, whatever it is.

But (and these are still first takeaways) if the goal is "making good and useful software," today one has to be at least open to the possibility that "not using AI" will be like an accountant not using a calculator.

RealityVoid•3w ago
While I tend to agree, I think it's super easy to think you are using AI and being productive, and then hit a brick wall once things start failing because the system is not internally coherent.

saulpw•3w ago
Yeah, it's more like an accountant throwing away this "double-entry" system in favor of a single-entry spreadsheet that any Jimbob or Maryanne can use.

spion•3w ago
Has anyone measured whether doing things with AI leads to any learning? One way to do this is to measure whether subsequent related tasks show improvements in time-to-functional-results with and without AI, as a % improvement. Additionally, two more data points can be taken: with-AI -> without-AI, and without-AI -> with-AI.
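
A minimal sketch of the kind of comparison being proposed; the condition labels, task times, and pairing scheme below are invented purely for illustration:

    def pct_improvement(first_minutes, second_minutes):
        """Percent improvement in time-to-functional-result from a first task to a related follow-up task."""
        return 100.0 * (first_minutes - second_minutes) / first_minutes

    # Hypothetical minutes-to-functional-result for each ordering of conditions.
    orderings = {
        "with-AI -> with-AI": (90, 70),
        "without-AI -> without-AI": (120, 80),
        "with-AI -> without-AI": (90, 110),
        "without-AI -> with-AI": (120, 60),
    }

    for name, (first, second) in orderings.items():
        print(f"{name}: {pct_improvement(first, second):+.0f}%")

Comparing the "with-AI -> without-AI" ordering against "without-AI -> without-AI" is what would isolate whether any learning transferred from the AI-assisted task.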

somethingsome•3w ago
I'm only a data point, but some years ago I spent a whole year learning a mathematical book above my level at the time. It was painful and I only grasped parts of it.

I did the same book again this year, this time spending much time questioning an LLM about concepts that I couldn't grasp: copy-pasting sections of the book and asking it to rewrite them for my understanding, asking for quick visualization scripts for concepts, asking for corrected examples, concrete examples, for several chapters to be linked together, etc.

It was still painful, but in 2 months (~8-10h a day) I covered the book in much more detail than I ever could some years ago.

Of course I still had some memories of the content from that time, and I was better prepared, having studied other things in the meantime. Also, the model sometimes gives bad explanations and bad links, so you must stay really critical about the output (the same goes for plot code).

But I missed a lot of deep insights years ago, and now, everything is perfectly clear in two months.

The ability to create instant plots for concepts that I was trying to learn was invaluable, then asking the model to twist the plot, change the data, use some other method, compare methods, etc.
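
As an illustration of the kind of quick visualization script described here (the concept and methods shown are stand-ins, not from the comment), comparing two Taylor approximations of sin(x) against the function itself is the sort of "compare methods" plot an LLM can generate in seconds:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-np.pi, np.pi, 400)

    # Two approximation "methods" to compare against the real function.
    taylor_3 = x - x**3 / 6                # 3rd-order Taylor series of sin(x)
    taylor_5 = x - x**3 / 6 + x**5 / 120   # 5th-order Taylor series of sin(x)

    plt.plot(x, np.sin(x), label="sin(x)")
    plt.plot(x, taylor_3, "--", label="3rd-order Taylor")
    plt.plot(x, taylor_5, ":", label="5th-order Taylor")
    plt.legend()
    plt.title("Comparing approximation methods")
    plt.show()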

Note: for every part, when I finally grasped it, I rewrote it in my own notes and style, and often asked the model to critique my notes and improve them a bit. But all the concepts that I wrote down, I truly understand deeply.

Of course, this is not coding, but for learning at least, LLMs were extremely helpful for me.

From this experiment I would say it was at least a 6x speedup.

epolanski•3w ago
Honestly I feel I have never learned as much as I do now.

LLMs remove quite a lot of fatigue from my job. I am a consultant/freelancer, but even as an employee, large parts of my job were not writing the code but taking notes and jumping from file to file to connect the dots. Or trying to figure out the business logic of some odd feature. Or the endless googling for answers lying deep inside some GitHub issue, or figuring out some advanced regex or Unix tool pattern. Or writing plans around the business logic and implementation changes.

LLMs removed the need for most of it, which means that I'm less fatigued when it comes to reading code and focusing on architectural and product stuff. I can experiment more, and I have the mental strength to do some leetcode/codewars exercises where, incidentally, I'll also learn stuff by comparing my solution to others' that I can then apply back to my code. I am less bored and fatigued by the details and take more time focusing on the design.

If I want to learn about some new tool or database, I'm less concerned with the details of setting it up, exploring its features, or reading outdated, poorly written docs, when I can clone the entire project into a git subtree and give the source code to the LLM, which can answer me by reading the signatures, implementations and tests.

Honestly, LLMs remove so much mental fatigue that I've been learning a lot more than I've ever done. Yet naysayers conflate LLMs as a tool with some lovable crap vibecoding; I don't get it.

Havoc•3w ago
I learned a fair bit about architectural choices while vibecoding because if you don’t spec out how things should work it goes off the rails fast.

Haven’t found a good way to learn programming language basics via AI though

pizzafeelsright•3w ago
How many people could, from scratch, build a ball point pen?

Do we have to understand the 100 years of history behind the tool or the ability to use it? Some level of repair knowledge is great. Knowing the spring vs ink level is also helpful.

pizzafeelsright•3w ago
Following up - I am most excited about using computers because the barrier from intent to product is being dropped. At this point my children can 'code' software without knowing anything other than intent. Reality is being made manifest. Building physics into a game would take a decade of experience, but today we can say "allow for collision between vehicles".

If you have ever gone running: the ability to coordinate four limbs, maintain balance, assert trajectory, negotiate uneven terrain, and modify velocity and speed at will is completely unknown to 99.9% of mortals who ever lived, and yet it is possible because 'biological black box hand wave'.

amelius•3w ago
> What scares me most is an existential fear that I won’t learn anything if I work in the “lazy” way.

You're basically becoming a manager. If you're wondering what AI will turn you into just think of that manager.

epolanski•3w ago
Imho this AI "revolution" will be the death of non-technical middle management first.

Engineers who practice engineering (as in thinking about the pros and cons, impact, cost) will simply get to work more closely with relevant stakeholders in smaller teams, and the role of the project manager will start to be seen as more of a barrier than a facilitator.

belval•3w ago
I get where the author is coming from, but (I promise from an intellectually honest place) does it really matter?

Modeling software in general greatly reduced the ability of engineers to compute 3rd, 4th and 5th order derivatives by hand when working on projects, and also broke their ability to create technical drawings by hand. Both of those were arguably proof of a master engineer in their field, yet today this would be mostly irrelevant when hiring.

Are they lesser engineers for it? Or was it never really about derivatives and drawings, and all about building bridges, engines, software that works?

esafak•3w ago
I can't believe I took a mandatory technical drawing class.

mkoubaa•3w ago
Are you arguing that we are no worse at building bridges than we were 100 years ago?

furyofantares•3w ago
The post is clearly very heavily glued together/formatted (and more) by an LLM, but it's sort of fascinating how bits and spurts of the author's lowercase style made it through unscathed.
dandano•3w ago
Lately I have had the cursed vision as I'm building a new IoT product. I have to learn _so_ much, so I have stopped using Claude Code. I find having it directly alter my code too hands-off.

Instead I still use Claude in the browser, mainly for high-level thinking/architecture > generating small chunks of code > copy-pasting it over. I always make sure I'm reading the library/code docs as well and asking Claude to clarify anything I'm unsure of. This is akin to when I started development using Stack Overflow, just 10x more productive. And I still feel like I'm learning along the way.

JP44•3w ago
I wouldn't call that cursed, but useful tooling usage. I had the same scenario where I wanted to work on a tool for a project written in Go, of which I know next to nothing. Claude Code was able to spit out hundreds of lines of code that worked and that I (almost) understood, and I could explain what was happening where and why, but I had no chance of debugging or extending it on my own.

I've limited myself to only using Claude's web chat to do almost exactly as you've mentioned, except for creating snippets: it can only explain or debug code I enter. I prompt it to link relevant sources for the solutions I seek. Plus it helps me subdivide, prioritise and plan my project in chunks so I don't get lost.

It has saved me a lot of time this way, while I still enjoy working on a project.

dandano•3w ago
Interesting how you write the code first and then put it into Claude. What's the reasoning there? I guess where I find the most benefit is in not writing out the syntax; even though I could, I just can't be bothered. I often start with the snippet then refactor it to the style of code I like. For code I don't know that well, like C++, I like to get a snippet so I can then research the functions it uses and go from there.

JP44•3w ago
Mostly because I learn best by doing, reiterating and then expanding, especially with programming. Essentially, building a form of context or mind map, if you will.

When I was testing TypeScript/React, I followed the docs and some guides and got thrown in the deep end. I could follow and understand the steps but not reproduce or adapt them, because the (or my) scope was limited. Also, libraries; so many libraries used...

So I started with a HelloWorld and expanded it step by step, going back and forth, using forums/blogs to see available functions or similar OSS projects for what I wanted to do, then used the docs to read about the functions involved.

Kagi already helped save me a lot of time by reducing spam posts and using language shebangs etc. With Claude I either give a snippet that I cannot translate or am stuck on, like you do, or I'll prompt something like: 'describe steps used to get from input=.. to output=.. in go, this/that needs to be done/transformed, do not output actual code'.

I guess the main thing is that I want to be engaged in my personal/hobby projects and think about the problem and solution and not just copy/paste because that takes the fun away (in case of work, if it makes me more productive I'll take it. Just need to remember I'm the one who is responsible). It's like buying a pre-assembled puzzle.

Harsha_Jaya_13•3w ago
Yeah, it actually is beautiful, because we get the brainstorming ideas. Sometimes we lack the experience in coding to get that idea converted into a powerful working application, so we need artificial intelligence so that we can always enjoy the process, as long as we love the work we are doing. I mean, there is a lot of difference between blind pasting and completing a piece of work. Just my opinion from what I have been through.
soundworlds•3w ago
I do AI trainings, and the framework I try to teach is "Using AI as a Learning Accelerator, not a Learning Replacement"

andai•3w ago
Recently after a month of heavily AI assisted programming, I spent a few days programming manually.

The most striking thing to me was how frustrating it was.

"Oh my god, I've melted my brain" I thought. But I persisted in my frustration -- basically continuous frustration, for seven hours straight -- and was able to complete the task.

Then I remembered, actually, it was always like that. If I had attempted this task in 2019 (or a similar task, in terms of difficulty and novelty), it would have been the same thing. In fact I have many memories of such experiences. It is only that these days I am not used to enduring the discomfort of frustration for so long, without reaching for the "magically fix everything (probably)" button.

If I had to guess, I'd say that is the main "skill" being lost. There's a quote like that, I think.

Genius ... means transcendent capacity of taking trouble, first of all. —Thomas Carlyle

r0x0r007•3w ago
'If I had to guess, I'd say that is the main "skill" being lost' (to endure frustration).

I think this might be true for you, but less experienced and new developers, well, they actually won't get to that stage, because their 'learning' is basically prompting and they have nothing to forget or remember. And that might be the bigger issue.

Harsha_Jaya_13•3w ago
Yeah, I accept that, because most of the new generation are actually trained to copy and paste, and that is where the actual capability for critical thinking is lost. I would accept your words, being a new dev myself: it feels cool to use AI to complete our work, but when it's done and I come back and look at what I have really done, it feels like, am I really unproductive? Actually no, but also a bit yes. When we are good at driving the car, help from the autopilot is great; but when we don't actually know how to drive the car, it feels like we are under the control of the autopilot disguised as automation, which most people won't admit. What I wanted to say is that we must actually have some knowledge of what we are trying to produce, because without any prior knowledge of what we are trying to do, we are just using the autopilot and not our brain for the brainstorming ideas. Even though we can take the help of the AI to convert those ideas into execution, at least we must learn what we have been going through.

beej71•3w ago
Seems like a decent balance to me. They note that there's no substitute for experiential learning. The harder you work, the more you get out of it. But there's a balance to be struck there with time spent.

What I do worry about is that all senior developers got that experiential education working hard, and they're applying it to their AI usage. How are juniors going to get that same education?

epolanski•3w ago
This is also what I often wonder.

Imho, AI is a multiplier and it compounds more as seniority grows and you know how to leverage it as a tool.

But in case of juniors, what does it compound exactly?

Sure, I see juniors being more independent and productive. But I also see them being stuck with little growth. A few years ago, in a year they would have grown tremendously, at least on the technical side; what do they get better at now? Gluing APIs together via prompting while never even getting intimate with the coding aspect?

poulpy123•3w ago
I'm using AI for 2 things: as a very good autocompleter, and as a partial replacement for a now-crap Google search.

I regularly try the agent mode (recently google's antigravity), but I was two issue that always come back. The first one is technical: if I'm always amazed the first 10-15mn, after a while the agent get stuck, and I would spend more time trying to make it to the job properly than looking directly and making changes myself. The second one is practical: I don't like to not know and understand what the LLM did, so I have to spend a lot of time trying to understand the code