
Floppinux – An Embedded Linux on a Single Floppy, 2025 Edition

https://krzysztofjankowski.com/floppinux/floppinux-2025.html
76•GalaxySnail•3h ago•40 comments

Coding assistants are solving the wrong problem

https://www.bicameral-ai.com/blog/introducing-bicameral
66•jinhkuan•3h ago•20 comments

How does misalignment scale with model intelligence and task complexity?

https://alignment.anthropic.com/2026/hot-mess-of-ai/
169•salkahfi•7h ago•45 comments

The Codex App

https://openai.com/index/introducing-the-codex-app/
631•meetpateltech•13h ago•444 comments

Anki ownership transferred to AnkiHub

https://forums.ankiweb.net/t/ankis-growing-up/68610
340•trms•10h ago•86 comments

GitHub experiences various partial outages/degradations

https://www.githubstatus.com?todayis=2026-02-02
196•bhouston•10h ago•62 comments

Todd C. Miller – Sudo maintainer for over 30 years

https://www.millert.dev/
390•wodniok•14h ago•202 comments

xAI joins SpaceX

https://www.spacex.com/updates#xai-joins-spacex
647•g-mork•9h ago•1413 comments

The Connection Machine CM-1 "Feynman" T-shirt

https://tamikothiel.com/cm/cm-tshirt.html
68•tosh•3d ago•15 comments

Carnegie Mellon University Computer Club FTP Server

http://128.237.157.9/pub/
70•1vuio0pswjnm7•5d ago•13 comments

Ask HN: Anyone else struggle with how to learn coding in the AI era?

28•44Bulldog•3h ago•30 comments

See how many words you have written in Hacker News comments

https://serjaimelannister.github.io/hn-words/
45•Imustaskforhelp•3d ago•58 comments

Ask HN: Who is hiring? (February 2026)

262•whoishiring•15h ago•322 comments

The TSA's New $45 Fee to Fly Without ID Is Illegal

https://www.frommers.com/tips/airfare/the-tsa-new-45-fee-to-fly-without-id-is-illegal-says-regula...
365•donohoe•8h ago•406 comments

Ask HN: Where do all the web devs talk?

19•LinguaBrowse•3h ago•17 comments

Phenakistoscopes (1833)

https://publicdomainreview.org/collection/phenakistoscopes-1833/
7•tobr•2d ago•0 comments

Frog 'saunas' could help endangered species beat a deadly fungus (2024)

https://www.science.org/content/article/frog-saunas-could-help-endangered-species-beat-deadly-fungus
7•noleary•3h ago•1 comment

Hacking Moltbook

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
301•galnagli•15h ago•169 comments

Court orders restart of all US offshore wind power construction

https://arstechnica.com/science/2026/02/court-orders-restart-of-all-us-offshore-wind-construction/
330•ck2•8h ago•193 comments

The Physics of Ideas: Reality as a Coordination Problem

https://bpe.xyz
24•shoes_for_thee•4d ago•2 comments

Linux From Scratch ends SysVinit support

https://lists.linuxfromscratch.org/sympa/arc/lfs-announce/2026-02/msg00000.html
141•cf100clunk•13h ago•183 comments

4x faster network file sync with rclone (vs rsync) (2025)

https://www.jeffgeerling.com/blog/2025/4x-faster-network-file-sync-rclone-vs-rsync/
290•indigodaddy•4d ago•138 comments

50 years ago, a young Bill Gates took on the 'software pirates'

https://thenewstack.io/50-years-ago-a-young-bill-gates-took-on-the-software-pirates/
32•MilnerRoute•1d ago•24 comments

Archive.today is directing a DDoS attack against my blog?

https://gyrovague.com/2026/02/01/archive-today-is-directing-a-ddos-attack-against-my-blog/
111•gyrovague-com•2d ago•41 comments

Julia

https://borretti.me/fiction/julia
98•ashergill•8h ago•12 comments

Zig Libc

https://ziglang.org/devlog/2026/#2026-01-31
223•ingve•14h ago•97 comments

Nano-vLLM: How a vLLM-style inference engine works

https://neutree.ai/blog/nano-vllm-part-1
238•yz-yu•18h ago•24 comments

Joedb, the Journal-Only Embedded Database

https://www.joedb.org/index.html
65•mci•3d ago•9 comments

Pretty soon, heat pumps will be able to store and distribute heat as needed

https://www.sintef.no/en/latest-news/2026/pretty-soon-heat-pumps-will-be-able-to-store-and-distri...
182•PaulHoule•1d ago•153 comments

On being sane in insane places (1973) [pdf]

https://www.weber.edu/wsuimages/psychology/FacultySites/Horvat/OnBeingSaneInInsanePlaces.PDF
80•dbgrman•13h ago•47 comments

Coding assistants are solving the wrong problem

https://www.bicameral-ai.com/blog/introducing-bicameral
65•jinhkuan•3h ago

Comments

monero-xmr•1h ago
First you must accept that engineering elegance != market value. Only certain applications and business models need the crème de le crème of engineers.

LLM has been hollowing out the mid and lower end of engineering. But has not eroded highest end. Otherwise all the LLM companies wouldn’t pay for talent, they’d just use their own LLM.

slau•1h ago
OT: I applaud your correct use of the grave accent, however minor nitpick: crème in French is feminine, therefore it would be “la”.
crabmusket•38m ago
There's an interesting aside about the origin of the phrase in Leslie Claret's Integral Principles of the Structural Dynamics of Flow

https://youtu.be/ca27ndN2fVM?si=hNxSY6vm0g-Pt7uR

adithyassekhar•1h ago
It's not just about elegance.

I'm going to give an example of software with multiple processes.

Humans can imagine scenarios where a process can break. Claude can also do it, but only when the breakage happens inside the process and only if you specify it. It cannot identify future issues caused by a separate process unless you specifically describe that external process, the fact that it could interact with our original process, and the ways in which it can interact.

Identifying these is the skill of a developer. You could say you can document all these cases and let the agent do the coding, but here's the kicker: you only get to know these issues once you start coding them by hand. You go through the variables and function calls and suddenly remember that a process elsewhere changes or depends on those values.

Unit tests could catch them in a decently architected system, but those tests need to be defined by the one coding it. And if the architect himself is using AI, because why not, it's doomed from the start.

pmontra•1h ago
Well, it takes time to assess and adapt, and large organizations need more time than smaller ones. We will see.

In my experience the limiting factor is making the right choices. I've got a customer with the usual backlog of features. There are some very important issues in the backlog that stay in the backlog and are never picked for a sprint. We're doing small bug fixes, but not the big ones. We're doing new features that are partly useless because of the outstanding bugs that prevent customers from fully using them. AI can make us code faster, but nobody is using it to sort issues for importance.

exodust•1h ago
> nobody is using it to sort issues for importance

True, and I'd add the reminder that AI doesn't care. When it makes mistakes it pretends to be sorry.

Simulated emotion is dangerous IMHO; it can lead to undeserved trust. I always tell AI to never say my name, and never to use exclamation points or simulated emotion. "Be the cold imperfect calculator that you are."

When it was giving me compliments for noticing things it failed to, I had to put a stop to that. Very dangerous. When business decisions or important technical decisions are made by an entity that is literally incapable of caring, but instead pretends to, like a sociopath, that's when trouble brews.

Madmallard•1h ago
Based on my experience using Claude Opus 4.5, it doesn't really even get functionality correct. It'll get scaffolding right if you tell it exactly what you want, but as soon as you tell it to do testing and features, it ranges from mediocre to worse than useless.
WD-42•48m ago
I keep hearing this but I don’t understand. If inelegant code means more bugs that are harder to fix later, that translates into negative business value. You won't see it right away, which is probably where this sentiment comes from, but it will absolutely catch up with you.

Elegant code isn't just for looks. It's code that can still adapt weeks, months, or years after it has shipped and created "business value".

locknitpicker•26m ago
> I keep hearing this but I don’t understand. If inelegant code means more bugs that are harder to fix later, that translates into negative business value.

That's a rather short-sighted opinion. Ask yourself how "inelegant code" finds its way into a codebase, even with working code review processes.

The answer more often than not is what's typically referred to as tech debt driven development. Meaning, sometimes a hacky solution with glaring failure modes left unaddressed is all it takes to deliver a major feature in a short development cycle. Once the feature is out, it becomes less pressing to pay off that tech debt because the risk was already assumed and the business value was already created.

Later you stumble upon a weird bug in your hacky solution. Is that bug negative business value?
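The trade-off described above, a hacky solution shipped fast with a failure mode knowingly left unaddressed, can be sketched as follows. `parse_price_quick` and `parse_price_robust` are hypothetical names invented for this illustration:

```python
def parse_price_quick(text):
    # Tech-debt version: assumes input always looks like "USD 10.50".
    # Ships the feature on time; the failure mode is knowingly left in.
    return float(text.split()[1])

def parse_price_robust(text):
    # The paid-down version: handles the inputs the quick hack ignores.
    for part in text.replace(",", "").split():
        try:
            return float(part)
        except ValueError:
            continue
    raise ValueError(f"no numeric price in {text!r}")

print(parse_price_quick("USD 10.50"))   # happy path: 10.5
print(parse_price_robust("10.50 USD"))  # input the quick hack would crash on: 10.5
```

Whether the crash the quick version produces months later on `"10.50 USD"` counts as negative business value is exactly the question posed above.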

lmm•15m ago
Of course a bug is negative business value. Perhaps the benefit of shipping faster was worth the cost of introducing bugs, but that doesn't make it not a cost.
aurareturn•6m ago

  LLM has been hollowing out the mid and lower end of engineering. But has not eroded highest end. Otherwise all the LLM companies wouldn’t pay for talent, they’d just use their own LLM.
The talent isn't used for writing code anymore, though. They're used for directing, which an LLM isn't very good at, since it has limited real-world experience, limited interaction with other humans, and no goals of its own.

OpenAI has said they're slowing down hiring drastically because their models are making them that much more productive. Codex itself is being built by Codex. Same with Claude Code.

verdverm•43m ago
meh piece, don't feel like I learned anything from it. Mainly words around old stats in a rapidly evolving field, and then trying to pitch their product

tl;dr content marketing

There is this super interesting post in /new about agent swarms and how the field is evolving towards formal verification like airlines, or how there are ideas we can draw on. Anyway, imo it should be on the front page over this piece:

"Why AI Swarms Cannot Build Architecture"

An analysis of the structural limitations preventing AI agent swarms from producing coherent software architecture

https://news.ycombinator.com/item?id=46866184

locknitpicker•34m ago
> meh piece, don't feel like I learned anything from it.

That's fine. I found the leading stats interesting. If coding assistants slowed down experienced developers while creating a false sense of development speed, then that should be thought-provoking. Also, nearly half of the code churned out by coding assistants has security issues. That's tough.

Perhaps it's just me, but that's in line with my personal experience, and I rarely see those points being raised.

> There is this super interesting post in new about agent swarms and how (...)

That's fine. Feel free to submit the link. I find it far more interesting to discuss the post-rose-tinted-glasses view of coding agents. I don't think it makes any sense at all to laud promises of formal verification when the same technology right now is unable to avoid introducing security vulnerabilities.

verdverm•20m ago
> found the leading stats interesting

They are from before the current generation of models and agent tools; they are almost certainly out of date by now, and the numbers will continue to change as the field evolves.

We're still learning to crawl, haven't gotten to walking yet

verdverm•19m ago
> Feel free to submit the link

I did, or someone else did, it's the link in the post you replied to

zkmon•35m ago
Wondering why this is on the front page? There is hardly any new insight, other than a few minutes of exposure to a greenish glow that makes everything look brownish after you close the page.
Quothling•31m ago
I think AI will fail in any organisation where the business process problems are sometimes discuvered during engineering. I use AI quite a lot; I recently had Claude upgrade one of our old services from the HubSpot API v1 to v3 with basically no human interaction beyond the code review. I had to ask it for two changes, I think, but overall I barely stepped out of my regular work to get it done. I knew exactly what to ask of it because the IT business partners who had discovered the flaw had basically written the tasks already. Anyway, AI worked well there.

Where AI fails us is when we build new software to improve the business around solar energy production and sale. It fails us because the tasks are never really well defined. Or even if they are, sometimes developers or engineers come up with a better way to do the business process than what was planned for. AI can write the code, but it won't refuse to write it and first ask why it wouldn't be a better idea to do X instead. If we only did code reviews, we would miss that step.

In a perfect organisation your BPM people would do this. In the world I live in there are virtually no BPM people, and those who know the processes are too busy to really deal with improving them. Hell... sometimes their processes are changed and they don't realize it until their results are measurably better than they used to be. So I think it depends a lot on the situation. If you've got people breaking up processes, improving them, and then describing each little bit in decent detail, then I think AI will work fine; otherwise it's probably not the best place to go full vibe.

Onavo•17m ago
> business process problems are sometimes discovered (sic.) during engineering

This deserves a blog post all on its own. OP you should write one and submit it. It's a good counterweight to all the AI optimistic/pessimistic extremism.

micw•28m ago
For me, AI is an enabler for things you can't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general, otherwise the results are bad.

E.g. I have been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write kinds of software I never learned or had time for. E.g. I recently re-implemented an Android widget that had not been updated for a decade by its original author. Or I fixed a bug in a Linux scanner driver. None of these I could have done properly (within an acceptable time frame) without AI. But neither could I have done them properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always; before, I'd save that time by skipping them. More code reviews. More documentation. Better quality in the same (always limited) time.
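The "tests for all edge cases" point above can be made concrete. A minimal sketch with a hypothetical `slugify` helper, showing the kind of tedious-but-cheap edge-case table that used to be skipped for time:

```python
import re

def slugify(title):
    # Hypothetical helper: lowercase, collapse non-alphanumeric runs to "-".
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

# The edge-case table an assistant can generate in seconds:
cases = {
    "Hello, World!": "hello-world",
    "": "untitled",                     # empty input
    "!!!": "untitled",                  # punctuation only
    "  spaced   out  ": "spaced-out",   # whitespace runs
}
for raw, expected in cases.items():
    assert slugify(raw) == expected
print("ok")
```

Writing these by hand is what used to get cut when time ran out; generating and then reviewing them is a much smaller cost.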