frontpage.

Why Local-First Apps Haven't Become Popular?

https://marcobambini.substack.com/p/why-local-first-apps-havent-become
50•marcobambini•47m ago•39 comments

Easy Forth

https://skilldrick.github.io/easyforth/
59•pkilgore•2h ago•16 comments

CompileBench: Can AI Compile 22-year-old Code?

https://quesma.com/blog/introducing-compilebench/
17•jakozaur•1h ago•4 comments

Kmart's use of facial recognition to tackle refund fraud unlawful

https://www.oaic.gov.au/news/media-centre/18-kmarts-use-of-facial-recognition-to-tackle-refund-fr...
110•Improvement•3h ago•82 comments

SGI demos from long ago in the browser via WASM

https://github.com/sgi-demos
133•yankcrime•6h ago•28 comments

How I, a beginner developer, read the tutorial you, a developer, wrote for me

https://anniemueller.com/posts/how-i-a-non-developer-read-the-tutorial-you-a-developer-wrote-for-...
574•wonger_•12h ago•273 comments

A Beautiful Maths Game

https://sinerider.com/
25•waonderer•2d ago•7 comments

Beyond the Front Page: A Personal Guide to Hacker News

https://hsu.cy/2025/09/how-to-read-hn/
33•firexcy•4h ago•4 comments

You did this with an AI and you do not understand what you're doing here

https://hackerone.com/reports/3340109
604•redbell•6h ago•283 comments

Biconnected components

https://emi-h.com/articles/bcc.html
25•emih•14h ago•5 comments

M4.6 Earthquake – 2 km ESE of Berkeley, CA

https://earthquake.usgs.gov/earthquakes/eventpage/ew1758534970/executive
104•brian-armstrong•4h ago•49 comments

What happens when coding agents stop feeling like dialup?

https://martinalderson.com/posts/what-happens-when-coding-agents-stop-feeling-like-dialup/
36•martinald•1d ago•23 comments

Privacy and Security Risks in the eSIM Ecosystem [pdf]

https://www.usenix.org/system/files/usenixsecurity25-motallebighomi.pdf
203•walterbell•9h ago•107 comments

DeepSeek-v3.1-Terminus

https://api-docs.deepseek.com/news/news250922
34•meetpateltech•1h ago•11 comments

Sj.h: A tiny little JSON parsing library in ~150 lines of C99

https://github.com/rxi/sj.h
435•simonpure•21h ago•214 comments

Show HN: Software Freelancers Contract Template

https://sopimusgeneraattori.ohjelmistofriikit.fi/?lang=en
85•baobabKoodaa•6h ago•25 comments

Optimized Materials in a Flash

https://newscenter.lbl.gov/2025/09/18/optimized-materials-in-a-flash/
9•gnabgib•3d ago•0 comments

We Politely Insist: Your LLM Must Learn the Persian Art of Taarof

https://arxiv.org/abs/2509.01035
103•chosenbeard•13h ago•45 comments

Metamaterials, AI, and the Road to Invisibility Cloaks

https://open.substack.com/pub/thepotentialsurface/p/metamaterials-ai-and-the-road-to
26•Annabella_W•5h ago•8 comments

Why is Venus hell and Earth an Eden?

https://www.quantamagazine.org/why-is-venus-hell-and-earth-an-eden-20250915/
159•pseudolus•15h ago•247 comments

A Generalized Algebraic Theory of Directed Equality

https://jacobneu.phd/
49•matt_d•3d ago•15 comments

The death rays that guard life

https://worksinprogress.co/issue/the-death-rays-that-guard-life/
24•ortegaygasset•4d ago•12 comments

Download responsibly

https://blog.geofabrik.de/index.php/2025/09/10/download-responsibly/
262•marklit•8h ago•182 comments

Simulating a Machine from the 80s

https://rmazur.io/blog/fahivets.html
58•roman-mazur•3d ago•8 comments

How can I influence others without manipulating them?

https://andiroberts.com/leadership-questions/how-to-influence-others-without-manipulating
165•kiyanwang•15h ago•162 comments

I uncovered an ACPI bug in my Dell Inspiron 5567. It was plaguing me for 8 years

https://triangulatedexistence.mataroa.blog/blog/i-uncovered-an-acpi-bug-in-my-dell-inspiron-5667-...
130•thunderbong•4d ago•16 comments

40k-Year-Old Symbols in Caves Worldwide May Be the Earliest Written Language

https://www.openculture.com/2025/09/40000-year-old-symbols-found-in-caves-worldwide-may-be-the-ea...
171•mdp2021•4d ago•99 comments

What if AMD FX had "real" cores? [video]

https://www.youtube.com/watch?v=Lb4FDtAwnqU
12•zdw•3d ago•4 comments

Lightweight, highly accurate line and paragraph detection

https://arxiv.org/abs/2203.09638
131•colonCapitalDee•16h ago•23 comments

Be careful with Go struct embedding

https://mattjhall.co.uk/posts/be-careful-with-go-struct-embedding.html
116•mattjhall•14h ago•83 comments

What happens when coding agents stop feeling like dialup?

https://martinalderson.com/posts/what-happens-when-coding-agents-stop-feeling-like-dialup/
36•martinald•1d ago

Comments

SirensOfTitan•2h ago
> Each of these 'phases' of LLM growth is unlocking a lot more developer productivity, for teams and developers that know how to harness it.

I still find myself incredibly skeptical that LLM use is increasing productivity. Because AI reduces cognitive engagement with tasks, it feels to me like AI increases perceived productivity but actually decreases it in many cases (and this probably compounds as AI-generated code piles up in a codebase, since there isn't an author who can supply context for why decisions were made).

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity. In my mind, like TikTok or online dating, AI is just another product motion toward comfort maximizing over all things, as cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant gratification product from tech.

That's not to say that I don't use AI, but I use it primarily as search to see what is out there. If I use it for coding at all, I tend to primarily use it for code review. Even when AI does a good job at implementation of a feature, unless I put in the cognitive engagement I typically put in during code review, its code feels alien to me and I feel uncomfortable merging it (and I employ similar levels of cognitive engagement during code reviews as I do while writing software).

breakfastduck•2h ago
It depends what environment you're operating within.

I've used LLMs for code gen at work as well as for personal stuff.

At work, primarily for quick and dirty internal UIs / tools / CLIs, it's been fantastic, but we've not unleashed it on our core codebases. It's worth noting all the stuff we've got out of it are things we'd not normally have the time to work on - so a net positive there.

Outside of work I've built some bespoke software, almost entirely generated, with human tweaks here and there - again, super useful software for me and some friends to use for planning and managing the music events we put on, which I'd never normally have the time to build.

So in those ways I see it as massively increasing productivity - to build lower stakes things that would normally just never get done due to lack of time.

add-sub-mul-div•2h ago
> I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity.

"You're doing AI wrong" is the new "you're doing agile wrong" which was the new "you're doing XP wrong".

bitwize•2h ago
More like the new "you're holding it wrong"
pjmlp•2h ago
Unfortunately, many of us are old enough to know how those "wrongs" eventually became the new normal - the wrong way.
polotics•2h ago
My experience is exactly the opposite of "AI reduces cognitive engagement with tasks": I have to constantly be on my toes to follow what the LLMs are proposing and make sure they are not getting off track over-engineering things, or entering something that's likely to turn into a death loop several turns later. AI use definitely makes my brain run warmer, got to get a FLIR camera to prove it I guess...
walleeee•1h ago
So, reduces cognitive engagement with the actual task at hand, and forces a huge attention share to hand-holding.

I don't think you two are disagreeing.

I have noticed this personally. It's a lot like the fatigue one gets from scrolling online for too long. The engagement is shallower, but no less mentally exhausting than reading a book. You end up feeling more exhausted due to the involuntary attention-scattering.

dist-epoch•53m ago
> AI is just another product motion toward comfort maximizing over all things, as cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant gratification product from tech.

For me it's the exact opposite. When coding without AI, you notice various things that could be improved, and you can think about the architecture and what features you want next.

But AI codes so fast that it's a real struggle keeping up with it. I feel like I need to focus 10 times harder to think about features/architecture quickly enough that the AI isn't waiting on me most of the time.

joz1-k•2h ago
From the article: Anthropic has been suffering from pretty terrible reliability problems.

In the past, factories used to shut down when there was a shortage of coal for steam engines or when the electricity supply failed. In the future, programmers will have factory holidays when their AI-coding language model is down.

corentin88•2h ago
Same as how GitHub or Slack downtimes severely impact productivity.
thw_9a83c•1h ago
I would argue that dependency on GitHub and Slack is not the same as dependency on AI coding agents. GitHub/Slack are just straightforward tools: you can run them locally or have similar emergency backup tools ready to run locally. But depending on AI agents is like relying on external brains that hold knowledge you suddenly don't have if they disappear. Moreover, how many companies could afford to run these models locally? Some of those models aren't even open.
danielbln•20m ago
There are plenty of open weight agentic coding models out there. Small ones you can run on a Macbook, big heavy ones you can run on some rented cloud instance. Also, if Anthropic is down, there is still Google, OpenAI, Mistral, Deepseek and so on. This seems like not much of an issue, honestly.
catigula•48m ago
>in the future

>programmers

Don't Look Up

ActionHank•25m ago
There are still people who dictate their emails to a secretary.

Technology changes, people often don't.

Programmers will be around for a longer time than anyone realises because most people don't understand how the magic box works let alone the arcane magics that run on it.

howmayiannoyyou•1h ago
I expected to see OpenAI, Google, Anthropic, etc. provide desktop applications with integrated local utility models and sandboxed MCP functionality to reduce unnecessary token and task flow, and I still expect this to occur at some point.

The biggest long-term risk to the AI giants' profitability will be increasingly capable desktop GPUs and CPUs combined with the improving performance of local models.

mmmllm•1h ago
Speed is not a problem for me. I feel they are at the right speed now, where I am able to see what the model is doing in real time and check it's on the right track.

Honestly if it were any faster I would want a feature to slow it down, as I often intervene if it's going in the wrong direction.

infecto•1h ago
Cursor imo is still one of the only real players in the space. I don't like the Claude Code style of coding - I feel too disconnected. Cursor is the right balance for me, and it is generally pretty darn quick; I only expect it to get quicker. I hope more players pop up in this space.
sealeck•31m ago
Have you tried https://zed.dev ?
dmix•17m ago
How is the pricing? I see it says "500 prompts a month" and only Claude. Cursor is built around token usage and distributes them across multiple models when you hit limits on one which turns out to be pretty economical.
everyone•1h ago
I've used ChatGPT to help me learn new stuff about which I know nothing (this is its best use imo) and also to write boilerplatey functions, e.g. write a function that does precisely X.

Having it integrated into my IDE sounds like a nightmare though. Even the "intellisense" stuff in Visual Studio is annoying af, and I have to turn it off to stop it auto-wrecking my code (e.g. adding tonnes of pointless using statements). I don't know how the integrated LLM would actually work, but I defo don't want that.

vrighter•33m ago
Writing the mindless boilerplate is when I'm thinking about the next step. If I didn't write it, I'd still have to take the time to think things through for the next step.
dmix•15m ago
I looked up the graph they are using

https://openrouter.ai/rankings

It says "Grok Code Fast 1" is ranked first in token usage? That's surprising. Maybe it's just OpenRouter bias, or an artifact of how the LLM is used.

I would have assumed Claude would be #1

bryanlarsen•10m ago
Juggling multiple contexts is hard, and often counter-productive. It used to be popular on HN to talk about keeping your "flow", and to rail against everything that broke a programmer's flow. These slow AIs constantly break flow.