
Stop Generating, Start Thinking

https://localghost.dev/blog/stop-generating-start-thinking/
42•frizlab•6h ago

Comments

logicprog•4h ago
"It’s very unsettling, then, to find myself feeling like I’m in danger of being left behind - like I’m missing something. As much as I don’t like it, so many people have started going so hard on LLM-generated code in a way that I just can’t wrap my head around.

...

I’ve been using Copilot - and more recently Claude - as a sort of “spicy autocomplete” and occasional debugging assistant for some time, but any time I try to get it to do anything remotely clever, it completely shits the bed. Don’t get me wrong, I know that a large part of this is me holding it wrong, but I find it hard to justify the value of investing so much of my time perfecting the art of asking a machine to write what I could do perfectly well in less time than it takes to hone the prompt.

You’ve got to give it enough context - but not too much or it gets overloaded. You’re supposed to craft lengthy prompts that massage the AI assistant’s apparently fragile ego by telling it “you are an expert in distributed systems” as if it were an insecure, mediocre software developer.

Or I could just write the damn code in less time than all of this takes to get working."

Well there's your problem. Nobody does role-based prompts anymore, and the entire point of coding agents is that they search your code base, run internet searches and web fetches, launch sub-agents, and use todo lists to fill and adjust their context exactly as needed, without you having to do it manually.

It's funny reading people plaintively saying, "I just don't get how people could possibly be getting use out of these things. I don't understand it." And then they immediately reveal that it's not the baffling mystery or existential question they're pretending it is for the purposes of the essay: the reason they don't understand it is that they literally don't understand the tech itself lol
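The workflow described above, where the model itself decides which searches and fetches to run in order to fill its own context, can be sketched as a minimal tool-use loop. Everything here, `call_model` included, is a hypothetical stand-in, not any real agent or vendor API:

```python
# Minimal sketch of an agentic loop: the model, not the user, chooses
# which tool calls to make to fill its own context. All names here are
# hypothetical stand-ins for illustration only.

def run_agent(task, tools, call_model, max_steps=10):
    """Loop until the model stops requesting tool calls."""
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(context)           # model sees the whole context
        if reply.get("tool") is None:         # no tool requested: final answer
            return reply["content"]
        # Run the requested tool (e.g. grep, web fetch) and feed the
        # result back into the context for the next model call.
        result = tools[reply["tool"]](reply["args"])
        context.append({"role": "tool", "content": result})
    return None  # gave up after max_steps

# Toy stand-in "model": requests one grep, then answers with its result.
def toy_model(context):
    if not any(m["role"] == "tool" for m in context):
        return {"tool": "grep", "args": "Dispose", "content": ""}
    return {"tool": None, "content": "found: " + context[-1]["content"]}

tools = {"grep": lambda pattern: f"3 matches for {pattern!r}"}
print(run_agent("Do I need to call Dispose?", tools, toy_model))
```

The point of the sketch is only the shape of the loop: context grows from tool results the model requested, rather than from a prompt the user hand-crafted.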

martin-t•2h ago
This just shows that the models (not AI, statistical models of text used without consent) are not that smart; it's the tooling around them which allows using these models as a heuristic for a brute-force search of the solution space.

Just last week, I prompted (not asked, it is not sentient) Claude to generate (not tell me or find out or any other anthropomorphization) an answer to whether I need to call Dispose on objects passed to me from 2 different libraries for industrial cameras. These being industrial libraries, most people using them don't post their code publicly, which means the models have poor statistical coverage of these topics.

The LLM generated a response which triggered the tooling around it to perform dozens of internet searches and based on my initial prompt, the search results and lots of intermediate tokens ("thinking"), generated a reply which said that yes, I need to call Dispose in both cases.

It was phrased authoritatively and confidently.

So I tried it: one library segfaulted, the other threw an exception on a later call. I performed my own internet search (a single one) and immediately found documentation from one of the libraries clearly stating I don't need to call Dispose. The other library, much more poorly documented, didn't mention this explicitly but had examples which didn't call Dispose.

I am sure if I used LLMs "properly" "agentically", then they would have triggered the tooling around them to build and execute the code, gotten the same results as me much faster, then equally authoritatively and confidently stated that I don't need to call Dispose.

This is not thinking. It's a form of automation but not thinking and not intelligence.
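The "heuristic for a brute-force search" pattern from this comment can be sketched generically: treat each generated answer as a candidate and accept it only after an executable check, never on the strength of its prose. All names below are hypothetical toy stand-ins:

```python
# Generate-and-test sketch: candidates (e.g. from a model) are accepted
# only if an actual execution check passes. `generate_candidates` and
# `check` are hypothetical stand-ins for illustration.

def solve(problem, generate_candidates, check):
    """Return the first candidate that survives a real test run."""
    for candidate in generate_candidates(problem):
        ok, evidence = check(candidate)  # run it; don't trust the prose
        if ok:
            return candidate, evidence
    return None, "no candidate passed"

# Toy version of the Dispose question, with a check that simulates the
# crash the commenter actually observed.
def toy_candidates(problem):
    yield "call Dispose"         # the confident first answer
    yield "do not call Dispose"  # the answer the docs supported

def toy_check(candidate):
    if candidate == "call Dispose":
        return False, "segfault on first camera grab"
    return True, "ran clean for 1000 frames"

answer, evidence = solve("Dispose needed?", toy_candidates, toy_check)
print(answer, "--", evidence)
```

Whether one calls this thinking or not, the search only terminates correctly because the check executes code; the confident phrasing of a candidate contributes nothing.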

Isamu•1h ago
>brute force search of the solution space

“Brute force” is mostly what makes it all work, and what is most disappointing to me currently. Including the brute force necessary to train an LLM, the vast quantity of text necessary to approach almost human quality, the massive scale of data centers necessary to deploy these models, etc.

I am hoping this is a transitional period, where LLMs could be used to create better models that rely more on finesse and less on brute force.

martin-t•1h ago
To be honest, these models being bad is what gives me some hope we can figure out how to approach a potential future AI as a society before it arrives.

Because right now everything in the west is structured around rich people owning things they have not built while people who did the actual work with their hands and their minds are left in the dust.

For a brief period of time (a couple of decades), tech was a path for anyone from any background to get at least enough to not struggle. Not to become truly rich (for that you need to own real estate or companies), but to have all your reasonable material needs taken care of and be able to save up for retirement (or, in countries without free education, to pay for your kids' college).

And that might be coming to an end, with people who benefited from this opportunity cheering it on.

raincole•54m ago
Yeah, reminds me of this: https://news.ycombinator.com/item?id=46929505

> I have a source file of a few hundred lines implementing an algorithm that no LLM I've tried (and I've tried them all) is able to replicate, or even suggest, when prompted with the problem. Even with many follow up prompts and hints.

People making this kind of claim will never post the question and prompts they tried. Because if they did, everyone would know it's just that they don't know how to prompt.

ares623•33m ago
At what point will the proper way to prompt just be "built-in"? Why isn't it built in already if the "proper way to prompt" is so well understood?
ares623•35m ago
So I guess you could say, they're "holding it wrong"?
Quothling•1h ago
Maybe I don't understand it correctly, but to me this reads like the author isn't actually using AI agents. I don't talk or write prompts anymore. I write tasks and let a couple of AI agents complete those tasks, exactly how I'd distribute tasks to a human. The AI code is of varying quality, and they certainly aren't great at computer science (at least not yet), but it's not like they write worse code than some actual humans would.

I like to say that you don't need computer science to write software, until you do. The thing is that a lot of software in the organisations I've worked in doesn't actually need computer science. I've seen horrible javascript code on the back-end live a full lifecycle of 5+ years without needing much maintenance, if any, and be fine. It could probably have been more efficient, but compute is so cheap that it never really mattered. Of course, I've also seen inefficient software or errors cost us a lot of money, when our solar plants didn't output what they were supposed to. I'd let AIs write one of those things any day.

Hell, I did recently. We had an old javascript service which was doing something with the hubspot API. I say something because I never really found out what it was. Basically hubspot sunset the v1 of their API, and before the issue arrived at my table my colleagues had figured out that that was the problem. I didn't really have the time to fix it, so when I saw how much of a mess the javascript code was and realized it would take me a few hours just to figure out what it even did... well... I told my AI agent running on our company framework to fix it. It did so in 5-10 minutes with a single correction needed, and it improved the javascript quite a bit while doing it, typing everything. I barely even got out of my flow to make it happen. So far it's run without any issues for a month. I was frankly completely unnecessary in this process. The only reason it was me who fired up the AI is that the people who sent me the task haven't yet adopted AI agents.

That being said... AIs are a major security risk and need to be handled accordingly.

heliumtera•1h ago
Programmers for some reason love to be told what to do. First thing in the morning they look for someone else to tell them what to do, how to test, how to validate.

Why do it yourself, like you want to do it, when you could just fall back to mediocrity and do it like everybody else does?

Why think when you can be told what to do?

Why have intercourse with your wife when you can let someone else do it? This is the typical LLM user mentality

thunky•1h ago
This text color and background is unreadable.
potatoman22•1h ago
What theme did you use? I really like the "garden" theme
awesome_dude•59m ago
There are a couple of news stories doing the rounds at the moment which point to the fact that AI isn't "there yet":

1. Microsoft's announcement of cutting their copilot products sales targets[0]

2. Moltbook's security issues[1] after being "vibe coded" into life

Leaving the undeniable conclusion: the vast majority (seriously) distrusts AI much more than we're led to believe, and with good reason.

Thinking (as a SWE) is still very much the most important skill in SWE, and relying on AI has limitations.

For me, AI is a great tool for helping me to discover ideas I had not previously thought of, and it's helpful for boilerplate, but it still requires me to understand what's being suggested, and, even, push back with my ideas.

[0] https://arstechnica.com/ai/2025/12/microsoft-slashes-ai-sale...

[1] https://www.reuters.com/legal/litigation/moltbook-social-med...

henry_bone•20m ago
"Thinking (as a SWE) is still very much the most important skill in SWE, and relying on AI has limitations."

I'd go further and say that thinking is humanity's fur and claws and teeth. It's our strong muscles. It's the only thing that has kept us alive in a natural world that would have seen us extinct long, long ago.

But now we're building machines with the very purpose of thinking, or at least of producing the results of thinking. And we use them. Boy, do we use them. We use them to think of birthday presents (it's the thought that counts) and greeting card messages. We use them for education coursework (against the rules, but still). We use them, as programmers, to come up with solutions and to find bugs.

If AI (of any stripe, LLM or some later invention) represents an existential threat, it is not because it will rise up and destroy us. Its threat lies solely in the fact that it is in our nature to take the path of least resistance. AI is the ultimate such path, and it does weaken our minds.

My challenge to anyone who thinks it's harmless: use it for a while. Figure out what it's good at and lean on it. Then, after some months, or years, drop it and try working on your own like in the before times. I'd bet you'll discover that a significant amount of fluency has been lost.

acjohnson55•24m ago
I read this and thought, "are we using the same software?" For me, I have turned the corner where I barely hand-edit anything. Most of the tasks I take on are nearly one-shot successful, simply pointing Claude Code at a ticket URL. I feel like I'm barely scratching the surface of what's possible.

I'm not saying this is perfect or unproblematic. Far from it. But I do think that shops that invest in this way of working are going to vastly outproduce ones that don't.

LLMs are the first technology where everyone literally has a different experience. There are so many degrees of freedom in how you prompt. I actually believe that people's expectations and biases tend to correlate with the outcomes they experience. People who approach it with optimism will be more likely to problem-solve the speed bumps that pop up. And the speed bumps are often things that can mostly be addressed systemically, with tooling and configuration.

Art of Roads in Games

https://sandboxspirit.com/blog/art-of-roads-in-games/
100•linolevan•6h ago•32 comments

Vouch

https://github.com/mitchellh/vouch
690•chwtutha•1d ago•306 comments

More Mac malware from Google search

https://eclecticlight.co/2026/01/30/more-malware-from-google-search/
119•kristianp•7h ago•76 comments

Reverse Engineering the Prom for the SGI O2

https://mattst88.com/blog/2026/02/08/Reverse_Engineering_the_PROM_for_the_SGI_O2/
60•mattst88•5h ago•13 comments

Quartz crystals

https://www.pa3fwm.nl/technotes/tn13a.html
44•gtsnexp•19h ago•6 comments

Apple XNU: Clutch Scheduler

https://github.com/apple-oss-distributions/xnu/blob/main/doc/scheduler/sched_clutch_edge.md
101•tosh•7h ago•14 comments

NanoClaw now supports Claude's Agent Swarms in containers

https://twitter.com/Gavriel_Cohen/status/2020701159175155874
18•spendy_clao•29m ago•0 comments

Show HN: A custom font that displays Cistercian numerals using ligatures

https://bobbiec.github.io/cistercian-font.html
31•bobbiechen•5h ago•2 comments

Every book recommended on the Odd Lots Discord

https://odd-lots-books.netlify.app/
37•muggermuch•4h ago•13 comments

Ask HN: What are you working on? (February 2026)

92•david927•8h ago•289 comments

Show HN: I created a Mars colony RPG based on Kim Stanley Robinson's Mars books

https://underhillgame.com/
165•ariaalam•10h ago•54 comments

Roundcube Webmail: SVG feImage bypasses image blocking to track email opens

https://nullcathedral.com/posts/2026-02-08-roundcube-svg-feimage-remote-image-bypass/
112•nullcathedral•9h ago•32 comments

Custom Firmware for the MZ-RH1 – Ready for Testing

https://sir68k.re/posts/rh1-firmware-available/
6•jimbauwens•4d ago•0 comments

AI makes the easy part easier and the hard part harder

https://www.blundergoat.com/articles/ai-makes-the-easy-part-easier-and-the-hard-part-harder
182•weaksauce•4h ago•147 comments

The Little Bool of Doom (2025)

https://blog.svgames.pl/article/the-little-bool-of-doom
87•pocksuppet•10h ago•31 comments

Stop Generating, Start Thinking

https://localghost.dev/blog/stop-generating-start-thinking/
42•frizlab•6h ago•14 comments

Toma (YC W24) Is Hiring Founding Engineers

https://www.ycombinator.com/companies/toma/jobs/oONUnCf-founding-engineer-ai-products
1•anthonykrivonos•5h ago

A GTA modder has got the 1997 original working on modern PCs and Steam Deck

https://gtaforums.com/topic/986492-grand-theft-auto-ready2play-full-game-windows-version/
146•HelloUsername•7h ago•67 comments

Shifts in U.S. Social Media Use, 2020–2024: Decline, Fragmentation, Polarization (2025)

https://arxiv.org/abs/2510.25417
149•vinnyglennon•6h ago•146 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
64•nwparker•3d ago•15 comments

Running Your Own AS: BGP on FreeBSD with FRR, GRE Tunnels, and Policy Routing

https://blog.hofstede.it/running-your-own-as-bgp-on-freebsd-with-frr-gre-tunnels-and-policy-routing/
150•todsacerdoti•14h ago•59 comments

Dave Farber has died

https://lists.nanog.org/archives/list/nanog@lists.nanog.org/thread/TSNPJVFH4DKLINIKSMRIIVNHDG5XKJCM/
216•vitplister•16h ago•36 comments

Exploiting signed bootloaders to circumvent UEFI Secure Boot

https://habr.com/en/articles/446238/
108•todsacerdoti•13h ago•61 comments

RFC 3092 – Etymology of "Foo" (2001)

https://datatracker.ietf.org/doc/html/rfc3092
130•ipnon•13h ago•38 comments

GitHub Agentic Workflows

https://github.github.io/gh-aw/
219•mooreds•14h ago•116 comments

I put a real-time 3D shader on the Game Boy Color

https://blog.otterstack.com/posts/202512-gbshader/
265•adunk•11h ago•36 comments

OpenClaw is changing my life

https://reorx.com/blog/openclaw-is-changing-my-life/
253•novoreorx•21h ago•414 comments

Ktkit: A Kotlin toolkit for building server applications with Ktor

https://github.com/smyrgeorge/ktkit
17•smyrgeorge•4d ago•4 comments

Self-referential functions and the design of options (2014)

https://commandcenter.blogspot.com/2014/01/self-referential-functions-and-design.html
11•hambes•18h ago•2 comments

Curating a Show on My Ineffable Mother, Ursula K. Le Guin

https://hyperallergic.com/curating-a-show-on-my-ineffable-mother-ursula-k-le-guin/
169•bryanrasmussen•17h ago•61 comments