AI Coding

https://geohot.github.io//blog/jekyll/update/2025/09/12/ai-coding.html
110•abhaynayar•2h ago

Comments

ChrisMarshallNY•59m ago
> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?

Adderall is similar. It makes people feel a lot more productive, but research on its effectiveness[0] seems to show that, at best, we get only a mild improvement in productivity, and marked deterioration of cognitive abilities.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6165228/

luckylion•56m ago
Research on _13_ people; that's a very important caveat when evaluating something like Adderall.
ChrisMarshallNY•53m ago
I’m quite sure that there’s a ton more research on it. The drug’s been around for decades. Lots of time for plenty of studies.

If legitimate research had found it to be drastically better, that study would definitely have been published in a big way.

Unscientifically, I personally know quite a number of folks who sincerely believed that they couldn't function without it, but have since learned that they do far better on their own. I haven't met a single one whose productivity actually declined (after an adjustment period, of course) after giving up Adderall. In fact, I know several whose careers really took off after giving it up.

luckylion•10m ago
My point is that micro-studies like that, on a tiny random (or even contraindicated, "healthy") selection of the general population, don't tell you much about drugs that do specific things.

"Antibiotics don't improve your life, but can damage your health" would likely be the outcome on 13 randomly selected healthy individuals. But do the same study on 13 people with a bacterial infection susceptible to antibiotics and your results will be vastly different.

Eikon•52m ago
It’s interesting how science can become closer to pseudoscience than proper research through paper-milling.

It seems like with such small groups and effects, you could run the same “study” again and again until you get the result you initially desired.

ChrisMarshallNY•46m ago
So it should be easy to find studies showing that non-ADHD people who take it have dramatically improved productivity.
raincole•36m ago
It's very easy to find studies that prove that Adderall (etc.) improves non-ADHD people's cognitive ability. And it's equally easy to find studies that prove otherwise. The parent comment is spot on. You can find evidence supporting anything nowadays.

https://pmc.ncbi.nlm.nih.gov/articles/PMC3489818/table/tbl1/

diarrhea•56m ago
Note that the study is just n=13 and on subjects without ADHD.
ChrisMarshallNY•47m ago
That’s the deal.

People without ADHD take it, believing that it makes them “super[wo]men.”

bdcravens•38m ago
I had a problem client that I ended up firing and giving money back to, about 15 years ago. Lots of red flags, but the breaking point was when they offered me Adderall so I could "work faster".

That said, I'll leave the conclusions about whether it's valuable for those with ADHD to the mental health professionals.

gobdovan•42m ago
Thanks again, diarrhea
joefourier•37m ago
I’m someone with ADHD who takes prescribed stimulants and they don’t make me work faster or smarter, they just make me work. Without them I’ll languish in an unfocused haze for hours, or zone in on irrelevant details until I realise I have an hour left in the day to get anything done. It could make me 20% less intelligent and it would still be worth it; this is obviously an extreme, but given the choice, I’d rather be an average developer that gets boring, functional code done on time than a dysfunctional genius who keeps missing deadlines and cannot be motivated to work on anything but the most exciting shiny new tech.
ChrisMarshallNY•33m ago
I have a family member who had ADHD as a kid (they called it “hyperactivity” back then). He is also dyslexic.

The ADHD was caught early and treated, but the dyslexia was not. He thought he was a moron for much of his early life, and his peers and employers did nothing to discourage that self-diagnosis.

Since he learned of his dyslexia and started treating it, he has been an engineer at Intel for most of his career (not that I envy him, right now).

Eikon•59m ago
Even though I don’t buy that LLMs are going to replace developers and quite agree with what is said, this is more of a critique of LLMs as English-to-code translators. LLMs are very useful for many other things.

Researching concepts, for one, has become so much easier, especially for things where you don’t know anything yet and would have a hard time even formulating a search engine query.

ChrisMarshallNY•57m ago
I’ve found that ChatGPT and Perplexity are great tools for finding “that article I skimmed a year ago that talked about…”.
fleebee•44m ago
I agree. I think a better analogy than a compiler is a search engine that has an excellent grasp of semantics but is also drunk and schizophrenic.

LLMs are really valuable for finding information that you aren't able to formulate a proper search query for.

To get the most out of them, ask them to point you to reliable sources instead of explaining directly. Even then, it pays to be very critical of where they're leading you. Making an LLM the primary window through which you seek new information is extremely precarious epistemologically. Personally, I'd use it as a last resort.

urbandw311er•58m ago
That was a really great read. Not saying I agree with it all; I'm maybe more in the camp that believes AI-assisted coding is a time-saver. But it's refreshing (and overdue) to have a counterpoint to the deafening and repetitive drumbeat of the VC-backed hype machine.
m00dy•56m ago
He lagged behind; that's why.
martini333•56m ago
I agree.

I use LLMs for things like brainstorming, explaining programming concepts, and debugging. I will not use them to write code. The output is not good enough, and I feel dumber.

I only see the worst of my programming colleagues coding with AI. And the results are actual trash. They have no real understanding of the code "they" are writing, and no idea how to debug what "they" made if the LLM isn't helpful. I can smell the technical debt.

pydry•52m ago
Me too.

I used to be a bit more open-minded on this topic, but I'm increasingly viewing any programmers who use AI for anything other than brainstorming and looking stuff up/explaining it as simply bad at what they do.

KronisLV•37m ago
Is this fundamentally different from them copy pasting code from StackOverflow or random blog posts, without understanding it?

You know, aside from AI making it super easy and fast to generate this tech debt in whatever amounts they desire?

maplethorpe•24m ago
Even when copy-pasting an entire function from Stack Overflow, you generally still need some understanding of what the inputs and outputs are, even if it remains somewhat of a black box, so that you can plug it into your existing code.

AI removes that need. You don't need to know what the function does at all, so your brain devotes no energy to remembering or understanding it.

viraptor•55m ago
There's a lot of complaining about current compilers, languages, and codebases in similar posts, but barely any ideas for how to make them better. It doesn't seem surprising that people go for the easier problem (make the current process simpler with LLMs) rather than the harder one (change the whole programming landscape to something new and actually make it better).
Earw0rm•54m ago
How do we resolve the observable tension here with the fact that self-driving cars are operating right now, relatively successfully, in ten or so major American cities?

Not a billion dollar business yet, maybe, but 300 cars generating five or six figures revenue per year each isn't far off.

(And I say this as someone who is skeptical that totally autonomous cars worldwide will ever be a thing, but you can get to £10Bn far, far before that point. Become the dominant mode of transport in just ONE major American city and you're most of the way there).

cycomanic•27m ago
> How do we resolve the observable tension here with the fact that self-driving cars are operating right now, relatively successfully, in ten or so major American cities?

Because geofenced driving in a few select cities with very favourable conditions is not what was promised. That's the crux. They promised us self-driving anywhere, at any time, at the press of a button.

> Not a billion dollar business yet, maybe, but 300 cars generating five or six figures revenue per year each isn't far off.

I'm not sure how you get to six figures of revenue. Assuming the car makes $100 per hour, 24x7, 52 weeks a year, we still fall short of $1 million. But let's assume you're right: $300M revenue (not profit; are they even operating in the black, disregarding R&D costs?) on an investment of >$10 billion (probably more like 100) seems like the definition of hype.
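To spell out that arithmetic (a rough sketch in Python; the $100/hour, 24x7 utilization figures are the generous assumptions above, not reported numbers):

    # Ceiling on per-car robotaxi revenue under generous assumptions
    hourly_revenue = 100          # dollars per hour, assumed
    hours_per_year = 24 * 7 * 52  # car never idles: 8,736 hours

    annual_revenue = hourly_revenue * hours_per_year
    print(f"${annual_revenue:,} per car per year")  # $873,600, short of $1M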

> (And I say this as someone who is skeptical that totally autonomous cars worldwide will ever be a thing, but you can get to £10Bn far, far before that point. Become the dominant mode of transport in just ONE major American city and you're most of the way there).

What I don't understand about this argument: how are you proposing they become the dominant mode of transport? These services are competing with taxis; what do they offer over taxis that would make people suddenly switch en masse to self-driving taxis? They would need to become cost-competitive (and convenience-competitive) with driving your own car, which would significantly drive down revenue. Secondly, if robotaxi companies take over transport, why would the public continue to build their infrastructure and not demand that these robotaxi companies start financing the infrastructure they exclusively use?

Earw0rm•18m ago
I got to six figures by assuming that a human taxi driver makes maybe $30-40k, at a guess, and an autonomous car can work 24/7. Six figures is $100k minimum.

So yeah, right now they'd have to be at ten cities x 300 cars each to hit 300M revenue, but there's still plenty of room for growth. Or should be, assuming the Waymo model isn't maxed out supporting the current handful of cities.

But I'm not convinced they have to hit cost parity with personal cars, because the huge advantage is you can work and drive (or be driven). If NYC and LA rush-hour congestion time becomes productive time, there's your billions.

I drive but prefer to take transit for this reason - some of my colleagues are able to join work calls effectively while driving, but for whatever reason my brain doesn't allow that. Just paying attention to calls is enough, you want me to pay attention to the road AND the call?

ur-whale•53m ago
I agree that most natural languages are a very poor tool to write code specification in.

Specifically, natural language is:

- ambiguous (LLMs solve this to a certain extent)

- extremely verbose

- doesn't lend itself well to refactoring

- the same thing can be expressed in way too many different ways, which leads to instability in specs -> code -> specs -> code -> specs loops (and these are essential to do incremental work)
Having something at our disposal that you can write code specs in, that is as easy as natural language yet more concise, easier to learn, and most of all not so anal/rigid as typical programming languages, would be fantastic.

Maybe LLMs can be used to design such a thing?
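For illustration only (a hypothetical sketch, not a concrete proposal from this thread), such a middle ground might look less like prose and more like a declarative schema, here embedded in Python; every name below is invented:

    # Hypothetical "spec language" sketch: structured fields with
    # stated constraints instead of free-form English.
    from dataclasses import dataclass

    @dataclass
    class TransferSpec:
        """Move funds between two accounts, exactly once."""
        source_account: str   # must exist and belong to the caller
        target_account: str   # must exist; may belong to another user
        amount_cents: int     # strictly positive; reject if balance is short
        idempotency_key: str  # retrying with the same key is a no-op

Compared with a paragraph of English, something like this is terse, diffs cleanly, and admits one canonical phrasing, which is what the specs -> code -> specs loop needs to stay stable.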

saejox•52m ago
> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?

True in the long run. Like a car with high acceleration but a low top speed.

AI makes you start fast, but you regret it later because you don't have the top speed.

net01•46m ago
This is shown in figure 5 of the paper. https://arxiv.org/pdf/2507.09089
isaacremuant•42m ago
People repeating articles or papers. I know myself. I know from my own experiences what the good and bad of practice A or database B is. I don't need to read a conclusion by some Muppet.

Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional and don't just chase mirrors but be practical.

The rest is fluff. You know yourself.

demirbey05•51m ago
I started fully coding with Claude Code. It's not just vibe coding, but rather AI-assisted coding. I've noticed a considerable decrease in my understanding of the whole codebase, even though I'm the only one who has worked on this codebase for two years. I'm struggling to answer my colleagues' questions.

I'm not arguing that we should drop AI, but we should really measure its effects and take action accordingly. There's more to this than just getting more productivity.

numbers_guy•49m ago
This is the chief reason I don't use integrations. I just use chat, because I want to physically understand and insert the code myself. Otherwise you end up with the code outpacing your understanding of it.
pmg101•37m ago
Yes. I'm happy to have a sometimes-wrong expert to hand. Sometimes it provides just what I need; sometimes, as with a human (who is also fallible), it helps to spur my own thinking along, clarify, converge on a solution, think laterally, or provide other productivity-boosting effects.
apercu•19m ago
I wrote a couple of Python scripts this week to help me with a MIDI integration project (3 devices with different cable types) and for quick debugging if something fails (yes, I know there are tools out there that do this, but I like learning).

I could have used an LLM to assist, but then I wouldn't have learned much.

But I did use an LLM to make a management wrapper to present a menu of options (cli right now) and call the scripts. That probably saved me an hour, easily.

That’s my comfort level for anything even remotely “complicated”.

ionwake•11m ago
I keep wanting to go back to using Claude Code, but I get worried about this issue. How best to use it to complement you, without it rewriting everything behind the scenes? What's the best protocol? Constant commit requests and reviews?
ur-whale•47m ago
I do agree with many points in the article, but not about the last part, namely that coding with AI assist makes you slower.

Personal experience (data point count = 1), as a somewhat seasoned dev (>30 yrs of coding): it makes me WAY faster. I confess to not reading the code produced at each iteration other than skimming it for obvious architectural code smells, but I do read the final version line by line and make a few changes until I'm happy.

Long story short: things that would take me a week to put together now take a couple of hours. The vast bulk of the time saved comes from not having to identify the libraries I need, and not having to rummage through API documentation.

abdibrokhim•46m ago
I don't agree with your opinions, @realGeorgeHotz.

If you know the fundamentals really well, AI coding actually speeds up the development process.

source to the article: https://geohot.github.io//blog/jekyll/update/2025/09/12/ai-c...

are you planning to stream it? @ThePrimeagen

X.com: https://x.com/abdibrokhim/status/1966815369441542381

bdcravens•45m ago
I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.

I'm not.

I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.

I liken AI development to a developer somewhere between junior and mid-level: someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for thereby stunting the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)

haute_cuisine•39m ago
Would love to see a project you built with the help of AI. Can you share any links?
bdcravens•27m ago
Most of my work is for my employer, but the bigger point is that you wouldn't be able to tell my "AI work" from my other work, because I primarily use it for the boring stuff that is labor-intensive while I work on the actual business cases. (Most of my work doesn't fall under the category of "web application", but rather backend and background-processing-intensive work that just happens to have an HTML front-end.)
onion2k•24m ago
> I love AI for speed running through all the boring stuff and getting to the good parts.

In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

Companies need to realise that AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.

That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.

pydry•18m ago
>AI changes the job to be a constant struggle with hard problems

I find this hilarious. From what I've seen watching people do it, it changes the job from deep thought and figuring out a good design to pulling a lever on a slot machine and hoping something good pops out.

The studies that show diminished critical thinking have matched what I saw anecdotally pairing with people who vibe coded. It replaced deep critical thinking with a kind of faith-based gambler's mentality ("maybe if I tell it to think really hard it'll do it right next time...").

The only times I've seen a notable productivity improvement is when it was something non-novel where it didn't particularly matter if what popped out was shit, e.g. a proof of concept, an ad hoc app, something that would naturally either work or fail obviously, etc. The buzz people get from these gamblers' highs when it works seems to make them happier than if they didn't use it at all, though.

bdcravens•6m ago
Which was my original point. Not that the outcome is shit. So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated. Why shouldn't that work get cranked out in seconds instead of hours? Then we can do the actual work we're paid to do.

To pair this with the comment you're responding to, the decline in critical thinking is probably a sign that there are many who aren't as senior as their paychecks suggest. AI will likely let us differentiate between who the architects/artisans are and who the assembly-line workers are. Like I said, that's not a new problem; AI just lays that truth bare. That will have an effect generation over generation, but that's been the story of progress in pretty much every industry since time immemorial.

bdcravens•14m ago
> In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

The issue of senior-juniors has always been a problem; AI simply means they're losing their hiding spots.

raincole•12m ago
> AI changes the job to be a constant struggle with hard problems.

Very true. I think AI (especially Claude Code) forced me to actually think about the problem at hand before implementing the solution. And, more importantly, to write down my thoughts before they flee my feeble mind. A discipline I wish I'd had before.

wwweston•20m ago
What’s the tooling you’re using, and the workflow you find yourself drawn to that boosts productivity?
bdcravens•4m ago
I've used many different ones and find the results pretty similar: Copilot in VS Code, ChatGPT stand-alone, Warp.dev's baked-in tools, etc. Often it's a matter of what kind of work I'm doing, since it's rarely single-mode.
raincole•44m ago
> It’s why the world wasted $10B+ on self driving car companies that obviously made no sense.

Obviously... in what way? I feel the anti-ai pattern is clear.

Self-driving cars don't work in my city so the whole concept is a hoax. LLMs don't code my proprietary language so it's a bubble.

> From this study (https://arxiv.org/abs/2507.09089)

I can tell this is going to be the most misquoted study in blogs and pop-sci books after the 10,000-hour mastery study. And it's just a preprint!

vmg12•39m ago
I think this gets to a fundamental problem with the way the AI labs have been selling and hyping AI. People keep on saying that the AI is actually thinking and it's not just pattern matching. Well, as someone that uses AI tools and develops AI tools, my tools are much more useful when I treat the AI as a pattern matching next-token predictor than an actual intelligence. If I accidentally slip too many details into the context, all of a sudden the AI fails to generalize. That sounds like pattern matching and next token prediction to me.

> This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”

* I can tell Claude Code to crank out a basic CRUD API and it will do so in a minute, saving me an hour or so.

* I need an implementation of an algorithm that has been coded a million times on GitHub; I ask the AI to do it and it cranks out a correct, working implementation.

If I only use the AI in its wheelhouse it works very well, otherwise it sucks.

KoolKat23•34m ago
I think this comes down to levels of intelligence. Not knowledge, I mean intelligence. We often underestimate the amount of thinking/reasoning that goes into a certain task. Sometimes the AI can surprise you and do something very thoughtful; this often feels like magic.
pityJuke•34m ago
> It’s why the world wasted $10B+ on self driving car companies that obviously made no sense. There’s a much bigger market for truths that pump bags vs truths that don’t.

Did geohot not found one of these?

eviluncle•25m ago
Yes. He mentions in passing that people will accuse him of hating on it because he didn't profit from it. I think his point of view is that his company's attempt was smaller-scale and not part of the $10B+ waste?

In any case, I don't fully understand what he's trying to say other than negating the hype (which I generally agree with), while not offering any alternative thoughts of his own other than: we have bad tools and programming languages. (Why? How are they bad? What needs to change for them to be good?)

sMarsIntruder•32m ago
I stopped reading at this point:

> It’s why the world wasted $10B+ on self driving car companies that obviously made no sense. There’s a much bigger market for truths that pump bags vs truths that don’t.

This reeks of bias: dismissing massive investments as ‘obvious’ nonsense while hyping his own tinygrad as the ‘truth’ in AI coding.

The author is allowed to claim ‘most people do not care to find the truth’, but it’s hypocritical when the post ignores counterpoints, like PyTorch’s dominance in efficient coding benchmarks.

The author doesn’t seem to care about finding the full truth either, just the version that pumps his bag.

iammjm•28m ago
Ok boomer. It's silly to read such generalizations. AI is a tool, and like every other tool it needs the right job and the right user to be useful and productive.
joefourier•28m ago
Vibe coding large projects isn’t feasible yet, but as a developer, here’s how I use AI to great effect, to the point where losing the tool would greatly decrease my productivity:

- Autocomplete in Cursor. People think of AI agents first when they talk about AI coding, but LLM-powered autocomplete is a huge productivity boost. It merges seamlessly with your existing workflow, prompting is just writing comments, it can edit multiple lines at once or redirect you to the appropriate part of the codebase, and if the output isn’t what you need, you don’t waste much time: you can just ignore it and write the code as you usually do.

- Generating coding examples from documentation. Hallucination is basically a non-problem with Gemini Pro 2.5 especially if you give it the right context. This gets me up to speed on a new library or framework very quickly. Basically a stack overflow replacement.

- Debugging. Not always guaranteed to work, but when I’m stuck at a problem for too long, it can provide a solution, or give me a fresh new perspective.

- Self-contained scripts. It’s ideal for this, like making package installers, CMake configurations, data processing, serverless microservices, etc.

- Understanding and brainstorming new solutions.

- Vibe coding parts of the codebase that don’t need deep integration, e.g. creating a web component with X and Y features, a C++ function with a well-defined purpose, or a simple file browser. I do wonder if a functional programming paradigm would be better when working with LLMs, since by avoiding side effects you can work around their weaknesses with large codebases; a sketch of the idea follows.
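A minimal sketch of that last point (my illustration, assuming nothing about any particular codebase): a pure function's contract is fully visible at the call site, while a stateful equivalent forces the model, or a human reader, to track hidden context:

    # Pure version: everything the function depends on is in its
    # signature, so generated calls can be checked locally.
    def apply_discount(price_cents: int, percent: float) -> int:
        return round(price_cents * (1 - percent / 100))

    # Stateful version: correctness depends on mutation order elsewhere,
    # which is what LLMs tend to lose track of in large codebases.
    class Cart:
        def __init__(self) -> None:
            self.total_cents = 0
            self.discount_percent = 0.0

        def apply_discount(self) -> None:
            self.total_cents = round(
                self.total_cents * (1 - self.discount_percent / 100))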

giveita•28m ago
I have a boring opinion. A cold take? Served straight from the freezer.

He is right, however AI is still darn useful. He hints at why: patterns.

Writing a test suite for a new class when an existing one is in place is a breeze. It can even come up with tests you wouldn't have thought of or would have been too time-pressed to check.

It also applies to non-test code too. If you have the structure it can knock a new one out.

You could have some Lisp contraption that DRYs all the WETs so there is zero boilerplate. But in reality we are not crafting these perfect codebases; on the whole, in our jobs, we write readable, low-magic, boilerplatey code.

piker•24m ago
Pretty much nailed it. Once you’re at about 40k LOC you can just turn off the autocomplete features and use Claude or GPT to evaluate specific high-level issues. My sense is 40k LOC is the point at which the suggestions are offset by the rabbit holes they sometimes send you down and, more importantly, by temporarily obscuring from you the complexity of the thing you’re building.
huevosabio•21m ago
There is some truth to the AI coding claims.

But what's with the self-driving hate? I take Waymos on a regular basis, and he is staking his credibility on the claim that they are not a thing. It makes him sound bitter more than insightful.

apercu•13m ago
I think some of the “hate” is the hype. We’re all tired of companies announcing groundbreaking tech that isn’t readily available a decade later.

I don’t live in California (like most of the population of the planet) - Toronto for 18 years and now the American side of the Great Lakes.

Ice storms, snow, sleet, cold weather 5-6 months out of the year. Batteries suck in the cold, sensors fail or under-perform. Hell, door handles and windows struggle in this weather.

Waymo is not a thing in NY or Chicago or Minneapolis or Philadelphia (I could go on).

faangguyindia•17m ago
AI coding is working really well for us.

My teammate shared the three-phase workflow we use on our team to deliver projects at a rapid pace.

It's shared on the ClaudeCode subreddit: https://www.reddit.com/r/ClaudeCode/s/iy058fH4sZ

I've been using it for months with great success

blinkingled•14m ago
I have been working on ways to make AI a net positive in my professional life, as opposed to yet another thing I have to work around and carry the cognitive load of. Some notes so far on getting great benefits out of it on a couple of projects:

* Getting good results from AI forced me to think through and think clearly - up front and even harder.

* AI almost forces me to structure and break down my thoughts into smaller more manageable chunks - which is a good thing. (You can't just throw a giant project at it - it gets really far off from what you want if you do that.)

* I have to make it a habit to read the code it has added, so that I understand it and can point out improvements or, rarely, fixes (Claude).

* Everyone has what they consider the uninteresting parts of a project, which they have to put effort into so the bigger project succeeds. AI really helps with those mundane, cog-in-the-wheel things; it not only speeds things up, it gives me more momentum/energy to work on the parts that I think are important.

* It's really bad at reusability. Most humans will automatically know: oh, I have a function I wrote to do this thing in this project which I can use in that project; at some point they will turn it into a library. With AI, that amount of context is a problem. I found that filling in for the AI here is just as much work, and I'm best off doing it myself upfront before feeding it to the AI; then I have a hope of getting it to understand the dependency structure and what does what.

* Domain specific knowledge - I deal with Google Cloud a lot and use Gemini for understanding what features exist in some GCP product and how I can use it to solve a problem - works amazingly well to save me time. At the least optioning the solution is a big part of work it makes easier.

* Your Git habits have to be top-notch so you can untangle any mess the AI creates. You reach a point where you have iterated over a feature addition using AI, it's a mess, and you know it went off the rails after some point. If you made only one or two commits, now you have to unwind everything and hope the good parts return, or try to get the AI to deal with it, which can be risky.

zkmon•13m ago
Of course there is some truth in what you say. But business is desperate for new tech that can redefine the order (who is big and who is small). There are floating billions chasing short-term returns. Fund managers will be fired if they are not jumping on the new fad in town. CIOs and CEOs will be fired if they are not jumping on AI. It's just a nuclear arms race: good for no one, but the other guy is on it, so you need to be too.

Think about this: before there were cars on roads, people were just as happy. Cars came, cities were redesigned for cars with buildings miles apart, and commuting miles became the new norm. You can no longer say cars are useless, because the context around them has changed to make cars a basic need.

AI does the same thing. It changes the context in which we work. Everyone expects you to use AI (and cars). It becomes a basic need, though a forced one.

To go further, hardly anything produced by science or technology is a basic need for humans. The context got twisted, making them basic needs. Tech solutions create the problems they claim to solve; the problem did not exist before the solution came around. That's the core driving force of business.

CompoundEyes•11m ago
It takes time. There are cycles of “Oh wow!”, “Oh wait...”, “What if?” and “Aha!” Each of those has made me more effective and resulted in reliable benefits, with less zigzagging back and forth.
lukaslalinsky•10m ago
AI coding is the one thing that got me back into programming. I got to the point in life where my ability to focus is declining, and I prefer to spend the remaining energy elsewhere. I had kind of given up on programming, just doing architecture and occasionally very small programming tasks. It all changed when I discovered Claude Code and saw that the way it works is kind of how I work. I also use a lot of grep to find my way through a new codebase, I also debug stuff by adding logs to see the context, and I also rely on automated tests to tell me something is broken. I'm still very good at reading code, I'm good at architecture, and with these tools I feel I can safely delegate the boring bits of writing code and debugging trivial things to AI. Yes, it's slower than if I focused on the task myself, but the point is that I'd not be able to focus on the task myself.
manx•9m ago
This pre-AI article makes a very similar argument: https://mortoray.com/programming-wont-be-automated-or-it-alr...

Once we realize that what we actually want is turning specifications into software, I think English will become the base for a new, high-level specification language.

mikewarot•6m ago
I've got a cognitive hammer that I tend to over-use, and that is seeing the world through the lens of a ham radio operator: impedance matching. It involves getting the voltages and currents right to make a circuit work and transfer power efficiently, but at radio frequencies there are two dimensions' worth of voltages and currents instead of one. It's trickier as a result, but most of the time a single value, VSWR, is sufficient to tell how well things are matched.

That single number is adequate to know whether a transmitter will work, but making sure it works across a wide range of frequencies yields at least a third dimension. As time progresses, if you actually work with those additional dimensions, it slowly sinks in what works, and how, and what had previously seemed like magic becomes engineering. For example, vacuum-tube transmitters have higher resistances than almost any antenna; transformers and coupling elements that shift power back and forth between the two dimensions allow optimum transfer without losses, at the cost of complexity. Semiconductor-based transmitters tend to have the opposite problem: their impedances are lower, so different patterns work for them. But most people still just see it as "antenna matching" and focus on the single number, ignoring the complexity.

{{Wow... this is a book, not an answer on HN... it'll get shorter after a few edits, I hope}}

I've done programming on and off through four decades of work. Most of my contemplation is as an enthusiast rather than a professional. As for compilers and the broader areas of computer science I haven't formally studied, it seems to me that LLMs, especially the latest "agentic" versions, will let me explore things far more easily than I might otherwise have done. LLMs have helped me match my own thoughts across a much wider cognitive impedance landscape. (There's that analogy/hammer in use...)

Compilers are an impedance-matching mechanism: allowing a higher level of abstraction gives flexibility. One of the ideas I've had in the past for better interaction between people and compilers is to allow compilers that also work backwards.[1] I'm beginning to suspect that with LLMs I might actually be able to attempt to build this system; it has always seemed out of reach because of the levels of complexity involved.

I have several other ideas that might warrant a new attempt, now that I'm out of the job market, and have the required free time and attention.

[1] https://wiki.c2.com/?BidirectionalCompiler

matt3D•3m ago
This is a more extreme example of the general Hacker News groupthink about AI.

Geohot is easily a 99.999th-percentile developer, and yet he can’t seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.

It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.

I have come across many developers who behave like the AI: they can’t explain codebases they’ve built and can’t maintain consistency.

It’s like an aerospace engineer not believing that the person who designs the toys in a Kinder egg doesn’t know how fluid sims work.
