
Google is dead. Where do we go now?

https://www.circusscientist.com/2025/12/29/google-is-dead-where-do-we-go-now/
313•tomjuggler•2h ago•265 comments

ManusAI Joins Meta

https://manus.im/blog/manus-joins-meta-for-next-era-of-innovation
37•gniting•36m ago•19 comments

Stop Claude Code from forgetting everything

https://github.com/mutable-state-inc/ensue-skill
7•austinbaggio•29m ago•4 comments

Flame Graphs vs Tree Maps vs Sunburst (2017)

https://www.brendangregg.com/blog/2017-02-06/flamegraphs-vs-treemaps-vs-sunburst.html
75•gudzpoz•2d ago•17 comments

Static Allocation with Zig

https://nickmonad.blog/2025/static-allocation-with-zig-kv/
144•todsacerdoti•6h ago•74 comments

List of domains censored by German ISPs

https://cuiiliste.de/domains
237•elcapitan•4h ago•91 comments

All Delisted Steam Games

https://delistedgames.com/all-delisted-steam-games/
143•Bondi_Blue•3h ago•54 comments

Left Behind: Futurist Fetishists, Prepping and the Abandonment of Earth (2019)

https://www.boundary2.org/2019/08/sarah-t-roberts-and-mel-hogan-left-behind-futurist-fetishists-p...
23•naves•3h ago•16 comments

A production bug that made me care about undefined behavior

https://gaultier.github.io/blog/the_production_bug_that_made_me_care_about_undefined_behavior.html
76•birdculture•4h ago•50 comments

AI is forcing us to write good code

https://bits.logic.inc/p/ai-is-forcing-us-to-write-good-code
42•sgk284•3h ago•26 comments

Which Humans? (2023)

https://osf.io/preprints/psyarxiv/5b26t_v1
24•surprisetalk•3h ago•14 comments

When someone says they hate your product

https://www.getflack.com/p/responding-to-negative-feedback
50•jger15•3h ago•51 comments

Show HN: Aroma: Every TCP Proxy Is Detectable with RTT Fingerprinting

https://github.com/Sakura-sx/Aroma
48•Sakura-sx•4d ago•25 comments

Obelisk 0.32: Cancellation, WebAPI, Postgres

https://obeli.sk/blog/announcing-obelisk-0-32/
9•tomasol•3h ago•1 comment

AI Employees Don't Pay Taxes

https://alec.is/posts/ai-employees-dont-pay-taxes/
6•arm32•31m ago•1 comment

Show HN: Superset – Terminal to run 10 parallel coding agents

https://superset.sh/
50•avipeltz•6d ago•43 comments

GOG is getting acquired by its original co-founder

https://www.gog.com/blog/gog-is-getting-acquired-by-its-original-co-founder-what-it-means-for-you/
470•haunter•6h ago•266 comments

Libgodc: Write Go Programs for Sega Dreamcast

https://github.com/drpaneas/libgodc
188•drpaneas•9h ago•45 comments

Linux DAW: Help Linux musicians to quickly and easily find the tools they need

https://linuxdaw.org/
161•prmoustache•10h ago•79 comments

Kidnapped by Deutsche Bahn

https://www.theocharis.dev/blog/kidnapped-by-deutsche-bahn/
868•JeremyTheo•10h ago•806 comments

Pandas with Rows (2022)

https://datapythonista.me/blog/pandas-with-hundreds-of-millions-of-rows
6•fud101•3d ago•1 comment

High-performance C++ hash table using grouped SIMD metadata scanning

https://github.com/Cranot/grouped-simd-hashtable
36•rurban•5d ago•13 comments

Static Allocation for Compilers

https://matklad.github.io/2025/12/23/static-allocation-compilers.html
16•enz•6d ago•7 comments

Nvidia takes $5B stake in Intel under September agreement

https://www.reuters.com/legal/transactional/nvidia-takes-5-billion-stake-intel-under-september-ag...
167•taubek•5h ago•64 comments

You can't design software you don't work on

https://www.seangoedecke.com/you-cant-design-software-you-dont-work-on/
216•saikatsg•15h ago•76 comments

Binance's Trust Wallet extension hacked; users lose $7M

https://www.web3isgoinggreat.com/?id=trust-wallet-hack
71•ilamont•2h ago•7 comments

Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)

https://www.getevidex.com
28•amber_raza•5h ago•14 comments

Karpathy on Programming: "I've never felt this much behind"

https://twitter.com/karpathy/status/2004607146781278521
222•rishabhaiover•3d ago•207 comments

Why is calling my asm function from Rust slower than calling it from C?

https://ohadravid.github.io/posts/2025-12-rav1d-faster-asm/
90•gavide•2d ago•30 comments

Meta's ads tools started switching out top-performing ads with AI-generated ones

https://www.businessinsider.com/meta-ai-generating-bizarre-ads-advantage-plus-2025-10
106•zdw•3h ago•60 comments

The future of software development is software developers

https://codemanship.wordpress.com/2025/11/25/the-future-of-software-development-is-software-developers/
79•cdrnsf•3h ago

Comments

simonw•2h ago
I nodded furiously at this bit:

> The hard part of computer programming isn't expressing what we want the machine to do in code. The hard part is turning human thinking -- with all its wooliness and ambiguity and contradictions -- into computational thinking that is logically precise and unambiguous, and that can then be expressed formally in the syntax of a programming language.

> That was the hard part when programmers were punching holes in cards. It was the hard part when they were typing COBOL code. It was the hard part when they were bringing Visual Basic GUIs to life (presumably to track the killer's IP address). And it's the hard part when they're prompting language models to predict plausible-looking Python.

> The hard part has always been – and likely will continue to be for many years to come – knowing exactly what to ask for.

I don't agree with this:

> To folks who say this technology isn’t going anywhere, I would remind them of just how expensive these models are to build and what massive losses they’re incurring. Yes, you could carry on using your local instance of some small model distilled from a hyper-scale model trained today. But as the years roll by, you may find not being able to move on from the programming language and library versions it was trained on a tad constraining.

Some of the best Chinese models (which are genuinely competitive with the frontier models from OpenAI / Anthropic / Gemini) claim to have been trained for single-digit millions of dollars. I'm not at all worried that the bubble will burst and new models will stop being trained and the existing ones will lose their utility - I think what we have now is a permanent baseline for what will be available in the future.

thisoneisreal•1h ago
The first part is surely true if you change it to "the hardEST part" (I'm a huge believer in "Programming as Theory Building"), but there are plenty of other hard, or just downright tedious and expensive, aspects of software development. I'm still not fully bought in on some of the AI stuff—I haven't had a chance to really apply an agentic flow to anything professional, I pretty much always get errors even when one-shotting, and who knows if even the productive stuff is big-picture economical—but I've already done some professional "mini projects" that just would not have gotten done without an AI. A simple example: I converted a C# UI to Java Swing in less than a day; a few thousand lines of code, a simple utility but important to my current project for <reasons>. Assuming tasks like these can be done economically over time, I don't see any reason why small- and medium-difficulty programming tasks can't be achieved efficiently with these tools.
underdeserver•47m ago
Aren't they also losing money on the marginal inference job?
nrhrjrjrjtntbt•36m ago
Hardest part of programming is knowing wtf all the existing code does and why.
doug_durham•3m ago
And that is the superpower of LLMs. In my experience LLMs are better at reading code than writing it. Have it annotate some code for you.
cmrdporcupine•36m ago
Indeed, while DeepSeek 3.2 and GLM 4.7 are not Opus 4.5 quality, they are close enough that I could _get by_: they're not that far off, about where I was with Sonnet 3.5 or Sonnet 4 a few months ago.

I'm not convinced DeepSeek is making money hosting these, but it's not that far off from it I suspect. They could triple their prices and still be cheaper than Anthropic is now.

boogieknite•20m ago
Maybe not the MOST valuable part of prompting an LLM during a task, but one of them: defining the exact problem in precise language. I don't just blindly turn to an LLM without understanding the problem first, but I do find Claude is better than a cardboard cutout of a dog.
mohsen1•2h ago
I really really want this to be true. I want to be relevant. I don’t know what to do if all those predictions are true and there is no need (or very little need) for programmers anymore.

But something tells me “this time is different” is different this time for real.

Coding AIs design software better than me, review code better than me, find hard-to-find bugs better than me, plan long-running projects better than me, make decisions based on research, literature, and also the state of our projects better than me. I’m basically just the conductor of all those processes.

Oh, and don't ask about coding. If you use AI for the tasks above, you end up with very well-defined coding tasks, which an AI would ace.

I’m still hired, but I feel like I’m doing the work of an entire org that used to need twenty engineers.

From where I’m standing, it’s scary.

khalic•1h ago
I feel you, it's scary. But the possibilities we're presented with are incredible. I'm revisiting all these projects that I put aside because they were "too big" or "too much for a machine". It's quite exciting
belter•1h ago
>> From where I’m standing, it’s scary.

You are being fooled by randomness [1]

Not because the models are random, but because you are mistaking a massive combinatorial search over seen patterns for genuine reasoning. Taleb's point was about confusing luck for skill. Don't confuse interpolation with understanding.

You can read a Rust book after years of Java, then go build software for an industry that did not exist when you started. Ask any LLM to write a driver for hardware that shipped last month, or model a regulatory framework that just passed... It will confidently hallucinate. You will figure it out. That is the difference between pattern matching and understanding.

[1] https://en.wikipedia.org/wiki/Fooled_by_Randomness

joefourier•1h ago
Have you used an LLM specifically trained for tool calling, in Claude Code, Cursor or Aider?

They're capable of looking up documentation and correcting their errors by compiling and running tests, and when coupled with a linter, hallucinations are a non-issue.

I don’t really think it’s possible to dismiss a model that’s been trained with reinforcement learning for both reasoning and tool usage as only doing pattern matching. They’re not at all the same beasts as the old style of LLMs based purely on next token prediction of massive scrapes of web data (with some fine tuning on Q&A pairs and RLHF to pick the best answers).
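
A minimal sketch of the loop being described, in Python (ask_model here is a hypothetical stand-in for whatever LLM API you use; the grounding mechanism, not the model call, is the point):

    import subprocess

    def ask_model(prompt: str) -> str:
        """Hypothetical stand-in for a tool-calling LLM (Claude, GPT, etc.)."""
        raise NotImplementedError

    def agentic_fix(task: str, max_rounds: int = 5) -> str:
        prompt = task
        for _ in range(max_rounds):
            candidate = ask_model(prompt)
            with open("candidate.py", "w") as f:
                f.write(candidate)
            # Ground the model against reality: a hallucinated API or a
            # broken fix fails the test suite here instead of shipping.
            result = subprocess.run(
                ["python", "-m", "pytest", "-x"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return candidate
            # Feed the concrete failure back rather than trusting the model.
            prompt = task + "\n\nYour last attempt failed:\n" + result.stdout[-2000:]
        raise RuntimeError("no passing candidate within budget")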

treespace8•1h ago
I'm using Claude code to help me learn Godot game programming.

One interesting thing is that Claude will not tell me if I'm following the wrong path. It will just make the requested change to the best of its ability.

For example, in a Tower Defence game I'm making, I wanted to keep turret position state in an AStarGrid2D. Claude produced code to do this, but it became harder and harder to follow as I went on. It was only after watching more tutorials that I figured out I was asking for the wrong thing. (TileMapLayer is a much better choice.)

LLMs still suffer from Garbage in Garbage out.

memoriuaysj•1h ago
before coding I just ask the model: "what are the best practices in this industry to solve this problem? what tools/libraries/approaches do people use?"

after coding I ask it: "review the code; do you see anything for which there are common libraries implementing it? are there ways to make it more idiomatic?"

you can also ask it: "this is an idea for how to solve it that somebody told me; what do you think about it, are there better ways?"

skydhash•59m ago
Do you also light candles and chant?
manmal•32m ago
Both the before and after are better done manually. What you are describing is fine for the heck of it (I've vibe-coded a Whisper-related Rust port today without having any actual Rust skills), but I'd never use fully vibed software in production. That's irresponsible in multiple ways.
hansmayer•21m ago
> before coding I just ask the model "what are the best practices in this industry to solve this problem? what tools/libraries/approaches people use?

Just for the fun of it, and so you lose your "virginity" so to speak: next time the magic machine gives you an answer about "what it thinks", tell it it's wrong in strict language and scold it for misleading you. Tell it to give you the "real" best practices instead of what it spat out. Then sit back and marvel at the machine saying you were right and that it had misled you, producing a completely, somewhat, or slightly different answer (you never know what you get on the slot machine).

jennyholzer3•19m ago
don't use LLMs for Godot game programming.

edit: Major engine changes have occurred after the models were trained, so you will often be given code that refers to nonexistent constants and functions and which is not aware of useful new features.

belter•53m ago
Ask a model to

"Write a chess engine where pawns move backward and kings can jump like nights"

It will keep slipping back into real chess rules. It learned chess; it did not understand the concept of "rules".

Or

Ask it to reverse a made-up word, like

"Reverse the string 'glorbix'"

It will get it wrong on the first try. You would not fail.

Or even better ask it to...

"Use the dxastgraphx library to build a DAG scheduler."

dxastgraphx is a non-existent library...

Marvel at the results...tried in both Claude and ChatGPT....
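
For contrast, the deterministic versions of two of these checks are one-liners. A minimal Python sketch (standard library only, and assuming no package named dxastgraphx happens to be installed locally):

    import importlib.util

    # The reversal a trivial program gets right every time:
    print("glorbix"[::-1])  # -> xibrolg

    # And the library check: find_spec() returns None for a module that
    # isn't importable, so the made-up dxastgraphx shows up as missing
    # instead of being confidently "used".
    print(importlib.util.find_spec("dxastgraphx"))  # -> None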

manmal•30m ago
I've just tried the dxastgraphx one in pi with Opus 4.5. This was its response:

    I couldn't find a library called dxastgraphx in either pip (Python) or npm (JavaScript) package registries. This library doesn't appear to exist.

    Did you perhaps mean one of these popular DAG/graph libraries?

    Python:
    - networkx - comprehensive graph library with DAG support
    - graphlib - Python standard library (3.9+) with TopologicalSorter
    - dask - parallel computing with DAG task scheduling

    JavaScript/TypeScript:
    - graphlib - graph data structures
    - dagre - DAG layout

    Would you like me to build a DAG scheduler using one of these existing libraries, or would you like me to implement one from scratch? Let me know which language and approach you prefer.
criticalfault•29m ago
Just tried to reverse the string you provided using Gemini; it worked fine on the first try.
eithed•7m ago
In gemini:

> The reverse of the string 'gssssyeekxxuoaqaaaiooo9aaa7zjx' is:

xjz7aaa9oooiiaaaqauxxkyeessssg

> Youve made a mistake

```
I see where I tripped up! Let's re-examine that long string carefully.

The original string you gave me was: `gssssyeekxxuoaqaaaiooo9aaa7zjx`

Let's look at the end of that string: `...aaa7zjx`. When we reverse it, it should start with: `xjz7aaa...`

Here is the precise character-by-character reversal:

*xjz7aaa9oooiiaaaqauxxkyeessssg*

Wait—I see it now. In my previous response, I included an extra *'i'* and missed one *'o'* in the middle section. Let's do a strict count to get it perfect:

* *Original:* `g` `ssss` `y` `ee` `k` `xx` `u` `o` `a` `q` `aaa` *`iooo`* `9` `aaa` `7` `z` `j` `x` * *Correct Reverse:* `x` `j` `z` `7` `aaa` `9` *`oooi`* `aaa` `q` `a` `o` `u` `xx` `k` `ee` `y` `ssss` `g`

The correct reversed string is: *xjz7aaa9oooiiaaaqauoxxkyeessssg*

Thank you for catching that. It’s easy for the eyes to glaze over when there are so many repeating vowels!

Since I've stumbled a bit on these, would you like to give me one more string to see if I can get it right on the first try?
```

So even after correcting itself, it didn't produce the correct string.
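
For the record, the deterministic answer differs from both of Gemini's attempts. A quick Python check:

    s = "gssssyeekxxuoaqaaaiooo9aaa7zjx"
    print(s[::-1])  # -> xjz7aaa9oooiaaaqaouxxkeeyssssg
    # Both of Gemini's answers above insert an extra 'i' and scramble a
    # few neighbouring characters; the slice never does.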

Verdex•1h ago
I've worked with a lot of interns, fresh outs from college, overseas lowest bidders, and mediocre engineers who gave up years ago. All over the course of a ~20 year career.

Not once in all that time has anyone PRed and merged my completely unrelated and unfinished branch into main. Except a few weeks ago. By someone who was using the LLM to make PRs.

He didn't understand when I asked him about it and was baffled as to how it happened.

Really annoying, but I got significantly less concerned about the future of human software engineering after that.

doug_durham•5m ago
Why would you expect an LLM or even a human to succeed in these cases? “Write a piece of code for a specification that you can’t possibly know about?” That’s why you have to do context engineering, just like you’d provide a reference to a new document to an engineer writing code.
ravenstine•1h ago
Yeah, it makes me wonder whether I should start learning to be a carpenter or something. Those who either support AI or think "it's all bullshit" cite a lack of evidence for humans truly being replaced in the engineering process, but that's just the thing: the unprecedented levels of uncertainty make it very difficult to invest oneself in the present, intellectually and emotionally. With the current state of things, I don't think it's silly to wonder "what's the point" if another 5 years of this trajectory is going to mean not getting hired as a software dev again unless you have a PhD and want to work for an AI company.

What doesn't help is that the current state of AI adoption is heavily top-down. What I mean is that the buy-in is coming from the leadership class and the shareholder class, both of whom have the incentive to remove the necessary evil of human beings from their processes. Ironically, these classes are perhaps the least qualified to decide whether generative AI can replace swathes of their workforce without serious unforeseen consequences. To make matters worse, those consequences might be as distal as too many NEETs in the system such that no one can afford to buy their crap anymore; good luck getting anyone focused on making it to the next financial quarter to give a shit about that. And that's really all that matters at the end of the day: what leadership believes, whether or not they are in touch with reality.

btbuildem•1h ago
They do all those things you've mentioned more efficiently than most of us, but they fall woefully short as soon as novelty is required. Creativity is not in their repertoire. So if you're banging out the same type of thing over and over again, yes, they will make that work light and then scarce. But if you need to create something niche, something one-off, something new, they'll slip off the bleeding edge into the comfortable valley of the familiar at every step.

I choose to look at it as an opportunity to spend more time on the interesting problems, and to work at a higher level. We used to worry about pointers and memory allocation. Now we will worry less and less about how the code is written and more about the results it builds.

9dev•1h ago
I think your image of LLMs is a bit outdated. Claude Code with well-configured agents will get entirely novel stuff done pretty well, and that’s only going to get better over time.

I wouldn’t want to bet my career on that anyway.

skydhash•1h ago
> So if you're banging out the same type of thing over and over again, yes, they will make that work light and then scarce.

The same thing over and over again should be a SaaS, an internal tool, or a plugin. Computers are good at doing the same thing over and over again, and that's what we've been using them for.

> But if you need to create something niche, something one-off, something new, they'll slip off the bleeding edge into the comfortable valley of the familiar at every step.

Even if the high level description of a task may be similar to another, there's always something different in the implementation. A sports car and a sedan have roughly the same components, but they're not engineered the same.

> We used to worry about pointers and memory allocation.

Some still do. It's not in every case that you will have a system that handles allocations and a garbage collector. And even in those, you will see memory leaks.

> Now we will worry less and less about how the code is written and more about the result it built.

Wasn't that Dreamweaver?

keyle•1h ago
Take food for example. We don't eat food made by computers even though they're capable of making it from start to finish.

Sure, we eat carrots that were probably grown and harvested with the help of machines, but we are not eating dishes like protein bars all day every day.

Our food is still better enjoyed when made by a chef.

Software engineering will be the same. No one will want to use software made by a machine all day every day. There are differences in the execution and implementation.

No one will want to read books entirely dreamed up by AI. Subtle parts of the books make us feel something only a human could have put right there right then.

No one will want to see movies entirely made by AI.

The list goes on.

But you might say "software is different". Yes but no: in an abundance of choice, when there is a ton of choice for every type of software due to the productivity increase, choice will become more prominent and human-driven software will win.

Even today we pick the best terminal emulation software because we notice the difference between exquisitely crafted and bloated cruft.

doug_durham•9m ago
You should look at other engineering disciplines. How many highway overpasses have unique "chef quality" designs? Very few. Most engineering is commodity replication of existing designs. The exact same thing applies to software engineering. Most of us engineers are replicating designs that came earlier. LLMs are good at generating the rote designs that make up the bulk of software by volume. Who benefits from an artisanal REST interface? The best practices were codified over a decade ago.
63stack•1h ago
This reads like shilling/advertisement. Coding AIs struggle with anything remotely complex, make up crap and present it as research, write tests that are just "return true", and won't ever question a decision you make.

Those twenty engineers must not have produced much.

aspenmartin•38m ago
No, it doesn't read like shilling or advertisement. It's tiring hearing people continually dismiss coding agents as if they have not massively improved and are not driving real value despite their limitations, and as if they are not just getting started. I've done things with Claude I never thought possible for myself, and I've done things where Claude made the whole effort take twice as long and 3x more of my time. It's not like people are ignoring the limitations; it's that people can see how powerful they already are and how much more headroom there is even with existing paradigms, not to mention the compute scaling happening in '26-'27 and the idea pipeline from the massive hoarding of talent.
jayd16•35m ago
When prices go down or product velocity goes up we'll start believing in the new 20x developer. Until then, it doesn't align with most experiences and just reads like fiction.

You'll notice no one ever seems to talk about the products they're making 20x faster or cheaper.

hansmayer•28m ago
+1 - I wish at least one of these AI boosters would show us a real commercialised product they've built.
aspenmartin•17m ago
AI boosters? Like people are planted by Sam Altman like the way they hire crowds for political events or something? Hey! Maybe I’m AI! You’re absolutely right!

In seriousness: I'm sure there are projects that are heavily powered by Claude; I and a lot of other people I know use Claude almost exclusively to write code, and then leverage it as a tool when reviewing. Almost everyone I hear with this super negative, hostile attitude references some "promise" that has gone unfulfilled, but it's so silly: judge the product they are producing and maybe, just maybe, consider the rate of progress to _guess_ where things are heading.

doug_durham•16m ago
You've never read Simon Willison's blog? His repo is full of work that he's created with LLMs. He makes money off of them. There are plenty of examples; you just need to look.
aspenmartin•16m ago
Who is saying anything about 20x? Sorry did I miss something here?
jayd16•14m ago
> work of an entire org that used to need twenty engineers.

From the OP. If you think that's too much then we agree.

hansmayer•29m ago
> I’ve done things with Claude I never thought possible for myself to do,

That's the point, champ. They seem great to people when applied to some domain they are not competent in; that's because those people cannot evaluate the issues. So you've never programmed but can now scaffold a React application and a basic backend in a couple of hours? Good for you, but for the love of god have someone more experienced check it before you push to production. Once you apply them to any area where you have at least moderate competence, you will see all sorts of issues that you just cannot unsee. Security and performance are often problems, not to mention the quality of the code...

aspenmartin•21m ago
Seems fine, works, is fine, is better than if you had me go off and write it on my own. You realize you can check the results? You can use Claude to help you understand the changes as you read through them? I just don't get this weird "it makes mistakes and it's horrible if you understand the domain it is generating over" attitude. I mean, yes, definitely sometimes, and definitely not other times. What happens if I DON'T have someone more experienced to consult, or they ignore me because they are busy, or they're wrong because they are also imperfect and not focused? It's really hard to be convinced that this point of view is not just a knee-jerk reaction justified post hoc.
cmrdporcupine•17m ago
This is remarkably dismissive and comes across as arrogant. In reality, they help many people with expert skills get things done in domains they are competent in, without getting bogged down in tedium.

They need a heavy hand policing them to make sure they do the right thing. Garbage in, garbage out.

The smarter the hand of the person driving them, the better the output. You see a problem, you correct it. Or make them correct it. The stronger the foundation they're starting from, the better the production.

It's basically the opposite of what you're asserting here.

davnicwil•34m ago
I would say that while LLMs do sometimes improve productivity, I flatly cannot believe a claim (at least without direct demonstration or evidence) that one person is doing the work of 20 with them, in December 2025 at least.

I mean, from the off, people were claiming 10x, probably mostly because it's a nice round number, but those claims quickly fell out of the mainstream as people realised it's just not that big a multiplier in practice in the real world.

I don't think we're seeing this in the market, anywhere. Something like one engineer doing the job of 20: what you're talking about is basically whole departments at mid-sized companies compressing to one person. Think about that; it has implications for all the additional management staff on top of the 20 engineers too.

It'd be either a complete restructure and rethink of the way software orgs work, or we'd be seeing incredible, crazy deltas in the output of software companies this year, of the type that couldn't be ignored; they'd be impossible not to notice.

This is just plainly not happening. Look, if it happens, it happens, in '26, '27, '28 or '38. It'll be a cool and interesting new world if it does. But it's just... not happened or happening in '25.

jmogly•7m ago
I would say it varies from 0x to a modest 2x. It can help you write good code quickly, but I only spent about 20-30% of my time writing code anyway before AI. It definitely makes debugging and research tasks much easier as well. I would confidently say my job as a senior dev has gotten a lot easier and less stressful as a result of these tools.

One other thing I have seen, however, is the 0x case, where you have given too much control to the LLM: it codes both you and itself into Pan's labyrinth, and you end up having to take a weed whacker to the whole project or start from scratch.

to11mtm•14m ago
I'd be willing to give you access to the experiment I mentioned in a separate reply (I have a GitHub repo), to show the output you can get for a complex app buildout.

I'll admit it's not great (probably not even good), but it definitely has throughput despite my absolute lack of caring that much [0]. Once I get past a certain stage I am thinking of doing an A/B test where I take an earlier commit and try again while paying more attention... (But I at least want to get to where there is a full suite of UOW cases before I do that, for comparison's sake.)

> Those twenty engineers must not have produced much.

I've been considered a 'very fast' engineer at most shops (e.g. at multiple shops, stories assigned to me would have a <1 multiplier for points [1]).

20 is a bit bloated, unless we are talking about WITCH tier. I definitely can get done in 2-3 hours what could take me a day. I say it that way because at best it's 1-2 hours but other times it's longer; some folks remember the 'best' rather than the median.

[0] - It started as 'prompt only', although after a certain point I did start being more aggressive with personal edits.

[1] - IDK why they did it that way instead of capacity, OTOH that saved me when it came to being assigned Manual Testing stories...

heliumtera•1h ago
Stop freaking out. Seriously. You're afraid of something completely ridiculous.

It is certainly more eloquent than you regarding software architecture (which was a scam all along, but that's a conversation for another time). It will find SOME bugs better than you, that's a given.

Review code better than you? Seriously? What are you using, and what do you consider code review? Say I identify that one change broke production and you reviewed the latest commit. I am pinging you, and you'd better answer. OK, Claude broke production, now what? Can you begin to understand the difference between you and the generative technology? When you hop on the call, you will explain to me in great detail what you know about the system you built, and explain the decision making and changes over time. You'll tell me about what worked and what didn't. You will tell me about the risks, behavior and expectations. About where the code runs, its dependencies, users, usage patterns, load, CPU usage and memory footprint; you could probably tell what's happening from the metrics without even looking at logs. With Claude I get: you're absolutely right! You asked about what it WAS, but I told you about what it WASN'T! MY BAD.

Knowledge requires a soul to experience it, and this is why you're paid.

mywittyname•14m ago
We use CodeRabbit and it's better than practically any human I've worked with at a number of code review tasks, such as finding vulnerabilities, highlighting configuration issues, flagging bad practices, etc. It's not the greatest at "does this make sense here" type questions, but I'd be the one answering those questions anyway.

Yeah, maybe the people I've worked with suck at code reviews, but that's pretty normal.

Not to say your answer is wrong. I think the gist is accurate. But I think tooling will get better at answering exactly the kind of questions you bring up.

Also, someone has to be responsible. I don't think the industry can continue with this BS of "AI broke it." Our jobs might devolve into something more akin to an SDET role, plus writing the "last mile" of novel code the AI can't produce accurately.

anonymars•9m ago
> Review code better than you? Seriously?

Yes, seriously (not OP). Sometimes it's dumb as rocks, sometimes it's frighteningly astute.

I'm not sure at which point of the technology sigmoid curve we find ourselves (2007 iPhone or 2017 iPhone?), but you're doing yourself a disservice by being so dismissive.

dataviz1000•57m ago
I was a chef in Michelin-starred restaurants for 11 years. One of my favorite positions was washing dishes. The goal was always to keep the machine running on its 5-minute cycle. It was about getting the dishes into racks, rinsing them, and having them ready and waiting for the previous cycle to end—so you could push them into the machine immediately—then getting them dried and put away after the cycle, making sure the quality was there and no spot was missed. If the machine stopped, the goal was to get another batch into it, putting everything else on hold. Keeping the machine running was the only way to prevent dishes from piling up, which would end with the towers falling over and breaking plates. This work requires moving lightning fast with dexterity.

AI coding agents are analogous to the machine. My job is to get the prompts written, and to do quality control and housekeeping after it runs a cycle. Nonetheless, like all automation, humans are still needed... for now.

deadbabe•55m ago
Where the hell was all this fear when the push for open source everything got fully underway? When entire websites were being spawned and scaffolded with just a couple lines of code? Do we not remember all those impressive tech demos of developers doing massive complex thing with "just one line of code"? How did we not just write software for every kind of software problem that could exist by now?

How has free code, developed by humans, become more available than ever, and yet somehow we have had to employ more and more developers? Why didn't we trend toward fewer developers?

It just doesn't make sense. AI is nothing but a snippet generator, a static analyzer, a linter, a compiler, an LSP, a Google search, a copy-paste from Stack Overflow: all technologies we've had for a long time, all things developers used to have to go without at some point in history.

I don't have the answers.

scellus•53m ago
Perfect economic substitution in coding won't happen for a long time. Meanwhile, AI acts as an amplifier of the human, and vice versa. That the work will change is scary, but the change also opens up possibilities, many of them hard to imagine now.
Herring•41m ago
Maybe have your engineers pick up some product work. Clients do NOT want to talk to bots.
jayd16•37m ago
My experience with these tools is far and away nowhere close to this.

If you're really able to do the work of a 20 man org on your own, start a business.

to11mtm•26m ago
It's definitely scary in a way.

However, I'm still seeing a trend even in my org: better non-AI developers tend to be better at using AI to develop.

AI still forgets requirements.

I'm currently running an experiment where I try to get a design and then execute on an enterprise 'SaaS-replacement' application [0].

AI can spit forth a completely convincing-looking overall project plan [1] that turns out to have gaps when anyone, even the AI itself, tries to execute on it; this is where a proper, experienced developer can step in at the right points to help out.

IDK if that's the right way to venture into the brave new world, but I am at least doing my best to be at a forefront of how my org is using the tech.

[0] - I figured it was a good exercise for testing the limits of both my prompting skills and the AI's capability. I do not expect success.

foxygen•25m ago
I think I've been using AI wrong. I can't understand testimonies like this. Most times I try to use AI for a task, it is a shitshow, and I have to rewrite everything anyway.
doug_durham•19m ago
I don’t know about right/wrong. You need to use the tools that make you productive. I personally find that in my work there are dozens of little scripts or helper functions that accelerate my work. However I usually don’t write them because I don’t have the time. AI can generate these little scripts very consistently. That accelerates my work. Perhaps just start simple.
weakfish•17m ago
Same. Seems to be the never ending theme of AI.
bdangubic•15m ago
how much time/effort have you put in to educate yourself about how they work, what they excel at, what they suck at, and what your responsibility is when you use them…? this effort is directly proportional to how well they will serve you
d_silin•1h ago
In aviation safety there is the concept of the "Swiss cheese" model: each successive layer of safety may not be 100% perfect, but it has a different set of holes, so overlapping layers create a net gain in safety metrics.

One can treat current LLMs as one layer of "cheese" in any software development or deployment pipeline, so the goal of adding them should be an improvement in a measurable metric (code quality, uptime, development cost, successful transactions, etc.).

Of course, one has to understand the chosen LLM's behaviour in each specific scenario - is it like Swiss cheese (a small number of large holes) or more like Havarti (a large number of small holes) - and treat it accordingly.
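
To make the layering arithmetic concrete, here is a minimal sketch in Python. The miss rates are invented for illustration, and multiplying them assumes the layers' holes are independent:

    # Each layer is an imperfect filter; a defect ships only if every
    # layer misses it. Miss rates below are illustrative, not measured.
    layers = {
        "unit tests":   0.40,  # P(a given defect slips past this layer)
        "linter":       0.80,
        "llm review":   0.50,  # leaky, but differently holed
        "human review": 0.30,
    }

    escape = 1.0
    for miss in layers.values():
        escape *= miss

    print(f"P(defect escapes all layers) = {escape:.3f}")  # 0.048
    # Each layer alone is leaky, yet the stack is far tighter -- provided
    # the holes are independent. If the LLM's blind spots correlate with
    # the human reviewer's, the gain shrinks accordingly.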

heliumtera•1h ago
Interesting concept, but as of now we don't apply these technologies as a new compounding layer. We are not using them after the fact, once we've constructed the initial solution. We are not ingesting the code to compare it against specs. We are not using them to curate and analyze current hand-written tests (prompt: is this test any good? assistant: it is hot garbage, you are asserting that the expected result equals your mocked result). We are not really at this phase yet. Not in general, not intelligently. But when the "safe and effective" crowd leaves the technology, we will find good use cases for it, I am certain (unlike UML, VB and Delphi).
kgwxd•1h ago
LLMs are Kraft Singles. Stuff that only kind of looks like cheese. Once you know it's in there, someone has to inspect, and sign off on, the entire wheel for any credible semblance of safety.
tomlue•39m ago
how sure are you that an llm won't be better at reviewing code for safety than most humans, and eventually, most experts?
hansmayer•14m ago
It will only get better at generating random slop and other crap. Maybe helping morons who are unable to eat and breathe without consulting the "helpful assistant".
hansmayer•16m ago
> One can treat current LLMs as a layer of "cheese" for any software development or deployment pipeline

It's another interesting attempt at normalising the bullshit output of LLMs, but NO. Even with the enshittified Boeing, the aviation industry's safety and reliability records are far, far above deterministic software (itself known for plenty of unreliability), and deterministic B2C software is to LLMs, in turn, what Boeing and Airbus software and hardware reliability are to B2C software... So you cannot even begin to apply aviation industry paradigms to the shit machines, please.

d_silin•9m ago
I understand the frustration, but factually that is not true.

Engines are reliable to about 1 anomaly per million flight hours or so; current flight software is more reliable, on the order of 1 fault per billion hours. In-flight engine shutdowns are fairly common, while major software anomalies are much rarer.

I've used LLMs for coding and troubleshooting, and while they can definitely "hit" and "miss", they don't only "miss".

berdon•1h ago
There is a guaranteed cap on how far LLM-based AI models can go. Models improve by being trained on better data. LLMs being used to generate millions of lines of sloppy code will substantially dilute the pool of good training data. Developers moving over to AI-based development will cease to grow and learn, producing less novel code.

The massive increase in slop code and the loss of innovation in code will establish an unavoidable limit on LLMs.

9dev•58m ago
That is a naive assumption. Or rather, multiple naive assumptions: developers mostly don't move over to AI development but integrate it into their workflow. Many of them will stay intellectually curious and thus focus their attention elsewhere; I'm not convinced they will all just suddenly stagnate.

Also, training data isn’t just crawled text from the internet anymore, but also sourced from interactions of millions of developers with coding agents, manually provided sample sessions, deliberately generated code, and more—there is a massive amount of money and research involved here, so that’s another bet I wouldn’t be willing to make.

AlexCoventry•53m ago
I think most of the progress is training by reinforcement learning on automated assessments of the code produced. So data is not really an issue.
cmrdporcupine•30m ago
But they're not just training off code and its use; they're training off a corpus of general human knowledge in written form.

I mean, in general, not only do they have all of the crappy PHP code in existence in their corpus, but they also have Principia Mathematica and probably The Art of Computer Programming. And it has become increasingly clear to me that the models have bridged the gap between "autocomplete based on code I've seen" and some sort of distillation of first-order logic, based on just reading a lot of language... and some fuzzy attempt at reasoning that came out of it.

Plus the agentic tools driving them are increasingly ruthless at wringing out good results.

That said -- I think there is a natural cap on what they can get to as pure coding machines, and they're pretty much there, IMHO. The results are usually: I get what I asked for, almost 100%, and it tends to "just do the right thing."

I think the next step is actually to make it scale and make it profitable, but also...

fix the tools -- they're not what I want as an engineer. They try to take over, they don't put me in control, and they create a very difficult review and maintenance problem. Not because they make bad code, but because they make code that nobody feels responsible for.

aizk•12m ago
This time it actually is different. HN might not think so, but HN really skews toward more senior devs, so I think they're out of touch with what new grads are going through. It's awful.