
What is going on right now?

https://catskull.net/what-the-hell-is-going-on-right-now.html
278•todsacerdoti•8h ago

Comments

dartharva•2h ago
What's going on is that your org's management is filled with morons. This is not in your control, and there is nothing you can do about it other than moving out.
SideburnsOfDoom•2h ago
Sadly, this time the nonsense is colonising orgs that were too sensible to fall for the last round of tech-scamming (Blockchain, Cryptocurrency, NFTs etc).

I don't want to move again and it's a terrible time to try, partly because of this nonsense. So here we are.

therobots927•2h ago
LLMs are already unleashing more chaos within tech companies than a vigilante hacker ever could.
stanrivers•2h ago
I’m scared for what happens ten years from now when none of the junior folk ever learned to write code themselves and now think they are senior engineers…
6LLvveMx2koXfwn•2h ago
Dude, you're basically describing my career - no LLMs necessary!
pydry•2h ago
Hopefully they'll all become plumbers or schoolteachers or something.

There's a glut of junior dev talent and not enough real problems out there to which a junior can apply themselves.

This means that most of them simply aren't able to get the kind of experience which will push them into the next skill bracket.

It used to be that you could use them to build cheap proofs of concept or self-contained scripting, but those are things AI doesn't actually suck too badly at. Even back then there were too many juniors and not enough roles though.

bigfishrunning•1h ago
There's a glut of "talent", but most of them are attracted by the inflated paycheck and aren't actually talented. In the past, they would either get promoted to middle management where they can't do any damage, or burn out and find another career. Now they can fake it long enough to sink your company, and then move on (with experience on their resume!) to their next victim. Things are gonna be really ugly in 10 years.
slipperydippery•43m ago
I'm fairly confident there's going to be no shortage of work for programmers who actually halfway know what they're doing for the remainder of my career (another 20ish years, probably), at least. So that's nice.

Though cleaning up garbage fires isn't exactly fun. Gonna need to raise my rates.

jihadjihad•2h ago
Ten years? They'll be staff, obviously. Three years of experience is senior now, did you get that memo?
kykat•2h ago
that's of course because nobody wants to hire juniors and every job posting asks for a senior, so now everyone is "senior"
bmurphy1976•1h ago
This trend started long before AI. Everybody needs 10+ years experience to get a job anywhere. As an industry we've been terrible at up-leveling the younger generations.

I've been fighting this battle for years in my org and every time we start to make progress we go through yet another crisis and have to let some of our junior staff go. Then when we need to hire again it's an emergency and we can only hire more senior staff because we need to get things done and nobody is there to fill the gaps.

It's been a vicious cycle to break.

popcorncowboy•1h ago
I can second this cycle. Agentic code AI is an accelerant to this fire that sure looks like it's burning the bottom rungs of the ladder. Game theory suggests anyone already on the ladder needs to chop off as much of the bottom of the ladder as fast as possible. The cycle appears to only be getting.. vicious-er.
discordance•1h ago
Vibe coding as a concept started out last December. That was only 9 months ago. I doubt any people will be writing or maintaining code in a couple of years.
ivanjermakov•1h ago
There will always be software written by people who know what they are doing. Unless LLM generated code is perfect, there will always be a demand for high quality code.
patrickwalton•2h ago
I worked with an accounting firm that assigned me a new controller. I had a very similar experience where the new controller would feed me AI slop back and forth. I called them out, asked them to stop, and was treated with a brief period of what seemed to be terse, resentful responses before they went back to AI responses. We found another accounting firm after that.
aeon_ai•2h ago
AI is a change management problem.

Using it well requires a competent team, working together with trust and transparency, to build processes that are designed to effectively balance human guidance/expertise with what LLMs are good at. Small teams are doing very big things with it.

Most organizations, especially large organizations, are so far away from a healthy culture that AI is amplifying the impact of that toxicity.

Executives who interpret "Story Points" as "how much time is that going to take" are asking why everything isn't half a point now. They're so far removed from the process of building maintainable and effective software that they're simply looking for AI to serve as a simple pass through to the bottom line.

The recent study showing that 95% of AI pilots failed to deliver ROI is a case study in the ineffectiveness of modern management to actually do their jobs.

grey-area•2h ago
Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it, which ones are you thinking of?
sim7c00•2h ago
you are not wrong. the only 'sane' approach i've seen with vibe coding is making a PoC to see if some concept works, then rewriting it entirely to make sure it's sound.

besides just weird or broken code, anything exposed to user input is usually severely lacking sanity checks etc.

llms are not useless for coding. but imho letting llms do the coding will not yield production grade code.

bbarnett•1h ago
Koko the gorilla understood language, but most others of her ilk simply make signs because a thing will happen.

Move hand this way and a human will give a banana.

LLMs have no understanding at all of the underlying language, they've just seen that a billion times a task looks like such and such, so have these tokens after them.

SirHumphrey•1h ago
What does it matter if they have understanding of the underlying language or not? Heck, do humans even have the "understanding of the underlying language". What does that even mean?

It's a model. It either predicts usefully or not. How it works is mostly irrelevant.

Piskvorrr•1h ago
In which case...what good is a model that predicts semi-randomly? Oh.

("But it works - when it works" is a tautology, not a useful model)

anuramat•34m ago
What does "semi-random" even mean? Are humans not "semi-random" in the same sense?
anuramat•27m ago
Nobody knows what intelligence is, yet somehow everyone has a strong opinion on what it isn't; after all, how could piecewise affine transformations/markov chains/differential equations EVER do X?
sim7c00•14m ago
interesting take. i don't know a lot about grammar, yet in my own language i can speak fairly ok...

all i know about these LLMs is that even if they understand language or can create it, they know nothing of the subjects they speak of.

copilot told me to cast an int to str to get rid of an error.

thanks copilot, it was on kernel code.

glad i didn't do it :/. just closed the browser and opened man pages. i get nowhere with these things. it feels like you need to understand so much that it's likely less typing to write the code yourself. code is concise and clear after all, mostly unambiguous. language on the other hand...

i do like it as a bit of a glorified google, but looking at what code it outputs, my confidence in its findings lessens every prompt

A4ET8a8uTh0_v2•1h ago
POC approach seems to work for me lately. It still takes effort to convince manager that it makes sense to devote time to polishing it afterwards, but some of the initial reticence is mitigated.

edit: Not a programmer. Just a guy who needs some stuff done for some of the things I need to work on.

datadrivenangel•1h ago
I've seen small teams of a few people write non-trivial software services with AI that are useful enough to get users and potentially viable as a business.

We'll see how well they scale.

bubblyworld•1h ago
As always, two things can be true. Ignore both the hucksters and the people loudly denigrating everything LLM-related, and somewhere in between you find the reality.

I'm in a tiny team of 3 writing b2b software in the energy space and claude code is a godsend for the fiddly-but-brain-dead parts of the job (config stuff, managing cloud infra, one-and-done scripts, little single page dashboards, etc).

We've had much less success with the more complex things like maintaining various linear programming/neural net models we've written. It's really good at breaking stuff in subtle ways (like removing L2 regularisation from a VAE while visually it still looks like it's implemented). But personally I still think the juice is worth the squeeze, mainly I find it saves me mental energy I can use elsewhere.

michaeldoron•1h ago
A team of 9 people made Base44, a product for vibe-coding apps, and sold it for $80M within 6 months.

https://techcrunch.com/2025/06/18/6-month-old-solo-owned-vib...

piva00•1h ago
That's just an example of surfing on the incestuous hype, they created a vibe-coded tool that was bought by Wix to help vibe-code other stuff.

Is there any example of successful companies created mostly/entirely by "vibe coding" that isn't itself a company in the AI hype? I haven't seen any, all examples so far are similar to yours.

davedx•2h ago
I saw that study, it was indeed about pilots. When do you ever expect a pilot to immediately start driving big revenue increases? The whole thing is a strawman
dingdingdang•1h ago
This so many times over, using/introducing AI in an already managerially dysfunctional organisation is like giving automatic weapons to a band of vikings - it will with utmost certitude result in a quickening of their demise.

A demise that in the case of a modern dysfunctional organisation would otherwise often be arriving a few years later as a result of complete and utter bureaucratic failure.

My experience is that all attempts to elevate technology to a "pivotal force" for the worse always miss the underlying social and moral failure of the majority (or a small, but important, managerial minority) to act for the common good rather than egotistic self-interest.

deepburner•1h ago
I'm rather tired of this AI apologism bit where every downside is explained away as "it would've happened anyways". AI destroying people's brains and causing psychosis? They would've gone psychotic anyways! AI causing company culture problems? The company was toxic anyways!

Instruments are not as inculpable as you think they are.

anuramat•17m ago
What's your point? We should be more careful with it? This is the "trial and error" part
rickreynoldssf•2h ago
I think a lot of the impressions of AI generating slop is a case of garbage in/garbage out. You need to learn HOW to ask for things. Just asking "write code to do X" is wrong in most cases. You have to provide some specifications and expectations just like working with a junior engineer. You also can't ask "write me a module that does X". You need to design the module yourself and maybe ask AI for help with each specific individual endpoint.

These juniors you're complaining about are going to get better in making these requests of AI and blow right past all the seniors who yell at clouds running AI.

thunderbong•2h ago
Reminds me of the early Google days when everyone was finding ways to search better!
ptsneves•2h ago
> These juniors you're complaining about are going to get better in making these requests of AI and blow right past all the seniors who yell at clouds running AI.

I agree with your comment up to a point, but this is pretty similar to pilots and autopilots. At the end of the day you still need a pilot to figure out non standard issues and make judgement calls.

The junior who blows past is only as good as the time he takes to fix the issue that all the credits/prompts in the world are not solving. If the impact lasts long enough and costs enough, your vibe coders will have good instantaneous speed but never reach the end.

I am optimistic about AI usage as a tool to enhance productivity, but the workflows are still being worked out. It currently is neither "fire all devs" nor "no LLM allowed". It is definitely an exciting time to be a senior though :)

jon-wood•2h ago
How will these juniors get better at making those requests when it sounds like they're not interested in understanding what's happening and the implications of it? That requires a degree of introspection which doesn't appear to be taking place if they're just copy/pasting stuff back and forth to LLMs.
ElatedOwl•2h ago
This will get written off as victim blaming, but there’s some truth here.

I don’t use Claude code for everything. I’ve fallen off the bike enough times to know when I’ll be better off writing the changes myself. Even in these cases, though, I still plan with Claude, rubber duck, have it review, have it brainstorm ideas (“I need to do x, I’m thinking about doing it such and such way, can you brainstorm a few more options?”)

NitpickLawyer•2h ago
> I think a lot of the impressions of AI generating slop is a case of garbage in/garbage out.

I've been coding for 25 years and what I feel reading posts & comments like in this thread is what I felt in the first few days of that black-blue/white-gold dress thing. I legitimately felt like half the people were trolling.

It's the same with LLM assisted coding. I can't possibly be getting such good results when all the rest are getting garbage, right? Impostor syndrome? Are they trolling?

But yeah, I agree fully with you. You need to actively try everything yourself, and this is what I recommend to my colleagues and friends. Try it out. See what works and what doesn't. Focus on what works, and put it in markdown files. Avoid what doesn't work today, but be ready because tomorrow it might work. Use flows. Use plan / act accordingly. Use the correct tools (context7 is a big one). Use search before planning. Search, write it to md files, add it in the repo. READ the plans carefully. Edit before you start a task. Edit, edit edit. Use git trees, use tools that you'd be using anyway in your pipelines. Pay attention to the output. Don't argue, go back to step1, plan better. See what works for context, what doesn't work. Add things, remove things. Have examples ready. Use examples properly. There's sooo much to learn here.

OutOfHere•2h ago
Although some of the author's concerns are valid, the author seems completely biased against LLMs, which makes their arguments trashworthy. The author is not seeking any sensible middle ground, only a luddite ground.
citizenkeen•2h ago
This person understands the future of microwaves.
bccdee•2h ago
The author is giving an account of his experience with LLMs. If those experiences were enough to thoroughly bias him against them, then that's hardly his fault. "Sensible middle ground" is what people appeal to when they are uncomfortable engaging with stark realities.

If someone told me that their Tesla's autopilot swerved them into a brick wall and they nearly died, I'm not going to say, "your newfound luddite bias is preventing you from seeking sensible middle ground. Surely there is no serious issue here." I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

mexicocitinluez•1h ago
> If someone told me that their Tesla's autopilot swerved them into a brick wall and they nearly died, I'm not going to say, "your newfound luddite bias is preventing you from seeking sensible middle ground. Surely there is no serious issue here." I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

What a horrible metaphor for a tool that can translate pdfs to text. lol. The anti-AI arguments are just as, if not more, absurd than the "AI can do everything" arguments.

bccdee•1h ago
Per the original post, it's a tool that will waste massive amounts of your time on slop. Don't pretend that there are no negative consequences to the proliferation of AI tools.
anthonylevine•1h ago
> Per the original post, it's a tool that will waste massive amounts of your time on slop.

I'm gonna need you to know that just because some random dev who wrote a blog said something doesn't make it true. You know that, right?

> Don't pretend that there are no negative consequences to the proliferation of AI tools.

Wait, what? Who said otherwise?

I love how you compare this tool to a Tesla's Autopilot mode, then get called out on it, and are like "Are you saying these tools are perfect" lol.

bccdee•18m ago
> Wait, what? Who said otherwise?

> What a horrible metaphor for a tool that can translate pdfs to text. lol.

You didn't say otherwise explicitly, but you're definitely downplaying the issues discussed in the blog post.

> I'm gonna need you to know that just because some random dev who wrote a blog said something doesn't make it true.

That's not really a satisfying response. If you disagree with the post, you'll have to mount a more substantial rebuttal than, "well what if he's wrong, huh? Have you considered that?"

orangecat•1h ago
> I'm going to say, "wow, that's fucked up. Maybe there's something deeply wrong with Tesla autopilot."

Sure, and that's very different from "the idea of self-driving cars is a giant scam that will never work".

bccdee•1h ago
If, four years on, the primary thing a tool has done for me is waste my time, I think it's time to start looking at it through the lens of a scam. Even if it does have good use cases, that is not the main thing it does, at least not in the current market.
mdale•1h ago
The poorly named "Autopilot" is a good analogy. The LLMs can definitely help with the drudgery of stop-and-go traffic with little risk; but take your eye off the road for one second when you're moving too fast and you're dead.
OutOfHere•1h ago
The fact is that LLMs are extremely useful tools for code generation, but using them correctly for this purpose requires expertise. It's not for those with underdeveloped brains, and most definitely not for those who aren't even open to it due to a personal vendetta. Such people will rapidly find themselves without a job.
mensetmanusman•2h ago
Not being a programmer, I have a question.

Can any program be broken down into functions and functions of functions that have inputs and outputs so that they can be verified if they are working?

cyberpunk•2h ago
Pretty much, yes. But what i think you’re talking about (formal verification of code) is a bit of a dark art and barely makes it out of very specialised stuff like warhead guidance computers and maybe some medical stuff etc.
timschmidt•2h ago
Most people don't bother with formal verification because it costs extra labor and time. LLMs address both. I've been enjoying working with an LLM on Rust projects, especially for writing tests, which aren't the same as formal verification, but it's in the same ballpark.
cryptonym•1h ago
Vibe-coding tests is nowhere near formal verification.
drdrey•1h ago
not even close to being in the same ballpark
jon-wood•2h ago
In theory, yeah. In many ways that's what test driven development is, you keep breaking down a problem into a function that you can write a unit test for, write the test, write the implementation, move on. In practice writing the functions and verifying their inputs and outputs isn't the hard bit.

The hard bit is knowing which functions to write, and what "valid" means for inputs and outputs. Sometimes you'll get a specification that tells you this, but the moment you try to implement it you'll find out that whoever was writing that spec didn't really think it through to its conclusion. There will be a host of edge cases that probably don't matter, and will probably never be hit in the real world anyway, but someone needs to make that call and decide what to do when (not if) they get hit anyway.
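A minimal sketch of the loop described above, in Python. The function, its notion of "valid" input, and the edge cases are illustrative assumptions, not anything from a real spec:

```python
# Test-first in miniature: decide what "valid" means for one tiny function,
# write the test, then write just enough code to pass it.
from decimal import Decimal

def parse_price(text: str) -> Decimal:
    """Turn a display string like '$1,234.56' into a Decimal."""
    return Decimal(text.strip().lstrip("$").replace(",", ""))

def test_parse_price():
    assert parse_price("$1,234.56") == Decimal("1234.56")
    assert parse_price(" $0.99 ") == Decimal("0.99")
    # The hard part, per the comment above, is the spec, not the code:
    # what *should* happen for "1.234,56" or "free"? Someone has to decide.

test_parse_price()
```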

mfenniak•2h ago
Not really.

If a program is built with strong software architecture, then a lot of it will fit that definition. As an analogy, electricity in your home is delivered by electrical outlets that are standardized -- you can have high confidence that when you buy a new electrical appliance, it can plug into those outlets and work. But someone had to design that standard and apply it universally to the outlets and the appliances. Software architecture within a program is about creating those standards on how things work and applying them universally. If you do this well, then yes, you can have a lot of code that is testable and verifiable.

But you'll always have side-effects. Programs do things -- they create files, they open network connections, they communicate with other programs, they display things on the screen. Some of those side-effects create "state" -- once a file is created, it's still present. These things are much harder to test because they're not just a function with an input and an output -- their behavior changes between the first run and the second run.
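A small illustration of that split, with made-up function names: the first function is a pure input-to-output mapping and trivially checkable; the second touches the filesystem, so whether it "works" depends on state outside its arguments.

```python
from pathlib import Path

def format_report(lines: list[str]) -> str:
    # Pure: same input always gives the same output, nothing else involved.
    return "\n".join(f"- {line}" for line in lines)

def save_report(path: str, lines: list[str]) -> None:
    # Side effect: success depends on the environment (permissions, disk space,
    # whether the directory exists), not just on the arguments.
    Path(path).write_text(format_report(lines))

assert format_report(["a", "b"]) == "- a\n- b"
# Verifying save_report needs a temp dir, cleanup, and checks on the resulting
# state, which is why side-effecting code is the harder part to test.
```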

Arainach•2h ago
Not without extraordinary cost that no one (save NASA, perhaps) is willing to pay.

Even if you can formally verify individual methods, what you're actually looking for is whether we can verify systems. Because systems, even ones made up of pieces that are individually understood, have interactions and emergent behaviors which are not expected.

taco_emoji•2h ago
Long story short: no.

Long story: yes, but it'd take centuries to verify all possible inputs, at least for any non-trivial programs.

black_knight•2h ago
Proofs of correctness are a thing. If you prove something correct you don't have to test every input. It just takes a big effort to design the program this way. And it must be done from the beginning.
black_knight•2h ago
No. Not every program can be broken down so. If you want that kind of certainty, this consideration needs to be part of the development process from the very beginning. This is what functional programming is all about.
yodsanklai•1h ago
There are many implications to this question! TLDR; in theory yes, in practice no.

Can a function be "verified"? This can mean "tested", "reviewed", or "proved to be correct". What does correct even mean?

Functions in code are often more complex than just having input and output, unlike mathematical functions. Very often, they have side effects, like sending packets on network or modifying things in their environment. This makes it difficult to understand what they do in isolation.

Any non-trivial piece of software is almost impossible to fully understand or test. These things work empirically and require constant maintenance and tweaking.

pentamassiv•1h ago
In theory you cannot even say, for all programs and all inputs, whether the program will finish the calculation [0]. In practice you often can break it down, but the number of combinations of inputs is what makes it impossible to test everything. Most developers try to keep individual functions as small as possible to make them easier to understand. You can use math to do formal verification, but that gets difficult with real programs too.

[0] https://en.wikipedia.org/wiki/Halting_problem
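A compressed sketch of why [0] rules out a universal "will this finish?" checker. The `halts` oracle here is hypothetical; the point is the contradiction spelled out in the comments:

```python
def halts(func, arg) -> bool:
    """Hypothetical oracle: True iff func(arg) eventually finishes."""
    raise NotImplementedError  # assume, for contradiction, that this existed

def paradox(f):
    if halts(f, f):       # if the oracle says f(f) halts...
        while True:       # ...loop forever
            pass
    return "done"         # otherwise stop immediately

# Ask the oracle about paradox(paradox):
# - if it answers True, paradox(paradox) loops forever (so it doesn't halt);
# - if it answers False, paradox(paradox) returns at once (so it does halt).
# Either way the oracle is wrong, so no such general checker can exist.
```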

mkleczek•1h ago
No, it is not possible, not only in practice but - more importantly - in theory as well:

https://pron.github.io/posts/correctness-and-complexity

grey-area•2h ago
The heart of the article is this conclusion, which I think is correct from first-hand experience with these tools and teams trying to use them:

> So what good are these tools? Do they have any value whatsoever?

> Objectively, it would seem the answer is no.

potsandpans•2h ago
I don't think you understand what the word "objectively" means.
boesboes•2h ago
It's from the post. And I agree, the author has no clue what objectively means.

Just another old-man-shouting-at-cloud blog post. Your company culture sucks and the juniors need to be managed better. Don't blame the tools.

catskull•1h ago
FWIW I’m 34 :)
danielbln•1h ago
Take it from a 40 year old, to someone <30, 34 is old. To someone <20 you're basically walking dead.
allknowingfrog•11m ago
I became an old man well before I reached my 30's. :)
dlachausse•2h ago
AI tools absolutely can deliver value for certain users and use cases. The problem is that they’re not magic, they’re a tool and they have certain capabilities and limitations. A screwdriver isn’t a bad tool just because it sucks at opening beer bottles.
ptx•1h ago
So what use cases are those?

It seems to me that the limitations of this particular tool make it suitable only in cases where it doesn't matter if the result is wrong and dangerous as long as it's convincing. This seems to be exclusively various forms of forgery and fraud, e.g. spam, phishing, cheating on homework, falsifying research data, lying about current events, etc.

dlachausse•1h ago
I personally use it as a starting point for research and for summarizing very long articles.

I’m a mostly self taught hobbyist programmer, so take this with a grain of salt, but It’s also been great for giving me a small snippet of code to use as a starting point for my projects. I wouldn’t just check whatever it generates directly into version control without testing it and figuring out how it works first. It’s not a replacement for my coding skills, but an augmentation of them.

barbazoo•1h ago
Extracting structured data from unstructured text at runtime. Some models are really good at that and it’s immensely useful for many businesses.
Piskvorrr•54m ago
Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

How do you fix that, when the process is literally "we throw an illegible blob at it and data comes out"? This is not even GIGO, this is "anything in, synthetic garbage out"

disgruntledphd2•17m ago
> Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

I mean, this is much less common than people make it out to be. Assuming that the context is there it's doable to run a bunch of calls and take the majority vote. It's not trivial but this is definitely doable.

barbazoo•12m ago
> Except when they "extract" something that wasn't in the source. And now what, assuming you can even detect the tainted data at all?

You gotta watch for that for sure, but no, that's not an issue we worry about anymore, at least not for how we're using it here. The text that's being extracted from is not a "BLOB". It's plain text at that point, and of a certain, expected kind, so that makes it easier. In general, the more isolated and specific the use case, the bigger the chances of the whole thing working end to end. Open ended chat is just a disaster. Operating on a narrow set of expectations? Much more successful.
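A minimal sketch of the narrow, guardrailed extraction being described, assuming the standard OpenAI Python client. The model name, field list, and the substring check are illustrative; a real pipeline would normalize numbers and dates before comparing:

```python
import json
from openai import OpenAI

client = OpenAI()
FIELDS = ["invoice_number", "total_amount", "due_date"]  # known, narrow schema

def extract(text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": f"Return JSON with only these fields: {FIELDS}. "
                        "Use null for anything not present in the text."},
            {"role": "user", "content": text},
        ],
    )
    data = json.loads(resp.choices[0].message.content)
    # Guardrail: anything the model "extracted" that never appears in the
    # source text gets dropped (or flagged for review) instead of trusted.
    return {k: v if v is None or str(v) in text else None
            for k, v in data.items()}
```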

disgruntledphd2•18m ago
> So what use cases are those?

I think that as software/data people, we tend to underestimate the number of business processes that are repetitive but require natural language parsing to be done. Examples would include supply chain (basically run on excels and email). Traditionally, these were basically impossible to automate because reading free text emails and updating some system based on that was incredibly hard. LLMs make this much, much easier. This is a big opportunity for lots of companies in normal industries (there's lots of it in tech too).

More generally, LLMs are pretty good at document summarisation and question answering, so with some guardrails (proper context, maybe multiple LLM calls involved) this can save people a bunch of time.

Finally, they can be helpful for broad search queries, but this is much much trickier as you'd need to build decent context offline and use that, which (to put it mildly) is a non-trivial problem.

In the tech world, they are really helpful in writing one to throw away. If you have a few ideas, you can now spec them out and get sortof working code from an LLM which lowers the bar to getting feedback and seeing if the idea works. You really do have to throw it away though, which is now much, much cheaper with LLM technology.

I do think that if we could figure out context management better (which is basically decent internal search for a company) then there's a bunch of useful stuff that could be built, but context management is a really, really hard problem so that's not gonna happen any time soon.

mexicocitinluez•1h ago
I need you to tell me how, when I just fed Claude a 40-page Medicare form and asked it to translate it to a print-friendly CSS version that uses Cottle for templating, that was "objectively" of no value to me?

What about 20 minutes ago when I threw a 20-line Typescript error in and it explained it in English to me? What definition of "objective" would that fall under?

Or get this, I'm building off of an existing state machine library and asked it to find any potential performance issues and guess what? It actually did. What universe do you live in where that doesn't have objective value?

Am I going to need to just start sharing my Claude chat history to prove to people who live under a rock that a super-advanced pattern matcher that can compose results can be useful???

Go ahead, ask it to write some regex and then tell me how "objectively" useless it is?

NoMoreNicksLeft•1h ago
>Am I going to need to just start sharing my Claude chat history to prove to people

I think we'll all need at least 3 of your anecdotes before we change our minds and can blissfully ignore the slow-motion train wreck that we all see heading our way. Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

anthonylevine•1h ago
>Sometime later in your life when you're in the hospital hooked up to a machine the firmware of which was vibe-coded and the FDA oversight was ChatGPT-checked, you may have misgivings but try not to be alarmed, that's just the naysayers getting to you.

This sentence is proof you guys are some of the most absurd people on this planet.

swader999•1h ago
And a slew of tests too...
thinkingtoilet•1h ago
The main benefit I've gotten from AI that I see no one talking about is that it dramatically lessens the mental energy required to work on a side project after a long day of work. I code during the day; it's hard to find motivation to code at night. It's a lot easier to say "do this", have the AI generate shitty code, then say "you duplicated X function, you over complicated Y, you have a bug at Z" and have it fix it. On a good day I get stuff done quicker; on an average day I don't think I do. However, I am getting more done because it takes a huge chunk out of the mental load for me and requires significantly less motivation to get something done on my side project. I think that is worth it to me. That said, I am just about to ban my junior engineers from using it at work because I think it is detrimental to their growth.
Cyan488•1h ago
I agree with the side-project thing, where the code is only incidental to working on the real project. I recently wanted to organize thousands of photos my family had taken over decades and sprawled on a network drive, and in 5 minutes vibe-coded a script to recursively scan, de-dupe, rename with datetime and hash, and organize by camera from the EXIF data.

I could have written it myself in a few hours, with the Python standard docs open on one monitor and coding and debugging on the other etc, but my project was "organize my photos" not "write a photo organizing app". However, often I do side projects to improve my skills, and using an AI is antithetical to that goal.
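For a sense of scale, the whole job fits in a sketch like this (assuming Pillow for EXIF; the paths, folder layout, and use of the top-level DateTime/Model tags are illustrative):

```python
import hashlib
import shutil
from pathlib import Path
from PIL import Image, ExifTags

SOURCE = Path("/mnt/photos")        # illustrative source tree
DEST = Path("/mnt/photos_sorted")   # illustrative destination

def exif_tag(path: Path, name: str, default: str) -> str:
    try:
        exif = Image.open(path).getexif()
        named = {ExifTags.TAGS.get(tag_id): value for tag_id, value in exif.items()}
        return str(named.get(name) or default)
    except Exception:
        return default

seen: set[str] = set()
for photo in SOURCE.rglob("*.jpg"):
    digest = hashlib.sha256(photo.read_bytes()).hexdigest()[:12]
    if digest in seen:                       # de-dupe byte-identical files
        continue
    seen.add(digest)
    taken = exif_tag(photo, "DateTime", "unknown").replace(":", "-").replace(" ", "_")
    camera = exif_tag(photo, "Model", "unknown-camera").strip()
    folder = DEST / camera
    folder.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, folder / f"{taken}_{digest}{photo.suffix.lower()}")
```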

nyargh•51s ago
I've found a lot of utility in this. Small throw away utility apps where I just want to automate some dumb thing once or twice and the task is just unusual enough that I can't just grab something off the shelf.

I reached for claude code to just vibe a basic android ux to drive some rest apis for an art project as the existing web UI would be a PITA to use under the conditions I had. Worked well enough and I could spend my time finishing other parts of the project. It would not have been worth the time to write the app myself and I would have just suffered with the mobile web UI instead. Would I have distributed that Android app? God no, but it did certainly solve the problem I had in that moment.

piva00•1h ago
Very much the same for me. I use some LLMs to do small tasks at work where I know they can be useful, it's about 5-10% of my coding work which itself is about 20% of my time.

Outside of work though it's been great to have LLMs to dive into stuff I don't work with, which would take me months of learning to start from scratch. Mostly programming microcontrollers for toy projects, or helping some artists I know to bring their vision to life.

It's absurdly fun to get kickstarted into a new domain without having to learn the nitty-gritty first but I eventually need to learn it, it just lowers the timeframe to when the project becomes fun to work with (aka: it barely works but does something that can be expanded upon).

donperignon•2h ago
this is a symptom of a larger issue. some people are just proxies to other entities. before it was stackoverflow, now they channel the responses from the LLM. these people will be the ones losing their jobs; they are simply not adding any value. for me it's not that developers will suffer, but this type of developer will be gone soon. and let me tell you, if i want to ask chatgpt i go to openai, i don't need anyone to type for me and act as a human while replying with a bullet point list full of emojis.
bfrog•2h ago
AI is hot garbage being used as if it's a hammer and everything is a nail. The sooner this house of cards collapses, the sooner real value can be gained from it by using it as a tool to appropriately assist.

But we're in the mega hype phase still (1998/1999) and haven't quite crested over to the reality phase.

bfrog•2h ago
The question to ask is... why have the junior be a chatgpt interface if this is the case.
jerf•1h ago
That's a question every current junior should be asking themselves.

If you want to be well-paid, you need to be able to distinguish yourself in some economically-useful manner from other people. That was true before AI and AI isn't going to make it go away. It may even in some sense sharpen it.

In another few years there's going to be large numbers of people who can be plopped down in front of a code base and just start firing prompts at an AI. If you're just another one of the crowd, you're going to get mediocre career results, potentially including not having a career at all.

However, here in 2025 I'm not sure what that "standing out in the crowd" will be. It could well be "exceptional skill in prompting". It could be that deeper understanding of what the code is really doing. It could be the ability to debug deeply yourself with an old-school debugger when something goes wrong and the AI just can't work. It could be non-coding skills entirely. In reality it'll be more than just one thing anyhow and the results will vary. I don't know what to tell you juniors except to keep your eyes peeled for whatever this will be, and when you think you have an idea, don't let the cognitively-lazy appeal of just letting the AI do everything stop you from pursuing it. I don't know specifically what this will be, but you don't have to be right the first time, you have time to get several licks at this.

But I do know that we aren't going to need very many people who are only capable of firing prompts at an AI and blindly saying "yes" to whatever it says, not because of the level of utility that may or may not have, but because that's not going to distinguish you at all.

If all you are is a proxy to AI, I don't need you. I've got an AI of my own, and I've got lower latency and higher bandwidth to it.

Correspondingly, if you detect that you are falling into the pattern of being on the junior programmer end of what this article is complaining about, where you interact with your coworkers as nothing but an AI proxy, you need to course correct and you need to course correct now. Unfortunately, again, I don't have a recipe for that correction. Ask me in 2030.

"Just a proxy to an AI" may lead to great things for the AI but it isn't going to lead you anywhere good!

RyanOD•2h ago
For me, AI is just a tool. I'm not a high-level developer, but when I'm coding out a personal project and I'm stuck, I present my ideas to AI and ask it for feedback. Then, I take that feedback and move forward. What I do NOT do is ask AI to write code for me. Again, these are my own projects so I can develop them any way I like.

Having AI write code for me (other than maybe simple boilerplate stuff) goes entirely against why I write code in the first place which is the joy of problem solving, building things, and learning.

Edit: Typo

busssard•1h ago
for my last project i could not have finished it without AI doing the coding. it set up the entire repo for me, it wrote bad code, and the PoC worked. I don't have experience in Django, JS, or webdev. now i have a working thing that i can slowly go through, improve and understand.
RyanOD•40m ago
That's one way. The approach I take is more of a "Hey, AI, here is what I'm trying to accomplish and here is the approach I plan to take. What do you think of this approach? Does it support my broader goals? Etc."

But, much of what I spend my time on are already solved problems (building retro video game clones) so AI has a LOT of high-quality content to draw upon. I'm not trying to create new, original products.

esafak•2h ago
There is an attribution problem because we don't see the interaction the engineer had with the AI before submitting the work. Maybe the prompt was good, and there was a back-and-forth as the user found mistakes and asked for corrections. Maybe not. We are left to guess. A user who does not steer the AI adds no value at best, and likely makes things worse by creating work for others. There is a product opportunity here for coding agents to check in this work with something like `git notes`. This helps users claim the value they add.
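A minimal sketch of that idea; the notes ref name and transcript format are made up here, but `git notes` itself works as shown:

```python
import subprocess

def attach_session_note(commit: str, transcript: str) -> None:
    # Attach the prompt/steering transcript to the commit under a dedicated
    # notes ref, so reviewers can see how the human actually directed the AI.
    subprocess.run(
        ["git", "notes", "--ref=ai-session", "add", "-f", "-m", transcript, commit],
        check=True,
    )

attach_session_note("HEAD", "prompt: add retry logic to payment client\n"
                            "correction: keep exponential backoff capped at 30s")
# Reviewers can then read the notes alongside history with:
#   git log --notes=ai-session
```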
shusson•2h ago
The majority of software engineers today (mostly in big tech) are not interested in software engineering. They studied it to make money. This happened before LLMs. Add the fact that software development isn't deterministic, and you have a perfect storm of chaos.

But our discipline has been through similar disruptions in the past. I think give it a few years then maybe we’ll settle on something sane again.

I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)

suddenlybananas•1h ago
>I do think the shift is permanent though. Either you adapt to use these LLMs well or you will struggle to be competitive (in general)

That certainly isn't true if what this article suggests is true.

anymouse123456•2h ago
AI has been great for UX prototypes.

Get something stood up quickly to react to.

It's not complete, it's not correct, it's not maintainable. But it's literal minutes to go from a blank page to seeing something clickable-ish.

We do that for a few rounds, set a direction and then throw it in the trash and start building.

In that sense, AI can be incredibly powerful, useful and has saved tons of time developing the wrong thing.

I can't see the future, but it's definitely not generating useful applications out of whole cloth at this point in time.

cck9672•2h ago
Can you elaborate on your process and tools here? This use case may actually be valuable for me and my team.
waterproof•1h ago
Tools that can build you a quick clickable prototype are everywhere. Replit, claude code, cursor, ChatGPT Pro, v0.app, they're all totally capable.

From there it's the important part: discussing, documenting, and making sure you're on the same page about what to actually build. Ideally, get input from your actual customers on the mockup (or multiple mockups) so you know what resonates and what doesn't.

criddell•1h ago
For me it's useful in those areas I don't venture into very often. For example I needed a powershell script recently that would create a little report of some registry settings. Claude banged out something that worked perfectly for me and saved me an hour of messing around.
humpty-d•1h ago
It’s useful for almost any one-off script I write. It can do the work much faster than me and produce nicer looking output than I’d ever bother to spend time to write myself. It can also generate cli args and docs I’d never "waste time" on myself, which I’d later waste even more time fumbling without

They’re insanely useful. I don’t get why people pretend otherwise, just because they aren't fulfilling the prophesies of blowhards and salesmen

SeasonalEnnui•1h ago
Yes, totally agree. The second thing I found it great for was explaining errors: it either finds the exact solution, or sparks a thought that leads to the answer.
mexicocitinluez•1h ago
It's the height of absurdity to me that this is possible and devs will still say outrageous shit like "These tools have no use"
mexicocitinluez•2h ago
> So what good are these tools? Do they have any value whatsoever?

> Objectively, it would seem the answer is no. But at least they make a lot of money, right?

Wait, what? Does the author know what the word "objectively" means?

I'd kill for someone to tell me how feeding a pdf into Claude and asking it to provide a print-friendly version for a templating language has "objectively" no value?

What about yesterday when I asked Claude to write some reflection-heavy code for me to traverse a bunch of classes and register them in DI?

Or the hundreds (maybe thousands) of times I've thrown a TS error and it explained it in English to me?

I'm so over devs thinking they can categorically tell everyone else what is and isn't helpful in a field as big as this.

Also, and this really, really needs repeating: when you say "AI" and don't specify exactly what you mean, you sound like a moron. "AI", that insanely general phrase, happens to cover a wide, wide array of different things you personally use day to day. Anytime you do speech-to-text you're relying on "AI".

morkalork•1h ago
I feel that even though I'm getting older, LLMs make me feel younger. There are things I learned in university 10 years ago that I only hazily remember, but I can easily interrogate an AI and refresh myself way faster than opening old books. Just as a device for recall alone, trained on every PowerPoint slide that's been uploaded to lecturers' websites, it's useful.
OtherShrezzing•2h ago
I call this "flood the zone driven development".
ascendantlogic•2h ago
> Here’s the thing - we want to help. We want to build good things. Things that work well, that make people’s lives easier. We want to teach people how to do software engineering!

This is not what companies want. Companies want "value" that customers will pay for as quickly and cheaply as possible. As entities they don't care about craftsmanship or anything like that. Just deliver the value quickly and cheaply. Its this fundamental mismatch between what engineers want to do (build elegant, well functioning tools) and what businesses want to do (the bare minimum to get someone to give them as much money as possible) that is driving this sort of pulling-our-hair-out sentiment on the engineering side.

gjsman-1000•1h ago
Right; I discovered at the new company I joined, they want velocity more than anything. The sloppy code, risk of mistakes, it’s all priced in to the risk assessment of not gaining ground first. So… I’m shooting out AI-written code left and right and that’s what they want. My performance? Excellent. Will it be a problem in the future? Well, either the startup fails, or AI might be able to rewrite it in the future.

It’s not what I want… but at the same time, how many of our jobs do what we want? I could easily end up being the garbage man. I’m doing what I’m paid to do and I’m paid well to do it.

armada651•1h ago
While this is true, the push-pull between sales and engineering resulted in software that is built well enough to last without being over-engineered. However if both sales and the engineers start chasing quick short term gains over long term viability that'll result in a new wave of shitty low-quality software being released.

AI isn't good enough yet to generate the same quality of software as human engineers. But since AI is cheaper we'll gladly lower the quality bar so long as the user is still willing to put up with it. Soon all our digital products will be cheap AI slop that's barely fit for purpose, it's a future I dread.

gjsman-1000•1h ago
Well, in such a future, when people have been burned by countless vibecoded projects, congratulations - FAANG wins again! Who is going to risk one penny on your rapidly assembled startup?

Any startup that can come to the table saying “All human engineers; SOC 2 Type 2 certified; dedicated Q/A department” will inherit the earth.

Workaccount2•1h ago
>AI isn't good enough yet to generate the same quality of software as human engineers

The software I have vibecoded for myself totally obliterates anything available on the market. Imagine a program that operates without any lag or hiccups. Opens and runs instantly. A program that can run without an internet connection, without making an account, without somehow being 12GB in size, without a totally unintuitive UI, without having to pay $20/mo for static capabilities, without persistent bugs that are ignored for years, without any ability to customize anything.

I know you are incredulous reading this, but hear me out

Bespoke narrow scope custom software is incredibly powerful, and well within the wheelhouse of LLMs. Modern software is written to be the 110-tool swiss army knife feature pack to capture as large of an audience as possible. But if I am just using 3 of those tools, an LLM can write a piece of software that is better for me in every single way. And that's exactly what my experience has been so far, and exactly the direction I see software moving in the future.

DrillShopper•2m ago
I'll believe it when I see it with my own eyes, otherwise these words read more like sales copy than technological discovery.
Lauris100•1h ago
“The only way to go fast, is to go well.” Robert C. Martin

Maybe spaghetti code delivers value as quickly as possible in the short term, but there is a risk that it will catch up in the long term - hard to add features, slow iterations - ultimately losing customers, revenue and growth.

gjsman-1000•1h ago
Or, you can be like many modern CTOs: AI will likely get better and eventually be capable of mostly cleaning up the mess it makes today. In which case, YOLO - your startup dies, or AI is sufficiently advanced by the time it succeeds. The objections about quality only matter if you think it's going to plateau.
SoftTalker•1h ago
If the AI gets that good, what value does your startup add?
Piskvorrr•1h ago
That is, literally, faith-based business management. "We suck, sure - but wait, a miracle will SURELY happen in version 5. Or 6. Or 789. It will happen eventually, have faith and shovel money our way."
ascendantlogic•1h ago
Anecdotally I'm already seeing this on a small scale. People who vibe coded a prototype to 1 mil ARR are realizing that the velocity came at the cost of immense technical debt. The code has reached a point where it is essentially unmaintainable and the interest payments on that technical debt are too expensive. I think there's going to be a lot of money to be made over the next few years un-fucking these sort of things so these new companies can continue to scale.
busssard•1h ago
if i have 1mil ARR, i can hire some devs to remake my product from scratch and use the vibe-coded version as a design mockup.

If i manage to vibecode something alone that takes off, even without technical expertise, then you've validated the AI usecase...

Before Claude i had to make a paper prototype or a figma; now i can make slop that looks and somehow functions the way i want. i can run preliminary tests and even get to some proof of concept. in some cases, even $1 million annual revenue...

ascendantlogic•1h ago
Yes, this is exactly where AI shines: PoCs and validating ideas. The problems come when you're ready to scale. And the "I can hire some devs to remake my product from scratch" part is the exact money making scenario some of my consulting friends are starting to see take shape in the market.
Workaccount2•1h ago
This is where the mismatch is: the future is not in scaled apps, the future is in everyone being able to make their own app.

You don't have to feature pack if you are making a custom app for your custom use case, and LLMs are great with slim narrow purpose apps.

I don't think LLMs will replace developers, but I am almost certain they will radically change how end users use computers, even if the tech plateaus right now.

const_cast•1h ago
But people say this about technology in software engineering time and time again.

VB? VBA macros in Excel? Delphi? Uhh... Wordpress? Python as a language?

Well you see these are just for prototypes. These are just for making an MVP. They're not the real product.

But they are the real product. I've almost never seen these successfully used just for prototyping or MVPs. It always becomes the real codebase, and it's a hot fucking mess 99% of the time.

disqard•50m ago
You're not wrong about that.

What ends up happening is that humans get "woven" into the architecture/processes, so that people with pagers keep that mess going even though it really should not be running at that scale.

"Throw one away" rarely happens.

mrkeen•1h ago
> if i have 1mil ARR, i can hire some devs to remake my product from scratch

This assumes a pool of available devs who haven't already drunk the Koolaid.

To put it another way: the 2nd wave of devs will also vibe code. Or 'focus on the happy path'. Or the 'MVP', whatever it's called these days.

From their point of view, it will be faster and cheaper to get v2 out sooner, and 'polish' it later.

Does anyone in charge actually know what 'building it right' actually means? Is it in their vocabulary to say those words?

jcgrillo•1h ago
You would only be able to hire me to do that job if you gave me every last dollar of that ARR. And I still might turn you down tbh..
occz•1h ago
I guess that depends on how you get that ARR-figure. If more than all of it goes to paying your AI bills, then you can't really afford that much engineering investment.
_DeadFred_•1h ago
So basically the new version of the 1990s, when people's projects grew to high ARR on top of a random Visual Basic codebase? That's how software companies have been starting for 30 years.
ascendantlogic•1h ago
Time is a flat circle and what is old is new again.
const_cast•1h ago
This is true, but what I've come to realize is companies only prioritize the short term, no matter what, no exceptions. They take everything on as debt.

They don't care about losing customers 10 years later because they're optimizing for next quarter. But they do that every quarter.

Does this eventually blow up? Uh, yeah, big time. Look at GE, Intel, Xerox, IBM, you name it.

But you can get shockingly far only thinking about tomorrow over and over again. Sometimes, like, 100 years far. Well by then we're all dead anyway so who cares.

Piskvorrr•1h ago
By then, the startup will have folded, and the C-levels will have moved on to the next Idée Du Jour.
Quarrelsome•1h ago
the fundamental issue remains that there is no objective and reliable measure of developer productivity. So those who experience it (developers) and the business, which is isolated from it, end up with different perspectives. This IMHO is going to be the most important factor fueling "AI first" stories like these, which could dominate our industry over the coming decade.

I don't think the chasm is unbridgeable, because ultimately everybody wants the same thing (for the company to prosper), but they fail to entirely appreciate the perspective of the other. It's up to a healthy company organisation to productively address the conflict between the two perspectives. However, I have yet to encounter such a culture of mutual respect and resource allocation.

I fear that agentic AI could erase all the progress we've made on culture in the past 25 years (e.g. agile) and drag us back towards 80s tech culture.

gjsman-1000•1h ago
Progress? Agile, and the aftermath (the MVP!), it’s how we got here in the first place!
Quarrelsome•1h ago
Seems like you don't remember the 80s, 90s or even early 2000s. Agile was a movement specifically designed to help represent the interests of development in organisations. Obviously business corrupted it over time but the industry before it was considerably worse.

MVPs exist to force business into better defining their requirements. Prior to Agile we'd spend years building something and then we'd deliver it, only for business to then "change their mind", because they've now just realised (now that they have it), that what they asked for was stupid.

jmclnx•2h ago
To me, this whole vibe/LLM/AI thing looks like yet another method pushed by MBAs to speed up development, save money, and make things "easier". But in reality it slows things down to a crawl.

Over the decades I have seen many of these things, this iteration seems to me a push on steroids. I stopped paying attention around the mid 90s and did things "my way". The sad thing is, seems these days a developer cannot hide in the shadows.

justlikereddit•1h ago
Young coders have finally simply adopted the mentality of the corporate and political leadership.

Congratulations everyone, you finally got what you deserve instead of what you need.

xnorswap•1h ago
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can with legacy stuff get in a situation where "everyone knows" that the foo setting is actually the setting for Frob, but with an LLM it'll happily try to configure Frob or worse, try to implement Foo from scratch.

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes ( no matter how apologetic it gets ).

BitwiseFool•1h ago
>"It didn't help that the LLM was confidently incorrect."

Has anyone else ever dealt with a somewhat charismatic know-it-all who knows just enough to give authoritative answers? LLM output often reminds me of such people.

bigfishrunning•1h ago
If those people are wrong enough times, they are either removed from the organization or they scare anyone competent away from the organization, which then dies. LLMs seem to be getting a managerial pass (because the cost is subsidized by mountains of VC money and thus very low (for now)) so only the latter outcome is likely.
XxiXx•1h ago
There's even a name for such person: Manager
SoftTalker•1h ago
Yes, they have been around forever, they are known as bullshitters.

The bullshitter doesn't care whether what he says is correct or not, as long as it's convincing.

https://en.wikipedia.org/wiki/On_Bullshit

pmarreck•1h ago
Sounds like every product manager I've ever had, lol (sorry PM's!)
DamnInteresting•1h ago
Colloquially known as "bullshitters."[1]

[1] https://dictionary.cambridge.org/us/dictionary/english/bulls...

SamBam•1h ago
That’s a great question — and one that highlights a subtle misconception about how LLMs actually work.

At first glance, it’s easy to compare them to a charismatic “know-it-all” who sounds confident while being only half-right. After all, both can produce fluent, authoritative-sounding answers that sometimes miss the mark. But here’s where the comparison falls short — and where LLMs really shine:

(...ok ok, I can't go on.)

ryandrake•40m ago
Most of the most charismatic, confident know-it-alls I have ever met have been in the tech industry. And not just the usual suspects (founders, managers, thought leaders, architects) but regular rank-and-file engineers. The whole industry is infested with know-it-alls. Hell, HN is infested with know-it-alls. So it's no surprise that one of the biggest products of the decade is an Automated Know-It-All machine.
fluoridation•38m ago
I'm pretty sure I'm that guy on some topics.
HankStallone•1h ago
It's annoying when it apologizes for a "misunderstanding" when it was just plain wrong about something. What would be wrong with it just saying, "I was wrong because LLMs are what they are, and sometimes we get things very wrong"?

Kinda funny example: The other day I asked Grok what a "grandparent" comment is on HN. It said it's the "initial comment" in a thread. Not coincidentally, that was the same answer I found in a reddit post that was the first result when I searched for the same thing on DuckDuckGo, but I was pretty sure that was wrong.

So I gave Grok an example: "If A is the initial comment, and B is a reply to A, and C a reply to B, and D a reply to C, and E a reply to D, which is the grandparent of C?" Then it got it right without any trouble. So then I asked: But you just said it's the initial comment, which is A. What's the deal? And then it went into the usual song and dance about how it misunderstood and was super-sorry, and then ran through the whole explanation again of how it's really C and I was very smart for catching that.

I'd rather it just said, "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data, and that happened to be bad data. That's just how I work; don't take anything for granted."

redshirtrob•39m ago
Ummm, are you saying that C is the grandparent of C, or do you have a typo in your example? Sure, the initial comment is not necessarily the grandparent, but in your ABCDE example, A is the grandparent of C, and C is the grandparent of E.

Maybe I'm just misreading your comment, but it has me confused enough to reset my password, login, and make this child comment.
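For what it's worth, a tiny sketch (hypothetical child-to-parent map, in Python) of the relationship being described: the grandparent is just two parent links up.

    # Hypothetical thread A <- B <- C <- D <- E, stored as child -> parent.
    parent = {"B": "A", "C": "B", "D": "C", "E": "D"}

    def grandparent(comment):
        # follow the parent link twice
        p = parent.get(comment)
        return parent.get(p) if p else None

    print(grandparent("C"))  # A
    print(grandparent("E"))  # C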

whalesalad•1h ago
A Notion comment on a story the other day started with "you're absolutely right", and that is when I had to take a moment outside for myself.
gjsman-1000•1h ago
I swear that in 3 years, managers are going to realize this constant affirmation… causes staff to lose mental tolerance for anything not clappy-happy. Same with schools.
wiseowise•1h ago
Already is. Anything that isn't cheerful, fake-salesman talk is interpreted as hostile.
slipperydippery•1h ago
Yeah, the safest professional tone to adopt now is something like what you'd use talking to someone else's very stupid dog rather than to a co-worker you respect. It's gross.
ivanjermakov•1h ago
> I was getting back chat GPT output

I would ask them for an apple pie recipe and report to HR

japhyr•1h ago
I get that this is a joke, but the bigger issue is that there's no easy fix for this because other humans are using AI tools in a way that destroys their ability to meaningfully work on a team with competent people.

There are a lot of people reading replies from more knowledgeable teammates, feeding those replies into LLMs, and pasting the response back to their teammates. It plays out in public on open source issue threads.

It's a big mess, and it's wasting so much of everyone's time.

ivanjermakov•1h ago
As with every other problem with no easy fix, if it is important - it should be regulated. It should not be hard for a company to prohibit LLM-assisted communication, if management believes that it is inherently destructive (e.g. feeding generated messages into message summarizers).
jjice•1h ago
It's so upsetting to see people take the powerful tool that is an LLM and pretend like it's a solution for everything. It's not. They're awesome at a lot of things, but they need a user that has context and knowledge to know when to apply or direct it in a different way.

The amount of absolutely shit LLM code I've reviewed at work is so sad, especially because I know the LLM could've written much better code if the prompter did a better job. The user needs to know when the solution is viable for an LLM to do or not, and a user will often need to make some manual changes anyway. When we pretend an LLM can do it all, it creates slop.

I just had a coworker a few weeks ago produce a simple function wrapping a DB query (normal so far), but with 250 lines of tests for it. All the code was clearly LLM generated (comments explaining the most mundane code were the biggest giveaway). The tests tested nothing: they mocked the ORM and then tested the return of the mock. Were we testing that the mocking framework works? I told him I didn't think the tests added much value since the function was so simple, and that we could remove them. He said he thought they provided value, with no explanation, and merged the code.
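For illustration, a minimal sketch (hypothetical names, not the actual code) of what "mock the ORM, then test the return of the mock" looks like; the assertion only checks what the mock was told to return, so nothing real is exercised.

    # Hypothetical sketch of the anti-pattern: mock the ORM, assert the mock.
    from unittest.mock import MagicMock

    def get_user_count(session) -> int:
        # the thin wrapper under "test"
        return session.query("User").count()

    def test_get_user_count():
        session = MagicMock()
        session.query.return_value.count.return_value = 42  # configure the mock
        assert get_user_count(session) == 42                # asserts the mock's own value

    if __name__ == "__main__":
        test_get_user_count()
        print("green, and meaningless")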

Now fast forward to the other day: I ran into the rest of the code again, and it's sinking in how bad the other LLM code was. Not that it's wrong, but it's poorly designed and full of bloat.

I have no issue with the LLM - they can do some incredible things and they're a powerful tool in the tool belt, but they are to be used in conjunction with a human that knows what they're doing (at least in the context of programming).

Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand. And the thing that will decide which direction a code base goes in will be the engineers involved.

SoftTalker•1h ago
> I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash

Technical debt at a payday loan interest rate.

pmarreck•1h ago
This is why [former Codeium] Windsurf's name is so genius.

Windsurfing (the real activity) requires multiple understandings:

1) How to sail in the first place

2) How to balance on the windsurfer while the wind is blowing on you

If you can do both of those things, you can go VERY fast and it is VERY fun.

The analogy to the first thing is "understanding software engineering" (to some extent). The analogy to the second thing is "understanding good prompting while the heat of deadlines is on you". Without both, you are just creating slop (falling in the water repeatedly and NOT going faster than either surfing or sailing alone). Junior devs that are leaning too hard on LLM assistance right off the bat are basically falling in the water repeatedly (and worse, without realizing it).

I would at minimum have a policy of "if you do not completely understand the code written by an LLM, you will not commit it." (This would be right after "you will not commit code without it being tested and the tests all passing.")

siva7•58m ago
That's why some teams have a rule that the PR author isn't allowed to merge; only one of the approvers can.
clutchdude•1h ago
I'm seeing the worst of both worlds where a human support engineer just blindly copies and pastes whatever internal LLM spit out.
clickety_clack•1h ago
Ugh. I worked with a PM who used AI to generate PRDs. Pretty often, we’d get to a spot where we were like “what do you mean by this” and he’d respond that he didn’t know, the AI wrote it. It’s like he just stopped trying to actually communicate an idea and replaced it with performative document creation. The effect was to push his job of understanding requirements down to me, and I didn’t really want to interact with someone who couldn’t be bothered to figure out his own thoughts before putting me to work implementing them, so I left the team.
siva7•1h ago
What the heck, the universal job description of a PM is to genuinely understand the requirements of their product. I'm always baffled how such people stay in those roles without getting fired.
seethishat•32m ago
Ignorant confidence is the best kind of confidence ;)
ryandvm•20m ago
I had an experience earlier this week that was kind of surreal.

I'm working with a fairly arcane technical spec that I don't really understand all that well, so I ask Claude to evaluate one of our internal proposals against the spec for conformance. It highlights a bunch of mistakes in our internal proposal.

I send those off to someone in our company that's supposed to be an authority on the arcane spec with the warning that it was LLM generated so it might be nonsense.

He feeds my message to his LLM and asks it to evaluate the criticisms. He then messages me back with the response from his LLM and asks me what I think.

We are functionally administrative assistants for our AIs.

If this is the future of software development, I don't like it.

xeonmc•16m ago
In your specific case, I think it’s likely an intentionally pointed response to your use of LLM.
popcorncowboy•1h ago
This ends in Idiocracy. The graybeards will phase out, the juniors will become staff level, except.. software will just be "more difficult". No-one really understands how it works, how could they? More importantly WHY should they? The Machine does the code. If The Machine gets it wrong it's not my fault.

The TRUE takeaway here is that as of about 12 months ago, spending time investing in becoming a god-mode dev is not the optimal path for the next phase of whatever we're moving into.

probably_wrong•1h ago
I think that's only true if you assume that the AI bubble will never burst.

Bitcoin didn't replace cash, Blockchain didn't replace databases and NoSQL didn't make SQL obsolete. And while I have been wrong before, I'm optimistic that AI will only replace programmers the same way copy-pasting from StackOverflow replaced programmers back in the day.

1970-01-01•28m ago
We've already seen the plateau forming[1]. GPT4.X vs GPT5 isn't exactly a revolution. It will become much cheaper, much faster, but not much better.

[1] https://news.ycombinator.com/item?id=44979107

ivanjermakov•1h ago
I'm afraid we're already in the phase where regular devs have no idea how things work under the hood. So many web devs fail the simple interview question "what happens when a user enters a URL and presses enter?" I would understand not knowing the details of the DNS protocol, but not understanding the basics of what the browser/OS/CPU is doing is just unprofessional.

And LLM assisted coding apparently makes this knowledge even less useful.

const_cast•1h ago
Met a dev who couldn't understand the difference between Git, the program, and GitHub, the remote Git frontend.

I explained it a few times. He just couldn't wrap his head around the fact that there were files on his computer and also on a different computer over the internet.

Now, I will admit distributed VCS can be tricky if you've never seen it before. But I'm not kidding - he legitimately did not understand the division of local vs the internet. That was just a concept that he never considered before.

He also didn't know anything about filesystems but that's a different story.

phba•16m ago
"Low code quality keeps haunting our entire industry. That, and sloppy programmers who don't understand the frameworks they work within. They're like plumbers high on glue." (https://simple.wikiquote.org/wiki/Theo_de_Raadt)

This phase has been going on for decades now. It's a shame, really.

SeasonalEnnui•1h ago
Good blog post, I recognise much of that.

The positions of both evangelists and luddites seem mad to me; there's too much emotion involved on both sides for what amounts to another tool in the toolbox, one that should only be used in appropriate situations.

ifyoubuildit•1h ago
It's simple: it's just a multiplier, like power tools or heavy equipment. You can use a giant excavator to do a ton of work quickly. You can also knock the house over by accident.

People probably said the same things when the first steam shovels came around. I for one like things that make me have to shovel less shit. But you'd also have the same problems if you put every person in the company behind the controls of a steam shovel.

fidotron•1h ago
About 15 years ago I was introduced to an environment where approximately a hundred developers spent their lives coaxing a classic style expert system ( https://en.wikipedia.org/wiki/Expert_system ) into controlling a build process to adjust the output for thousands of different output targets. I famously described the whole process as "brain damaging", demonstrated why [1], and got promoted for it.

People that spend their lives trying to get the LLMs to actually write the code will find it initially exhilarating, but in the long run they will hate it, learn nothing, and end up doing something stupid like outputting thousands of different output targets when you only need about 30.

If you use them wisely though they really can act as multipliers. People persist in the madness because of the management dream of making all the humans replaceable.

[1] All that had happened was the devs had learned how to recognize very simple patterns in the esoteric error messages and how to correct them. It was nearly trivial to write a program that outperformed them at this.

roncesvalles•1h ago
I love how OP doesn't reach the most obvious conclusion that (record scratch, freeze frame) he just landed himself at a trash-tier company.

The only thing LLMs are doing here is surfacing the fact that your company hired really bad talent.

And no, basing your team in Arkansas or Tbilisi or whatever, not doing Leetcode interviews, and pinky-promising you're the Good Guys™ (unlike evil Meta and Google *hmmphh*) doesn't exempt you from the competitive forces of the labor market that drive SWE salaries well into the mid six figures, because tbh most people don't really mind grinding Leetcode, and most people don't really mind moving to the Bay Area, and most people don't really mind driving in to work 5 days a week, and they definitely don't give a shit whether their company's mission is to harvest every newborn's DNA to serve them retina-projected cigarette ads from the age of 10, or if it's singing privacy-respecting kumbaya around the FOSS bonfire.

You only get what you pay for, not a lick more.

LLMs are going to put these shitty companies in a really bad state while the spade-sellers laugh their way to the bank. I predict within 24 months you're going to see at least one company say they're enforcing a total ban on LLM-generated code within their codebase.

boesboes•1h ago
What is going on? We are blaming a tool for what is really poor management and coaching/training, again. It doesn't work; tools are never the answer to cultural problems. But blaming (or fixing) the tech is easy, which is why devops never really became more than increasingly complex shell scripting instead of a real discussion on collaboration, shared goals and culture.

But it's a natural part of the cycle, I think. Assembly language, compilers, scripting languages, application development frameworks... All led to a new generation of programmers that "don't understand anything!" and tools that are "just useful for the lazy!"

I call BS. This is 100% a culture and management problem. I'd even go so far as to say it is our responsibility as seniors to coach this new generation into producing quality and value with the tools they have. Don't get me wrong, I love shouting at clouds; I even mumble angrily at people in the streets sometimes, and managers are mostly idiots; but we are the only ones who can guide them to the light, so to speak.

Don't blame the tool, fix the people.

yanis_t•1h ago
Hypothetically, AI is going to move us from development to validation. Think about writing more unit tests, integration tests, e2e tests. Spend more time verifying, really carefully reading these pull requests.

Development is moving towards quality assurance, because that's what matters eventually: you have a product that works reliably and fast, and you can quickly get it to market. You don't really care how the code is written.

Of course some people will continue to write "better software" than AI, more readable or more elegant, bringing some diminishing marginal value to the table that the market doesn't really care about.

I don't think AI is there yet, but realistically speaking it's gonna get there in 5 to 10 years. Some of us will adjust, some not. The reaction is real.

xtracto•1h ago
When LLMs write 100% of the code and we humans are only tasked with validating and verifying its function, programming languages won't be needed (prog langs are for people).

I wonder if at some point we will have an LLM that basically understands English and, say, Java bytecode or V8 bytecode. So in go English descriptions and comments, and out comes program bytecode implementing the required functionality.

Also, for LRMs... why use English for the reasoning part? Could there be a more succinct representation? Like Prolog?

calvinmorrison•1h ago
disagree. Programming languages are useful at minimizing context for humans as well as AI. Much easier to call preg_replace rather than implement a regex engine.
const_cast•56m ago
The next evolution is you don't need applications at all. Applications are for automation speed, nothing else.

Prior to computers, processes were completed through human-to-human communication. Hard to scale, impossible to automate. So then we had applications, which force fairly strict processes into a funnel.

But they're extremely restrictive and hard to make.

If you already have God AI, you just don't need an application. I don't go to an airlines website and book a flight. No, I ask my assistant to book me a flight, and then I have a flight.

The assistant might talk to the airline, or maybe hundreds of other AI. But it gets it done instantly, and I don't have to interface with a webpage. The AI have a standard language amongst themselves. It might be English, it might not be.

franciscop•1h ago
This is probably why I love using Zed for my hobby dev, it doesn't try to be too clever about AI. It's still there and when I do want some AI stuff it can be seamlessly prompted, but for normal day-to-day the AI steps back and I can just code. In contrast, using AI at work with VSCode I feel like the tools get too much in the way, particularly in 2 categories:

- Fragile interaction. There's popups on VSCode everywhere, and they are clickable. Too often I try to hover on a particular place and end up clicking on one of those. The AI autocomplete also feels way too intrusive, press the wrong key combination and BAM I get a huge amount of code I didn't intend to get.

- Train of thought disruption. Since the AI's long auto-complete is sometimes useful (~1/3 of the time), I do end up reading it and getting distracted from my original "building up" thinking, switching instead to "explore" thinking, which kind of dismantles the abstraction castle.

I haven't seen either of those issues on Zed. It really brought me back the joy of programming on my free time. I also think both of these issues are about the implementation more than the actual feature.

twodave•1h ago
Ironically, this post reads like it was written with an LLM.
arnorhs•1h ago
Not at all, imo
blibble•1h ago
> In a recent company town-hall, I watched as a team of junior engineers demoed their latest work

> Championing their “success”, a senior manager goaded them into bragging about their use of “AI” tools to which they responded “This is four thousand lines of code written by Claude”. Applause all around.

this is no different to applauding yourself for outsourcing your own role to a low cost location

wonder if they'll still be clapping come layoffs

stpedgwdgfhgdd•1h ago
An AI tool like CC requires lots of experience to be used effectively. With today’s technology it also still requires a lot of old-fashioned coding experience. In the hands of a power user it can give a big productivity boost.

That said, I wonder how many dormant bugs get introduced at this moment by the less talented. Just a matter of time…

thomgo•1h ago
> Engineers are burning out. Orgs expect their senior engineering staff to be able to review and contribute to “vibe-coded” features that don’t work.

It’s not just engineering folks that are being asked to do more vibe-coding with AI. Designers, product folks, project managers, marketers, content writers are all being asked to vibe code prototypes, marketing websites, internal tools, repros, etc. I’m seeing it first hand at the company I work at and many others. Normal expectations of job responsibilities have been thrown out of the window.

Teams are stretched thin as a result, because every company is thinking that if you’re not sprinting towards AI you’ll be left behind. And the truth is that these folks actually deliver impact through their AI usage.

slipperydippery•34m ago
My wife's seen this at multiple non-tech companies, and it's a disaster every single time. Most (like... 95+% of) folks can't use these things very well; being forced to use them kills morale because they're frustrating as fuck (and managers largely have no idea what they're actually capable of doing productively, so expectations are all over the place and often fantastical); and dealing with the output of co-workers who can't use them well is even more frustrating than the LLMs themselves. It's sometimes coupled with attempts to realize the promised "efficiency" before it's even proven to exist, by firing significant chunks of staff at the same time as adopting "AI processes", further stressing out remaining employees and ruining the ability to actually get things done.

It's trashing whole departments. The come-down from this high (which high is being experienced pretty much only by the C-suite and investors) is gonna be rough.

tschellenbach•1h ago
AI is currently a very effective army of junior/senior engineers. It requires the oversight of a staff level engineer with some product understanding. When properly applied it can speed up development pace ~100-200x. Things that help achieve this outcome: staff engineer reviewing, typed language (it is so much better at Go compared to Python for instance), well structured project/code, solid testing best practices.

It sounds like the author's company doesn't have this set up well.

If you use AI wrong you get AI slop :)

chiliada•1h ago
I have to admit it comes across as deeply ironic to see a SWE complain of feeling 'violated' when they reach out expecting human help/interaction and are instead met with some thoughtless automation which actually wastes more of their time. The very abominations SWEs have helped inflict on customers everywhere for decades. Maybe things are just coming full circle ;)
enraged_camel•1h ago
Why was the title changed, and the word "hell" removed?
littlecranky67•1h ago
I feel like inventing my own programming language and having AI build a compiler for it, so that every new project uses a new programming language the AI knows shit about.
ChrisMarshallNY•1h ago
> So what good are these tools? Do they have any value whatsoever?

In my case, yes, but I think I use it differently from the way that most do.

First, for context, I'm a pretty senior developer, and I've been doing code since 1983, but I'm currently retired, and most of my work is alone (I've found that most young folks don't want to work with people my age, so I've calibrated my workflow to account for that).

I have tried a number of tools, and have settled on basically just using ChatGPT and Perplexity. I don't let them write code directly, but I often take the code they give me, and use it as a starting point for implementation. Sometimes, I use it wholesale, but usually, I do a lot of modification (often completely rewriting).

I have found that they can get into "death spirals," where their suggestions keep getting worse and worse. I've learned to just walk away, and try something else, when that happens. I shudder to think of junior engineers, implementing the code that comes from these.

The biggest value, to me, is that I can use them as an "instant turnaround" StackOverflow, without the demeaning sneers. I can ask the "stupidest" question; one that I could easily look up, myself, but it's faster to use ChatGPT, and I'll usually get a serviceable answer, almost instantly. That's extremely valuable, to me.

I recently spent a few weeks, learning about implementing PassKeys in iOS. I started off "cold," with very little knowledge of PKs, in general, and used what ChatGPT gave me (server and client) verbatim, then walked through the code, as I learned. That's usually how I learn new tech. It's messy, but I come out of it, with a really solid understanding. The code I have now, is almost unrecognizable, from what I started with.

TuringNYC•1h ago
I read your post nodding in agreement at times. Rest assured, the market takes care of these things. In the medium term, productive uses allow winners to win in the marketplace. Unproductive uses cause losers to lose money and fall behind.
josefritzishere•1h ago
I was recently told that a team intended to use automated AI to update user documentation in tandem with code updates. After seeing how badly AI writes code, I am terrified of how badly that is going to go.
tossandthrow•1h ago
If I go into a PR that requires a lot of feedback, I will usually stop after 5 - 10 pieces of feedback and let the author know that it is not ready for review, and needs to be further developed.

I am OK with the author using AI heavily. But if I continue to see slop, I will continue to review less and send it back.

In the end, if the engineer is fiddling around for too long, they don't get any work in, which is a performance issue.

I am always available to help the colleague understand the system and write code.

For me, the key is to not accept to review AI slop just like I do not accept reviewing other types of slop.

If something is recognized as slop, it is not ready to be reviewed.

This puts an upwards pressure on developers to deliver better code.

jackdoe•1h ago
> Here’s an experiment for you: stop using “AI”. Try it for a day. For a week. For a month.

I did that; now I have at least 2-3 days a week where I don't use any AI, and it's much better.

There is something very strange about using AI a lot. It is affecting me a lot and I can't quite put my finger on it.

I wrote https://punkx.org/jackdoe/misery.html some time ago to try to explain it, but honestly I recently realized that I really hate reading tokens.

wiseowise•1h ago
> I’m not sure why, but I felt violated. It felt wrong.

I usually avoid this, but what a snowflake.

So many people get their knickers in a twist over nothing.

Here’s a simple flowchart:

Junior gives you AI slop and ignores your remarks? a) ignore them b) send their manager a message

If neither works -> leave.

Also, get a life.

donatj•1h ago
I was reviewing a coworker's code recently. It was this convoluted multidimensional array manipulation that was shuffling, sorting, and filtering all at the same time. It had a totally generic name like "prepareData". I asked for an explanation of what the function was doing, and he snapped back that I should ask an LLM instead of wasting his time.

It's been a couple weeks, but I am still irritated.

I am asking you, the person who supposedly wrote this, what it does. When you submit your code for review, that's among the most basic questions you should be prepared to answer.

tossandthrow•59m ago
Ask the LLM, attribute the LLM's response to him, and just post the feedback you have after 20 messages back and forth on the PR.

When he gets back and doesn't understand the feedback, you can conveniently ask him to ask an LLM and not waste your time.

KronisLV•1h ago
> Here’s an experiment for you: stop using “AI”. Try it for a day. For a week. For a month.

Sooner or later, this will be akin to asking people to stop being able to do their job, the same way I might not recall how to work with a Java enterprise codebase purely through the CLI, because all of the run profiles in JetBrains IDEs have rotted my brain, except applied to developing any software at all.

Maybe not the best analogy because running things isn't too hard when you can reference a Dockerfile, but definitely stuff around running tests and passing the plethora of parameters that are needed to just stand the damned thing up correctly with the right profiles and so on.

That's also like asking people to stop using Google for a while, upon which I bet most of us are reliant to a pretty large degree. Or StackOverflow, for finding things that people wouldn't be able to discover themselves (whether due to a lack of skills, or enough time).

I think the reliance on AI tools will only go upwards.

Here's my own little rant about it from a bit back: https://blog.kronis.dev/blog/ai-artisans-and-brainrot

PaulHoule•1h ago
It's kinda funny that my experience is the opposite but then again I'm senior and working at a different scale.

I don't think my LLM-assisted programming is much faster than my unassisted programming but I think I write higher quality code.

What I find is that my assistant sometimes sees a simple solution that I miss. Going back and forth with an assistant, I am likely to think things through in more detail than I would otherwise. I like to load projects I depend on into IntelliJ IDEA and use Junie to have a conversation with the code; that gives me a more thorough and complete understanding of it, more quickly, than I would get looking at it by myself.

As a pro maintenance programmer there is always a certain amount of "let sleeping dogs lie": if something works but you don't understand it, you may decide to leave it alone. With an assistant I feel empowered to put more effort into really understanding things that I could get away without understanding, fix minor problems I wouldn't have fixed otherwise, etc.

One bit of advice: as the context grows, assistants seem to break bad. Often I ask Junie to write something, then give it some feedback about things it got wrong, and early in the session I'm blown away. You might think you should keep the session going so it will remember the history, but it doesn't have the ability to realize that some of the history is stale because it describes the way the code was before [1], and the result is that it eventually starts going in loops. It makes sense then to start a new session, possibly feeding it some documentation about the last session that tells it what it needs to know going forward.

[1] A huge problem with "the old AI": ordinary propositional logic sees the world from a "god's eye" point of view where everything happens at once, while in our temporal world we need temporal if not bitemporal logic -- which is a whole can of worms. On top of that there is modal logic ("it is possible that X is true", "it is necessary that X is true") and modeling other people's beliefs ("John thinks that Mary thinks that X is true"). It's possible to create a logic which can solve specific problems, but a general-purpose commonsense logic is still beyond the state of the art.

yanis_t•1h ago
Here’s an experiment for you: stop using the word "AI".

Let's just call them LLMs, which is what they are. It's just a tool that increases your productivity when used right.

pxtail•1h ago
I think what's going on is a paradigm shift towards treating code as a completely throwaway piece: not meticulously analyzed, inspected and manually reviewed, but completely replaceable, without caring too much about the internals. If the outputs are correct for a set of inputs and perf is OK, then it passes as "good enough". This approach is still evolving, but some developers are already picking it up, and since students and new developers are learning to work like this, the future for the new generation of coders is inevitable - a new position is coming to life: vibe coding engineer.
alexpotato•1h ago
Recent fascinating experience with hiring and AI.

- DevOps role

- Technical test involves logging into a server (via sadservers.com who are awesome)

- We tell the candidates: "The goal is to see if you can work through a problem on a Linux based system. It's expected you'll see some things you may never have seen before. Using Google and ChatGPT etc is fine if you get stuck. We just ask that you share your screen so we can see your search and thought processes."

- Candidate proceeds to just use ChatGPT for EVERY SINGLE STEP. "How do I list running processes?", "How do I see hidden files?", copy and pasted every error message into ChatGPT etc

Now, I had several thoughts about this:

1. Is this common?

2. As one coworker joked "I knew the robots were coming, just not today"

3. We got essentially zero signal on his Linux related debugging skills

4. What signal was he trying to send by doing this? e.g. I would assume he would realize "oh, they are hiring for people who are well versed in Linux and troubleshooting"

5. I know some people might say "well, he probably eventually got to the answer" but the point is that ChatGPT doesn't always have the answer.

1970-01-01•33m ago
LLMs are the duct tape of the world. A little here and there does help and works great as a short-term solution, but using them everywhere or for a permanent fix is a recipe for disaster.
mikewarot•32m ago
It's all about bullshitting.

So I'm reading the article which initially appears to be a story about an engineering organization and I'm wondering where they came up with someone silly enough to trust an LLM with people's lives.

Then it dawned on me that it wasn't an engineering organization at all, just a bunch of programmers with title inflation. It follows as a consequence that since they and their management don't respect the title and skillset of an Engineer, they likely wouldn't respect or appreciate their programming staff either.

The meta of this is that an audience comfortable with giving the title Engineer to programmers shouldn't really be surprised by this outcome.

>We can say our prayers that Moore’s Law will come back from the dead and save us.

I'm working on that very problem. I expect results within 2 years.

yencabulator•5m ago
If you, a human, copy-paste me an LLM response without any "cover letter" about the content, I will not talk to you after that and instead ask the LLM directly.

In my mind, anyone willing to opt out of providing value is welcome to do so.
