frontpage.

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
233•theblazehen•2d ago•68 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
6•AlexeyBrin•59m ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•555 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
67•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
54•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
386•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
10•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
425•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•216 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
64•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

AI and the ironies of automation – Part 2

https://www.ufried.com/blog/ironies_of_ai_2/
256•BinaryIgor•1mo ago

Comments

z_•1mo ago
This is a thought-provoking piece.

“But at what cost?”

We’ve all accepted calculators into our lives as faster and correct when used correctly (minus Intel’s tomfoolery), but in educational settings we still emphasize the need to know how to do the math.

Any adult out of school will confirm that, when confronted with an unfamiliar math problem (or any lapsed skill), there is a wait before the ability revives.

Programming automation having that same potential for skill decay AND being on the critical path is … worth thinking about.

xorcist•1mo ago
Comparisons with deterministic tools such as calculators will always lead astray. There is no comparable situation where, faced with a new problem, the AI will simply give up. If there is a need for an expert, the need is always there, because there is no indication external to the process that the process will fail.
alex989•1mo ago
I disagree that calculators and math are deterministic in real-world scenarios where you use math at work. When you compute a formula on your calculator or in fancy design software, it will always give you an answer, but that doesn't mean you asked the right question. If you use the wrong units in your input, make a typo, or use the wrong formula, the calculator/software will blindly give you an answer, and only an experienced engineer will spot it at first glance. As soon as there is a human in the loop, things get messy.

For example, if your calculator tells you that a 15 m long W200x31 steel beam can resist 215 kN•m in bending moment, I know at first glance that it's at least 4x too much for that length, but how many people reading my comment would? A civil engineer fresh out of college would not.

singpolyma3•1mo ago
Calculators don't do math, they do calculating. Which is to say, they don't think for you. There's not much value in being able to quickly compute some expression in a world with calculators. But there's huge value in knowing which numbers to feed into the calculation.
kurthr•1mo ago
The biggest problem with calculators (compared to slide rules) was that calculations with big numbers (long mantissas) became so easy that people got used to doing them that way without consideration.

Using a slide rule meant inherently knowing order-of-magnitude, rounding, and precision. Once calculators made it easy, they enabled both new kinds of solutions and new kinds of errors (that you have to separately teach people to avoid).

At the same time, I basically agree. Humans are very bad calculators and we've needed tools (abacus) for millennia.

bitwize•1mo ago
I derive tremendous value from being able to calculate taxes, tips, and so forth in my head, or right on the receipt, without having to reach for my phone and launch Droid48. (I know some of y'all are also Droid48 bros.) It's even more profound a convenience than knowing how to drive Emacs with just the keyboard and not having to reach for the goddamn mouse.
agumonkey•1mo ago
we need to form a group of intrinsics

people who enjoy knowing and learning in depth, not just applying it to sell something

eastbound•1mo ago
We already have generational programming decay. At 25 years old, kids fresh out of uni can’t write a string.contains() routine; they all use .stream() in Java. It’s a matter of generation, fashion, and which skills get learned. And as for programming C drivers: Apple is the last company to have written a filesystem, and they already can’t find anyone able to do it.
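
For reference, a rough sketch of the kind of routine meant, shown in Python rather than Java purely for brevity:

    def contains(haystack: str, needle: str) -> bool:
        """Naive substring search: slide the needle across the haystack."""
        if needle == "":
            return True
        for i in range(len(haystack) - len(needle) + 1):
            # Compare the needle against the slice starting at position i.
            if haystack[i:i + len(needle)] == needle:
                return True
        return False

    assert contains("generational decay", "ration")
    assert not contains("stream", "streams")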
nuancebydefault•1mo ago
The article discusses basically two new problems with using agentic AI:

- When one of the agents does something wrong, a human operator needs to be able to intervene quickly and provide the agent with expert instructions. However, since experts no longer execute the bare tasks themselves, they quickly forget parts of their expertise. This means the experts need constant training, and hence have little time left to oversee the agent's work.

- Experts must become managers of agentic systems, a role they are not familiar with, so they don't feel at home in their job. This problem is harder for people managers (of the experts) to recognize, since they rarely experience it first hand.

Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they sideline the necessary human in the loop.

I think this all means that automation is not taking away everyone's job; since it makes things more complicated, humans can still compete.

DiscourseFan•1mo ago
That's how it tends to go, automation removes some parts of the work but creates more complexity. Sooner or later that will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.
jennyholzer2•1mo ago
I seriously doubt that there is even one "AGI evangelist" who has the intellectual capacity to read books written for adult audiences.
ctoth•1mo ago
Hi. I am not an evangelist -- I'm quite certain it's going to kill us all! But I would like to think that I'm about the closest thing to an AI booster you might find here, given that I get so much damn utility out of it. I'm interested in reading; I probably read too much! Would you like to suggest a book we can discuss next week? I'd be happy to do this with you.
wizzwizz4•1mo ago
If you're "quite certain it's going to kill us all", then you are extremely foolish to not be opposing it. Do you think there's some kind of fatalistic inevitability? If so… why? Conjectures about the inevitable behaviour of AI systems only apply once the AI systems exist.
ctoth•1mo ago
You're on a plane. The plane is going to crash. You aren't a pilot. There's WiFi on the plane. Do you use the WiFi before it crashes?
bitwize•1mo ago
Marxists have the tendency to think that the Venn diagram of "people who have read and understand Marx" and "Marxists" is a circle. There are plenty of AGI evangelists who are smart enough to read Marx, and many of them probably have. The problem is that, being technolibertarians and all, they think Marx is the enemy.
DiscourseFan•1mo ago
That seems patently absurd, considering that the debate is not between Marxists and non-Marxists but between accelerationists and orthodox Marxists, who are both readers of Marx; it's just that the former are in alignment with technolibertarianism.
delaminator•1mo ago
I used to be a maintenance data analyst in a welding plant welding about 1 million units per month.

I was the only person in the factory who was a qualified welder.

asielen•1mo ago
The way you put that makes me think of the current challenge younger generations are having with technology in general: kids who were raised on touch-screen interfaces vs. kids in older generations who were raised on computers that required more technical skill to figure out.

In the same way, when everything just works, there will be no difference, but when something goes wrong, the person who learned the skills before will have a distinct advantage.

The question is whether AI gets good enough that slowing down occasionally to find a specialist is tenable. It doesn't need to be perfect, it just needs to be predictably not perfect.

Experts will always be needed, but they may be more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.

jeffreygoesto•1mo ago
Car mechanics face the same problem today with rare issues. They know the standard mechanical procedures, but often they cannot track down a problem and can only try re-flashing an ECU or swapping it. They also don't admit they are wrong, at least most of the time...
c0balt•1mo ago
> only try re-flashing an ECU or swapping it

To be fair, they have wrenches thrown in their way there, as many ECUs and other computer-driven components are fairly locked down and undocumented, especially since the programming software itself is often not freely distributed (only to approved shops/dealers).

grvdrm•1mo ago
Your first problem doesn’t feel new at all. It reminded me of a situation several years ago: what was previously an Excel report was automated into Power BI. Great, right? Time saved. Etc.

But the report was very wrong for months. Maybe longer. And since it was automated, the instinct to check and validate was gone. And tracking down the problem required extra work that hadn’t been part of the Excel flow.

I use this example in all of my automation conversations to remind people to be thoughtful about where and when they automate.

all2•1mo ago
Thoughtfulness is sometimes increased by touch time. I've seen various examples of this over time: teachers who must collate and calculate grades manually show improved outcomes for their students, test techs who handle hardware become acutely aware of its many failure modes, and so on.
grvdrm•1mo ago
Said another way: extra touch might mean more accountable thinking.

Higher touch: "I am responsible for creating this report. It better be right." Automated touch: "I sent you the report; it's right because it's automated."

Mistakes possible either way. But I like higher-touch in many situations.

Curious if you have links to examples you mention?

all2•1mo ago
The teacher example was from one of those pop-psych books on being more efficient with one's time. I can't remember the title off the top of my head. Another example in the book applied the author's model of thinking to a plane crash in the Pacific. I'm sorry, man. It's been a long time.
caughtinthought•1mo ago
Basically every AWS migration is this example
grvdrm•1mo ago
Yup.

And Excel to, well, not-Excel in my experience.

layer8•1mo ago
They also made the point that the less frequent failures become, the more tedious it is for the human operator to check for them, giving the example of AI agents providing verbose plans of what they intend to do that are mostly fine, but will occasionally contain critical failures that the operator is supposed to catch.
sublimefire•1mo ago
Good discussion of the paper and its observations and ironies. A thing to note is that we already have software factories, with a bunch of automation in place and folks trained to deal with incidents. Pools of agents just elevate what we currently have, but the tools are still severely lacking. IMO the tools need to improve for us to move forward, as it is difficult to observe the decisions of agents when they fall apart.

Also, by and large, current AI tools are not in the critical path yet (well, except those drones that lock onto targets to eliminate them in case of interference, and even then it is ML). Agents cannot be in that path yet due to predictability challenges.

wesammikhail•1mo ago
Out of curiosity, does anyone know of a good writeup / blog post by someone in the industry that revolves around reducing orchestration error rates? I would love to read more about the topic and I'm looking for a few good resources.
dloranc•1mo ago
What do you mean by orchestration?
everdrive•1mo ago
I can feel the skill atrophy creeping in. My very first instinct is to go use the LLM. I think that, much like forcing yourself to exercise, eat right, and avoid social media / distractions, this will be a new modern skillset: do you have the discipline to avoid becoming useless without an LLM? A small few will be great at this, the middle of the bell curve will do "well enough," and you know the story for the rest.
andy99•1mo ago
I’ve been using LLMs to code for some time and I look at it differently.

I ask myself if I need to understand the code, and if the answer is yes I don’t use an LLM. It’s not a matter of discipline, it’s a sober view of what the minimal amount of work for me is.

layer8•1mo ago
The only time one doesn’t need to understand the code is when it doesn’t matter if the code is correct, or when it can be tested exhaustively for all possible inputs. Both are pretty rare for me.
kaffekaka•1mo ago
I largely agree. But sometimes the program is not destructive and you only need to test for inputs that may/will actually occur. The LLM wrote a script to do some processing? Just test it; if the processing is fine, done.

I have many LLM-written scripts and tools that do semi-simple jobs where I have barely even looked at the code, because I could see immediately that the job I wanted done got done.
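
As a rough illustration of "just test it" (the helper below is a hypothetical stand-in for an LLM-written function): when the input space is small, you can compare it against a trusted reference over every input that can actually occur and skip reading the code entirely:

    def llm_written_parse_percent(s: str) -> float:
        """Stand-in for an LLM-written helper: '42%' -> 0.42."""
        return float(s.rstrip("%")) / 100.0

    def reference_parse_percent(s: str) -> float:
        """Trusted (slow, obvious) reference implementation."""
        assert s.endswith("%")
        return int(s[:-1]) / 100.0

    # Exhaustively compare the two over the inputs that can actually occur.
    for value in range(0, 101):
        s = f"{value}%"
        assert abs(llm_written_parse_percent(s) - reference_parse_percent(s)) < 1e-9, s
    print("all 101 inputs agree")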

delaminator•1mo ago
I haven't written any code in 6 months. But I can still remember how to code in 6502 machine code from the 1980s.
zeroonetwothree•1mo ago
How can you be sure you remember if you aren’t actually doing it?
kaffekaka•1mo ago
This is an important question I think. Gradually losing a skill to atrophy is not something you notice consciously.
delaminator•1mo ago
Come on, I've been coding for 45 years. I don't forget so quickly.
kaffekaka•1mo ago
I am sure you can still code, but there is no question that there are certain small bits that you no longer remember as well (or at all) as just 6 months ago.

I believe - but cannot prove - that the atrophy follows an S curve (decreasing with time), so that in the beginning not much happens but with time the rate of forgetting things increases.

delaminator•1mo ago
There's a difference between forgetting the minutiae and still having the skills.

One doesn't forget how to learn. It's just that you've switched to learning something else.

vips7L•1mo ago
This just sounds like addiction to the dopamine of instant gratification.
jason_oster•1mo ago
I have wasted too much time wishing I could find the motivation to work on coding projects. And there are times that I was able to force myself to just get started. Spin up the flywheel and let momentum carry me.

But I'm talking about a consistent problem for more than 25 years. AI agents didn't do this to me. At least in my anecdotal case, this isn't atrophy. It's just the way it has always been. Now I actually have much less friction in getting a project going. I can just type a few of my thoughts at an agent and away it goes. The momentum is almost free, now.

ripe•1mo ago
I really like this author's summary of the 1983 Bainbridge paper about industrial automation. I have often wondered how to apply those insights to AI agents, but I was never able to summarize it as well as OP.

Bainbridge by itself is a tough paper to read because it's so dense. It's just four pages long and worth following along:

https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."

This summarizes the first irony of automation, which is now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.

It's full of insights like that. Highly recommended!

startupsfail•1mo ago
The same argument was made about needing to be an expert assembly-language programmer to use C, and then the same for C and Python, then Python and CUDA, then Theano/TensorFlow/PyTorch.

And yet here we are, able to talk to a computer that writes PyTorch code that orchestrates the complexity below it. And it even talks back coherently sometimes.

gipp•1mo ago
Those are completely deterministic systems, of bounded scope. They can be ~completely solved, in the sense that all possible inputs fall within the understood and always correctly handled bounds of the system's specifications.

There's no need for ongoing, consistent human verification at runtime. Any problems with the implementation can wait for a skilled human to do whatever research is necessary to develop the specific system understanding needed to fix it. This is really not a valid comparison.

startupsfail•1mo ago
There are enormous microcode, firmware, and driver blobs everywhere on any pathway. Even with the very privileged access of someone at Intel or NVIDIA, the ability to have a reasonable level of deterministic control over systems that involve CPU/GPU/LAN has been gone for almost a decade now.
gipp•1mo ago
I think we're using very different senses of "deterministic," and I'm not sure the one you're using is relevant to the discussion.

Those proprietary blobs are either correct or not. If there are bugs, they fail in the same way for the same input every time. There's still no sense in which ongoing human verification of routine usage is a requirement for operating the thing.

wasabi991011•1mo ago
No, that is a terrible analogy. High level languages are deterministic, fully specified, non-leaky abstractions. You can write C and know for a fact what you are instructing the computer to do. This is not true for LLMs.
ben_w•1mo ago
I was going to start this with "C's fine, but consider more broadly: one reason I dislike reactive programming is that the magic doesn't work reliably and the plumbing is harder to read than doing it all manually", but then I realised:

While one can in principle learn C as well as you say, in practice there are loads of cases of people getting surprised by undefined behaviour and all the famous classes of bug that C has.

Bootvis•1mo ago
Maybe, but buffer overflows would occur in assembler written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.

I believe it likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).

layer8•1mo ago
There is still the important difference that you can reason with precision about a C implementation’s behavior, based on the C standard and the compiler and library documentation, or its source or machine code when needed. You can’t do that type of reasoning for LLMs, or only to a very limited extent.
the_snooze•1mo ago
>And yet here we are, able to talk to a computer, that writes Pytorch code that orchestrates the complexity below it.

It writes something that's almost, but not quite, entirely unlike PyTorch. You're putting a little too much value on a simulacrum of a programmer.

yannyu•1mo ago
I think it's even more pernicious than the paper describes, as cultural outputs, art, and writing aren't done to solve a problem; they're expressions that don't have a purely utilitarian purpose. There's no "final form" for these things, and they change constantly, like language.

All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment, which means that general purpose models are starting to run out of contemporary, low-cost training data.

So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality.

We'll see where it all lands, but it seems clear that this is a circular problem with a time delay, and we're just waiting to see what the downstream effect will be.

hannasanarion•1mo ago
> All of these AI outputs are both polluting the commons where they pulled all their training data AND are alienating the creators of these cultural outputs via displacement of labor and payment

No dispute on the first part, but I really wish there were numbers available somehow to address the second. Maybe it's my cultural bubble, but it sure feels like the "AI Artpocalypse" isn't coming, in part because of AI backlash in general, but more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.

I think a similar idea might be persisting in AI programming as well, even though it seems like such a perfect use case. Anthropic released an internal survey a few weeks ago showing that the vast majority, something like 90%, of their own workers' AI usage was spent explaining and learning about things that already exist, or doing little one-off side projects that otherwise wouldn't have happened at all because of the overhead, like building little dashboards for a single dataset, stuff where the outcome isn't worth the effort of doing it yourself. For everything that actually matters and would be paid for, the premier AI coding company is using people to do it.

kurthr•1mo ago
I guess I'm in a bubble, because it doesn't feel that way to me.

When AI tops the charts (in country music) and digital visual artists have to basically film themselves working to prove that they're actually creating their art, it's already gone pretty far. It feels like even when people care (and the great mass do not), it creates problems for real artists. Maybe they will shift to some other forms of art that aren't so easily generated, or maybe they'll all just do "clean up" on generated pieces and fake brush sequences. I'd hate for art to become just tracing the outlines of something made by something else.

Of course, one could say the same about photography where the art is entirely in choosing the place, time, and exposure. Even that has taken a hit with believable photorealistic generators. Even if you can detect a generator, it spoils the field and creates suspicion rather than wonder.

Ianjit•1mo ago
Is AI really topping the charts in country music?

https://youtu.be/rGremoYVMPc?si=EXrmyGltrvo2Ps8E

smj-edison•1mo ago
I'd distinguish between physical art and digital art, tbh. Physical art has already grappled with being automated away with the advent of photography, but people still buy physical art because they like the physical medium and want to support the creator. Digital art (for one-off needs), however, is in a trickier place, since I think that's where AI is doing the displacing. It's not making masterpieces, but if someone wanted a picture of a dwarf for a D&D campaign, they'd probably generate it instead of contracting it out.
hannasanarion•1mo ago
Right, but the question then is, would it actually have been contracted out?

I've played RPGs, I know how this works: you either Google image search for a character you like and copy/paste and illegally print it, or you just leave that part of the sheet blank.

So it's analogous to the "make a one-off dashboard" type uses from that programming survey: the work that's being done with AI is work that otherwise wouldn't have been done at all.

musicale•1mo ago
> people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator

Businesses which don't want to pay money strongly prefer AI.

sureglymop•1mo ago
Yeah, but if they, for example, use AI for their design or marketing materials, then the public seems to dislike that. But again, no numbers; that's just how it feels to me.
solumunus•1mo ago
After enough time, exposure and improvement of the technology I don’t think the public will know or care. There will be generations born into a world full of AI art who know no better and don’t share the same nostalgia as you or I.
heavyset_go•1mo ago
Then they get a product that legally isn't theirs and anyone can do anything with it. AI output isn't anyone's IP, it can't be copyrighted.
windexh8er•1mo ago
What's hilarious is that, for years, the enterprise shied away from open source due to the legal considerations they were concerned about. But now... With AI, even though everyone knows that copyright material was stolen by every frontier provider, the enterprise is now like: stolen copyright that can potentially allow me to get rid of some pesky employees? Sign us up!
heavyset_go•1mo ago
Yup, there's this angle that's been a 180, but I'm referring to the fact that the US Copyright Office determined that AI output isn't anyone's IP.

Which in itself is an absurdity, where the culmination of the world's copyrighted content is compiled and used to then spit out content that somehow belongs to no one.

semi-extrinsic•1mo ago
No difference from e.g. Shutterstock, then?

I think most businesses using AI illustrations are not expecting to copyright the images themselves. The logos and words that are put on top of the AI image are the important bits to have trademarked/copyrighted.

heavyset_go•1mo ago
I guess I'm looking at it from a software perspective, where code itself is the magic IP/capital/whatever that's crucial to the business, and replacing it with non-IP anyone can copy/use/sell would be a liability and weird choice.
clickety_clack•1mo ago
Art is political more than it is technical. People like Banksy’s art because it’s Banksy, not because he creates accurate images of policemen and girls with balloons.
majormajor•1mo ago
I think "cultural" is a better word there than "political."

But Banksy wasn't originally Banksy.

I would imagine that you'll see some new heavily-AI-using artists pop up and become name brands in the next decade. (One wildcard here could be if the super-wealthy art-speculation bubble ever pops.)

Flickr, etc, didn't stop new photographers from having exhibitions and being part of the regular "art world" so I expect the easy availability of slop-level generated images similarly won't change that some people will do it in a way that makes them in-demand and popular at the high end.

At the low-to-medium end there are already very few "working artists" because of a steady decline after the spread of recorded media.

Advertising is an area where working artists will be hit hard but is also a field where the "serious" art world generally doesn't consider it art in the first place.

irishcoffee•1mo ago
> I think "cultural" is a better word there than "political."

Oh. What is the difference?

simonra•1mo ago
I’d say in this context that politics concerns stated preferences, while culture labels the revealed preferences. Also makes the statement «culture eats policy for breakfast» make more sense now that I’ve thought about it this way.
ehnto•1mo ago
Not often discussed is the digital nature of all this. An LLM isn't going to scale a building to illegally paint a wall: one, because it can't, and two, because the people interested in performance art like that are not bound by corporate interests. Most of this push for AI art is going to come from commercial entities doing low-effort digital stuff for money, not craft.

Musicians will keep playing live, artists will keep selling real paintings, sculptors will keep doing real sculptures etc.

The internet is going to suffer significantly for the reasons you point out. But the human aspect of art is such a huge component of creative endeavours, the final output is sometimes only a small part of it.

Uehreka•1mo ago
Mentioning people like Banksy at all is missing the point though. It makes it sound like art is about going to museums and seeing pieces (or going to non-museums where people like Banksy made a thing). I feel like, particularly in tech circles, people don’t recognize that the music, movies and TV shows they consume are also art, and that the millions of people who make those things are very legitimately threatened by this stuff.

If it were just about “the next Banksy” it would be less of a big deal. Many actors, visual artists, technical artists, etc make their living doing stock image/video and commercials so they can afford rent while keeping their skills sharp enough to do the work they really believe in (which is often unpaid or underpaid). Stock media companies and ad agencies are going to start pumping out AI content as soon as it looks passable for their uses (Coca Cola just did this with their yearly Christmas ad). Suddenly the cinematographers who can only afford a camera if it helps pay the bills shooting commercials can’t anymore.

Entire pathways to getting into arts and entertainment are drying up, and by the time the mainstream understands that it may be too late, and movie studios will be going “we can’t find any new actors or crew people. Huh. I guess it’s time to replace our people with AI too, we have no choice!”

crooked-v•1mo ago
> more specifically because people who are willing to pay money for art seem to strongly prefer that their money goes to an artist, not a GPU cluster operator.

Look at furniture. People will pay a premium for handcrafted furniture because it becomes part of the story of the result, even when Ikea offers a basically identical piece (with their various solid-wood items) at a fraction of the price and with a much easier delivery process.

Of course, AI art also has the issue that it's effectively impossible to actually dictate details exactly like you want. I've used it for no-profit hobby things (wargames and tabletop games, for example), and getting exact details for anything (think "fantasy character profile using X extensive list of gear in Y specific visual style") takes extensive experimentation (most of which can't be generalized well since it depends on quirks of individual models and sub-models) and photoshopping different results together. If I were doing it for a paid product, just commissioning art would probably be cheaper overall compared to the person-hours involved.

vkou•1mo ago
> So either training data is going to get more expensive because you're going to have to pay creators, or these models will slowly drift away from the contemporary cultural reality.

Nah, more likely is that contemporary cultural reality will just shift to accept the output of the models and we'll all be worse off. (Except for the people selling the models, they'll be better off.)

You'll be eating nothing but the cultural equivalent of junk food, because that's all you'll be able to afford. (Not because you don't have the money, but because artists can't afford to eat.)

patcon•1mo ago
> AND are alienating the creators of these cultural outputs via displacement of labor and payment

YES. Thank you for these words. It's a form of ecological collapse. Though to be fair, the creative ecology has always operated at the margins.

But it's a form of library for challenges in the world, like how a rainforest is an archive of genetic diversity with countless applications like antibiotics. If we destroy it, we lose access to the library, to the archive, just as the world is getting even more treacherous and unstable and is in need of creativity.

TeMPOraL•1mo ago
> I think it's even more pernicious than the paper describes as cultural outputs, art, and writing aren't done to solve a problem, they're expressions that don't have a pure utility purpose. There's no "final form" for these things, and they change constantly, like language.

Being utilitarian and having a "final form" are orthogonal concepts. Individual works of art do usually have a final form - it's what you see in museums, cinemas or buy in book stores. It may not be the ideal the artist had in mind, but the artist needs to say "it's done" for the work to be put in front of an audience.

Contrast that with the most basic form of purely utilitarian automation: a thermostat. A thermostat's job is never done; it doesn't even have a definition of "done". A thermostat is meant to control a dynamic system, toiling forever to keep the inputs (temperature readings) within a given envelope by altering the outputs (heater/cooler power levels).

I'd go as far as saying that of the two kinds, the utilities that are like thermostats are the more important ones in our lives. People don't appreciate, or even recognize, the dynamic systems driving their everyday lives.
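
As a rough sketch of that kind of never-done utility, a bang-bang thermostat loop with hysteresis (the sensor and heater functions below are placeholders, not a real device API):

    import random, time

    def read_temperature() -> float:          # placeholder for a real sensor
        return 19.0 + random.uniform(-2.0, 4.0)

    def set_heater(on: bool) -> None:         # placeholder for a real actuator
        print("heater", "ON" if on else "OFF")

    SETPOINT, HYSTERESIS = 20.0, 0.5
    heater_on = False

    while True:                               # a thermostat's job is never "done"
        t = read_temperature()
        if t < SETPOINT - HYSTERESIS:
            heater_on = True
        elif t > SETPOINT + HYSTERESIS:
            heater_on = False
        set_heater(heater_on)
        time.sleep(1.0)                       # sample once per second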

BinaryIgor•1mo ago
Yes! One could argue that we might end up with programmers (experts) first going through training in creating software manually before becoming operators of AI, and then regularly spending some of their working time (10-20%?) keeping those skills sharp by working on purely educational projects, in the old-school way; but it begs the question:

Does it then really speed us up and generally make things better?

andoando•1mo ago
This is a pedantic point no longer worth fighting for but "begs the question" means something is a circular argument, and not "this raises the question"

https://en.wikipedia.org/wiki/Begging_the_question

amrocha•1mo ago
No it doesn’t. The meaning of that phrase has changed. Almost nobody uses the original meaning anymore. Update your dictionary.
frabonacci•1mo ago
The author's conclusion feels even more relevant today: AI automation doesn’t really remove human difficulty—it just moves it around, often making it harder to notice and riskier. And even after a human steps in, there’s usually a lot of follow-up and adjustment work left to do. Thanks for surfacing these uncomfortable but relevant insights.
bitwize•1mo ago
Sanchez's Law of Abstraction comes to mind: https://news.ycombinator.com/item?id=22601623
Legend2440•1mo ago
>the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have.

But we are in the later generation now. All the 1983 operators are now retired, and today's factory operators have never had the experience of 'doing it by hand'.

Operators still have skills, but it's 'what to do when the machine fails' rather than 'how to operate fully manually'. Many systems cannot be operated fully manually under any conditions.

And yet they're still doing great. Factory automation has been wildly successful and is responsible for why manufactured goods are so plentiful and inexpensive today.

gmueckl•1mo ago
It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation that requires more complex skills and technology to develop and produce.
Legend2440•1mo ago
No doubt, there are people that still have knowledge of how the system works.

But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost.

fuzzfactor•1mo ago
>skills, which later generations of operators cannot be expected to have.

You can't ring more true than this. For decades now.

For a couple years there I was able to get some ML together and it helped me get my job done, never came close to AI, I only had kilobytes of memory anyway.

By the time 1983 rolled around I could see the writing on the wall, AI was going to take over a good share of automation tasks in a more intelligent way by bumping the expert systems up a notch. Sometimes this is going to be a quantum notch and it could end up like "expertise squared" or "productivity squared" [0]. At the rarefied upper bound. Using programmable electronics to multiply the abilities of the true expert whilst simultaneously the expert utilized their abilities to multiply the effectiveness of the electronics. Maybe only reaching the apex when the most experienced domain expert does the programming, or at least runs the show.

Never did see that paper, but it was obvious to many.

I probably mentioned this before, but that's when I really bucked down for a lifetime of experimental natural science across a very broad range of areas which would be more & more suitable for automation. While operating professionally within a very narrow niche where personal participation would remain the source of truth long enough for compounding to occur. I had already been a strong automation pioneer in my own environment.

So I was always fine regardless of the overall automation landscape, and spent the necessary decades across thousands of surprising edge cases getting an idea how I would make it possible for someone else to even accomplish some of these difficult objectives, or perhaps one day fully automate. If the machine intelligence ever got good enough. Along with the other electronics, which is one of the areas I was concentrating on.

One of the key strategies did turn out to be outliving those who had extensive troves of their own findings, but I really have not automated that much. As my experience level becomes less common, people seem to want me to perform in person with greater desire every decade :\

There's related concepts for that too, some more intelligent than others ;)

[0] With a timely nod to a college room mate who coined the term "bullshit squared"

Animats•1mo ago
> By the time 1983 rolled around

That early? There were people claiming that back then, but it didn't really work.

fuzzfactor•1mo ago
>people claiming that back then, but it didn't really work.

Roger. You could also say that's true today.

Seems like there was always some consensus about miracles just around the corner, but a whole lot wider faith has built by now.

I thoroughly felt like AI was coming fast because I knew what I would do if I had all that computer power. But to all appearances I ran the other way since that was absurdly out-of-reach, while at the same time I could count on those enthusiasts to carry the ball forward. There was only a very short time when I had more "desktop" (benchtop) computing power to dedicate than almost any of my peers. I could see that beginning to reverse as the IBM PC began to take hold.

Then it became plain to see the "brain drain" from natural science, as the majority of students who were most capable logically & mathematically gravitated to computer science of some kind instead. That was one of the only growth opportunities during the Reagan Recession, so I didn't blame them. For better or worse I wasn't a student any more, and it was interesting to see the growth money rain down on them, but I wasn't worried and stuck with what I had a head start in. Mathematically, there was going to be a growing number of professionals spending all their time on computers who would otherwise have been doing it with natural science, with no end in sight. Those kinds of odds were in my favor if I could ante up long enough to stay in the game.

I had incredible good fortune coming into far more tonnes of scientific electronics than usual, so my hands were full simply concentrating on natural science efforts, by that time I figured if that was going to come together with AI some day, I would want to be ready.

In the '90's the neural-net people had some major breakthroughs, after I had my own company they tried to get a fit, but not near the level of perfection needed. I knew how cool it would be though. I even tried a little sophomore effort myself after I had hundreds of megabytes but there was an unfortunate crash that had nothing to do with it.

One of the most prevalent feelings the whole time is I hope I live long enough to see the kind of progress I would want :\

While far more people than me have always felt that it already arrived.

In the mean time, whether employed or as an entrepreneur, doing the math says it would have been more expensive to automate rather than do so much manual effort over the decades.

But thousands of the things I worked on, the whole world could automate to tremendous advantage, so I thought it would be worth it to figure out how, even if it took decades :)

agumonkey•1mo ago
I kinda fear that this is an economic plane stall, we're tilting upward so much, the underlying conditions are about to dissolve

And I'd add that recent LLM magic (I admit they've reached a maturity level that is hard to deny) is also a two-edged sword: they don't often create abstractions, they create a very well-made set of byproducts (code, config, docs, etc.) to realize your demand. But right now people don't need to create new and improved methods, frameworks, and paradigms, because the LLM doesn't have our mental constraints... (maybe later reasoning LLMs will tackle that, plausibly)

naveen99•1mo ago
I mean, how did you get an expert programmer before? Surely it can't be harder to learn to program with AI than without AI. It's written in the book of ResNet.

You could swap out AI with Google or Stack Overflow or documentation or Unix…

jinwoo68•1mo ago
"Most companies are efficiency-obsessed."

But what most of them do is not become more efficient but be seen to be more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are pursuing efficiency, whether or not they succeed.

theologic•1mo ago
Peter Drucker popularized the phrase "Efficiency is doing things right; effectiveness is doing the right things."

Being credibly efficient at doing the wrong things turns out to be a massive issue inside most companies. What's interesting is that I do think AI offers an opportunity to be massively more effective, because if you have the right LLM, trained right, you can explore a variety of scenarios much faster than you can by yourself. However, we hear very little about this as a central thrust of how to bring AI into the workplace.

jjk166•1mo ago
In my experience plenty of places are quite inefficient at doing the wrong things as well. You might think this reduces the number of wrong things done, but somehow it doesn't.
theologic•1mo ago
It's almost comical, isn't it? But it actually turns out that this is a big foundation of behavioral economics. In essence, you can get trapped in an upper-level heuristic and never stop for a moment to think things through.

Another one of my favorite examples is some research out of Harvard that basically suggested that if people would spend 15 minutes a day reviewing what they had done and what was important, they increased their productivity by 22%. Now, you would think this is so obvious and so dramatic that you would have a variety of Fortune 500 companies saying "oh my goodness, we want all of our workers to be 22% more productive," and so they would simply send out a memo or an email or some sort of process to force people to do some reflecting.

I would also suggest that Microsoft had a unique advantage based on the idea that people should have their own enclosed workspace for coding. This was deeply entrenched when Bill was running the company day-to-day. And I'm sure that, as somebody who was a coding phenomenon, it simply made sense to him. But academically, it also makes sense.

Microsoft has reversed this policy, but as far as I can tell, it doesn't have anything to do with the research. It has to do with statements about working together efficiently, or AI productivity. If there's real research behind it, then great.

My problem is that there just doesn't appear to be any real research behind it. Yet I'm sure many managers at Microsoft think it's very efficient. Of course, if you know anybody at Microsoft who codes, they have their own opinion, and rather than me repeating hearsay, it would be fantastic to have somebody anonymously post what's really going on here. I'll betcha a nickel that 90% of them are not reporting that they feel a lot more effective.

jiehong•1mo ago
This irony of automation has been dealt with in the aviation industry for pilots for years: autopilots can actually land the plane in many cases, and do fly the plane for most of the cruise.

Yet pilots are constantly trained on actual scenarios and are expected to land airplanes manually monthly (and during takeoff too).

This ensures pilots maintain their skills, while the autopilot helps most of the time.

On top of that, plane controls are often half-automatic already, i.e. they are assisted (but not by LLMs!), so it’s a complex comparison.

libraryofbabel•1mo ago
Yes, but (to write the second half of your post for you!) regulation and incentives are very different in the aviation industry, because safety and planning for long-tail risks is paramount. Therefore airlines can afford to have their pilots spend thousands of hours training on manual control in various scenarios. By contrast, I don’t think the average software development org will encourage its engineers to hand-roll a sizable proportion of their code, if (still a big if) there are major productivity costs in doing so. Rushing the Next Big Feature out the door will almost always beat out long-term investment in dev training, unfortunately.

Don’t get me wrong - manual practice is in some sense the correct solution, and I plan to try and do it myself in the next decade to make sure my skills stay sharp. But I don’t see the industry broadly encouraging it, still less making it mandatory as aviation does.

Addendum: as you probably know, even in aviation this is hard to get right. (This is sometimes called the “children of the magenta” problem, but it’s really Bainbridge again.) The most famous example is perhaps Air France Flight 447[0], where the pilots put the plane into a stall at 35,000 ft after reacting poorly to the autopilot disconnecting, and did not even realize they had stalled the plane. Of course, that crash itself led to more regulation around training for manual scenarios.

[0] https://admiralcloudberg.medium.com/the-long-way-down-the-cr...

justincormack•1mo ago
In most industries now you can't make the things by hand any more; there is no fallback. Once things get designed for automation, there is no way back.
steveBK123•1mo ago
I think for most non-coding tasks we are still in the "convincing liar" stage, and not even at the "it's right 99.9% of the time and humans need to quickly detect the 0.1% errors" problem. I think a lot of the HN crowd misses this because they are programmers using it for programming.

I work at a firm that has given AI tooling to non-developer data-analyst-type people who otherwise live & die in Excel. Much of their day job involves reading PDFs. I occasionally use some of the firm's AI tooling for PDF summarizing/parsing/interrogation/etc. type tasks and remain consistently underwhelmed.

Stuff like taking 10 PDFs, each with a simple 30-row table with the same title in each file: it ends up puking on 3-4 out of 10 with silent failures. Row drops, duplicated data, etc. When you point out that it missed rows, it goes back and duplicates rows to get to the correct row count.

Using it to interrogate standard company filings PDFs that it had been specially trained on, it gave very convincing answers which were wrong because it had silently truncated its search context to only recent years' financial filings. Nowhere did it show this limitation to the user. It only became apparent after researching the 4th or 5th company, when it decided to caveat its answer with its knowledge window. This invalidated the previous answers, as questions such as "when was the first X" or "have they ever reported Y" were operating on incomplete information.

Most users of these tools are not that technical, and are going to be much more naive, taking the answers as fact without considering the context.
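
For what it's worth, a minimal sketch of the kind of dumb, deterministic sanity check that surfaces these silent failures, assuming the extracted tables land in a folder of CSV files (the folder name and expected row count here are illustrative, not from the original workflow):

    import csv
    from pathlib import Path

    EXPECTED_ROWS = 30   # each source PDF holds one 30-row table

    def check_extraction(csv_path: Path) -> list[str]:
        """Return a list of problems found in one extracted table."""
        rows = [tuple(r) for r in csv.reader(csv_path.open(newline=""))][1:]  # skip header
        problems = []
        if len(rows) != EXPECTED_ROWS:
            problems.append(f"{csv_path}: expected {EXPECTED_ROWS} rows, got {len(rows)}")
        if len(set(rows)) != len(rows):
            problems.append(f"{csv_path}: duplicate rows detected")
        return problems

    # Hypothetical output files from the extraction step.
    for path in sorted(Path("extracted").glob("*.csv")):
        for problem in check_extraction(path):
            print("FAIL:", problem)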

Terr_•1mo ago
I'm convinced the best use of these systems will be an explicit two-phase process where they just help people prototype and see and learn how to command regular software.

For example, imagine describing what files you want to find, and getting back a command-line string of find/grep piping. It doesn't execute anything without confirmation, it doesn't "summarize" the results, it's just a narrow tutor to help people in a translation step. A tool for learning that, ideally, eventually puts itself out of a job.

Returning to your PDF scenario: The LLM could help people weave together regular tools of "find regions with keywords" and "extract table as spreadsheet" and "cross-reference two spreadsheets using column values", etc.
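
A rough sketch of that two-phase idea (suggest_command below is a canned stand-in for whatever model call would actually be used): the tool only ever proposes a shell pipeline, shows it, and refuses to run anything without explicit confirmation.

    import subprocess

    def suggest_command(task: str) -> str:
        """Stand-in for an LLM call that translates a description into a pipeline."""
        # A canned example; a real implementation would call a model here.
        return "find . -name '*.pdf' -mtime -30 | xargs grep -l 'Quarterly Report'"

    def run_with_confirmation(task: str) -> None:
        cmd = suggest_command(task)
        print("Proposed command:\n  " + cmd)
        if input("Run it? [y/N] ").strip().lower() == "y":
            subprocess.run(cmd, shell=True)   # executed only after explicit consent
        else:
            print("Not executed.")

    run_with_confirmation("find recent PDFs mentioning the quarterly report")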

throwaway613745•1mo ago
If your process is shit, you're just automating shit at lightning speed.

If you're bad at your job, you're automating it at lightning speed.

You need to have good business processes and be good at your job without AI in order to have any chance in hell of being successful with it. The idea that you can just outsource your thinking to the AI and no longer need to actually understand or learn anything new is complete delusion.

demorro•1mo ago
These observations were made 40 years ago. I suspect we have solved many of these problems now and have close to fully automated manufacturing and flight systems, or close enough that the training trade-off is worth it.

However, this took 40 years and actual fatalities. We should keep that in mind when we're pushing the AI acceleration pedal down ever harder.

analog8374•1mo ago
I spent years creating automated drawing machines. But I can still draw better than any of them with my hand. Not as quickly tho.
dsjoerg•1mo ago
> Typically, before people are put in a leadership role directing humans, they will get a lot of leadership training teaching them the skills and tools needed to lead successfully.

I question this.

didibus•1mo ago
A good read, but it reminds me that people see the programmer as being there to identify when the AI makes an error or a mistake.

But in my use of AI agents as a programmer, and also for other work, I would say that, while yes, you also have to look for mistakes and errors, most of my time is still spent on programming the AI.

The AI agent has no idea what it must produce, what it's meant to do, when it can alter something existing to enable something new, etc.

And this is true for both functional and non-functional requirements.

This is unlike traditional manufacturing, where you've already built your manufacturing pipeline for a precise output, got your CAD designs done, run your simulations, and calibrated everything for what you want.

So most of the work remains that of programming the machine.

Animats•1mo ago
There are a few issues here.

It's useful to think about AI-driven coding assistants in terms of the SAE levels of automation for automatic driving.

- Level 0 - totally manual

- Level 1 - a bit of assistance, such as cruise control

- Level 2 - speed and steering control that requires constant supervision by a human driver. This is where most of the commercial systems are now.

- Level 3 - Level 2, but reliable enough that the human driver doesn't need to supervise constantly. Able to bring the vehicle to a safe stop by itself. Mercedes-Benz Drive Pilot is supposedly Level 3. Handoff between computers and human remains a problem. Human still liable for accidents.

- Level 4 - Full automation, but not under all conditions. Waymo is Level 4. Human just selects the destination.

- Level 5 - Full automation, at least as capable as human drivers under all conditions. Not yet seen.

What we're looking at with the various programming-assistance AI systems is Level 2 or Level 3 competence. These are the most troublesome levels. Who's in charge? Who's to blame?

The need for such programming assistance systems may be transient, as it clearly is in automotive. Eventually, everybody in automotive will get to Level 4 or better, or drop out due to competitive pressure.

Animats•1mo ago
Bainbridge [1] is interesting, but dated. A more useful version of that discussion from slightly later is "Children of the Magenta", [2] an airline chief pilot talking to his pilots about cockpit automation and how to use it. Requires a basic notion of aviation jargon.

There's been progress since then. Although the details are not widely publicized, enough pilots of the F-22, F-35, or the Gripen have talked about what modern fighter cockpit automation is like. The real job of fighter pilots is to fight and win battles, not drive the airplane. A huge amount of effort has been put into simplifying the airplane driving job so the pilot can focus on killing targets. The general idea today is that the pilot puts the pointy end in the right direction and the control systems take care of the details. An F-22 pilot has been quoted as saying that the F-22 is far less fussy than a Cessna as a flying machine.

For the F-35, which has a VTOL configuration (B) and a carrier-landing configuration (C), much effort was put into making VTOL landing and carrier landing easy. Not because pilots can't learn to do it, but because training tended to obsess on those tasks. The hard part of Harrier (the only previous successful VTOL fighter) was learning to land the unstable beast without crashing. There were still a lot of Harrier crashes.

The hard part of Naval aviator training is landing on a carrier deck. Neither of these tasks has anything to do with the real job of taking a bite out of the enemy, but they consumed most of the training time. So, for the F-35, both of those tasks have enough computer-added stability to make them much easier. One of the stranger features of the F-35 is that it has two main controls, called "inceptors", which correspond to throttle and stick. In normal flight, they mostly work like throttle and stick. But in low-speed hover, the "throttle" still controls speed while the "stick" controls attitude, even though the "stick" is affecting engine speed and the "throttle" is affecting control surfaces in that mode. So the pilot doesn't have to manage the strange transitions of a VTOL craft directly.

This refocuses pilot training on using the sensors and weapons to do something to the enemy. Classic training is mostly about the last few minutes of getting home safely.

As AI for programming advances, we should expect to devote more user time to analyzing the tactical problem, rather than driving the bus.

[1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...

[2] https://www.youtube.com/watch?v=5ESJH1NLMLs

bdangubic•1mo ago
> “If it does not work properly, you need better prompts” is the usual response if someone struggles with directing agents successfully

so much this!

abrookewood•1mo ago
TROJAN WARNING:

My AV is reporting issues with this link: 15/12/2025 2:59:56 PM;HTTP filter;file;https://cdn.jsdeliver.net/npm/mathjax@3.2.2/es5/tex-chtml.js... trojan;connection terminated;

wizzwizz4•1mo ago
Ooh, that should be cdn.jsdelivr.net; "jsdeliver.net" is a typosquat. Good catch!
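
For illustration only, a back-of-the-envelope sketch of how a near-miss host like this could be flagged automatically, comparing script hostnames against a small illustrative allowlist with a fuzzy string match:

    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"cdn.jsdelivr.net"}   # illustrative allowlist

    def check_host(url: str) -> str:
        host = urlparse(url).hostname or ""
        if host in ALLOWED_HOSTS:
            return f"{host}: OK"
        # Flag hosts that are suspiciously close to an allowed one.
        for allowed in ALLOWED_HOSTS:
            if SequenceMatcher(None, host, allowed).ratio() > 0.85:
                return f"{host}: SUSPICIOUS (near-miss for {allowed})"
        return f"{host}: unknown host"

    print(check_host("https://cdn.jsdeliver.net/npm/mathjax@3.2.2/es5/tex-chtml.js"))
    print(check_host("https://cdn.jsdelivr.net/npm/mathjax@3.2.2/es5/tex-chtml.js"))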
alexgotoi•1mo ago
The automation irony: we build AI to reduce human workload, but end up creating systems that need constant human supervision anyway. Classic.

What's interesting is this mirrors every automation wave. We thought assembly lines would eliminate human work - instead they just changed what work meant. AI's doing the same, just at software speed instead of industrial speed.

Long-term I'm optimistic - automation creates more than it destroys, always has. Short-term though? Messy transition for anyone whose job is 'being the interface layer'.

Will include this thread in my next issue of https://hackernewsai.com/