frontpage.

I wrote to Flock's privacy contact to opt out of their domestic spying program

https://honeypot.net/2026/04/14/i-wrote-to-flocks-privacy.html
150•speckx•1h ago•51 comments

Spain to expand internet blocks to tennis, golf, movies broadcasting times

https://bandaancha.eu/articulos/telefonica-consigue-bloqueos-ips-11731
226•akyuu•1h ago•194 comments

Rare concert recordings are landing on the Internet Archive

https://techcrunch.com/2026/04/13/thousands-of-rare-concert-recordings-are-landing-on-the-interne...
332•jrm-veris•5h ago•97 comments

40% of lost calories globally are from beef, needing 33 cal of feed per 1 cal

https://iopscience.iop.org/article/10.1088/2976-601X/ae4f6b
43•randycupertino•43m ago•39 comments

Claude Code Routines

https://code.claude.com/docs/en/routines
134•matthieu_bl•2h ago•83 comments

5NF and Database Design

https://kb.databasedesignbook.com/posts/5nf/
68•petalmind•2h ago•22 comments

Modifying FileZilla to Workaround Bambu 3D Printer's FTP Issue

https://lantian.pub/en/article/modify-computer/modify-filezilla-workaround-bambu-3d-printer-ftp-i...
23•speckx•1h ago•12 comments

Turn your best AI prompts into one-click tools in Chrome

https://blog.google/products-and-platforms/products/chrome/skills-in-chrome/
19•xnx•1h ago•5 comments

Let's Talk Space Toilets

https://mceglowski.substack.com/p/lets-talk-space-toilets
52•zdw•20h ago•10 comments

A new spam policy for “back button hijacking”

https://developers.google.com/search/blog/2026/04/back-button-hijacking
741•zdw•15h ago•437 comments

OpenSSL 4.0.0

https://github.com/openssl/openssl/releases/tag/openssl-4.0.0
50•petecooper•1h ago•5 comments

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

https://github.com/ginlix-ai/langalpha
51•zc2610•4h ago•15 comments

guide.world: A compendium of travel guides

https://guide.world/
9•firloop•5d ago•2 comments

Backblaze has stopped backing up OneDrive and Dropbox folders and maybe others

https://rareese.com/posts/backblaze/
780•rrreese•10h ago•481 comments

Carol's Causal Conundrum: a zine intro to causally ordered message delivery

https://decomposition.al/zines/
20•evakhoury•4d ago•2 comments

jj – the CLI for Jujutsu

https://steveklabnik.github.io/jujutsu-tutorial/introduction/what-is-jj-and-why-should-i-care.html
411•tigerlily•8h ago•354 comments

Show HN: Kontext CLI – Credential broker for AI coding agents in Go

https://github.com/kontext-dev/kontext-cli
47•mc-serious•5h ago•14 comments

Show HN: Kelet – Root Cause Analysis agent for your LLM apps

https://kelet.ai/
30•almogbaku•2h ago•16 comments

The Mouse Programming Language on CP/M

https://techtinkering.com/articles/the-mouse-programming-language-on-cpm/
20•PaulHoule•3d ago•2 comments

Introspective Diffusion Language Models

https://introspective-diffusion.github.io/
191•zagwdt•11h ago•38 comments

Show HN: A memory database that forgets, consolidates, and detects contradiction

https://github.com/yantrikos/yantrikdb-server
12•pranabsarkar•3h ago•6 comments

The M×N problem of tool calling and open-source models

https://www.thetypicalset.com/blog/grammar-parser-maintenance-contract
101•remilouf•5d ago•34 comments

The Fediverse deserves a dumb graphical client

https://adele.pages.casa/md/blog/the-fediverse-deserves-a-dumb-graphical-client.md
52•speckx•3h ago•11 comments

Nucleus Nouns

https://ben-mini.com/2026/nucleus-nouns
36•bewal416•4d ago•10 comments

DaVinci Resolve – Photo

https://www.blackmagicdesign.com/products/davinciresolve/photo
972•thebiblelover7•16h ago•249 comments

Franklin's bad ads for Apple II clones and the beloved impersonator they depict

https://buttondown.com/suchbadtechads/archive/franklin-ace-1000/
110•rfarley04•3d ago•64 comments

Show HN: Plain – The full-stack Python framework designed for humans and agents

https://github.com/dropseed/plain
6•focom•1h ago•0 comments

The acyclic e-graph: Cranelift's mid-end optimizer

https://cfallin.org/blog/2026/04/09/aegraph/
48•tekknolagi•4d ago•13 comments

Lean proved this program correct; then I found a bug

https://kirancodes.me/posts/log-who-watches-the-watchers.html
359•bumbledraven•18h ago•163 comments

The future of everything is lies, I guess: Work

https://aphyr.com/posts/418-the-future-of-everything-is-lies-i-guess-work
208•aphyr•3h ago•168 comments

Comments

hoppp•3h ago
Unavailable Due to the UK Online Safety Act
ura_yukimitsu•3h ago
Archived at https://archive.is/DY9F3
basilikum•3h ago
https://web.archive.org/web/20260414151754/https://aphyr.com...
bbg2401•3h ago
The author appears to be under the misapprehension that a personal blog with a comment section is impacted by the act.
Devasta•2h ago
Why wouldn't it be?
monooso•2h ago
For the reasons given in my comment, above [1].

[1]: https://news.ycombinator.com/item?id=47767650

MarkusQ•2h ago
Misapprehension? If so, they aren't the only one.

https://www.theregister.com/2025/02/06/uk_online_safety_act_...

monooso•2h ago
Yes, misapprehension.

According to the Ofcom regulation checker [1] (linked to by The Register article), the Online Safety Act does not apply to this content.

Here's the most pertinent section (emphasis mine):

> Your online service will be exempt if... Users can only interact with content generated by your business/the provider of the online service. Such interactions include: comments, likes/dislikes, ratings/reviews of your content including using emojis or symbols. For example, this exemption would cover online services where the only content users can upload or share is comments on media articles you have published...

[1]: https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...

TimTheTinker•2h ago
Perhaps the author is being outwardly cautious but knowingly borderline-obtuse as a form of protest against a dumb law.
krona•2h ago
> Your online service will be exempt if... Users can only interact with content generated by your business

As soon as your blog allows comments which other people can read, then you're allowing people to interact with content not generated by your business.

john_strinlai•2h ago
is this legal advice you are offering, as someone practicing law in the uk? because you are all over this thread stating your opinion very confidently.

(conveniently, there is no risk to yourself if you happen to be wrong or misinformed.)

monooso•2h ago
No, I'm not offering legal advice, and neither am I stating an opinion. I'm simply quoting Ofcom, the regulatory body responsible for overseeing this law.
john_strinlai•2h ago
>I'm simply quoting Ofcom

no, you are doing more than that.

you are saying that everyone who has a different interpretation of the parts you are quoting is misinformed.

that is an opinion, which you are stating as fact, as someone unaffected by the outcome.

monooso•1h ago
A valid point, and maybe I should have phrased it differently. I've deleted the comment which used the word "misinformed", so as not to cause any confusion.

My point is simply that the Ofcom quote clearly states that user comments on an article are not subject to the Online Safety Act. I assume this is a fact, as it's from the horse's mouth.

Some people appear to be basing their opinions on the assumption that the OSA does apply to such comments (hence my use of the offending word).

pixl97•1h ago
>Please note: The outcome of this checker is indicative only and does not constitute legal advice. It is for you to assess your services and/or seek independent specialist advice to determine whether your service (or the relevant parts of it) are subject to the regulations and understand how to comply with the relevant duties under the Act.

I mean even the site itself says it really shouldn't be used for legal advice...

On top of that, none of this matters until said law is settled under a case. Most often it's the first judge and the set of appeals after that point that define how the law is actually implemented. Everything before that is bluster and potential risk.

mock-possum•3h ago
Wow the typography is obnoxious on mobile, some lines only have 3 words due to the text justification
greatpost•3h ago
Thank you for this aphyr.

My one ask is people seem to put “CEOs” on a pedestal any time things come up, like they’re an alien life form and oh no they’re going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.

atomicnumber3•3h ago
Ah yes just go start a company. Let me just ask my father for a small business loan of a million dollars.
Aurornis•3h ago
Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

An unintended side effect that I’ve noticed is that it normalizes bad behavior of CEOs for those who ingest a lot of “CEOs bad” grist (Reddit, Threads, even Hacker News). When someone, usually early career, takes a job with a bad CEO after years of reading “CEOs bad” content online, they can go into a learned-helplessness mode because they think the behavior they’re seeing is normal. They don’t believe changing jobs would help, because they’ve learned from social media that their CEO’s bad behavior is actually normal.

This has become a frequent topic in a rotational mentorship program where I volunteer: early-career folks join some toxic startup and stay because the internet told them all CEOs are like this. We have to shake them free from those ideas and get them to realize that there are good and bad companies out there, and that they have options.

coldtea•3h ago
>Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

Mainly because "CEOs and billionaires" have fucked us over time and again: with their lobbying and bribing, with their power grabs, with their consolidation of news, entertainment, streaming, and social media properties, with their participation in the military-industrial complex, with their censorship and partisanship, and with their rent-seeking and worsening of their products...

forgetfreeman•2h ago
The downvotes in absence of any reply suggest there's a group of individuals who think your position is so correct it's functionally unassailable but are offended you said it out loud.
headcanon•2h ago
> Early career folk join some toxic startup and stay because the internet told them all CEOs are like this.

I literally did this 12 years ago based on this reasoning; it's good you're trying to counter that with the next generation.

With that said, I do wish there was more discourse around systemic issues rather than the usual finger-pointing at rival social groups. Unfortunately, I feel like our language gets in the way: systemic issues are more abstract, but "bad people" are visceral and easy to talk about.

dlev_pika•2h ago
“No war but class war” rings as true in 2026 as it did 40 years ago
neutronicus•1h ago
Sure, although the obsession with "CEOs and billionaires" does have the ring of the 300k HHI software-engineer class hoping to play class enemies above and below them against each other.
pixl97•1h ago
>normalizes bad behavior of CEOs

>They don’t believe changing jobs

Um, yeah, where did you get these ideas?

Most CEOs want to be CEOs for the potentially vast amounts of wealth they can make from the position. When you're making 20-200x the average person going back to a regular job is pretty much out of the question.

Then when you start making that kind of money, you quickly become disconnected from the rest of humanity. [Insert meme: "How much does a banana cost? Like $10?"]

Vast wealth disparity commonly causes the issues that you say are being normalized by people online, so I think you'd need quite a bit more evidence that that is the case than the already existing hypothesis provides.

miyoji•1h ago
I think it's true that there are more bad CEOs than good CEOs. I've seen good CEOs turn into bad CEOs, but I've never seen a bad CEO turn into a good CEO. I assume it does happen, but there's a strong cultural pressure (and many hundreds of millions of dollars) pushing bad CEO behavior and very little other than personal ethics pushing good CEO behavior, and when the incentives look like that, swimming upstream is hard.

> We have to shake them free from those ideas and get them to realize that there are good and bad companies out there and they have options.

Not everyone does have options, though. This is why instead of telling people to just avoid the bad CEOs, workers should unionize and collectively bargain against the bad CEOs. I'm sure I'll be seeing a lot of class warfare generalizations about "unions bad" in response to this suggestion.

philipallstar•51m ago
> Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

The endless re-rise of Marxism has made people assume that any punching is appropriate in the first place, and it's just a question of who. Saying "these are the people it's okay to punch" is dystopian.

nancyminusone•3h ago
When companies do something terrible (and they do, all the time) who are you going to blame for it? It's not at all surprising that CEOs have earned the reputation they have.
aphyr•3h ago
I am, oddly enough, the chief executive officer of two (trivially small) tech companies.
theredleft•2h ago
cheers. I think you're doing a good job and ruffling some feathers here! Your content has been great.

I highly recommend reading Marx. Your content touches on related Marxist topics like the 'Fetishism of Commodities' (Software as Witchcraft) and the Labor Theory of Value.

aphyr•2h ago
There's a copy of Das Kapital on the shelf behind me right now, though I don't count myself conversant enough to go super deep on class critique. Figured I'd point a few very vague fingers in that direction and let folks with more experience talk about it.
svilen_dobrev•31m ago
I read this the other day: https://jacobin.com/2026/03/work-deskilling-labor-capitalism...

Brushing the socialism aside (been there, seen that), it talks about deskilling as an inevitable consequence of technology. IMO, LLMing puts that on steroids, and eats higher up the mental chain.

Quarrelsome•1h ago
Btw, why am I, as a Brit, blocked via my traditional routing because of the OSA? What possible features do you have on that site to make that relevant?
DonaldPShimoda•2h ago
> people seem to put “CEOs” on a pedestal any time things come up, like they’re an alien life form

Might I suggest a viewing of the 2025 film "Bugonia"?

evan_a_a•2h ago
spoilers
tencentshill•2h ago
>My

And who are you? An account created for one post? There is a pattern of green accounts with usernames vaguely related to the subject matter of their comments.

Papazsazsa•3h ago
previously: https://news.ycombinator.com/item?id=47754379
dlev_pika•2h ago
I think I’ve seen this article posted every day for the past week or so
hk__2•2h ago
No you haven’t, because it was published today. What you’ve seen are past articles from the same author on the subject that all share the same "The Future of Everything Is Lies, I Guess:" prefix.
dlev_pika•1h ago
Oh that’s what’s going on? Was confused as to why the same title kept popping up. Thank you.
AndrewKemendo•3h ago
This has been on the front page for over a week in different forms what gives?

https://hn.algolia.com/?q=future+of+everything+is+lies

baal80spam•3h ago
There is new part added everyday.
0xbadcafebee•3h ago
> more like witchcraft than engineering

Welcome to web development buddy

> how ML might change the labor market

Human labor is expensive. If LLMs do make things cheaper and faster to produce, you don't need that many humans anymore. Again, assuming the improvement is real, there absolutely will be shrinkage for existing businesses in headcount. What remains to be seen is how much cheaper machines make work. 1.5x? 2x? 10x? 100x?

> unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries [...] The question is what happens when [..] all lose their jobs in the span of a decade

It's more like hand tools -> power tools; a concept applied to many things. Everyone will adopt them, and you'll need fewer workers who'll work faster with less skill. You get a gradual labor force shrinkage, but also an increase in efficiency, so it's not like a hole is opening up in your economy. A strong economy can create new jobs, from either private or public sources.

> ML allows companies to shift spending away from people and into service contracts with companies like Microsoft

The price of hardware, as always, is on a downward trend, while the efficiency of open weights is going up (it will plateau eventually, but it's still going up). We already spend $20,000 on servers, whether it's buying them once on-prem or renting them out in AWS. ML is just another piece of software running on another piece of hardware.

> if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital

That ship left port like 30 years ago dude. Laborers have no power in the 21st century.

fnimick•2h ago
> That ship left port like 30 years ago dude. Laborers have no power in the 21st century.

Maybe we should fix that.

altruios•34m ago
Less "maybe"; more "should have, yesterday." Do so now, today.
cratermoon•3h ago
"Another critical lesson is that humans are distinctly bad at monitoring automated processes".

Humans are also distinctly bad at noticing certain kinds of bugs in software. Think off-by-one errors, deadlocks, or any sort of bug you've stared at for days and not noticed the one missing or extra semicolon. But LLMs can generate a tsunami of subtly wrong code in the time a reviewer will notice one typo and miss all the rest.
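
The class of bug described above can be made concrete with a toy example (mine, not from the thread). The off-by-one below is exactly the kind of thing a reviewer can stare past for days, and that an LLM can mass-produce:

```python
# Toy illustration: an off-by-one that silently drops the last item.
def sum_first_n(items, n):
    total = 0
    for i in range(n - 1):  # bug: should be range(n)
        total += items[i]
    return total

def sum_first_n_fixed(items, n):
    total = 0
    for i in range(n):  # correct: covers indices 0..n-1
        total += items[i]
    return total

print(sum_first_n([1, 2, 3, 4], 3))        # 3 -- wrong, items[2] was skipped
print(sum_first_n_fixed([1, 2, 3, 4], 3))  # 6
```

Both versions type-check, run without error, and look almost identical; only the result reveals the bug.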

aphyr•3h ago
Yes. For more on this, see section 2: https://aphyr.com/posts/412-the-future-of-everything-is-lies...
cratermoon•2h ago
Ah, I see. I had not gotten that far. This reminds me of something from "Story of Your Life" by Ted Chiang: the sentence "The rabbit is ready to eat"[1]. Also this old chestnut from NLP:

Fruit flies like a banana. Time flies like an arrow.

[1] The movie Arrival is based on this novella.

intended•2h ago
> "Another critical lesson is that humans are distinctly bad at monitoring automated processes".

I believe the technical term is vigilance degradation?

curuinor•3h ago
Omnissiah-bothering, I call it.
mannanj•2h ago
> This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn’t want to be a nonprofit any more. There is no reason to believe that “AI” companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.

> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.

I think we are in general a highly naive, gullible class of people: we were conditioned, programmed, and put into environments where being this way was the norm and was rewarded. The leaders and resource extractors, whom we gullibly allow to trample over our dignity and our rights, take advantage of this and reinforce it through lobbying and influence over the mainstream culture and media campaigns around us. Further, if social media becomes a threat to their status, they have been shown to employ their influence there too, through censorship and more. We, therefore, may be best served by learning not to be gullible and growing some balls.

simianwords•2h ago
No, you don’t have to review every single line of code produced by AI out of security fears. This is quite exaggerated, and I think the author is biased by his own field.
recursive•2h ago
You're right. You don't have to. Unless you want correct and secure code.
layer8•56m ago
How do you determine which lines have to be reviewed?
vegancap•2h ago
How come this is blocked in the UK? :S
Jtarii•2h ago
I think he is trying to make some misguided political statement.
kentm•2h ago
His reasoning doesn't seem like a political statement: https://news.ycombinator.com/item?id=47754379#47757803

That seems very practical and well-reasoned to me.

jerf•2h ago
The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue but we'll also grow a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which I would remind people in its original, and generally better, formulation is simply an observation that there comes a point where you can't predict past it at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future, it is not the concept of the "singularity" itself.) Who knows what will happen.
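
The exponential-vs-sigmoid framing above can be sketched numerically (a toy model I'm adding here, with made-up `rate` and `cap` parameters): from the bottom, a logistic curve is nearly indistinguishable from the pure exponential it shadows, which is why evidence for both readings exists.

```python
import math

def exponential(t, rate=0.5):
    # Pure exponential growth: no ceiling.
    return math.exp(rate * t)

def logistic(t, rate=0.5, cap=1000.0):
    # Logistic (sigmoid) growth: same early slope, saturates at `cap`.
    return cap / (1.0 + (cap - 1.0) * math.exp(-rate * t))

# Early on the curves track each other closely; they only diverge
# as the logistic curve approaches its ceiling.
for t in (0, 2, 4, 10, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The early data points look the same under either model; only the later ones distinguish takeoff from saturation, which is the unpredictability the comment describes.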

forgetfreeman•2h ago
"given 10-ish years at least to adapt, we probably can"

Social media would like a word...

8n4vidtmkvmk•2h ago
We can adapt by shutting down social media. We don't really need that. It's been pretty bad since before the AI wave took off.
fellowniusmonk•1h ago
We needed a better phone book we ended up in a world where most of our fellow citizens fucking casino.
faangguyindia•2h ago
We are bottom. It's just a start.

We are in era of pre pentium 4 in AI terms.

fnimick•2h ago
And you have evidence as basis for this very confident statement... where?
faangguyindia•2h ago
Intuition. It comes from the spiritual awakening and being aware of your consciousness. Only Time will prove what turns out be right.
sophacles•2h ago
You worship the AI?
faangguyindia•2h ago
I see AI as having great utility, and we'll figure out ways to better it. If I had any power, I would run nuclear power plants to power AI datacenters and find other near-infinite sources of energy to create deeper and deeper AIs. This level of AI tech is in its infancy; that's evidently clear. People are assuming it will stall soon and won't go beyond a certain point. I don't believe this at all; I believe it will go much, much farther than this.
leptons•43m ago
An LLM is never, ever going to find "other near-infinite sources of energy". All it can do is predict the next word in an effort to make the user stop prompting it. That's all it does. It does not have the ability to find solutions to the world's problems.
hypercube33•1h ago
Weird comparison: the P4 was a major flop out of the gate (Rambus, anyone?) and by any good metric took three revisions (P4C, with hyper-threading) to come out where it should have, ahead of its predecessor. The Pentium 3 before it, which you are perhaps referring to, was the peak of its era. So... it's going downhill, right? Or what are you even saying?
ofjcihen•1h ago
I’m seeing these extremely short but supremely confident hot takes with nothing to back them up on HN more and more these days. It’s like X is leaking.
MagicMoonlight•2h ago
We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.

A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.

pixl97•1h ago
Sentience isn't intelligence.
echelon•2h ago
> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.

Even using the models we have today, we have revolutionized VFX, video production, and graphics design.

Similarly, many senior software engineers are reporting 2-10x productivity increases.

These tools are some of the most useful tools of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces for experts to leverage and harness the speed up and shape and control the outcomes, we're going to be in a very good spot.

These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.

We don't even need new models.

ryandrake•1h ago
> Similarly, many senior software engineers are reporting 2-10x productivity increases.

But are they making 2-10x compensation compared to before these tools? If not, these tools are not really useful to you, they are useful to your employer. The most shocking thing I find about LLM-assisted development is how gleefully we are just handing all this value over to our employers, simultaneously believing that they are great because we're producing more. Totally bonkers!

echelon•1h ago
> handing all this value over to our employers, simultaneously believing that they are great because we're producing more.

You could turn the table and say that you can now launch your own business with far fewer resources.

Who needs financial capital if you can do it all with solo / small team labor capital?

Gossip Goblin ditched his studio and now a16z is trying to throw him money, which he's turned down. He's turning everyone down.

https://www.youtube.com/watch?v=-Rzl7nUdEs4

Dude is legit talented and doesn't need studio capital anymore.

This is the end of the Hollywood nepotism pyramid, where limited production capital was available to only a handful of directors.

We're kind of at the start of a revolution here. I'd be way more worried if I were Disney or Paramount.

Couldn't you take a sabbatical and end it with a brand new SaaS you own and control? That's entirely within reach now.

The people this is going to hurt are the ICs that don't have a go-getting type personality where they take full-stack ownership: marketing, branding, design, customer relationships, etc. If you can do those things, you're going to be a rock star with total autonomy.

You ought to see what the indie game devs are doing with AI (when they aren't getting yelled at on Steam by the haters). It's legitimately incredible. Game designers are taking on full-stack ownership over the entire experience, and they're making some incredible stuff.

ryandrake•1h ago
> If you can do those things, you're going to be a rock star with total autonomy.

What percentage of developers can do these things? 1%? 0.1%? 0.01%? A very small percentage of developers have the desire to take on the full-stack, the temperament of good entrepreneurs, the product judgment of good Product Managers and ability of good Project Managers to juggle dependencies and timeframes. What about the rest of them? The remaining 99+% of us are just handing value over to our employers and getting a 5% raise in return--if we're lucky.

So, the fact that a small percentage of rockstar developers can capture the full value of AI-assisted development reinforces the point that a small number of people/businesses are capturing that value. The vast majority of workers are not capturing any value.

gilfaethwy•35m ago
So... a tiny fraction of people get to capture the value again, and at even greater environmental (and thus societal) cost than before? Wow, what a world.
nostrademons•2h ago
Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."

I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve, we'll find ways to make them more efficient and burn fewer GPU cycles, the cost will come down as more entrants mature.

But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state, mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.

tossandthrow•2h ago
You are very strong on the "slop" bias. Why?

In managing a large-to-enterprise-sized code base, I experience the opposite. I can guarantee a much more homogeneous quality of the code base.

It is the opposite of slop I am seeing. And that at a lower cost.

Today, I literally made a large and complex migration of all of our endpoints. It took AI 30 minutes, including all frontends using these endpoints. Works flawlessly; debt principal down.

chaps•2h ago
Which company do you work at so we can avoid your migrated endpoints?
tossandthrow•2h ago
Wtf. You don't even know what the migration was about?
chaps•2h ago
I mean, I'm always down for learning something new. But I hope what I learn includes the name of the company I'd like to avoid.
tossandthrow•1h ago
Your tone is in conflict with the statement that you are curious.
chaps•1h ago
It's because you're deflecting. :)
tossandthrow•1h ago
Deflecting from what? Telling the company name so you can avoid it due to your incredibly curious nature?
chaps•1h ago
Sigh.

Look friend, I really hope you can realize how you sound in your post. You're extraordinarily confidently saying that you refactored some ambiguous endpoints in 30 minutes. Whenever I see someone act that confidently about refactoring, thousands of alarms go off in my head. I hope you see how it sounds to others. Like, at least spend longer than a lunch break on it, with just a tad more diligence. Or hell, maybe even consider lying about how much time you spent on it. But my point is that your shortcuts will burn you. If you want to go down that path, I'm happy to be a witness to the eventual schadenfreude.

My issue isn't with the fact that you used AI. My issue is with how confident you are that it worked well and exactly to spec. I'm very well aware of what these systems can do. Hell, I've been able to get postgres to boot inside linux inside postgres inside linux inside postgres recently with these tools. But I'm also acutely aware of the aggressive modes that these systems can break in.

So again, which company should we all avoid so that we can avoid your, specifically your, refactoring?

bsmith•1h ago
All big tech companies are mandating that employees use AI for tasks. Unless there's a similar movement to open source that is AI-free, you're going to need to be tech-free if you want to avoid companies that use AI.
apsurd•2h ago
One point: yes, you're speaking from the power position. God-mode over a fleet of minions has always been an engineer's wet-dream. That's not even bad per se. It's the collateral damage downstream that's at issue. Maybe you don't see any damage, but that's largely the point. Is it really up to you to say?
tossandthrow•1h ago
What is the collateral damage? In ensuring that a bunch of endpoints use the same structure using LLMs?
apsurd•1h ago
Let's not debate that it's possible to make very large very safe changes. It is possible that you did that.

This is about "slop bias". I'd wager that empowering everyone, especially power-positions to ship 50x more code will produce more code that is slop than not. You strongly oppose this because it's possible for you to update an API?

I'm stuck on the power-position thing because I'm living it. I'm pro-AI but there are AI-transformation waves coming in and mandating top-down. From their green-field position it's undeniable crush-mode killin' it. Maintenance of all kinds is separate and the leaders and implementors don't pay this cost. Maybe AI will address everything at every level. But those imposing this world assume that to be true, while it's the line-engineers and sales and customer service reps that will bear the reality.

tossandthrow•1h ago
> Maybe AI will address everything at every level.

I think this is the idea you need to entertain / ponder more on.

I largely agree with you, what I don't agree with is the weighting about the individual elements.

My point was that I could do a 30-minute cleanup to streamline hundreds of endpoints. Without AI I would not have been able to justify this migration for business reasons.

We get to move faster, also because we can shorten deprecation tails and generally keep code bases fit more easily.

In particular, we have dropped the external backoffice tool, so we have a single mono repo.

AI does tasks all the way from the infrastructure (setting policies on resources) to the frontends.

Equally, if resources are not addressed in our codebase, we know with 100% certainty they are not in use and can be cleaned up.

Unused code audits are done on a weekly schedule, like our sec audits, robustness audits, etc.

apsurd•1h ago
Yeah, the more I debate the AI-lovers the more I can empathize with the possibility it may very well turn out to be everything is an Agent. Encodable.

I'm not a doomer either, but I do think this arc is a human arc: there's going to be a lot of collateral damage. To your point, Agents with good stewardship can also implement hygiene and security practices.

It's important we surface potential counter metrics and unintended side effects. And even in doing so the unknown unknowns will get us. With that said, I like this positive stewardship framing, I'll choose to see and contribute to that, thanks!

hliyan•2h ago
> Today, I literally made a large and complex migration of all of our endpoints. Took ai 30 minutes, including all frontends using these endpoints. Works flawlessly, debt principal down.

This is either a very remarkable or a very frightening statement. You're claiming flawless execution within the same day as the change.

If you're unable to tell us which product this is, can you at least commit to report back in a month as to how well this actually went?

tossandthrow•1h ago
It is a part of the smoke testing process right now.

But we run 90% test coverage, e2e tests, etc. None of these were altered, and all are passing.

Migrations are generally not that high risk if you have a code base in alright shape.

peterbell_nyc•1h ago
Seeing plenty of this. The quality of agentic code is a function of the quantity and quality of adversarial quality gates. I have seen no proof that an agentic system is incapable of delivering code that is as functional, performant and maintainable as code from a great team of developers, and enough anecdotes in the other direction to suggest that AI "slop" is going to be a problem that teams with great harnesses will be solving fairly soon if they haven't already.
apsurd•1h ago
I take your point but then it makes me think is there no more value in diversity?

[Philosophy disclaimer] So in a code-base diversity is probably a bad idea, ok that makes sense. But in an agentic world, if everything is run through the Perfect Harness then humans are intentionally just triggers? Not even that, like what are humans even needed for? Everything can be orchestrated. I'm not against this world, this is an ideal outcome for many and it's not my place to say whether it's inevitable.

What I'm conflicted on is does it even "work" in terms of outcomes. Like have we lost the plot? Why have any humans at all. 1 person billion dollar company incoming. Software aside, is the premise even valid? 1 person's inputs multiplied by N thousand agents -> ??? -> profit

bluecheese452•1h ago
Ironically, the post saying it is not slop sounds exactly like AI slop.
hn_throwaway_99•2h ago
> Somewhere around 2005-2007, when people were wondering if the Internet was done

Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the time period of 2005-2007 very well, and I don't remember a single person, at least in tech, thinking the Internet was done. I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that it was pretty obvious there was tons to do. E.g. we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example - social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.

magicalist•1h ago
> I don't know, maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling is that it was pretty obvious there was tons to do

Almost definitely professional ragebaiters in Wired or Time or whatever, yeah.

nostrademons•1h ago
If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.

There is, of course, the Paul Krugman quote from 1998 that by 2005 the Internet would be no more important than a fax machine. [1]

Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]

I remember, being at Google in ~2011, we used to laugh at the Wall Street analysts because they would focus on CPC numbers to forecast a valuation, which is important only if the number of clicks is remaining constant. We knew, of course that total Internet usage was still growing quite rapidly and that queries had increased by roughly 4x over the 2009-2013 timeframe.

And a lot of people will say "If you're so smart, why aren't you rich?", and I'll point out that many people who assumed the Internet had lots of room to grow in 2005-2007 did end up very rich. Google stock has increased roughly 20x since 2007 (and 40x from its 2009 lows). Meta is now worth $1.6T, a 100x increase over the $15B valuation that everyone thought was insane in 2007. Amazon is also up about 100x. It would not be possible to take the other side of the trade and make these kind of profits if the majority of people did not think the Internet was largely over.

[1] https://www.snopes.com/fact-check/paul-krugman-internets-eff...

[2] https://www.wired.com/2007/10/facebook-future/

lamasery•19m ago
> If you were in tech in 2005-2007 you were part of a small minority of the general population. It often didn't feel like a small minority because, well, you knew all those other people on the Internet, but that's a pretty strong selection bias.

Didn't we only pass 50% of households having a home PC in like... '00 or '01 or something? And I mean just in the US, which was way ahead of the curve.

> Here's Wired in 2007 saying, in reference to Facebook, "no company in its right mind would give it a $15 billion valuation". [2]

I actually think that's correct... if the smartphone hadn't taken off right after that. The "consumer" Internet and computing, the attention economy, et c., functionally is the smartphone. A desktop computer and even a laptop aren't in use when driving, at the store, at the park, every moment on vacation, et c. It'd still only be nerds lugging computers everywhere if nobody'd managed to make a smartphone that's capable-enough and pleasant-enough-to-use to expand the market beyond the set of folks who might have had a beeper in earlier years (the part of the market Blackberry was addressing). A gigantic proportion of the "GDP of the Internet", if you will, exists because smartphones exist.

Maxatar•1h ago
I was also in tech at that time, in fact I worked for Google during that period, and people definitely thought that the Internet had reached its peak. So many criticisms back then were not just about peak Internet, but that all these companies were blowing money on unproven business models, that they were unsustainable, unprofitable, all just hype.

You also had numerous telecommunications companies going bust in one of the largest sector collapses in modern financial history, the largest bankruptcy in history (at that time) was WorldCom, followed by the second largest bankruptcy in history with Global Crossing... Lucent Technologies went belly up and the largest telecom company at the time Nortel lost 90% of its value, eventually going bankrupt in 2009.

And then of course the great recession hit, tech companies took a massive blow, Microsoft, Google, Intel, Apple and other tech giants lost 50% of their stock value in a matter of months. You don't lose 50% of your value because people think you have a promising future.

It wouldn't be until the explosive rise of smart phones and close to zero percent interest rates that sentiment turned around and tech companies ballooned in value in what would end up being the longest bull run in U.S. history.

vharuck•1h ago
I agree with the gist of your points, but not much with these two:

>followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off.

These will be rare boutique affairs. Based on how mass production and cheap shipping played out, most people value price over quality. The economy will rearrange itself around those savings, making boutique products and services expensive.

>mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit.

We have this today. And that's not a "same as it ever was" dismissal. Today, there are a lot of terminally online people posting the equivalent of propaganda (and actual propaganda). Social media pushes hot takes in audiences' faces, a portion of them reshare it, and it spreads exponentially. The only limitation to propaganda today is how much time the audience spends staring at the "correct" content provider.

peterbell_nyc•1h ago
I model this as "stacked sigmoid curves". I have no reason to believe that any specific technological implementation will be exponential in impact vs sigmoidal.

However if we throw enough money and smart people at the problems and get enough value from the early sigmoid curves, the effective impact of a large number of stacked sigmoids could theoretically average to a linear impact, but if the sigmoids stay of a similar magnitude (on average) and appear at a higher velocity over time, you end up with an exponential made up of sigmoids*

* To be fair, it has been so long since I have done math that this may be completely incorrect mathematically - I'm not sure how to model it. However I think in practice more and more sigmoids coming faster and faster with a similar median amplitude is gonna feel very fast to humans very soon - whether or not it's a true exponential.

I'm honestly having a very hard time thinking through the likely implications of what's currently happening over the next 2-10 years. Anyone who has the answers, please do share. I'm assuming from Cynefin that it's a perturbed complex adaptive system, so I can just OODA or experiment, sense, and respond to what happens - not what I think might happen.
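The stacked-sigmoid picture above can be sketched numerically. This is purely illustrative: the choice of centering wave k at sqrt(k) * 10 is an arbitrary "waves arrive faster over time" schedule, not a claim about real technology timelines.

```python
import math

def sigmoid(t, center, scale=1.0):
    # One technology wave: slow start, rapid middle, plateau.
    return 1.0 / (1.0 + math.exp(-(t - center) / scale))

def stacked(t, n_waves=20):
    # Waves of similar magnitude arriving at an accelerating rate:
    # wave k is centered at sqrt(k) * 10, so the gaps between
    # successive wave midpoints shrink as time goes on.
    centers = [math.sqrt(k) * 10 for k in range(1, n_waves + 1)]
    return sum(sigmoid(t, c) for c in centers)

# Cumulative impact at successive times: the increments grow,
# i.e. the total "feels" faster than linear even though every
# individual component is a bounded S-curve.
impacts = [stacked(t) for t in (10, 20, 30)]
print(impacts)
```

Each component saturates, yet as long as new curves of similar amplitude keep arriving at a higher rate, the sum keeps accelerating, which matches the "feels very fast to humans" intuition without requiring any single exponential.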

fny•1h ago
Why is everyone so damn obsessed with the singularity? You don't need superintelligence to disrupt humanity. We easily have enough advancement to change the economy dramatically as is. The adoption isn't there yet.
Quarrelsome•1h ago
Moreover the singularity makes this crass assumption that a single player takes all. It seems to ignore a future of many, many AI players, or many, many human + AI players instead.

Furthermore, regardless of how smart one thing is, it cannot win infinite games of poker against 7 billion humans, who as a race are cognitively extremely diverse and adaptive.

ikrenji•1h ago
that's kind of optimistic. for example a misaligned super AI might engineer a virus that wipes out most of the 7 billion humans. that would put a damper on the adaptability of the human race...
fzzzy•1h ago
The singularity does no such thing.
kaibee•58m ago
> regardless of how smart one thing is, it cannot win towards infinite games of poker against 7 billion humans,

AI isn't one thing though. Really it's kind of a natural evolution of 'higher order life'. I think that something like an 'organization' (corps, governments, etc), once large enough, is at least as alive as a tardigrade. And for the people who are its cells, it is as comprehensible as the tardigrade is to any of its individual cells. So why wouldn't organizations over all of human history eventually 'evolve' a better information processing system than humans making mouth sounds at each other? (Writing was really the first step on this.) Really, if you look at the last 12,000 years of human society as actually being the first 12,000 years of the evolutionary history of 'organizations', it kinda makes a lot of sense. And so much of it was exploring the environment, trying replication strategies, etc. And we have a lot of different organizations now, like an evolutionary explosion, where life finds various niches to exploit.

/schitzoposting

jerf•49m ago
Even after I explained the exact usage I was invoking, the attractive nuisance of all the science fiction that has gotten attached to the term still prevented you and Quarrelsome from reading my post as written.

I really wish the term hadn't been mangled so much. Though the originator of the term bears a non-trivial amount of the responsibility for it, having written some rather good science fiction on the topic himself. The original meaning from the paper is quite useful and nothing has stepped up to replace it.

All the singularity means as I explicitly used it here is you entirely lose the ability to predict the future. It is relative to who is using it... we are all well past the Caveman Singularity, where no (metaphorical) caveman could possibly predict anything about our world. If we stabilize where we are now I feel like I have at least a grasp on the next ten years. If we continue at this pace I don't. That doesn't mean I believe AI will inevitably do this or that... it means I can't predict anymore, which is really the exact opposite. AI doesn't have to get to "superintelligence" to wreck up predictions.

gilfaethwy•43m ago
We've had enough advancement to change the economy for many decades, but the powers that be have insisted that, despite the lack of need, we continue to toil doing completely unnecessary work, because that's what's required to extend their fiefdoms.

Not that the singularity has any relevance here, either - except maybe that the robots take over, and the billionaires have missed the boat? I don't know.

lamasery•32m ago
> The adoption isn't there yet.

It's worth noting that after ~50 years[edit: to preempt nitpicking, yes I know we've been using computers productively quite a bit longer than that, but that's roughly the time when the computerized office started to really gain traction across the whole economy in developed countries], we've only extracted a tiny proportion of the hypothetical value of computers, period, as far as benefits to the economy and potential for automation.

I actually think a lot of the real value of LLMs is "just" going to be making accessing a little (only a little!) more of that existing unrealized benefit feasible for the median worker.

My expectation is that we'll also harness only a tiny proportion of the hypothetical value of LLMs. We're just not good enough at organizing work to approach the level of benefit folks think of when they speculate about how transformational these things will be. A big deal? Yes. As big a deal as some suppose? Probably not.

[edit: in positive ways, I mean. I think we're going to see huge boosts in productivity to anti-social enterprises. I'd not want to bet on whether the development of LLMs are going to be net-positive or net-harmful to humanity, not due to the "singularity" or "alignment" or whatever, but because of the sorts of things they're most-useful for]

juped•1h ago
Neither! A logistic curve is just an exponential with a carrying capacity - it is still an exponential! There is no reason to believe that AI capability, which grows logarithmically with the handwaved-resources used on it (roughly, this is compute and training data), grows, has grown, or is growing exponentially!

I know this sounds like "the moderate position" to people but you are accepting that something logarithmic is somehow in fact exponential (these are inverse functions of one another) based on no evidence or argument.

Here is Sam Altman, the one man in the world with the most incentive to overstate AI capability, accepting the extremely-well-known logarithmic growth: https://blog.samaltman.com/three-observations

What we see in reality is a basically-linear growth pattern due to pushing exponentially more resources into this logarithm.
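The arithmetic behind "exponential inputs pushed through a logarithm yields linear output" can be sketched directly. The log2 model below is purely illustrative, not a real scaling law:

```python
import math

def capability(resources):
    # Hypothetical model: capability grows logarithmically with
    # the resources (compute, data) poured in.
    return math.log2(resources)

# Exponentially increasing resource spend: double the inputs each period.
spend = [2 ** t for t in range(1, 6)]
caps = [capability(r) for r in spend]

# The capability gain per period is then constant, i.e. the output
# curve looks linear despite exponentially growing inputs.
gains = [round(b - a, 6) for a, b in zip(caps, caps[1:])]
print(gains)
```

Under this toy model, a 32x increase in spend buys the same capability increment as the previous 2x did, which is the shape of the argument being made here.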

_doctor_love•2h ago
Another interesting one from 'aphyr -- I think the points around the Ironies of Automation deserve deeper focus, possibly even a separate follow-up post.

I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers.

In the research I've done, these industries went through a similar journey in the 20th century as we are now: once something becomes automated enough, the old way simply won't work. You have to evolve new frameworks and procedures to deal with it.

So in the case of aviation they developed CRM and SRM - how to manage the airplane as a crew and how to manage it as a solo operator. Remember that modern airplanes are highly automated!! The human pilot is not typically hands-on-wheel for most of the flight.

In the case of surgeons, they found that de-skilling without regular practice can occur in as little as four weeks! So to combat that, some surgeons are now required to practice in a simulated environment to keep their skills sharp.

My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and US regulatory posture (or lack thereof) mean there are fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium-term, things will change.

aphyr•2h ago
Thank you for this! I really wanted to go deeper on human factors, and I think there's a lot to be said about CRM and sociotechnical systems design, especially when ML gets used for decision support. Ultimately wound up truncating that section (along with more of the economic critique) because the piece was already far too long.
intended•1h ago
There’s a paper out there, on designing IT systems from god knows when. It is incredibly dry, except for a line in it that stood out: All IT systems are political systems, because they decide how information and decisions flow.

I can only guess as to how much content you would have to explore on that axis.

_doctor_love•1h ago
You're welcome! I imagine you already know this one as well but just in case.

Learning to Learn by the late Dr Richard Hamming. See especially Chapter 2.

A point Hamming makes is that when transitions from hand to machine production occurred, usually what is built ends up changing as the old techniques don't transfer 1:1 from the old world.

So for instance, we went from nuts and bolts to rivets and welding (Dr Hamming's literal example). This required builders to produce an equivalent product to the old, built with different techniques - and crucially! - under tighter control limits.

The reason things are going all over the place with AI at the moment is that it's speed, speed, speed. They had an all hands at my company recently where the top brass talked about AI. The only thing mentioned was speed - go faster, do more, etc. Not a single soul talked about quality.

But if you know your software engineering wisdom you know that you can only pick two when it comes to speed, scope, or quality. It's going to get real dumb for a while until people realize/remember quality is how you achieve speed.

aphyr•59m ago
I have not read Hamming yet, thank you!
_doctor_love•34m ago
You're in for a treat :)
enraged_camel•2h ago
>> Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.

>> You would fire these people, right?

Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.

You'd want that person on your team, right? In fact, you would probably give them a promotion.

Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.

Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for. And AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades. Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
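The "pipeline as gate, not promise" point above can be made concrete. This is a minimal sketch, not anyone's actual CI setup; the trivial pass/fail commands stand in for a real test suite:

```python
import subprocess
import sys

def gate(test_cmd):
    """Return True only if the test command exits 0.

    The point: enforcement lives in the pipeline itself. Nothing
    merges on the strength of a promise (human or agent) that the
    tests were run.
    """
    result = subprocess.run(test_cmd)
    return result.returncode == 0

# A trivially passing and a trivially failing stand-in "test suite":
ok = gate([sys.executable, "-c", "assert 1 + 1 == 2"])
bad = gate([sys.executable, "-c", "assert 1 + 1 == 3"])
print(ok, bad)
```

The same logic applies to permissions: an agent (or intern) that cannot reach prod cannot delete prod, regardless of how confidently it claims it finished its work.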

aphyr•2h ago
> It personifies the tool as a colleague with agency,

Er, just to be clear, I am not personifying these tools. This entire section is a critique of the attempt to frame LLMs as "coworkers".

simoncion•2h ago
> But you don't fire a table saw because it doesn't know when to stop cutting, right?

If I purchased a table saw and that table saw irregularly and unpredictably jumped past its safeties -as we've plenty of evidence that LLMs [0] do-, then I would [1] immediately stop using that saw, return it for a refund, alert the store that they're selling wildly unsafe equipment, and the relevant regulators that a manufacturer is producing and selling wildly unsafe equipment.

[0] ...whether "agentic" or not...

[1] ...after discovering that yes, this is not a defective unit, but this model of saw working as designed...

enraged_camel•1h ago
But that's the thing: the table saw has safeties. Someone put them there. Without those safeties, it, too, would jump unpredictably.

Scary scenarios like AIs deleting home directories are the result of the developers explicitly bypassing those safeties.

m0llusk•2h ago
Bullshit is more dangerous than lies.
pixl97•35m ago
In enough quantity it becomes impossible to tell the difference

https://en.wikipedia.org/wiki/Brandolini%27s_law

elcapitan•2h ago
I really appreciate this series of posts, as it serves as a good summary of key points of the discourse around AIs, and links to the relevant articles etc. I find following all those discussions myself exhausting, so if I can find this all in one place and read it nicely grouped, that is very helpful.
buildbot•2h ago
I love the analogy of AI coding as witchcraft! It’s very accurate to how working with these tools feels - At one point I was forced to invoke a “litany against stubbing” in a loop to make claude code actually implement a renode setup for some firmware. That worked really well.

It feels like hexing the technical interview come to real life ;)

barbazoo•2h ago
> I continue to write all of my words and software by hand, for the reasons I’ve discussed in this piece—but I am not confident I will hold out forever.

There it is, an actual em-dash in the wild, written by hand.

aphyr•2h ago
I put... I'd guess around 60 hours into editing this piece, and had review from a dozen-odd friends, and I am still finding and fixing errors. I imagine that asking an LLM for a copyediting pass probably would have been helpful, but goshdarnit, I want to show that we can still write somewhat-passable prose by hand.
bluefirebrand•57m ago
> I want to show that we can still write somewhat-passable prose by hand

For what it's worth I think it's pretty reasonably good prose, not merely somewhat passable

aphyr•41m ago
Thank you <3
itissid•2h ago
Every day I sit down to build a product for my clients. I am a one-man shop _now_. Before, I had people helping me. My mental state is not good. A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things that are needed to make the AI agent work better. I begin to worry about problems that other people used to help me with and think "Can I do those too?" Problems like product design, devops work, etc. In a bid to try, I get nerd-sniped by the velocity people seem to have — and these are respected devs, not just Twitter claims. And because I am so bad at "doing it all", it's causing my mental health to suffer because of the long hours I have to put in. I miss the friends and colleagues I worked with.

I always struggled with coding before 2023, but I made ends meet and put food on the table and could work sane hours and knew what I needed to do. Logically I should have been happy that I did not have to grind on code — and some days I truly am — but that it would yield such poor quality of life at such a high cost was not what I expected...

artur_makly•2h ago
you can always course-correct and find your sweeter spot.
itissid•1h ago
For course correction, I began by trying to think a bit more about solving problems for my clients by talking to them more often. That helps to some extent, because I feel happy talking to them to understand how to solve their problems.

What I do feel is the issue is that I just have to do everything to keep costs down: the "hire another dev vs. do it with AI" consideration is real, and it has collateral damage. I spend more time trying to build AI agents to do the work, and there are one or two fewer jobs I create.

itissid•2h ago
For anyone who has not read the cockpit recording of Air France 447, I would encourage them to[1]. It is simply a jaw-dropping study in how things go wrong so fast — a risk with AI we have barely begun to acknowledge, let alone regulate, as a community.

[1](https://tailstrike.com/database/01-june-2009-air-france-447/)

macrocosmos•1h ago
That catastrophe is entirely on Bonin the bonehead.
tra3•1h ago
I read through the link. The other pilot and the captain are complicit by the virtue of being there. Autopilot disengages at 2:10 and they crash at 2:14. Terrible.

My other immediate thought -- Tesla's autopilot. I've never used it so I'm not sure I'm fully correct here, but apparently it requires you to be vigilant and take over in certain situations? Wonder how well that works out in practice.

jcalvinowens•1h ago
Anybody who is interested should read the full report: https://www.faa.gov/sites/faa.gov/files/AirFrance447_BEA.pdf
groby_b•2h ago
I really wish we'd stop arguing about AI with an "some automation failed, so all automation is bad" approach.

Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.

Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.

Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.

Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.

And so a lot of the piece spends time arguing strawmen propped up by anecdotes. And that detracts from the deeply necessary discussion kicked off in the second part, on labor shock, capital concentration, and fever dreams of AI.

The problem of AI isn't that it's useless and will disrupt the world. It's that it's already extremely useful - and that's the thing that'll lead to disrupting the world.

tra3•1h ago
I think you're maybe oversimplifying a bit. I don't think the argument here is that "AI" is not 100%, so we shouldn't use AI. There are issues we need to be aware of.

Specifically, AI companies want to inflate the utility of AI because that's how they make money. There should be guardrails where appropriate. Unfortunately, as usual, we need to make mistakes before we can learn from them.

Robotaxis do exist, but they are not made equal. Tesla's for instance are 4x worse than humans: https://electrek.co/2026/02/17/tesla-robotaxi-adds-5-more-cr...

_dwt•58m ago
I think you may have missed a subtle point: there is an especial risk from automation which almost always works correctly. The aviation industry calls the phenomenon "automation fatigue". It's very difficult for humans to stay alert and monitor systems like these, and the use of the systems tends to lead to de-skilling over time in the very skills required to monitor them and fix the (rare but fatal - at least in aviation) error cases when they occur.
GistNoesis•2h ago
Programming is indeed becoming witchcraft; with LLMs it is of the utmost importance that you choose the right database administrator.

For example, I'm now relying on Soteria, the Greek goddess of safety, salvation, and preservation from harm, to act as my database administrator.

drivebyhooting•1h ago
In the case of UBI, how would we differentiate between a previously highly paid professional (SWE, lawyer, author) and a pauper (janitor, car washer, unemployed)?

It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?

stevenally•1h ago
"But then how can the former category continue to fulfill their obligations?".

They can't. Just like the steel workers who lost their jobs in the 1970s.

intended•1h ago
Does Aphyr give himself a limit of 6 semicolons? If their editor returns, will this count drop to 0?

(And before anyone brings pitch forks out, this is what they wrote in a previous article:

> “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.

My life was made poorer for knowing that semicolons are apparently a sin, but richer for the rebellion.

keeganpoppen•1h ago
i respect the author of this post wayyyy too much to ever imply that i know more than them, or that i even have proprietary knowledge that they, themselves do not possess. i admire aphyr, and i aesthetically agree with many of the arguments offered forth. but this whole thing feels a bit cherry-picked— i’m not gonna go chapter-and-verse (cf. belt-and-suspenders) about it, but on some levels this comes across as a bit superficial. i think the general thrust— that ai is a sort of Narcissus’s pond— is completely a reasonable and well-considered take. but i would be shocked if someone with the intellectual powers of someone like Aphyr has never had an interaction with an ai in which they felt like they were interacting with the deep recesses of their mind in a way both profound and, more importantly, productive. and yeah, there’s plenty of pyrite in them thar hills. but, it does have this almost Lord of the Rings The One Ring -esque pull when you get into a certain “embedding space” (/ thought space) in a certain thread conversing with ai. it genuinely is a profound transformation of cognition, and working superlinearly productively with it is a matter of “when”, not “if”. i share all the same aesthetic concerns, and all the deeper ones. but there have been sessions that i have had with ai that made me blankly stare up at the heavens as well, and i don’t think i’m anywhere near the only one.
mrdependable•52m ago
Care to provide any examples of what sort of content is in these conversations you had with AI?
hliyan•1h ago
> One of her key lessons is that automation tends to de-skill operators

I recently discovered an example of this phenomenon in a completely unrelated area: navigation. About a week ago, I realized that I couldn't remember the exact turns to reach a certain place I started driving to recently, even after having driven there about 3-4 times over a period of a month. Each time I had used Google Maps. When I used to drive pre-Google-Maps, I would typically develop a good spatial model of a route by my third drive. This skill seems to have atrophied now. Even when I explicitly decide to drive without Google Maps, and make mental notes of the turns, my retention of new routes is much weaker than it used to be. Thankfully, routes I retained before becoming Google Maps dependent are still there.

acoard•1h ago
Plato on how reading and writing make us more forgetful as we rely on this new technology:

> And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

ofjcihen•1h ago
I see this copy-pasted everywhere these days, but it misses a huge point, which is that written things don't read or understand themselves.
_dwt•1h ago
"Yes, Socrates, you can easily invent tales of Egypt, or of any other country."
wslh•1h ago
I wonder if vibe coding is partly what happens when software engineering fails to converge on reusable abstractions. Instead, we got fragmented tools and endless reinvention of the same components, and LLMs arrived as an ad hoc abstraction layer on top.
Terr_•38m ago
Copy-paste-and-hope As A Service.
asdfman123•1h ago
> I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations (“ALWAYS run the tests!”), and invoke LLM daemons who write software on their behalf.

This sort of prompting is only necessary now because LLMs are janky and new. I might have written this in 2025, but now LLMs are capable of saying "wait, that approach clearly isn't working, let's try something else," running the code again, and revising their results.

There's still a little jankiness but I have confidence LLMs will just get better and better at metacognitive tasks.

UPDATE: At this very moment, I'm using a coding agent at work and reading its output. It's saying things like:

> Ah! The command in README.md has specific flags! I ran: <internal command>. Without these flags! I missed that. I should have checked README.md again or remembered it better. The user just viewed it, maybe to remind me or themselves. But let's first see what the background task reported. Maybe it failed because I missed the flags, or passed because the user got access and defaults worked.

AI is already developing better metacognition.
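
The revise-on-failure loop described above can be sketched in a few lines; `generate` and `run_tests` here are hypothetical stand-ins, not any real agent API:

```python
def agent_loop(task, generate, run_tests, max_attempts=3):
    """Minimal sketch of an agent's try/revise loop.

    generate(task, feedback) returns candidate code; run_tests(code)
    returns (passed, output). Both are hypothetical stand-ins supplied
    by the caller.
    """
    feedback = None
    for _ in range(max_attempts):
        code = generate(task, feedback)
        passed, output = run_tests(code)
        if passed:
            return code
        # "Wait, that approach clearly isn't working - try something else."
        feedback = output
    return None
```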

baliex•1h ago
Is anyone else just getting this?

  <h1>Unavailable Due to the UK Online Safety Act</h1>
omega3•1h ago
The answer has always been the same: self-regulated professions and trade unions. Instead, the ever-efficient software engineers have efficiently dug their own grave. The regulated professions aren't going to be affected by AI because their members understand that preservation of job security[0], pay, and QOL is more important than automating themselves out of existence.

[0] https://www.bma.org.uk/news-and-opinion/medical-degree-appre...

npodbielski•33m ago
Yes, this is so true. But we never thought about that; instead we thought about how much smarter, better, and more productive we are than other people in similar positions.

Also you forgot the link?

rambambram•37m ago
The comparison with sociopaths is a good one. On the surface it's all human behavior, but if you lift the veil even a little bit, it becomes clear there's no substance, no conscience, etc.

Read up on Cluster B personality disorders (borderline, narcissism, sociopaths/psychopaths) and you see the similarities. Love bombing, gaslighting, a shared fantasy, etc. It's very interesting and scary at the same time.

sambuccid•33m ago
Great article. Near the end it talks about where the money goes and whether there will be universal basic income. I think those paragraphs had an assumption that if models get very smart, all the money will go to big tech.

But thanks to all the companies working on open-weight models, I'm starting to think this might no longer happen. Currently, open-weight models are said to be just months behind the top players (and I think we should really try to do what we can to keep it that way).

I'm wondering what the predictions would be in the case where AI becomes very powerful, but models are also generally available.

Two possibilities come to mind. In the first, all the money no longer spent on employment would go towards hardware. New hardware manufacturers or innovators could jump in and create a bit more employment, but eventually it would probably all converge on the one finite resource in the chain: the materials/minerals needed for the hardware. Those materials might become the new "petrol". It's possible that eventually we would have built enough chips to power all the AI we need without further extraction, but I wouldn't underestimate our ability to waste resources when they feel abundant.

In the second possibility, alongside a very powerful open-weight LLM, there could be big performance advancements, which would make hardware no longer the bottleneck. But I'm struggling to imagine this scenario; maybe we would all be better off? Or maybe we would all just be depressed because most people won't feel "useful" to society or their peers anymore?