frontpage.

Halt and Catch Fire: TV's Best Drama You've Probably Never Heard Of (2021)

https://www.sceneandheardnu.com/content/halt-and-catch-fire
56•walterbell•1h ago•17 comments

Claude Sonnet 4.6

https://www.anthropic.com/news/claude-sonnet-4-6
927•adocomplete•9h ago•838 comments

Thank HN: You helped save 33k lives

519•chaseadam17•10h ago•65 comments

Thousands of CEOs just admitted AI had no impact on employment or productivity

https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technol...
153•virgildotcodes•1h ago•86 comments

BarraCUDA: Open-source CUDA compiler targeting AMD GPUs

https://github.com/Zaneham/BarraCUDA
188•rurban•7h ago•56 comments

Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway

https://asteroidos.org/news/2-0-release/index.html
297•moWerk•8h ago•34 comments

Minimal x86 Kernel in Zig

https://github.com/lopespm/zig-minimal-kernel-x86
33•lopespm•3h ago•7 comments

Gentoo on Codeberg

https://www.gentoo.org/news/2026/02/16/codeberg.html
267•todsacerdoti•10h ago•93 comments

Using go fix to modernize Go code

https://go.dev/blog/gofix
299•todsacerdoti•10h ago•66 comments

I swear the UFO is coming any minute

https://www.experimental-history.com/p/i-swear-the-ufo-is-coming-any-minute
116•Ariarule•5h ago•34 comments

Google Public CA is down

https://status.pki.goog/incidents/5oJEbcU3ZfMfySTSXXd3
183•aloknnikhil•2h ago•98 comments

So you want to build a tunnel

https://practical.engineering/blog/2026/2/17/so-you-want-to-build-a-tunnel
181•crescit_eundo•10h ago•76 comments

Reverse Engineering Sid Meier's Railroad Tycoon for DOS from 1990

https://www.vogons.org/viewtopic.php?t=105451
23•LowLevelMahn•3d ago•3 comments

Async/Await on the GPU

https://www.vectorware.com/blog/async-await-on-gpu/
163•Philpax•10h ago•48 comments

Assistant to the Regional Manager

https://smallpotatoes.paulbloom.net/p/assistant-to-the-regional-manager
75•NaOH•4d ago•31 comments

Is Show HN dead? No, but it's drowning

https://www.arthurcnops.blog/death-of-show-hn/
424•acnops•17h ago•360 comments

It's not just you, YouTube is partially down in outage

https://9to5google.com/2026/02/17/youtube-outage-february-2026/
31•aqeelat•1h ago•2 comments

Structured AI (YC F25) Is Hiring

https://www.ycombinator.com/companies/structured-ai/jobs/q3cx77y-gtm-intern
1•issygreenslade•6h ago

Show HN: I wrote a technical history book on Lisp

https://berksoft.ca/gol/
170•cdegroot•11h ago•63 comments

Show HN: Pg-typesafe – Strongly typed queries for PostgreSQL and TypeScript

https://github.com/n-e/pg-typesafe
49•n_e•9h ago•21 comments

I converted 2D conventional flight tracking into 3D

https://aeris.edbn.me/?city=SFO
223•kewonit•12h ago•45 comments

Physicists Make Electrons Flow Like Water

https://www.quantamagazine.org/physicists-make-electrons-flow-like-water-20260211/
83•rbanffy•4d ago•8 comments

A Brief History of Xenopus

https://www.asimov.press/p/xenopus
3•surprisetalk•4d ago•0 comments

HackMyClaw

https://hackmyclaw.com/
266•hentrep•10h ago•138 comments

'My Words Are Like an Uncontrollable Dog': On Life with Nonfluent Aphasia

https://thereader.mitpress.mit.edu/my-words-are-like-an-uncontrollable-dog-on-life-with-nonfluent...
30•anarbadalov•4h ago•5 comments

Create bootable ISO image files which are compatible with the Amiga CD32

https://github.com/fuseoppl/isocd-win
10•doener•4h ago•0 comments

Use Microsoft Office Shortcuts in Libre Office

https://github.com/Zaki101Aslam/MS-office-shortcuts-for-Libre-Office
21•Zaki101Aslam•2d ago•5 comments

I Use Obsidian

https://stephango.com/vault
27•hisamafahri•5h ago•23 comments

Show HN: Box of Rain - Auto-Layouted ASCII Diagrams

https://github.com/switz/box-of-rain
17•switz•3d ago•9 comments

Contra "Grandmaster-level chess without search" (2024)

https://cosmo.tardis.ac/files/2024-02-13-searchless.html
33•luu•2d ago•2 comments

Thousands of CEOs just admitted AI had no impact on employment or productivity

https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/
149•virgildotcodes•1h ago

Comments

virgildotcodes•1h ago
https://archive.is/L70Ha
DaedalusII•1h ago
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.
jeron•1h ago
it turns out it's really hard to get a man to fish with a pole when you don't teach them how to use the reel
throwawaysleep•1h ago
Or give them a stick with twine and a plastic fork as a hook, as is the case with Copilot.
Banditoz•1h ago
If AGI is coming, won't there just be autofishers and no one will ever have to fish again, completely devaluing one's fishing knowledge and the effort put in to learn it?
moregrist•25m ago
It’s not a great analogy but...

“Autofishers” are large boats with nets that bring in fish in vast quantities that you then buy at a wholesale market, a supermarket a bit later, or they flash freeze and sell it to you over the next 6-9 months.

Yet there’s still a thriving industry selling fishing gear. Because people like to fish. And because you can rarely buy fish as fresh as what you catch yourself.

Again, it’s not a great analogy, but I dunno. I doubt AGI, if it does come, will end up working the way people think it will.

conductr•1h ago
In regards to copilot, they’ve also been led on a fishing expedition to the middle of a desert
chaos_emergent•1h ago
100%. All of the people who are floored by AI capabilities right now are software engineers, and everyone who's extremely skeptical basically has any other office job. On investigation, their primary AI interaction surface is Microsoft Copilot, which has to be the absolute shittiest implementation of any AI system so far. As a progress-driven person, it's just super disappointing to see how few people are benefiting from the productivity gains of these systems.
dboreham•1h ago
This isn't my experience. I see many non-software people using AI regularly. What you may be seeing is more that organizations with no incentive to do things better never did anything to do things better. AI is no different; they were never doing things better with pencil and paper either.
DaedalusII•1h ago
I think Anthropic will succeed immensely here because, when integrated with Microsoft 365 and especially Excel, it basically does what Copilot said it would do.

The moment of realisation happens for a lot of normoid business people when they see Claude make a DCF spreadsheet or search emails.

Claude is also smart because it visually shows the user as it resizes columns, changes colours, etc. Seeing the computer do things makes the normoid SEE the AI, despite it being much slower.

sheeshkebab•1h ago
No one wants a chatbot "integrated" with Excel and Office 365 crap; it's Clippy 2.0 bullshit.

Replace Excel and Office stuff with an AI model entirely, then people will pay attention.

DaedalusII•45m ago
That only works if you can one-shot, and nobody can one-shot.

Iterating over work in Excel and seeing it update correctly is exactly what people want. If they get it working in MS Word it will pick up even faster.

If the average office worker can get the benefit of AI by installing an add-on into the same office software they have been using since 2000 (the entire professional career of anyone under the age of 45), then they will do so. It's also really easy to sell to companies because they don't have to redesign their teams or software stack, or even train people that much; the board can easily agree to budget $20 a head for Claude Pro.

The other thing normies like is that they can put in huge legacy spreadsheets and find all the errors.

Microsoft 365 has 400 million paid seats.

yodsanklai•1h ago
I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.

For my team at least, the productivity boost is difficult to quantify objectively. Our products and services have still tons of issues that AI isn't going to solve magically.

It's pretty clear that AI is allowing us to move faster on some tasks, but it's also detrimental for other things. We're going to learn how to use these tools more efficiently, but right now, I'm not convinced about the productivity gain.

david_shaw•17m ago
> I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.

What improvements have you noticed over that time?

It seems like the models coming out in the last several weeks are dramatically superior to those mid-last year. Does that match your experience?

nrclark•5m ago
Not the grandparent, but I've used most of the OpenAI models that have been released in the last year. Out of all of them, o3 was the best at the programming tasks I do. I liked it a lot more than I like GPT 5.2 Thinking/Pro. Overall, I'm not at all convinced that models are making forward progress in general.
techpression•9m ago
In a team of one at work I see clear benefits, but having worked in many different team sizes for most of my career I can see how it quickly would go down, especially if you care about quality. And even with the latest models it’s a constant battle against legacy training data, which has gotten worse over time. ”I have to spend 45 minutes explaining why a one minute AI generated PR is bad code” was how an old colleague summarized it.
dimitri-vs•1h ago
IMO Copilot was "we need to give these people rope, but not enough for them to hang themselves". A non technical person with no patience and access to a real AI agent inside a business is a bull in a china shop. Copilot Cowork is the closest thing we have to what Copilot should have been and is only possible now because models finally got good enough to be less supervised.

FWIW Gemini inside Google apps is just as bad.

tehjoker•1h ago
There is probably a threshold effect above which the technology begins to be very useful for production (other than faking school assignments, one-off-scripts, spam, language translation, and political propaganda), but I guess we're not there yet. I'm not counting out the possibility of researchers finding a way to add long term memory or stronger reasoning abilities, which would change the game in a very disorienting way, but that would likely mean a change of architecture or a very capable hybrid tool.
DaedalusII•1h ago
The greatest step change will be when mainstream businesses realise they can use AI to accurately fill in PDF documents with information in any format.

Filling in PDF documents is effectively the job of millions of people around the world.

bubblewand•1h ago
My company's behind the curve: I just got nudged today that I should make sure my AI-use numbers aren't low enough to stand out, or I may have a bad time. I reckon we're a minimum of six months from "oh whoops, that was a waste of money", maybe even a year (unless the AI market very publicly crashes first).
mr_toad•59m ago
So management basically have no clue and want you to figure out how to use AI?

Do they also make you write your own performance review and set your own objectives?

datenyan•35m ago
> So management basically have no clue and want you to figure out how to use AI?

This is basically the same story I have heard both my own place of employment and also from a number of friends. There is a "need" for AI usage, even if the value proposition is undefined (or, as I would expect, non-existent) for most businesses.

fn-mote•33m ago
Look, to make something productive out of it: a job seeker who has high-level skills using LLM assistance will be much more valuable than one without the experience. Never mind your current company management's policies.
Herring•1h ago
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast, we just waste time in meetings, or waiting for approval, or a lot of tasks can't be parallelized, etc. Before upgrading, you need to know if you're I/O Bound vs CPU Bound.
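The I/O-bound-vs-CPU-bound framing above maps directly onto Amdahl's law: if coordination is the serial fraction, speeding up individual workers barely moves the total. A toy sketch in Python, with purely illustrative numbers:

```python
def amdahl_speedup(serial_fraction: float, worker_speedup: float) -> float:
    """Overall speedup when only the parallelizable (individual-work) part gets faster."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / worker_speedup)

# Assumed: 60% of wall-clock time is coordination (meetings, approvals, handoffs).
# Even a 10x boost to individual output then yields only a ~1.6x overall gain.
print(round(amdahl_speedup(serial_fraction=0.6, worker_speedup=10.0), 2))  # → 1.56
```

That is the I/O-bound case: the CPU upgrade helps little until the coordination overhead itself shrinks.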
Haven880•1h ago
I think both. Most organizations lack someone like Steve Jobs to prime their product lines; Microsoft is a good example, where their products over the years are mostly meh. Meetings are also pervasive, even more so in most companies thanks to the convenience of MS Teams. But companies currently face reduced demand due to a softer market compared to 2-3 years ago. If you observe no effect while they lay off many people and revenue still holds, or at least shows no negative growth, I would surmise that AI is helping. But in corporate, it only counts if it directly contributes to sales numbers.
amrocha•1h ago
Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?
Herring•1h ago
> 4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026.

https://newsletter.semianalysis.com/p/claude-code-is-the-inf...
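For scale, that quoted trajectory implies steep compound growth. A back-of-envelope, assuming a roughly 10-month horizon (this thread dates to February 2026); the 4% and 20% figures are only the ones quoted above:

```python
# Quoted: 4% of public commits today, a projected 20%+ by end of 2026.
start_share, end_share, months = 0.04, 0.20, 10  # assumed ~10-month horizon
monthly_growth = (end_share / start_share) ** (1 / months) - 1
print(f"implied compound growth: {monthly_growth:.1%}/month")  # → 17.5%/month
```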

amrocha•43m ago
There's lots of slop out there; that doesn't mean it's actually good or useful code.
simonw•31m ago
Keep moving those goal posts.
amrocha•21m ago
I deliberately asked for amazing open source projects. I've yet to see a single AI-coded project I would use.

Keep licking those boots.

techpression•18m ago
They didn’t, amazing open source was asked for, meaningless stats were given. Not that GitHub public repositories were amazing before AI, but nothing has changed since, except AI slop being a new category.
jdlshore•16m ago
Doesn’t look like goal-post moving to me. GP argued that AI isn’t making a difference, because if it was, we’d see amazing AI-generated open source projects. (Edit: taking a second look, that’s not exactly what GP said, but that’s what I took away from it. Obviously individuals create open source projects all the time.)

You rebutted by claiming 4% of open source contributions are AI generated.

GP countered (somewhat indirectly) by arguing that contributions don’t indicate quality, and thus wasn’t sufficient to qualify as “amazing AI-generated open source projects.”

Personally, I agree. The presence of AI contributions is not sufficient to demonstrate “amazing AI-generated open-source projects.” To demonstrate that, you’d need to point to specific projects that were largely generated by AI.

The only big AI-generated projects I’ve heard of are Steve Yegge’s GasTown and Beads, and by all accounts those are complete slop, to the point that Beads has a community dedicated to teaching people how to uninstall it. (Just hearsay. I haven’t looked into them myself.)

So at this point, I’d say the burden of proof is on you, as the original goalposts have not been met.

Edit: Or, at least, I don’t think 4% is enough to demonstrate the level of productivity GP was asking for.

emp17344•13m ago
Weirdly aggressive response, and not a particularly good one either, considering no one agreed to the damn goalposts in the first place.
com2kid•55m ago
Seemingly every day on Show HN?

Also small businesses aren't going to publish blog posts saying "we saved $500 on graphic design this week!"

amrocha•42m ago
Is saving $500 by generating some shitty AI art the bar? I thought this was supposed to replace entire departments.
afavour•28m ago
Someone asked “where are all the small businesses”, this was a reply to that. Small businesses don’t have entire art departments.
amrocha•19m ago
Gotcha, so the impact of AI is that small businesses get to save a couple hundred dollars, and the cost is only 2% of your country's GDP. That's good.
8note•1h ago
Operationally, I think new startups have a big advantage in setting up to be agent-first. They might not be as good as the old human-first stuff, but they'll be much cheaper and more nimble as models improve.
kamaal•53m ago
Startups mostly move fast by skipping the ceremony which large corps have to perform to prevent a billion-dollar product from melting down. It's possible for startups because they don't have a billion dollars to start with.

Once you do have a billion-dollar product, protecting it requires spending time, money, and people. Building a new one is a lot more effort than protecting an existing one from melting down.

kjellsbells•1h ago
Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that's not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.
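The cache analogy can be pushed a step further with the standard expected-access-time formula; the numbers below are purely illustrative assumptions, not measurements:

```python
def expected_cost(hit_rate: float, hit_cost: float, miss_cost: float) -> float:
    """Average cost of answering a question, cache-style: hits are cheap, misses are not."""
    return hit_rate * hit_cost + (1.0 - hit_rate) * miss_cost

# Assumed: asking an experienced colleague costs ~1 hour (cache hit);
# re-deriving the answer from scratch costs ~40 hours (cache miss).
print(round(expected_cost(hit_rate=0.9, hit_cost=1.0, miss_cost=40.0), 2))  # → 4.9
```

Even a modest drop in hit rate (stale or bad cache entries) dominates the average, which is exactly the "bad data in the cache" failure mode.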
canyp•42m ago
L2? I'm hot L1 material, dude.

But I like your and OP's analogy. Also, the productivity claims are coming from the guys in main memory or even disk, far removed from where the crunching is taking place. At those latency magnitudes, even riding a turtle would appear like a huge productivity gain.

sebmellen•1h ago
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.

Other white collar business/bullshit job (ala Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.

Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.

What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.

But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.

DaedalusII•1h ago
When the work involves navigating a bunch of rules with very ambiguous syntax, AI will automate it to the same degree that computers automated rules-based systems with very precise syntax in the 1990s.

https://hazel.ai/tax-planning

This software (which I am not related to or promoting) is better at investment planning and tax planning than over 90% of RIAs in the US. It will automate RIAs to the degree that trading software automated stockbroking. This will reduce the average RIA fee from 1% per year to 0.20% or even 0.10% per year, just like mutual fund fees dropped in the early '00s.

arctic-true•33m ago
You could have beaten the returns of most financial professionals over the last several years by just parking your money in the S&P 500, and yet plenty of people are still making a lucrative career out of underperforming it. In some fields, “being better and cheaper” does not always spell victory.
DaedalusII•12m ago
You are right on beating money managers. When I said investment planning, I meant planning the size and tax structures of investments. This software automates all of the technical work that goes on inside financial planning firms, which is done by tens of thousands of white collar professionals in the US/UK/EU, etc. It will then lead to price competitiveness.

More expensive silly companies will exist, but the cheap ones get the scale. S&P 500 index funds have over $1 trillion in the top 3 providers; Cathie Wood has like $6-7 billion.

BNY Mellon is the custodian of $50 trillion of investment assets; Robinhood has $324bn.

Silly companies get the headlines, though.

lich_king•1h ago
> making slides/decks to communicate those thoughts,

That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?

It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.

sebmellen•1h ago
Yeah, but this is self-correcting. Eventually it will get to a point where the data that you use to prompt the LLM will have more signal than the LLM output.

But if you get deep into an enterprise, you'll find there are so many irreducible complexities (as Stephen Wolfram might coin them), that you really need a fully agentically empowered worker — meaning a human — to make progress. AI is not there yet.

LPisGood•1h ago
I'm confused: what kind of software engineering jobs are there that don't involve meeting with people, "aligning expectations", getting consensus, making slides/decks to communicate, thinking about market positioning, etc.?

If you weren't doing much of that before, I struggle to think of how you were doing much engineering at all, save some niche, extremely technical roles where many of those questions were already answered. But even then, I'd expect you're having those kinds of discussions, just more efficiently and with other engineers.

sebmellen•40m ago
Well that’s why AI will not replace the software engineer!
tayo42•35m ago
Ime a team or project lead does that and the rest of the engineers maybe do that on a smaller scale but mostly implement.
8note•37m ago
> unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.

Huh? Maybe I'm in the minority, but the thinking:coding ratio has always been 80:20 for me: spend a ton of time thinking and drawing, then write once, debug a bit, and it works.

This hasn't really changed with LLM coding either, except that for the same amount of thinking, you get more code output.

sebmellen•15m ago
Yeah, ratios vary depending on how productive you are with code. For me it was 50:50 and is now 80:20, but only because I was a relatively unproductive coder (struggled with language feature memorization, etc.) and a much more productive thinker/architect.
maininformer•1h ago
Thousands of companies will be replaced by leaner counterparts that learned to use AI towards greater employment and productivity.
SilverElfin•1h ago
These surveys don't make sense. Ask the forward-thinking companies and they'll say the opposite. The flood of anti-AI-productivity articles almost feels like it's meant to lull the population into not seeing what's about to happen to employment.
throwawaysleep•1h ago
Eh, try using Microsoft Copilot in Word or PowerPoint. It is worthless. If your experience with AI was a Microsoft product, you would think it was a scam too.
yoyohello13•45m ago
Yeah, Microsoft has consistently been bragging about how much of its code is written by AI, yet their products are worse than ever. That seems to indicate "using AI" is not enough; you have to be smart about when and where.
conductr•37m ago
It's not just that, though. When going through AI projects in an organization, you find that many times the process is manual for a reason. This isn't the first wave of "automation" that's come through. Most things that can be fully automated already have been, long ago, and the manual parts get sold as "we can make AI do it". Then you see the specs, noodle around on the problem, and realize it's probably just going to remain manual, because the model training required takes as much time and effort as just doing it by hand.
pram•1h ago
It’s funny because at work we have paid Codex and Claude but I rarely find a use for it, yet I pay for the $200 Max plan for personal stuff and will use it for hours!

So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?

crazygringo•1h ago
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show a clear benefit to the economy overall.

The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

And so we should expect AI to look the same: it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are currently at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down with time, as we learn to integrate it with best practices.

[1] https://en.wikipedia.org/wiki/Productivity_paradox

kamaal•59m ago
One part of the system moving fast doesn't change the speed of the system all that much.

The thing to note is, verifying if something got done is harder and takes time in the same ballpark as doing the work.

If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two traffic red lights.
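One partial answer that already exists is property-based testing: you state a machine-checkable invariant once and check it against many generated inputs, which is much cheaper than reviewing output line by line. A minimal stdlib-only sketch; the run-length encoder here is a hypothetical stand-in for generated code under review:

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Hypothetical function under test, e.g. AI-generated code we want to verify cheaply."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs: list[tuple[str, int]]) -> str:
    """Inverse operation used as the specification."""
    return "".join(ch * n for ch, n in pairs)

# Property: decoding the encoding returns the original string, for any input.
# Stating this spec takes one line; checking it against 1000 random inputs is automatic.
rng = random.Random(0)
for _ in range(1000):
    s = "".join(rng.choice("ab") for _ in range(rng.randint(0, 20)))
    assert decode(run_length_encode(s)) == s
print("1000 round-trip checks passed")
```

Libraries like Hypothesis industrialize this idea, but the point stands with nothing beyond the standard library: verification cost stays flat while generation cost drops.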

kace91•53m ago
The comparison seems flawed in terms of cost.

A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.

Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.

meager_wikis•47m ago
If anything, the 'scariness' of an old computer probably protected the company in many ways. AI's approachability to the average office worker, specifically how it makes it seem easy to deploy/run/triage enterprise software, will continue to pwn.
gruez•45m ago
How viable are the $20/month subscriptions for actual work and are they loss making for Anthropic? I've heard both of people needing to get higher tiers to get anything done in Claude Code and also that the subscriptions are (heavily?) subsidized by Anthropic, so the "just another $20 SaaS" argument doesn't sound too good.
8note•40m ago
I'd guess the $200 subscription is sufficient per person.

But at that point you could go for a bigger one and split it amongst headcount.

simonw•33m ago
I am confident that Anthropic makes more revenue from that $20 than the electricity and server costs needed to serve that customer.

Claude Code has rate limits for a reason: I expect they are carefully designed to ensure that the average user doesn't end up losing Anthropic money, and that even extreme heavy users don't cause big enough losses for it to be a problem.

Everything I've heard makes me believe the margins on inference are quite high. The AI labs lose money because of the R&D and training costs, not because they're giving electricity and server operational costs away for free.

Esophagus4•25m ago
I always assumed that with inference being so cheap, my subscription fees were paying for training costs, not inference.
latchkey•35m ago
Like Uber/Airbnb in early days, this is heavily subsidized.
abraxas•32m ago
What if LLMs are optimizing the average office worker's productivity but the work itself simply has no discernible economic value? This is argued at length in Graeber's Bullshit Jobs essay and book.
groundzeros2015•21m ago
And that book sort of vaguely hints around at all these jobs that are surely bullshit but won’t identify them concretely.

Not recognizing the essential role of sales seemed to be a common mistake.

emp17344•3m ago
The thesis of Bullshit Jobs is almost universally rejected by economists, FYI. There’s not much of value to obtain from the book.
46493168•16m ago
>I think there are enough short term supposed benefits that something should be showing there.

As measured by whom? The same managers who demanded we all return to the office 5 days a week because the only way they can measure productivity is butts in seats?

calvinmorrison•29m ago
> it's helping lots of people, but it's also costing an extraordinary amount of money

Is it fair to say that wall street is betting America's collective pensions on AI...

_aavaa_•16m ago
For more on this exact topic and an answer to Solow's paradox, see the excellent "The Dynamo and the Computer" by Paul David [0].

[0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03...

ozgrakkurt•5m ago
I don’t think LLMs are similar to computers in terms of productivity boost
istillcantcode•1h ago
Anyone read The Goal lately?
cmiles8•1h ago
I like AI and use it daily, but this bubble can’t pop soon enough so we can all return to normally scheduled programming.

CEOs are now on the downside of the hype curve.

They went from "Get me some of that AI!" after first hearing about it, to "Why are we not seeing any savings? Shut this boondoggle down!" now that we're a few years into the bubble, the business math isn't working, and they only see burning piles of cash.

n_u•1h ago
Original paper https://www.nber.org/system/files/working_papers/w34836/w348...

Figure A6 on page 45: Current and expected AI adoption by industry

Figure A11 on page 51: Realised and expected impacts of AI on employment by industry

Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry

These roughly line up with my expectation that the more customer-facing or physical your industry's product is, the lower the usage and impact of AI (construction, retail).

A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.

carefree-bob•1h ago
It's not just information technology; it's very hard to detect the effect of inventions in general on productivity. There was a paper pointing out that the invention of the steam engine was basically invisible in the productivity statistics:

https://www.frbsf.org/wp-content/uploads/crafts.pdf

beloch•1h ago
The article suggests that AI-related productivity gains could follow a J-curve: an initial decline, as happened with IT, followed by an exponential surge. They admit this is heavily dependent on the real value AI provides.

However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper. Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition. Whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?

The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.

rr808•49m ago
BTW the study ran from September 2024 to 2025, so it covers only the very earliest adopters.
pengaru•46m ago
At $dayjob GenAI has been shoved into every workflow and it's a constant source of noise and irritation, slop galore. I'm so close to walking away from the industry to resume being a mechanic, what a complete shit show.
deadbabe•33m ago
The people who will be most productive with AI will be the entreprompteurs who whip up entire products and go to market faster than ever before, iterating at dangerous speeds. Lean Startup methodology on pure steroids, basically.

Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.

J_Shelby_J•17m ago
It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.
acjohnson55•13m ago
It's weird being on here and seeing so much naysaying, because I see a radical change already happening in software development. The future is here, it's just not equally distributed.

In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.

This has not yet translated into shipping at far higher magnitude. There are still big friction points and bottlenecks. Some will need to be resolved with technology, others will need organizational solutions.

But one thing is crystal clear to me: there is a path to companies getting software value to the end customer much more rapidly.

I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.

I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly serious.

But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.

It's a weird feeling when people argue against me in this, because I've seen too much. It's like arguing with flat-earthers. I've never personally circumnavigated Antarctica, but me being wrong would invalidate so many facts my frame of reality depends on.

To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.

nowittyusername•12m ago
As we approach the singularity, things will get noisier and make less and less sense, because rapid change can look like chaos from inside the system. I recommend folks just take a deep breath and look around. Regardless of your stance on whether the singularity is real, or whether AI will revolutionize everything, forget all that noise. Just look around you and ask yourself: do things seem more or less chaotic? Are you able to predict what is going to happen better or worse? How far out do your predictions land now versus, say, 10 or 20 years ago? Conflicting signals are exactly how all of this looks: one account says it's the end of the world, another says nothing ever changes and everything is the same as it always was...
1broseidon•12m ago
I think the 'AI productivity gap' is mostly a state management problem. Even with great models, you burn so much time just manually syncing context between different agents or chat sessions.

Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.