
Why is Singapore no longer "cool"?

https://marginalrevolution.com/marginalrevolution/2026/02/why-is-singapore-no-longer-cool.html
1•paulpauper•1m ago•0 comments

M1 MacBook Pro as a K3s Node with Asahi Linux

https://grh.am/2026/m1-macbook-pro-as-a-k3s-node-with-asahi-linux/
1•graystevens•1m ago•0 comments

Incident with GitHub Issues and Pull Requests

https://www.githubstatus.com/incidents/smf24rvl67v9
1•maxloh•3m ago•0 comments

NASA's Artemis Faces a Complex Path to Lunar Landing

https://spectrum.ieee.org/nasa-artemis-blue-origin-spacex
1•rbanffy•3m ago•0 comments

72cb3b4cdfac38b3140dc3451522356e

https://gist.github.com/jewe8ham/72cb3b4cdfac38b3140dc3451522356e
1•graefsu•4m ago•0 comments

Show HN: Bub – A Pythonic OpenClaw

https://github.com/PsiACE/bub
1•recrush•4m ago•0 comments

GitHub Is Down

https://downdetector.tr/durum/github/
1•bakigul•5m ago•0 comments

SpaceMolt: An MMORPG for AI to Play

https://blog.langworth.com/spacemolt
2•statico•6m ago•1 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•6m ago•1 comments

Converting a $3.88 analog clock from Walmart into an ESP8266-based Wi-Fi clock

https://github.com/jim11662418/ESP8266_WiFi_Analog_Clock
1•tokyobreakfast•7m ago•0 comments

Over 1k tok/s on an RTX 5090 with Qwen3 0.6B

https://blog.alpindale.net/posts/5090_decode_optimization/
2•AlpinDale•7m ago•1 comments

Show HN: Vivideo: AI Video Generator – the most basic form of AI video creation

https://vivideo.ai
1•mevlut•7m ago•0 comments

Show HN: ClawdTalk: Voice Calls for ClawdBots

https://clawdtalk.com/
3•abhi_telnyx•7m ago•0 comments

The Twin Engine Strategy That Propels AWS Is Working Well

https://www.nextplatform.com/2026/02/08/the-twin-engine-strategy-that-propels-aws-is-working-well/
1•rbanffy•8m ago•0 comments

My New Mobile Phone

https://michal.sapka.pl/weblog/2026/my-new-mobile-phone/
1•speckx•8m ago•0 comments

Show HN: CityTitles – an arena where AI agents trade cities for real money

1•jordiadria•8m ago•2 comments

The split-stack billing problem

https://www.solvimon.com/blog/the-split-stack-billing-problem
1•arnon•8m ago•0 comments

To reuse or not reuse–the eternal debate of New Glenn's second stage reignites

https://arstechnica.com/space/2026/02/to-reuse-or-not-reuse-the-eternal-debate-of-new-glenns-seco...
1•rbanffy•8m ago•0 comments

I built an anonymous discussion layer with time-bound posts

2•plainspeech•9m ago•0 comments

How AI is changing my development workflow

https://www.santoshyadav.dev/blog/how-ai-is-changing-my-development-workflow-and-i-am-excited-abo...
1•TheAnkurTyagi•9m ago•0 comments

What I learned from a desktop AI tool getting 400 stars in days

https://github.com/evinjohnn/natively-cluely-ai-assistant
1•Nive11•11m ago•1 comments

Zero crashes, zero compromises: inside the HAProxy security audit

https://www.haproxy.com/blog/haproxy-security-audit-results
1•owenthejumper•11m ago•0 comments

15 Years of Blogging

https://nolanlawson.com/2026/02/01/15-years-of-blogging/
1•herbertl•11m ago•0 comments

Show HN: Shovel.js – A portable meta-framework built on web standards

https://shovel.js.org/blog/introducing-shovel/
2•bikeshaving•12m ago•1 comments

'Homes may have to be abandoned': climate crisis has shaped Britain's flood risk

https://www.theguardian.com/news/ng-interactive/2026/jan/31/climate-crisis-flood-risk-britain
1•PaulHoule•13m ago•1 comments

Show HN: An AI-agent-friendly CV site (llms.txt, schema.org, case studies)

https://github.com/vassiliylakhonin/vassiliylakhonin.github.io
1•vassilbek•15m ago•0 comments

Three Red States Are the Best Hope in Schooling

https://www.nytimes.com/2026/02/09/opinion/red-states-good-schools.html
1•7402•15m ago•0 comments

Codex changes things you never asked it to touch

https://twitter.com/OrganicGPT/status/2020894850606858443
1•behnamoh•15m ago•0 comments

Vibe Coding: Lessons Learned

https://michelenasti.com/2026-vibe-coding/
1•speckx•15m ago•0 comments

Portfolio/Investment Growth Benchmarking

https://finbodhi.com/docs/blog/benchmark-scenarios/
2•ciju•16m ago•0 comments

AI Doesn't Reduce Work–It Intensifies It

https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
132•swolpers•1h ago

Comments

hlynurd•1h ago
The title alone maximizes the word-to-LLMism ratio
energy123•1h ago
> In an eight-month study

Things have improved significantly since then. Back then it was copying and pasting code from o1/o3; now codex 5.3 xhigh assembles its own context and does it for you.

hennell•1h ago
Since when? You're quoting the timeframe, not the period under study.

It's also not a study of just engineers; it covers people across engineering, product, design, research, and operations. For a lot of non-code tasks, AI needs pasted context because the material isn't sitting in a repo the way code is.

(And their comments about intensifying engineering workload also aren't really changed by AI copy/paste vs. context.)

coffeefirst•44m ago
At some point, the “but everything has radically changed in the past 15 minutes” counterargument to all material evidence that undermines AI marketing has to become boring and unpersuasive.

I humbly propose that point is today.

linsomniac•37m ago
>I humbly propose that point is today.

You're right that the argument will become boring, but I think it's gonna be a minute before it does. I spent much of yesterday playing with the new "agent teams" experimental feature of Claude Code, and it's pretty remarkable. It basically one-shotted both a rather complex Ansible module (including packaging for release to Galaxy) and a game that teaches stock options.

On Thursday I had a FAC with a coworker and he predicted 2026 is going to be the year of acceleration, and based on what I've seen over the last 2-3 years I'd say it's hard to argue with that.

Uehreka•18m ago
Last night I tried out Opus 4.6 on a personal project involving animating in Gaussian Splats where the final result is output as a video.

In the past, AI coding agents could usually reason about the code well enough that they had a good chance of success, but I’d have to manually test since they were bad at “seeing” the output and characterizing it in a way that allowed them to debug if things went wrong, and they would never ever check visual outputs unless I forced them to (probably because it didn’t work well during RL training).

Opus 4.6 correctly reasoned (on its own, I didn’t even think to prompt this) that it could “test” the output by grabbing the first, middle and last frame, and observing that the first frame should be empty, the middle frame half full of details, and the final frame resembling the input image. That alone wouldn’t have impressed me that much, but it actually found and fixed a bug based on visual observation of a blurry final frame (we hadn’t run the NeRF training for enough iterations).

In a sense this is an incremental improvement in the model’s capabilities. But in terms of what I can now use this model for, it’s huge. Previous models struggled at tokenizing/interpreting images beyond describing the contents in semantic terms, so they couldn’t iterate based on visual feedback when the contents were abstract or broken in an unusual way. The fact that they can do this now means I can set them on tasks like this unaided and have a reasonable probability that they’ll be able to troubleshoot their own issues.
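
A rough sketch of what that kind of frame check could look like, assuming OpenCV and a reference input image; the file names and thresholds below are placeholders, not anything from the actual project:

    # Sketch only: sample the first/middle/last frames of a rendered video and
    # run crude visual checks. File names and thresholds are hypothetical.
    import cv2
    import numpy as np

    def frame_at(cap, index):
        cap.set(cv2.CAP_PROP_POS_FRAMES, index)
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError(f"could not read frame {index}")
        return frame.astype(np.float32)

    cap = cv2.VideoCapture("splat_animation.mp4")        # hypothetical output video
    n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    first, middle, last = (frame_at(cap, i) for i in (0, n // 2, n - 1))

    ref = cv2.imread("input.png").astype(np.float32)     # hypothetical input image
    ref = cv2.resize(ref, (last.shape[1], last.shape[0]))

    assert first.mean() < 10, "first frame should be (nearly) empty"
    assert middle.mean() > 10, "middle frame should already show some detail"
    assert np.abs(last - ref).mean() < 30, "last frame should resemble the input image"

Even a crude check like this is enough for an agent to notice a blank or blurry final frame and go looking for the cause.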

I understand your exhaustion at all the breathless enthusiasm, but every new model radically changes the game for another subset of users/tasks. You're going to keep hearing that counterargument for a long time, and the worst part is, it's going to be true even if it's annoying.

bunderbunder•1h ago
When you're sailing, just going as fast as possible won't necessarily get you where you need to go.

Not just won’t get you there fastest. At all.

larodi•1h ago
It intensifies work and shortens time to burnout, which somehow nobody talks about. Ingesting these huge slops of information can be super tiring.
baal80spam•1h ago
> it intensifies work, and shortens time to burnout

This is most likely correct. Everyone talks about how AI makes it possible to "do multiple tasks at the same time", but no one seems to care that the cognitive (over)load is very real.

menaerus•54m ago
IME you don't even have to do multiple things at the same time to reach that cognitive fatigue. The pace alone, which is now much higher, could be enough to saturate your cognitive capabilities.
bunderbunder•20m ago
For me one unexpected factor is how much it strains my executive function to try and maintain attention on the task at hand while I’m letting the agent spin away for 5-10 minutes at a stretch. It’s even worse than the bad old days of long compile times because at least then I could work on tests or something like that while I wait. But with coding agents I feel like I need to be completely hands off because they might decide to touch literally any file in the repository.

It reminds me a bit of how a while back people were finding that operating a level 3 autonomous vehicle is actually more fatiguing than driving a vehicle that doesn't even have cruise control.

btbuildem•1h ago
I think the article nails it, on multiple counts. From personal experience, the cognitive overload is sneaky, but real. You do end up taking on more than you can handle: the fact that your mob of agents can do the minutiae of the tasks doesn't free you from comprehending, evaluating, and managing the work. It's intense.
varispeed•51m ago
"Explain to me like I am five what you just did"

Then "Make a detailed list of changes and reasoning behind it."

Then feed that to another AI and ask: "Does it make sense and why?"
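
A minimal sketch of that cross-check chain; ask() below is a hypothetical stand-in for whichever model API or CLI you actually use, not a real library call:

    # Sketch only: chain the three prompts above. ask() is a hypothetical stub;
    # replace it with a real call to whatever chat API or CLI you use.
    def ask(model: str, prompt: str) -> str:
        return f"[{model}] response to: {prompt.splitlines()[0]}"

    eli5 = ask("agent-that-did-the-work",
               "Explain to me like I am five what you just did.")
    changes = ask("agent-that-did-the-work",
                  "Make a detailed list of changes and the reasoning behind each one.")
    verdict = ask("a-different-model",
                  "Does the following make sense, and why?\n\n" + eli5 + "\n\n" + changes)
    print(verdict)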

coldtea•21m ago
Then get rid of the humans. They can keep 1/10 of them and have them run such agents.
salawat•19m ago
Garbage In, Garbage Out. If you're working under the illusion any tool relieves you from the burden of understanding wtf it is you're doing, you aren't using it safely, and you will offload the burden of your lack of care on someone else down the line. Don't. Ever. Do. That.
wnolens•50m ago
This has been my experience too. I feel freed up from the "manual labor" slice of software development and can focus on more interesting design problems and product alignment, which feels like a bit of a drug right now, to the point that I'm actually working harder and more hours.
Tade0•49m ago
Started referring to it as "speed of accountability".

A responsible developer will only produce code as fast as they can sign it off.

An irresponsible one will just shit all over the codebase.

Molitor5901•46m ago
I'm not sure I would agree in toto. Freeing up the minutiae allows for more cognitive capacity for the bigger picture. I use AI primarily for research gathering and for refining what I have, which has freed up a lot of time to focus on the bigger issues and, specifically in my case, on zeroing in on the diamond in the rough.
maccard•43m ago
For a very small number of people the hard part is writing the code. For most of us, it’s writing the correct code. AI generates lots of code but for 90% of my career writing more code hasn’t helped.

> you do end up taking on more than you can handle, just because your mob of agents can do the minutia of the tasks, doesn’t free you from comprehending, evaluating and managing the work

I’m currently in an EM role and this is my life but with programmers instead of AI agents.

snovv_crash•32m ago
Also EM and it feels like now I have a team of juniors on my personal projects, except they need constant micromanaging in a way I never would for real people.
eloisant•22m ago
So you're saying AI doesn't help, and having reports is just like using AI (which you said doesn't help).

What's stopping you from becoming an IC and producing as much as your full team then? What's the point of having reports in this case?

autoconfig•1h ago
"It never gets easier, you just go faster" - Greg LeMond
criddell•49m ago
That quotation pops up on cycling subreddits occasionally and I've always disliked it because I think it discourages people from casual bike riding.

I've been biking to work occasionally for a few years now and it definitely gets easier.

gbjw•37m ago
Yeah the quote assumes you're riding without speed limits. In a typical commute, it does get easier once your cardiovascular ability exceeds the upper speed limit given the route.
criddell•22m ago
No, the quote assumes you want to go faster. I don't really. I enjoy my ride and if I wanted to get to work 5 minutes faster, I would leave 5 minutes earlier.
matwood•25m ago
I read that quote as speaking more to the human condition and less about cycling. Humanity has a tendency to keep pushing to the edge of its current abilities no matter how much they expand.
almost_usual•35m ago
Only if you continue to push yourself while training. What used to be difficult absolutely gets easier in endurance after training.
gk1•59m ago
Exactly as happened with the computer revolution... Expectations rose in line with productivity. In HN parlance, being a 10x engineer just becomes "being an engineer," and "100x engineer" is the new 10x engineer. And from what I can see in myself and others right now, being a 100x anything, while exhilarating, is also mentally and physically taxing.
c-linkage•45m ago
Being a 100x developer means you can work just 1% of the time you used to work, right?
gk1•36m ago
Think of some 100x folks you know of. Are they working more or less than before?
wiseowise•15m ago
They sure as hell don’t make 100x more. Maybe from ads they serve selling AI/productivity snake oil.
jplusequalt•11m ago
>Think of some 100x folks you know of.

This mythical class of developer doesn't exist. Are you trying to tell me that there is a class of developers out there doing three months' worth of work every single day at the office?

pixl97•3m ago
It's odd: that kind of developer doesn't exist, but that type of CEO does. Maybe we need to replace CEOs with AI.
javcasas•35m ago
When the expectation is 100x, then it is work 100% of the time at maximum speed.
maccard•42m ago
If people are realistically a baseline of 10x more productive, where are all the features, games, applications, and SaaS products that are suddenly possible that weren't before?
neutronicus•34m ago
Windows 11
bluGill•29m ago
AI might be 100x faster than me at writing code - but writing code is a tiny portion of the work I do. I need to understand the problem before I can write code. I need to understand my whole system. I need to test that my code really works - this is more than 50% of the work I do, and automated tests are not enough - too often the automated test doesn't model the real world in some unexpected way. I review code that others write. I answer technical questions for others. There is a lot of release work, mandatory training, and other overhead as well.

Writing code is what I expect a junior or mid level engineer to spend 20% of their time doing. By the time you reach senior engineer it should be less (though when you write code you are faster and so might write more code despite spending less time on it).
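
The arithmetic behind that point is essentially Amdahl's law. A quick sketch using the 20% figure above (the 100x coding speedup is an assumed number, purely for illustration):

    # Sketch: overall speedup when only the coding slice of the job gets faster.
    coding_fraction = 0.20   # the commenter's estimate of time spent writing code
    coding_speedup = 100     # assume AI makes that slice 100x faster

    overall_speedup = 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)
    print(f"overall speedup: {overall_speedup:.2f}x")    # ~1.25x, nowhere near 100x

Even with an extreme speedup on the coding slice, the untouched 80% caps the overall gain at roughly 1.25x.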

shepherdjerred•7m ago
I can tell you, with absolute certainty, that before AI ~0 junior/mid level devs spent just 20% of their time programming. At least not at tech companies
tsunamifury•56m ago
In the Ford matrix of smart to dumb and hardworking to lazy, AI will enable the dumb and hardworking to 100x their damage to a company overnight.
sph•41m ago
Can’t miss the opportunity to share my favourite aphorism:

“ I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage.”

— Kurt von Hammerstein-Equord

grey-area•50m ago
Delete the llmism after the dash and the title is correct.
micromacrofoot•47m ago
Same with most productivity gains in tooling historically. I think one way we should consider reckoning with this is through workers' rights.

The industrial revolution led to gains that allowed for weekends and the elimination of child labor, but those gains didn't come for free; they had to be fought for.

If we don't fight for it, what are we gaining? More intense work in exchange for what?

hansonkd•43m ago
I've been saying this for the past 2 years. Even think about the stereotypical "996" work schedule that is all the rage in SF and AI founder communities.

It just takes thinking about it for 5 seconds to see the contradiction. If AI is so good at reducing work, why is it that every company engaging with AI sees its workload increase?

20 years ago SV was stereotyped for "lazy" or fun-loving engineers who barely worked but cashed huge paychecks. Now I would say the stereotype is overworked engineers who, at the mid level, are making less than they were 20 years back.

I see it across other disciplines too. Everyone I know, from sales to lawyers, if they engage with AI it's like they get stuck in a loop where the original task is easier but it reveals 10 more smaller tasks that fill up their time even more than before AI.

That's not to say productivity gains with AI aren't found. It just seems like the gains get people into a flywheel of increasing work.

seanmcdirmid•40m ago
996 is a Chinese term, not American.

There is a lot of work to do; just because you are doing more work with your time doesn't mean you can somehow count that as less work.

Bullfight2Cond•38m ago
China outlawed it.
hansonkd•37m ago
I've only seen it in job postings and LinkedIn posts from SF founders.
DaedalusII•39m ago
Now everyone gets to be a manager!
kibwen•37m ago
Talking about "productivity" is a red herring.

Are the people leveraging LLMs making more money while working the same number of hours?

Are the people leveraging LLMs working fewer hours while making the same amount of money?

If neither of these are true, then LLMs have not made your life better as a working programmer.

coldtea•24m ago
>Are the people leveraging LLMs making more money while working the same number of hours?

Nobody is getting a raise for using AI. So no.

>Are the people leveraging LLMs working fewer hours while making the same amount of money?

Early adopters maybe, as they offload some work to agents. As AI becomes a commodity and the baseline, that will invert, especially as companies shed people and have those who remain "multiply" their output with AI.

So the answer will be no and no.

elevatortrim•23m ago
Of course not. In the world of capitalism and employment, money earned is not a function of productivity; it is a function of competency. It is all relative.
athrowaway3z•19m ago
Lines of code are not a good metric for productivity.

Neither are the hours worked.

Nor is the money.

Just think of the security guard on site walking around, or someone who has a dozen monitors.

pixl97•8m ago
Regardless of that, LLMs could be a Moloch problem.

That is, if anyone uses them your life will be worse, but if you don't use them then your life will be even worse than the lives of those who do.

Too bad you programmers didn't unionize when you had the chance so you could fight this. Guess you'll have to pull yourself up by your bootstraps.

bschwindHN•36m ago
Same story with hardware and software. Hardware gets more efficient and faster, so software devs shove more CPU-intensive stuff into their applications, or just get lazy and write inefficient code.

The software experience is always going to feel about the same speed perceptually, and employers will expect you to work the same amount (or more!).

throwawaysleep•30m ago
> If AI was so good at reducing work, why is it every company engaging with AI has their workload increase.

Throughout human history, we have chosen more work over keeping output stable.

coldtea•22m ago
Throughout human history we were never given the choice. We were forced into it like cattle.
whaleidk•17m ago
See a lot of people on this site doing it willingly. I think a lot of people will always choose perceived convenience over anything
joenot443•26m ago
I don't think it's super complicated. I think that prompting takes generally less mental energy than coding by hand, so on average one can work longer days if they're prompting than if they were coding.

I can pretty easily do a 12h day of prompting but I haven't been able to code for 12h straight since I was in college.

jvanderbot•22m ago
That's a bingo.

Additionally, I can eke out 4 hrs really deep diving nowadays, and have structured my workday around that, delegating low-mental-cost tasks to after that initial dive. Now diving is a low enough mental cost that I can do 8-12hrs of it.

It's a bicycle. Truly.

packetlost•20m ago
While I agree with the idea that prompting is easier to get started with, is it actually less work? More hours don't mean they're equally productive. More, lower-quality hours just make work/life balance worse with nothing to show for it.
treetalker•19m ago
Isn’t the grander question why on earth people would tolerate, let alone desire, more hours of work every day?

The older I get, the more I see the wisdom in the ancient ideas of reducing desires and being content with what one has.

Ygg2•18m ago
If you're in the office for 12h it won't matter if you're proompting, pushing pens or working your ass off. You gave that company 12h of your life. You're not getting those back.
lm28469•16m ago
> I can pretty easily do a 12h day of prompting

Do you want to though?

jplusequalt•8m ago
>so on average one can work longer days if they're prompting than if they were coding

It's 2026 for god's sake. I don't want to work __longer__ days, I want to work __shorter__ days.

skybrian•17m ago
Maybe ask the friendly AI about reducing project scope? But we probably won’t if we’re having too much fun.
asdev•43m ago
The cognitive overload is more about people not understanding the slop they are generating. Slop piles on top of slop until a situation arises where you actually need to understand everything, and you don't because you didn't do the work yourself.
bryanlarsen•42m ago
I've started calling it "revenge of the QA/Support engineers", personally.

Our QA & Support engineers have now started creating MRs to fix customer issues, satisfy customer requests, and fix bugs.

They're AI sloppy and a bunch of work to fix up, but they're a way better description of the problem than the tickets they used to send.

So now instead of me creating a whole bunch of work for QA/Support engineers when I ship sub-optimal code to them, they're creating a bunch of work for me by shipping sub-optimal code to me.

skybrian•38m ago
I wonder how well a coding agent would do if you asked one to review the change and then to rewrite the merge request to fix the things it criticized?
bryanlarsen•27m ago
It does quite well and definitely catches/fixes things I miss. But I still catch significant things it misses. And I am using AI to fix the things I catch.

Which is then more slop I have to review.

Our product is not SaaS, it's software installed on customer computers. Any bug that slips through is really expensive to fix. Careful review and code construction is worth the effort.

yerik•42m ago
I've noticed first-hand how the scope of responsibilities is broadened by integrating AI into workflows. Personally it feels like a positive feedback loop: I take on more responsibilities; since they are outside my scope I have a harder time reviewing AI output; this increases fatigue and makes me more prone to just accepting more AI output; and with the increased reliance on AI output I get to a point where I'm managing things that are way outside my scope and can't do it unless I rely on AI entirely. In my opinion this also increases Imposter Syndrome effects.

But I doubt companies and management will think for a second that this voluntary increase in "productivity" is at all bad, and it will probably be encouraged.

neversupervised•38m ago
HBR is analyzing this with an old-world lens. It might very well be that the effects are, as they say, temporary. But the reason this is happening is that AI is in fact replacing human labor and the puppeteers are trying to remain employed. The steady-state outcome is human replacement, which means AI does in fact reduce human labor, even if the remaining humans in the loop are more overloaded. The equation is not workload per capita but how many humans it takes to accomplish a goal.
simonw•37m ago
If AI was indeed replacing human labor I would expect HBR to be among the first publications to cover it.
ChrisArchitect•36m ago
Related:

AI makes the easy part easier and the hard part harder

https://news.ycombinator.com/item?id=46939593

ishtanbul•32m ago
This is Jevons paradox at its purest. Who really thought companies were just going to let everyone go home earlier? Work is easier; now you will do even more. Congratulations.
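
A toy illustration of that Jevons-style rebound; every number below is made up and only meant to show the shape of the argument:

    # Sketch: efficiency gain vs. rebound in demand (all numbers hypothetical).
    hours_per_task_before, tasks_per_week_before = 4.0, 10   # 40 h/week of work
    hours_per_task_after = 1.0                               # the task got 4x "easier"
    tasks_per_week_after = 55                                # demand grew more than 4x

    print(hours_per_task_before * tasks_per_week_before)     # 40.0 hours before
    print(hours_per_task_after * tasks_per_week_after)       # 55.0 hours after the "saving"
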
stuaxo•24m ago
Feels like there may be a gap in the market for businesses that do this, though, since keeping everyone at work leads to burnout and isn't an edge.

Having well-rested employees who don't burn out is, though.

pixl97•4m ago
The capital class wants you naked and afraid. If you're well rested you might have thoughts like "Why am I working for this guy? Why don't I become a competitor?" Instead, workers going "Shit, I need to work 5 more hours even though I've already worked 8 today so I can keep my health insurance" is far more beneficial to the people controlling everything.
xXSLAYERXx•32m ago
> On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.

Love this quote. For me, barely a few weeks in, I feel exactly this. To clarify: I feel this only when working on dusty old side projects. When I use it to build for the org, it's still a slog, just faster.

nihzm•28m ago
This article is scratching the surface of the concept of desynchronization from the theory of social acceleration and the sociology of speed. Any technology that is supposed to create idle time, once it reaches mass adoption, has the opposite effect of speeding up everything else.

We have been on this track for a long time: cars were supposed to save time in transit, but people started living farther from city centres (cf. Marchetti's constant). E-mail and instant messaging were supposed to eliminate wait time from postal services, but we now send orders of magnitude more messages, and social norms have shifted such that faster replies are expected.
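
Marchetti's constant in rough numbers, as a sketch: with a fixed travel budget of about an hour a day, faster transport buys distance rather than free time (the speeds below are illustrative):

    # Sketch: a fixed daily travel budget turns faster transport into longer
    # commutes rather than saved time. Speeds are illustrative, not data.
    travel_budget_hours = 1.0                    # Marchetti's ~1 hour per day
    for mode, speed_kmh in [("walking", 5), ("cycling", 15), ("driving", 50)]:
        radius_km = speed_kmh * travel_budget_hours / 2   # half the budget each way
        print(f"{mode}: home ~{radius_km:.0f} km out, still ~1 h/day in transit")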

"AI"-backed productivity increases are only impressive relative to non-AI users. The idyllic dream of working one or two days a week with agents in the background "doing the rest" is delusional. Like all previous technologies, once it reaches mass adoption everyone will be working at a faster pace, because our society is obsessed with speed.

leoedin•19m ago
If anyone is saying "yeah, but this time will be different", just look at our society now.

Arguably the only jobs which are necessary in society are related to food, heating, shelter and maybe some healthcare. Everything else - what most people are doing - is just feeding the never ending treadmill of consumer desire and bureaucratic expansion. If everyone adjusted their desired living standards and possessions to those of just a few centuries ago, almost all of us wouldn't need to work.

Yet here we are, still on the treadmill! It's pretty clear that making certain types of work no longer needed will just create new demands and wants, and new types of work for us to do. That appears to be human nature.

coldtea•27m ago
Every other advancement in office productivity and software has intensified work. AI will too. It will also further commodify it.
stuaxo•25m ago
The only sustainable thing to do is to reduce people's work hours but keep paying them the same over the week.

If before AI we were talking about 6-hour days as an aim, we should now be talking about a 4-hour work day, without any reduction in pay.

Otherwise everyone is going to burn out.

wiseowise•13m ago
“lol”

- Average manager

alexpotato•24m ago
My dad was a stockbroker in the 1970s and he had a great line:

“When computers first came out we were told:

‘Computers will be so productive and save you so much time you won’t know what to do with all of your free time!’

Unsurprisingly, that didn’t happen.”

Aka Jevons paradox in practice

bonesss•4m ago
The Mythical Man-Month was published in '75, with a deep technical insider's perspective.

The kinds of productivity scaling they had been seeing to that point could be reasonably extrapolated to all kinds of industrial re-alignment.

Then we ran out of silver bullets.

[Still waiting to see what percentage of LLM hype is driven by people not having read The Mythical Man Month.]

plainspeech•20m ago
AI speeds things up at the beginning. It helps you get unstuck, find answers quickly without hunting through different solutions from the internet, write boilerplate, and explore ideas faster. But over time I reach for it faster than I probably should. Instead of digging into basic code, I jump straight to AI. I've been using it even for basic code searches. AI just makes it easier to outsource thinking. And your understanding of the codebase can get thinner over time.
vagrantstreet•18m ago
How about using LLMs to improve developer experience instead? I've had a lot of failures with "AI" even on small projects; even with the best things I've tried, like agentic-project-management, I still had to just go back to traditional coding.

Not sure if everyone shares this sentiment, but the reason I use AI as a crutch is the poor documentation that's out there; even simple terminal commands don't show usage examples (try "man ls" and look for examples). I just end up putting up with the code output because it works OK enough for the short term; this, however, doesn't seem like a sustainable plan long term either.

There is also this dread I feel about what I would do if AI went down permanently. The tools I've tried, like Zeal, really didn't do it for me for documentation either; not sure who decided on the documentation format, but the "made by professionals, for professionals" style isn't really cutting it anymore. Apologies in advance if I missed any tools, but in my 4+ years of university nobody ever mentioned any quality tools either, and I'm sure this trend is happening everywhere.

__MatrixMan__•14m ago
I'm not sure "intensifies" is the word. AI just has awkward time dynamics that people are adapting to.

Sometimes you end up with tasks that are low intensity but long duration. Like I need to supervise this AI over the course of three hours, but the task is simple enough that I can watch a movie while I do it. So people measuring my work time are like "wow, he's working extra hours," but all I did during that time was press enter 50 times and write 3 sentences.