
Removing the modem and GPS from my 2024 RAV4 hybrid

https://arkadiyt.com/2026/05/13/removing-the-modem-and-gps-from-my-rav4/
265•arkadiyt•2h ago•118 comments

RTX 5090 and M4 MacBook Air: Can It Game?

https://scottjg.com/posts/2026-05-05-egpu-mac-gaming/
336•allenleee•4h ago•87 comments

New Nginx Exploit

https://github.com/DepthFirstDisclosures/Nginx-Rift
150•hetsaraiya•2h ago•37 comments

First public macOS kernel memory corruption exploit on Apple M5

https://blog.calif.io/p/first-public-kernel-memory-corruption
37•quadrige•1h ago•7 comments

The Power of a Free Popsicle (2018)

https://www.gsb.stanford.edu/insights/power-free-popsicle
25•NaOH•1h ago•3 comments

The AI Zombification of Universities

https://www.thenewcritic.com/p/the-great-zombification
38•rmdmphilosopher•1h ago•12 comments

HDD Firmware Hacking

https://icode4.coffee/?p=1465
61•jsploit•3h ago•6 comments

A message from President Kornbluth about funding and the talent pipeline

https://president.mit.edu/writing-speeches/video-transcript-message-president-kornbluth-about-fun...
505•dmayo•5h ago•527 comments

AI is making me dumb

https://jpain.io/god-damn-ai-is-making-me-dumb/
213•Eighth•1h ago•140 comments

Computer Hobby Movement in Canada

https://museum.eecs.yorku.ca/exhibits/show/hobby_canada/hobby_canada
154•rbanffy•7h ago•44 comments

Understanding the Linux Kernel: The Linux Kernel Startup

https://internals-for-interns.com/posts/linux-kernel-startup/
25•valyala•1h ago•0 comments

Terranox AI (YC W26) Is Hiring a Founding AI/ML Engineer and Summer AI/ML Intern

https://www.workatastartup.com/companies/terranox-ai
1•jadecheclair•2h ago

Int a = 5; a = a++ + ++a; a =? (2011)

https://gynvael.coldwind.pl/?id=372
34•e-topy•2d ago•55 comments

WinUI 3 Performance: A Leap Forward

https://github.com/microsoft/microsoft-ui-xaml/discussions/11096
6•whatever3•56m ago•0 comments

You Don't Align an AI, You Align with It

https://danieltan.weblog.lol/2026/05/you-dont-align-an-ai-you-align-with-it
24•danieltanfh95•1h ago•6 comments

Fossils show millipede and centipede ancestors evolved legs underwater

https://phys.org/news/2026-05-ancient-sea-fossils-millipede-centipede.html
53•gmays•2d ago•2 comments

London's Smallest Public Sculptures

https://lookup.london/londons-smallest-public-sculptures/
9•susam•3d ago•0 comments

Germany's Sovereign Tech Fund Backs KDE with €1.3M

https://www.theregister.com/oses/2026/05/14/kde-bags-13m-as-europe-realizes-it-might-need-an-os-o...
39•Lihh27•41m ago•4 comments

What's in a GGUF, besides the weights – and what's still missing?

https://nobodywho.ooo/posts/whats-in-a-gguf/
28•bashbjorn•2h ago•11 comments

German intelligence offices snub Palantir software

https://www.dw.com/en/german-intelligence-offices-snub-us-based-palantir-software/a-77160897
36•abawany•1h ago•2 comments

The conflation of money and things

https://lithub.com/is-it-even-real-on-the-conflation-of-money-and-things/
48•bookofjoe•4h ago•14 comments

EditLens: Quantifying the extent of AI editing in text (2025)

https://arxiv.org/abs/2510.03154
24•horseradish•1d ago•2 comments

60fps Video on a CGA? – The GlyphBlaster

https://martypc.blogspot.com/2026/05/60fps-video-on-cga-glyphblaster.html
48•tambourine_man•4d ago•6 comments

DIY open-source ultrasound hardware on the rp2040/rp2350

http://un0rick.cc/pic0rick
10•kelu124•1h ago•1 comment

Rewrite Bun in Rust has been merged

https://github.com/oven-sh/bun/pull/30412
308•Chaoses•11h ago•365 comments

Show HN: Running the second public ODoH relay

https://numa.rs/blog/posts/odoh-anonymous-dns-without-an-account.html
103•rdme•9h ago•36 comments

Myths about /dev/urandom (2014)

https://www.2uo.de/myths-about-urandom/
76•signa11•8h ago•40 comments

The Tree House: A voyage to the source of a backyard dream

https://www.laphamsquarterly.org/roundtable/tree-house
61•Caiero•3d ago•11 comments

Leaving the Physical World

https://www.eff.org/pages/leaving-physical-world
167•andsoitis•4d ago•78 comments

Green Card Holders Targeted for Deportation by New 'Removal Apparatus'

https://www.nytimes.com/2026/05/14/us/politics/green-cards-immigration-deportation-trump.html
14•donohoe•50m ago•4 comments

AI is making me dumb

https://jpain.io/god-damn-ai-is-making-me-dumb/
213•Eighth•1h ago

Comments

ge96•1h ago
Use AI to fix that cert
bigstrat2003•1h ago
The cert is fine according to my browser.
ge96•1h ago
Fair, seems it was my VPN.

As for the topic at hand: I work with someone who, whenever you ask them a question, says "AI says..." I'm not a big fan of that.

skeptic_ai•40m ago
If you had checked, your AI would have said it's not the cert.
ge96•24m ago
Possible?

AI would have to know I have a VPN active

voncheese•1h ago
Relatable! Or at least making me feel dumb (at times). Things that help me feel smarter are

* actually writing more on my own - created a personal blog just to get myself to write more

* upleveling my thinking - think more about problems and framing

* leverage my experience - guide (or sometimes force) the AI assistant to leverage my experience to avoid problems

* learning new things - rather than let AI just replace things I can do, I use AI to help me learn new things/technology faster than I would have pre-AI

blain•1h ago
> learning new things

I wonder lately: doesn't all that new knowledge push out the old? As in, new things replace old things we know. I don't know of any studies on this, but do we have infinite capacity for knowledge?

What about retaining it? I catch myself asking AI about random things that pop into my head, reading the answer, maybe using that knowledge once, and later no longer remembering what it was. Maybe it would stick if you used the knowledge in practice from the get-go, but projects get so complicated that sometimes it seems like there is not enough space in my brain for the things AI is teaching me.

voncheese•50m ago
New knowledge doesn't necessarily push out old knowledge, and we probably don't have infinite capacity for knowledge. That being said, at least in my experience, the time when new pushes out old is when old is less useful than new.

Retaining (again just speaking for myself) requires actually using / applying the knowledge at some point within some timeframe of learning it. Otherwise yeah it fades to the point of disappearing over time.

eikenberry•30m ago
Knowledge memory doesn't really work that way; it's more that knowledge is constantly fading unless re-imprinted by use, and learning new things just imprints new knowledge on top. The new knowledge will form connections with the old knowledge, which will help keep some of it from fading, but not all.

Another way of looking at what you said is that practicing the new knowledge takes the place of practicing the old knowledge. So it isn't the knowledge that is replaced, but the learning (imprinting).

pton_xd•1h ago
We'll just move to a higher level of abstraction; thinking will be like efficiently coding in assembly, no longer necessary in today's world.
steezeburger•1h ago
I've been thinking a lot about the new primitives and paradigms we'll see.
AlecSchueler•55m ago
Care to share some of these thoughts?
simianwords•49m ago
1. we will be thinking at the level of systems like services and DB's and forget about inconsequential things like methods, classes, variables

2. we will think of verification loop more - tasks will be chosen that have more ability to be easily verified

3. the concept of the difference between "generation" and "verification" will be more mainstream [1]

4. spec driven development will become more common

5. scenario testing will become mainstream

I have a few more predictions like these.

[1] I wrote a blog post on this explaining why I keep this generation vs verification difference in many parts of life https://simianwords.bearblog.dev/the-generation-vs-verificat...
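The generation/verification asymmetry is easy to make concrete; a minimal illustrative sketch (not from the linked post):

```python
# Generation vs. verification in miniature: producing a sorted list costs
# O(n log n), while checking that a list is sorted costs only O(n).
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

data = [3, 1, 2]
generated = sorted(data)     # generation: the expensive step
assert is_sorted(generated)  # verification: the cheap step
print(generated)  # [1, 2, 3]
```

That same asymmetry is what makes easily-verified tasks attractive to hand to an LLM.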

Imustaskforhelp•34m ago
> 2. we will think of verification loop more - tasks will be chosen that have more ability to be easily verified

> 4. spec driven development will become more common

I do believe both of these. Recently someone created an open-source RAR alternative covering all its format versions using LLM agents, precisely because of that spec-driven and verification/easy-debug (or compile-time) aspect.

On the other hand, I was making a GUI application (a rough scratchpad app) in Odin, and there were so many bugs that I had to explain each one, and even then it was like a lottery, or unpredictable would be the better word, as it would fix one thing and break another, or just not fix it at all.

At the end of the day, for GUI apps, there perhaps just isn't any great way of testing them. There are many GUI things in which I feel LLMs are still underwhelming, especially if you wish to create a GUI in a niche language.

It can do it, but the workflow is so bad that it might just not be worth it. I do wonder if GUI development becomes the one thing that AI can't do, leaving those software development jobs safe.

I was just scrolling upwork randomly and I saw tons of flutter & wordpress jobs.

svnt•1h ago
A higher level of abstraction that doesn't require thinking? Did you mean to write thinking here?
ryeights•57m ago
Reads like great satire to me.
bogzz•56m ago
Welcome to Costco. I love you.
simianwords•52m ago
Higher levels of abstraction require more complex levels of thinking. Why do you think it won't?
happytoexplain•47m ago
The entire point of abstraction layers is that they require less thinking most of the time (and, usually as a tradeoff, more thinking a minority of the time).
simianwords•45m ago
The point of abstractions is to let you do more work, because the lower levels are handled in the background with less energy.

GC languages, for example, help me do more productive work by hiding useless info about memory.
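As a toy illustration of that kind of hiding (all names made up):

```python
# In a GC language the intermediate list below is reclaimed automatically
# once unreachable; no free()/delete bookkeeping competes for attention.
def build_report(rows):
    totals = [sum(r) for r in rows]  # scratch allocation, freed by the GC
    return max(totals)

print(build_report([[1, 2], [3, 4]]))  # 7
```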

tombert•44m ago
I'm not sure I agree with this at all.

I don't think I think less when writing Clojure or Rust than I would writing raw assembly code, I just broaden the scope of my projects to fill up my thinking capacity.

Animats•31m ago
Putting info into a spreadsheet is a higher level of abstraction that doesn't require thinking. There are many concrete representations like that. LLMs don't use them much. This is a lack.

Can you point a LLM at a body of code, and tell it "give me a concise UML chart of what this does"? I'm not advocating humans writing UML, but some representation like that may be useful to AIs. Except that they don't really do graphs very well. We may need a specification language intended to be read and written by AIs, readable by humans but seldom written by them. Going directly from natural language specifications to code causes the LLM blithering problem to generate too much code.

happytoexplain•1h ago
People say this constantly, but it's a qualitatively different jump from all previous abstraction layers. Previously, the part of your brain you had to use, and the way you had to think, changed from old layer X to new layer Y, but they were still very similar qualitatively. A person who was good at and enjoyed layer X either naturally was good at and enjoyed layer Y, or they could achieve both of those things after a little time. But with LLMs, the jump is much more lateral.

To do the thing I hate and use an analogy: It's not like asking a furniture maker to start using power tools; it's like asking a furniture maker to start telling a robot to make the furniture, in English. Yes, the people who were already good at furniture-making will have an advantage in how to direct the robot - but the salient point is that it's a recipe for misery for many people.

johnfn•47m ago
Hmm. I use AI to write almost all my code, and I feel that the "part of the brain" I use is mostly the same. Pre-AI I spent a lot of time thinking about code architecture, schemas, APIs, etc. Post-AI I spend a lot of time thinking about essentially the exact same thing. Yea, there are some things that I used to think about that I don't now - the fiddly bits, like why my parentheses weren't balanced or what field I was missing that was causing a 3rd-party API to fail. But the work feels more similar than different.
raincole•45m ago
You should've turned the sarcasm detector on.
hansmayer•57m ago
Ha ha ha... actually, in the last 20-30 years most people learnt programming in assembly not for the sake of building programs in assembly - it was taught so you could have a grasp of microprocessor architecture: instructions, interrupts, registers and all that. It means being fully aware of your environment. Without this knowledge of our environment, not only in our jobs but also generally in life, what are we? Not much more than wild animals surviving on instinct and an occasional burst of consciousness. Well, no thanks - I don't want to be an Eloi.
melenaboija•43m ago
This is not just a higher level of abstraction; it's the highest level of abstraction available to humans. So it seems to me like a substantially bigger jump than assembly to modern programming languages - not only technically but also conceptually.
EvanAnderson•43m ago
> ...like efficiently coding in assembly, no longer necessary in today's world.

Assembly is a stretch (albeit a few applications still need it), but otherwise that sentiment (and the people who actually believe it) says a lot to me about what makes today's PCs slower, laggier, and less enjoyable to use than the machines of the past. Today's world sucks.

weezing•1h ago
You are doing this to yourself.
Quarrelsome•1h ago
Is it tho? I get paid more these days to write less code. Is it dumb to be paid more, do more, have more oversight, and deal less with the minutiae?

I'm still concerned enough about the specifics to worry about background refresh tokens silently failing in OAuth in a mission-critical real-time system.

I'm not coding it, but I'm still thinking it. That's the important part, ain't it? Is it dumb, or just clever delegation?

ge96•1h ago
Me personally, if I had the money to get out of dev, I would. It's just not fun anymore if you HAVE to use AI to code instead of doing it yourself. That's the name of the game now: velocity.

I like making things myself, I have self-navigating robotics projects I do on my own time, but I'm not gonna use an AI to do it for me, the joy I get is figuring it out myself.

I will use AI if I'm stuck on something or need a specific algo written that I've spent enough time on and couldn't figure out.

Quarrelsome•1h ago
I just question whether we're talking about "dumb" or "fun". Agree with the latter but question the former.
malicka•1h ago
No, not really. You know what to think about because you were trained to by coding through the problem by hand. If you stop doing that, you stop learning the specifics of whatever problem domain you work with.
Quarrelsome•1h ago
Sure, that's why we probably shouldn't start with vibe coding. Or otherwise at least learn formal methods to test against assumptions and doubly triply check vibe coded output.
croes•48m ago
Your thinking is based on your experience with code; that's why you do the thinking and not your customers and managers.

You lose some ideas of thinking if AI does all the coding unless you study that code. But that’s still different to creating the code yourself.

Same with authors or songwriters. Their brain is used to create stories and songs that‘s why it’s easier for them to come up with ideas.

If you are just a reader or listener your brain doesn’t get wired in the same way.

AI is a lesser problem for already-experienced developers, because they just lose some abilities, but new developers will never gain those abilities in the first place, which will limit their thinking, especially for edge cases that need creativity.

Imustaskforhelp•40m ago
> Is it tho? I get paid more these days to write less code. Is it dumb to be paid more/do more, have more oversight and deal less with the minutia?

I do think that what people are being paid might get adjusted to whatever is happening.

First it was off-shoring; now tech companies have a convenient pretext to lay people off, and they genuinely believe that companies can be 5-10x smaller with AI and that 90% of code will be written by AI.

They then push it on engineers, and some adopt it, some don't. It becomes Goodhart's law: people start spending tokens to look good and spearhead using AI because, hey, 1) the corporation is recommending you do this, and 2) the points you talked about.

The AI bill blows up (Cloudflare reportedly spends $5 million per month, probably more, on AI bills, iirc), and with all of this, the company fires people.

The software engineers who were laid off all then try to create another AI tool (...using AI) or try to out-compete each other while the job market is at one of its all-time lows. Combine this with the trillion-plus dollars of US stock market value attached to the AI bubble.

I do think that you are paying a price in all of this. I feel like job insecurity is at an all-time high; from my understanding, people within this career are just scared of losing their jobs. Some are closer to retirement than others, but that's about it.

I think that nobody is that happy, to be honest: the software engineer is worried about his job, the CEO is worried about being replaced or having his product replaced by AI, the AI company is worried about how it would be profitable in the first place, the investors are worried they got into a bubble, and the government is worried about all these people; meanwhile other distractions (think UFO files, for example) and wars are happening and successfully diverting our attention from the real issues.

I don't know, but I think we are all paying a price, and I say this as someone (a young guy in his teens) who sometimes feels the most over-empowered by AI. I just feel like we lost something more critical along the way. We lost some sense of our humanity and peace as we embedded this technology, and now there are people who think about nothing but it 24/7. To be honest, I sometimes feel I would have fared okay without the AI thing, and I don't care so much about my personal gains; I do think the world would probably have been net positive if AIs had plateaued or were never created.

slackfan•1h ago
Skill issue.
steezeburger•1h ago
I enjoy using and orchestrating agents a lot to build software, but have never really had the desire to replace my writing with LLMs. I don't write a whole whole lot, so maybe I just don't have enough writing to do to make it appealing, but my emails, blog posts, comments, whatever are the last thing I want to automate. Not only because it's less personal, but because I'm so tired of reading AI cruft myself. So much more text in tickets than there needs to be, for example.

And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?

I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.

Rooster61•1h ago
I can't relate that much to this. Every time I use AI to write code, I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code. That ick feeling counteracts the dopamine hit of having a working app after a few minutes of vibe coding, and I don't think that's going anywhere anytime soon.

That said, I have experience. I could absolutely see myself falling into this as a junior or even mid level dev. I'd no doubt not feel that feeling on my neck if it wasn't scarred from code review lashings early in my career by knowledgeable mentors.

steezeburger•1h ago
Experience is so so valuable right now. We can guide these agents super well, but I do fear for the juniors as you said. I would like to think I'd use the agents to dive deeper and learn faster. It was pretty rough piecing together solutions from Stack Overflow, various irc channels, Reddit, etc. But also, I cheated on my homework in college and didn't really review the answers, so not sure. Though I pursued programming out of interest and not just to complete a degree. Maybe it would have been different. In any case, I'm glad I came into the LLM era with a lot of experience and failures already.
hparadiz•1h ago
Metrics, profilers, architecture! Use AI to get back to basics! Wanna prove a technique is better? Use AI to make a benchmark! Learn by experimentation! That is my advice to juniors. At the end of the day AI is writing code and there may be 10 different ways to run something. Only one is the fastest in any given use case.
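A benchmark of the kind suggested above can be tiny; a hedged sketch with made-up candidate functions:

```python
import timeit

# Two candidate implementations of the same task (illustrative, not from
# any real project): build a string of n 'x' characters.
def concat_plus(n):
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    return "".join("x" for _ in range(n))

# Time each candidate; which one is "fastest" depends on the use case
# and input size, which is exactly why you measure instead of guessing.
for fn in (concat_plus, concat_join):
    t = timeit.timeit(lambda: fn(10_000), number=100)
    print(f"{fn.__name__}: {t:.4f}s")
```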
chowells•58m ago
Fastest is usually the wrong metric. But you'd need experience to know that, I suppose...
steezeburger•57m ago
I think it's just the wrong metric to optimize for _first_. It's not generally a bad metric to keep tabs on though. Make it work, make it right, make it fast? Or something like that.
mikepurvis•54m ago
But the point is that LLMs giving you 10x the potential code output doesn't have to mean 10x the code committed. It can also be "let's look at all three possible implementations in more detail and decide which is really the best fit for our situation, and commit that one."

That's still 2-3x the velocity, but you get a better result because you went deeper on the paths-not-taken when designing.

steezeburger•58m ago
Yeah I totally agree! I also think people should still be reading as much code as they can. That's always been true imo. It is just hard to keep up with it now because of how much code an LLM can generate for $20/month. I do think we'll move to higher abstractions of course. We won't have to understand code as much as how the systems and components are architected. It would also be nice to use our new efficiency to return to producing truly optimized and fast software.
shigawire•30m ago
I don't think "cheating" is the right way to frame it.

A junior has managers pushing them to do more, faster. You review the code but do you really understand it the same as if you struggled through it? Do you ever build the muscle memory of what works and what doesn't?

It is the thought process that builds skills. I've seen some projects trying to be deliberate about learning from the agent as it writes the code - but I'm not sure there is a substitute for struggling and learning by doing.

svachalek•5m ago
When the chainsaw fails the juniors, they're going to be adding wood chippers and stump grinders. The seniors are going to be out there chipping artisanal wood blocks with a hatchet. You don't need a lot of history to see who you really need to be worried about.
sarreph•3m ago
I think this is one of the key takes right now. I too have similar experience.

Which way is it going to go?

i) “Seniors” also get superseded by even more capable models that can do all of the things which currently require experience.

ii) Linguistics become the new higher order abstraction (English is the new high-level programming language) _but_ there are different / orthogonal ways of approaching software development than the way we do things now — which “juniors” become more adept at more quickly.

nomel•2m ago
> Experience is so so valuable right now.

And probably the least valued it has ever been.

embedding-shape•52m ago
> after a few minutes of vibe coding

Don't vibe-code. It's a joke someone coined in the moment that somehow the industry decided shouldn't be a joke, and some people now think it's a feasible way of developing stuff. It's not.

Find a better way of working together with the agent, where what's important to be reviewed by a human gets reviewed and the rest is "outsourced", and you'll end up with code and a design that works the way you'd have programmed it yourself; you just get there faster. I probably end up reviewing maybe 90% of the code that the agent writes, but it's still a hell of a lot more pleasant writing/dictating a few prompts than typing tens of thousands of characters and constantly moving between files. Maybe I'm just tired of typing...

wahnfrieden•42m ago
There are tasks where it is appropriate to vibe code
embedding-shape•33m ago
Agreed, whenever you're 99% sure you'll throw away the code afterwards.
wahnfrieden•30m ago
Internal/personal tooling, marketing automations, etc. tend to afford it without needing to throw it out after. These are also cases where you can simply rewrite later without having to address a mountain of debt.

If you do this work for a wage and are nearly fully alienated from the value of your labor, I understand the distaste for applying it in any circumstance. You'll care more for your personal experience of the work: how informed you appear when reporting on it to your colleagues, how your boss/colleagues will judge you when an issue arises, how much you feel you are learning from the work, how frustrating it feels to return to items at the behest of others, etc. Vibe coding in these circumstances is unpleasant.

embedding-shape•15m ago
I care about building programs that work, do their thing well and are easy to change today and in the future :) I'm not sure where you're extrapolating the rest from, vibe-coding simply isn't for long-lasting software, you need to actually be involved then. Don't get me wrong, most of the code I "produce" today is written by LLMs/agents, but almost none of it is "vibe-coded".

Personal tooling especially, since you want to be able to keep making small changes over long periods of time; it's important that it makes sense when you come back to it, even if you've forgotten all about it since your last change.

Xmd5a•38m ago
I've been thinking of using Kiczales's Systematic Program Design [0]. Write the skeleton; let the AI fill in the blanks.

[0] https://news.ycombinator.com/item?id=16563160
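The skeleton-first idea might look like this in miniature (hypothetical functions; the blanks are the parts handed to the AI):

```python
# The human writes the signatures and contracts; each docstring describes a
# "blank" scoped tightly enough for an AI to fill in and for a human to check.
def parse_record(line: str) -> dict:
    """Blank: split a "name, score" line into {"name": str, "score": int}."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def summarize(lines: list[str]) -> float:
    """Human-owned contract: mean score across records, 0.0 if empty."""
    records = [parse_record(l) for l in lines]
    return sum(r["score"] for r in records) / len(records) if records else 0.0

print(summarize(["alice, 3", "bob, 5"]))  # 4.0
```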

zackify•51m ago
I agree with your sentiment. I've been trying to get from plan -> complete with AI and it's been working very well in a sandbox.

I am trying super hard to give the tools to validate everything to AI.

I finish by opening a draft PR and then I go through doing a deep review myself.

If I didn't already have 10+ years experience, it would be hard to learn and not atrophy with easy shortcuts being so available.

You still need people who know stuff in detail and can own the code... for now

stolen_biscuit•49m ago
Fully agree. I supplement my game development with AI. Anything novel or interesting I want to do, I need to write the code for myself, otherwise I'm in for a world of hurt. But for the drudgery work that is necessary to invest a lot of time in but boring to actually write, I design a clear architecture and ask AI to do the implementation leg-work. And still you have to go back over and make sure it didn't decide to just create something outlandish. A good recent example is Codex trying to recreate from scratch the behaviour already provided by Area2D in a game I'm making with Godot.

If you try and get AI to do anything meaningful, it will be riddled with footguns and bizarre choices. Maybe if you have hundreds of dollars worth of tokens that might not be the case - but for someone who spends $10 a month, it's just not worth the headache.

Besides, for me these are hobby projects and writing code is still fun, I just make AI write the boring parts (good examples: saving and loading, parsing of data files and settings menu functionality) - but I keep it away from anything that needs a humans judgement to create.

SarikayaKomzin•47m ago
I have the same feeling on the back of my neck. I think it’s born from my crippling imposter syndrome, which is maybe a super power now.
ryandrake•44m ago
In my experience, Claude only knows how to spew code. Every problem you want it to solve, it translates into "more code" rather than "less code". You have to very closely code review everything it does, otherwise your codebase is going to just grow and grow, and asymptotically approach 100% debt.

I code review everything that Claude produces, and I'd estimate about 90-95% of the time, my reaction is WOW it works but too much code dude, let's take 3 hours to handhold you through simplifying it until nothing more can be removed.

tailscaler2026•39m ago
Of course it writes a lot of code. It gets paid per token. Every additional line of technical debt is guaranteed future income.
HoldOnAMinute•34m ago
Periodically you can also ask it to review the recent changes and see if there is a risk-free way to streamline them.

You can also tell it to periodically summarize the "lessons learned" from the recent session(s)

embedding-shape•30m ago
Then local models shouldn't suffer from the same problems, but they do. I'd say they just aren't trained in the direction of "less code == better long-term maintainability", rather than it being some grand "increased token usage" conspiracy.

You can certainly steer them a bit to reduce the issue parent talks about, but they still go into that direction whenever they can, adding stuff on top of stuff, piling hacks/shim on top of other hacks/shims, just like many human developers :)

bonesss•16m ago
Training data is the masses of code from everyone.

Restrict that data to just the best of the best, the tersest of the tersest, and we’d see better output. I don’t think people are sharing that kinda stuff (Jane Street’s gems stay locked up), and even if they did my presumption is that it’d be too narrow and demanding for general audiences.

Big hopes for the long future, damned to some degree of mediocrity in the near term mass product.

operatingthetan•36m ago
A lot of people seem to think if you give the agent a framework and clear plans that it spews "good" code. I doubt it though.
wccrawford•36m ago
I haven't used Claude, just Sweep, Copilot and whatever Jetbrains has. But they've definitely deleted code, not just added it. I know, because they have deleted code that I definitely still needed, and I had to reject those changes and start over on the prompt.
HoldOnAMinute•35m ago
Here's what I do

Tell it, "Do not change any files yet, just listen." Then we discuss the problem. Then I have it write its understanding of the change to a file.

I review that carefully. Then I let it implement. I approve each change after manually looking at it. I already know what it should be doing.

Make smaller changes and check each one carefully before and after.

joebates•28m ago
Same. Luckily I enjoy the process of refactoring and deleting code is nearly arousing, so I get the initial dopamine rush of wow this works, followed by the dopamine rush of "wow now this is cleaner and works so much better". Keeps me in touch with the codebase too.
gchamonlive•40m ago
> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code

Can relate, but the one thing I do differently is teach the AI how to clean up after herself in follow-up prompts, sessions, and a refined AGENTS.md. Static code quality analysis tools are also really good for keeping the agent on its toes.

aerodexis•38m ago
Reading and writing are related, but separate activities. One's capability to write code can degrade independently of one's ability to review it.
epolanski•37m ago
The thing is that you seem to have that luxury to be able to dig more into the problem and scratch that itch.

But the industry is changing around you fast.

If MIT-bred devs were already building crap in faang before, the trend has been getting nothing short of worse across the industry.

Expectations are rising, the field is becoming a rat race of which engineer can output the most mediocre/acceptable/good enough amount of features in the least time as possible.

Let me make this clear: you're in an increasingly rarer bubble where you have a luxury that is disappearing in this industry, plain and simple.

I have the fortune of having stellar devs around me, people that contributed to projects and software you use every day.

They are also outputting orders of magnitude more than they ever did, and none of them is getting genuinely better at the craft, but it is what it is.

dclowd9901•34m ago
I've been using it mostly to bat away the yak-shaving rabbit holes one can get into when working on a large and complex project. I work mostly on platform work, which is generally nebulous in its feedback loop and testing. Relegating AI to refactoring and to building tools that help me research keeps me focused on solving the actual main problem and reduces context switching. I really don't understand people who use it to bat out their main focus. I simply don't trust it at that level.
onlyrealcuzzo•34m ago
> I'm constantly fighting a feeling on the back of my neck that I need to look over everything it has done and supplement/alter it with my own code.

On the flip side, I'm working on stuff FAR more challenging than I would ever be able to do on my own.

My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

AI might be making me a worse coder, but I don't care. If it hasn't "solved" coding now, I'm pretty confident it will long before my career is over. I don't have a job because I can write code - that's a small part of it. I have a job because I can get things to work. Anyone can code things that don't work (especially AI).

AI is certainly making me a far better overall engineer. Instead of spending my time trying to make the compiler happy (or fixing dynamic type errors at runtime), I can spend my time trying to solve substantially harder problems that I would never even dare try without an entire team to back me up (i.e. never).

Coding - imo - is VERY low on the totem pole of engineering skills.

I don't care if the function is pretty. I care if the system is upholding invariants and performing as expected, and there's adequate testing in place to PROVE to me that it ACTUALLY works.

High performance concurrent code has always blurred the line between sorcery and arcana... Go didn't really solve that. Rust/Tokio didn't. Zig didn't. C certainly hasn't.

It might be easier to prove to yourself, if you're the one doing all the writing, but at the end of the day, code is rarely just for you...

You probably should have the same level of proof whether you wrote it yourself ("just trust yourself, bro") or whether a Chinese Room wrote it for you.

I feel like I'm living in a Brave New World, and - at least for the time being - I'm enjoying it, even if it feels like I'm sprinting as fast as I can and still unable to keep up.

ferngodfather•31m ago
> My brain is melting because I can barely keep up with learning how to figure out if I'm even doing what I'm trying to do.

This is not a good thing. You should understand what your code does. Writing code nobody can understand is not a flex.

movpasd•34m ago
I feel the same way, and yet I would still say I feel AI usage atrophying my thinking skills. I feel less tempted to use it to shortcut whole files, but even just using it to speed up looking up and carefully reading docs, tinkering with a library to understand it when docs are inadequate, working out the tradeoffs for design decisions... These sound less objectionable, more like simple speedups, but when I _do_ need to do it (because the agent refuses to do it properly) I can feel the friction so much more keenly. Whether that's just me losing the habit of those specific tasks, or a generalised loss of g-factor, I don't know.
randusername•32m ago
I really enjoy having the AI write the spec then I write the code.

Reviewing code is pain, reviewing requirements and giving feedback feels more productive. I have to confront the full shape of the problem and I usually discover a few cans of worms that make me rethink my approach.

collingreen•23m ago
Learning from code review lashings is amazing in its effectiveness and minimal blast radius! I'm glad you were able to take that in the easy way.

Scar tissue from production going down and staying down is probably powering those code reviews and I think will be teaching this wave of vibe projects a few hard lessons. I've had to learn a few things the hard way like this and it's as effective as it is painful.

I'm very pro ai-generated-software in the right context. I think being able to vibe out software as needed is awesome and could finally unlock the potential of our computer- and data-dominated world. I also think we haven't yet learned as a culture where this new thing differs from traditional software, and misunderstanding that is where a lot of the pain will be felt.

svachalek•20m ago
I'm a very senior dev (32 years exp) but I've got the process nailed down tight enough with .md documents, skills, review agents, etc, that I don't typically have that feeling or any need to do anything extra.

I don't think this makes me dumb though, I've just moved up stack. Rather than caring about assembly language or source code, I'm focused on requirements, architectural decisions, engineering process, and ever more automation.

guelo•15m ago
Every engineer turned manager has the same thought, but after a few years they can barely code.
andrewstuart2•1h ago
I was talking to some friends about this over drinks the other day. I feel it has the same effects as any drug (or behavior) that triggers dopamine. If I can get a dopamine hit for lower effort AI in 10 minutes, and maybe a tiny bit better of a hit doing it myself after a day, why would my brain go for anything but AI? Especially when my DIY muscles are a bit atrophied.

And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.

dogleash•57m ago
I'm curious whenever I hear takes with your perspective.

Entering the workforce happens at an age where people have built (some more rudimentary than others) a level of understanding and self control regarding delayed gratification and Type II fun.

Did you have the kind of life where you were never really challenged to build that skillset, or is the mental stimulation so strong for you when you use AI that it overcomes executive function?

AlecSchueler•53m ago
> Did you have the kind of life where you were never really challenged to build that skillset,

Do you really think phrasing a question like this will ever induce a productive response?

dogleash•33m ago
I guess I could have phrased it better, but at some point I'm asking about weak self-control versus whether the drug really is that strong. The life-experience angle was meant to lay down a face-saving reason that it's OK to say your willpower sucks: you just weren't forced to cultivate it. Plenty of reasons that can happen in a life.

I think it's pretty normal to reflect on the difference between my own life skills and those I see in others. There are things I've struggled with throughout adulthood because, through some happenstance, I was able to avoid that class of challenge as a child.

I didn't learn how to study until my 20s. I didn't have willpower over eating and exercise until my body changed around 30 and I suddenly got fat; then I talked with friends who teased me for being less skilled at something than a teenage version of themselves.

What's the saying: someone who's never smoked doesn't have to learn how to quit smoking?

AlecSchueler•25m ago
I understand what you're asking and why, but the phrasing reads very dismissively and that's what I was asking about. Generally a friendly tone will get you a lot further.
marknutter•1h ago
AI has been the best learning tool I have ever used, and it's not even close. I've learned more in the past year than I did in the previous five.
Ifkaluva•1h ago
Yeah it really depends on how it is used
happytoexplain•1h ago
There are two kinds of learning: Reading and doing (and you need both). AI has been great for the reading half of learning, but has harmed the doing half of learning due to efficiency demands. We can still "do" in private, but no longer in our day-to-day.
comrade1234•1h ago
For my current project I'm coding every day in Java, Ruby, and JavaScript. I waste a lot of tokens doing what used to be simple Google searches for language differences, since I mix up things like the null-safe operator in Ruby vs. JavaScript, or what the continue/break statement is in Ruby vs. Java. I think Claude is probably very disappointed in me that the most complicated thing I use it for is refactoring old Java loops to use more modern streams, which can be hard to write off the top of your head.
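The loop-to-stream refactor described above looks roughly like this; a hypothetical sketch (the class and data are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamRefactor {
    // Old-style imperative loop: collect uppercased names of users over 30
    static List<String> namesLoop(List<Map.Entry<String, Integer>> users) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Integer> user : users) {
            if (user.getValue() > 30) {
                result.add(user.getKey().toUpperCase());
            }
        }
        return result;
    }

    // The same logic as a stream pipeline: filter, map, collect
    static List<String> namesStream(List<Map.Entry<String, Integer>> users) {
        return users.stream()
                .filter(u -> u.getValue() > 30)
                .map(u -> u.getKey().toUpperCase())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> users = List.of(
                Map.entry("alice", 34), Map.entry("bob", 25), Map.entry("carol", 41));
        System.out.println(namesLoop(users));   // [ALICE, CAROL]
        System.out.println(namesStream(users)); // [ALICE, CAROL]
    }
}
```

For the other mix-up mentioned: Ruby's safe-navigation operator is `&.`, while JavaScript's optional chaining is `?.`.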
d1sxeyes•48m ago
It doesn’t help that Google has gone to shit though, and what used to be a simple Google search is now an enshittified embedded experience with an AI anyway.
Bolwin•29m ago
No one's forcing you to use Google. I've found Kagi to be pretty good
behole•1h ago
I feel lucky cause I started dumb. Unintentional level-up!
Accacin•1h ago
I have a nice balance: using AI at work as a C#/TS developer, which lets me get stuff done, and working on personal projects at home where I use AI purely for ideas when I'm stuck or can't figure something out myself.

I personally think it can be a great tool for learning but it's so easy to fall into the trap of getting AI to do everything for you.

I've also used it for personal projects like a Chip8 emulator I wrote in C where I'd managed to run a few basic games and ran out of steam. Used AI to help me implement the rest.

Aurornis•56m ago
> With coding, I've been using AI entirely for a year or two. I've been entirely prompting and I haven't written a single line of code. I have mostly forgotten how to code

I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.

Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.

Sometimes the coding tools come back with better ideas than I came up with. Sometimes my idea is much better. Most often, with medium- to high-complexity problems, if the AI comes up with a working solution it has enough problems that an attentive human reviewer would have rejected it at best. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools for everything.

This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.

When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?

I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.

dabinat•56m ago
It doesn’t have to be this way. You can use AI in ways that don’t rot your brain. You can delegate easy tasks to the AI to save time, while saving the harder tasks for yourself. Or you can treat it more as a mentor / tutor and have it explain why it made certain decisions.

I find that AI fails at things that are truly creative. I have been thoroughly unimpressed with ideas it has had or things it’s written for me. There’s still a lot of room for human creativity.

miltonlost•52m ago
> You can delegate easy tasks to the AI to save time, while saving the harder tasks for yourself.

This sounds a lot like "You can skip the fundamentals of basketball and just focus on dunking!"

asdff•52m ago
Well, the "easy" tasks people are delegating are still leading to atrophy. Stuff like having it take over your writing. Now you feel you cannot write without this crutch. I've seen stuff pitched like AI that makes your slide decks for you. That to me is dangerous because creating the slide deck in a coherent way is imo a very valuable way to understand your project and keep on track with the story you are trying to tell about the work. I think a lot of what we think is easy or even boring has a lot of value in building up our understanding.
iamcalledrob•46m ago
It baffles me a bit that people are working so hard to replace themselves with AI. It's such a high bar for the AI to hit, and takes the creativity away from the human.

I have a pet theory that perhaps the optimal way to use AI will be more like an "exoskeleton" that turns you into a super-human programmer. Something that plugs the deficiencies of the human programmer, rather than replacing you entirely.

wccrawford•34m ago
I wouldn't keep the "hardest" tasks. I'd keep the important ones. It's often the same, but there are differences. And I'd argue that the important ones are the ones that you most want to retain the ability to do yourself anyhow.
riazrizvi•54m ago
It converts ICs into project managers, by default. I've been wrestling with this issue for a year.
tombert•46m ago
Yeah, I've felt like it has converted my job from "writing software" to "babysitting interns".

There are things that I think are very cool; there are lots of projects that I've sort of wanted to do for the last decade that I have pushed off because they're reasonably high effort and I don't want them that much, so being able to have a pretend intern write them for me has been great.

On the other hand, I do think that using Claude/Codex to do all the coding at work has become a little soul sucking. Now instead of being paid to do fun software work, a lot of my work still boils down to babysitting interns.

When I do get to work on projects that are interesting, it's still fun because I can justify writing TLA+, and using that as a guiding spec for my projects. The problem is that most work really isn't that interesting; a lot of it is glorified SQL queries, or CRUD, or "put thing into Kafka in one place, and take it out in another place". Those jobs can be tedious, but they aren't interesting, and now instead of even getting that, I yell at Codex to do it and I awkwardly sit and wait.

I didn't think I'd miss writing stupid CRUD apps, but here we are.

riazrizvi•29m ago
I'm convinced Claude Code and Codex are not the future. The cap seems to be a 300-500 line file, so I just use ChatGPT and/or my own front end to the APIs from OpenAI and others, including local models. Much beyond that, it will not do what I want. Too many expert details to get right.
tombert•6m ago
When it was just ChatGPT, I actually really enjoyed it. I still had to do a lot of the work, but I could use ChatGPT to explain arcane logs and help me diagnose errors. It didn't feel like babysitting interns, it felt more like "smarter google".

Codex and Claude have been a bit soul sucking. I feel like I'm doing less of the planning and the like. I acknowledge that most code that makes it into production doesn't have to be amazing, but I would still take some level of pride when I would figure out an interesting optimization, even for a simple CRUD app, and now I am somewhat deprived of that kind of stuff.

pplonski86•53m ago
Before I ask AI to write anything, I prepare a plan. I was pleasantly surprised when I noticed the Plan mode in Codex recently. It makes me think maybe others are doing the same, and that's why they added it. Anyway, I start with a plan, then ask the AI to do just one step.

If coding a new feature, I do one step and check the code: doing a git diff, reading the changes, or just asking Codex to show me the changes.

If writing an article, I ask for only one paragraph. I read it, and if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that paragraph.

If doing data analysis with AI, I do one step of the analysis and ask the AI to display intermediate results so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify the results. If all looks good, I continue to the next step.

I don't like the situation where I ask AI to do all the code changes, the whole article, or all the data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: you clearly see that it doesn't reflect your thoughts.

Maybe step-by-step is the way to use AI and not feel dumber.

Imustaskforhelp•51m ago
Firstly, I salute the author for saying these things. We know the feeling of criticizing AI, and certainly I criticize it a lot too, but when it comes to personal matters, or how I am using AI, there are some things I shy away from saying online, and I wonder if other people feel the same way.

For example, AI once deleted my project. I was able to recover it, but through a series of mistakes I lost version control, and IMO I lost a good version. (I think after abandoning that project and coming back, I was eventually able to finish it.)

Another example, the one biting me the most: I wanted to create a copy.sh/v86-based thing where you can edit the .img files of distros and save them all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.

Although this is just an optional project, and I only thought it would be fun to edit .img files in the browser, it now leaves me feeling disappointed.

That disappointment is partly frustration at the thing not working, and partly the realization that I might be dropping the idea altogether. I must admit this is a field in which I have absolutely no expertise, but it still feels disappointing, and I have kept thinking about it for some time now.

I wonder how many people feel this: if AI is unable to build their project, they get frustrated, disappointed, even a hint of panic. I think it's just wrong how much we are relying on LLMs at this point. It feels like the whole economy is doing what I am doing, but with billions of dollars.

Another thing I notice is that young and old alike vibe-code in much the same way. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel like we are forsaking both the junior developers and the expertise built up by senior developers as we replace them with these LLMs.

han1•50m ago
related: https://news.ycombinator.com/item?id=48139058
HeinrichAQS•49m ago
Understand you 100% - that's why I force myself to study maths as a "hobby" at a remote university. It's completely useless these days, since I will probably never reach a level where I am better than current frontier models, but it sharpens my mind just by doing it. I would compare it to physical training: it's no longer strictly necessary to be physically active, but it's still quite helpful. It would be dumb not to use AI, since it is useful, but it's also dumb to watch yourself getting dumber and do nothing about it.
syl5x•49m ago
I feel that few will have the privilege of time to write code by hand. And look at what we are actually writing: most of the time, for me, it's nothing novel, nothing fancy, just the same old "create a backend for X," fixing some simple bug, and other tasks that are trivial for a mid-senior programmer. The harder tasks are mostly (again, for us) architectural decisions over the code, and I am even thinking about how we can develop a system where the LLM wouldn't derail on feature implementations. Anyway, what I am trying to say is that writing code by hand may be okay for now, but in the future I believe the shareholders and whoever is above you will want you to deliver features and bug fixes FASTER with the help of LLMs, and if you can't deliver that, you will underperform. So in the end it's not what we want but what the shareholder wants. Of course, if you aren't drained by this, you can write code by hand in your free time. I don't want to sound like a doomer, but I believe this will very much be reality sometime soon.
iLoveOncall•42m ago
Everyone has the time to write code by hand, because AI doesn't yield real productivity gains.
lacedeconstruct•41m ago
It was never a velocity problem though; rapid progress comes mainly from designing better systems and building tight abstractions, not from writing the same primitives faster.
pmg101•29m ago
That certainly used to be the case.

Do you think that in 2026 maybe rapid progress can also come from using the same primitives faster?

I'm still figuring this out but I'm certainly open to the possibility.

kingstnap•49m ago
I agree 100% with this article.

You need to spend time on coding without agents and writing without AI as practice if nothing else.

You should not get complacent in offloading all detail oriented work to agents.

raincole•47m ago
> I just caught myself about to copy and paste it into Claude to see what it thinks because I'm worried that it doesn't make sense or it reads funny or there's something missing

I unironically believe this is a very good habit. When it comes to writing, instead of starting with AI, finishing a chapter by hand first and then asking AI to review it strikes the best balance.

collinmanderson•39m ago
> I just caught myself about to copy and paste it into Claude to see what it thinks because I'm worried that it doesn't make sense or it reads funny or there's something missing. That's the self-doubt that it's feeding on and what I need to fight back.

This is where I'm at. I feel like I need AI to review everything.

temporallobe•46m ago
I only really use GH CoPilot and while it’s really damn good at predicting what I’ll do next, I find it really makes me lazier. It’s like using GPS - it’s much faster, easier, accurate, and reliable than not using it, but I have found I don’t remember routes like I used to, as if that part of my brain just stopped working. If we don’t use a skill, our brains seem to want to almost immediately reclaim those resources for something else.
jasonjei•43m ago
I’m not using AI to eliminate thinking but to free me from the rote mundane code writing. AI is perfectly competent at writing code once a prototype is implemented.

I do write initial proof of concept crude prototypes (not commented, hardcoded variables, etc), and AI does the productionizing of them. It has really allowed me to command a team of agents instead of keeping track of a bunch of humans of varying work ethic, skill, and ability to maintain high code quality. And often AI is very good at maintaining patterns used in the code base or even keeping them to industry best practices.

When using AI you will no longer be writing so much in programming languages—English, or whatever language you talk to the LLM in, will be the main language.

zer00eyz•43m ago
God damn, this nail gun is making me lazy; it's like I don't have to swing the hammer any more...

Most people, given a nail gun, can't build a house. That's where the skill is...

I'm not someone whose validation came from the lines of code, but from the resulting working system.

intended•42m ago
AI use and low confidence are correlated with lower ownership and deferment of critical thinking skills.

Based on the MIT and MSFT studies.

WalterBright•41m ago
Well, James, forgive me for being so inquisitive; but during the past few weeks, I’ve wondered whether you might be having some second thoughts about the mission.
hedayet•40m ago
also, chatting with AI makes me impatient and delusional.
epolanski•40m ago
I try to compensate for skills atrophy with LeetCode problems and code katas, but the plain reality is that real software engineering, the kind that requires you to absorb a problem and make it intimate, is just not there.

The work rhythm has ballooned, and since every co-worker is now pushing work (generally mediocre but acceptable, thanks to strong codebase fundamentals and them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver enough output to make stakeholders happy.

I am less and less fond of this work.

I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.

Every day, I'm more and more leaning into changing industry.

I love code and programming and solving product problems. But the job has changed dramatically.

If the pay+comfort ratio wasn't that good I would've done that already.

It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.

dfxm12•39m ago
AI is keeping me on my toes. Many people in my org are experiencing the Dunning-Kruger effect after being armed with AI and are making such new and spectacular messes that I've had no other choice but to ratchet up governance controls. Improving documentation didn't help. The few people who read it complain to me when it is contradicted by AI.
coldtea•35m ago
Most people who say this didn't/can't happen to them are the worst cases...
VikRubenfeld•35m ago
"That's the self-doubt that it's feeding on and what I need to fight back."

Yes -- now let's talk about the correct form of fighting back.

It is not "I don't want to feel self-doubt so I will suppress that feeling."

It is, "The self-doubt is valuable -- it's pushing me to improve."

The AI is never going to be able to say what you really mean. But it may inspire you to push harder to improve your ability to do that.

deathanatos•33m ago
My company has 3 AI on every pull request now. They behave as follows:

1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.

2. another general AI: You have to at-chat it. It reacts to the message with <eyes emoji>, but never actually posts a comment?

3. a security bot. Comments, when it thinks there's a problem, in the most obtuse way possible. "SAST findings". But the findings are behind a link, and none of us devs are given access.

I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.

Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which doesn't let you attach comments to arbitrary lines of code in a review; only the blessed lines can have comments. Literally none of our AIs can format their complaint as a suggested change (i.e., the GitHub feature); no, instead I get English prose.

Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.

dbvn•33m ago
You haven't written a line of code in 2 years and you're confused why it's making you feel like you can't code?
jonstaab•32m ago
I have been telling people lately that I feel like I'm losing my mind. And I'm not even someone who has leaned into AI coding that much; I've just tried to learn the tools since Claude got "good". But my inherent laziness, which was always flattered as something that makes me a good programmer, has made me unable to use the tools with the required discipline.

The result is that I have not thought deeply about the software I write for around 3 months. Every additional week that goes by without me doing a refactor or serious feature addition saps my confidence. I know I can still code. But I feel worried that I can't.

Today I am refactoring a 4k LOC AI-written Rust codebase. I don't know Rust, but I will finally learn it today. And I can already tell the end result will be 50% the size and immeasurably more coherent.
kimjune01•32m ago
using AI to red-team your thoughts and assumptions is the fastest way to get smart since the dawn of time
itissid•30m ago
Has anyone gone back to doing code katas, code craft like exercises by hand? They help keep me grounded.

Also, I feel like it's fine to let AI write your code. I felt very much like the OP did. A couple of things help keep my sanity. One is that, as developers, I think our job has evolved into knowing which decisions an AI makes are good and which are bad, whether in code or design; there is nowhere a developer (or, for that matter, a knowledge worker) can hide from AI. In this world you will be forced to communicate with these tools, partly because as a community we have decided (for better or worse) that AI should bring non-trivial productivity gains to software development.

The other, which I still want to validate, is that for those of us who are mediocre at coding, it might be a gift, because it frees up some time, and thus mind space, to consider what we are actually good at.

alasano•30m ago
The main thing that was dumbing me down (and burning me out) was having to babysit LLMs on anything except basic tasks if I care about code quality/structure/maintainability.

I love coding, it always felt like Legos for adults. Not that Legos aren't also Legos for adults.

But there's no fighting the fact that we won't be writing 99% of the code anymore so I take pleasure in crafting the specs and requirements clearly, that's where I put the effort.

And then to avoid having to babysit the agents to get them to stick to the plan, I built a super robust external orchestrator that forces multiple review and fix rounds until I get the result I want.

I'll be fully open sourcing that soon also https://engine.build

mariopt•30m ago
I feel your pain.

Today I'm forcing myself to learn SwiftUI and to type each character with my own hands. There is a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibe-code full apps, but I'm not going to pretend my experience isn't playing a massive role in guiding the models.

Don't let AI take away your joy in building stuff; it's totally fine not being "productive" and taking your time. Just force yourself to have at least two AI-free days every week.

gavinh•29m ago
When I work with Claude to plan a feature and then review Claude's implementation, I don't understand the feature as well as those I developed without AI assistance. I don't recall the details of the feature's behavior as well, even days later. I suspect this is not surprising to anyone who has studied pedagogy. I've been working on applying some exercises during code review (including self-review of my own AI-assisted code) to improve comprehension and recall (https://bridgekeeper.io/). If this problem resonates with you, I would like to talk.
thisisthenewme•28m ago
As a developer, I kind of feel like this all smells like job security.

After using LLMs for a while, I have to admit it's pretty nice, and I like using it. I've been vibecoding a few apps, and it's a good dopamine hit to immediately see your ideas come to life. However, based on my experience, it will bite you if you trust it blindly. Even in my vibecoded projects, it keeps adding "features" without me asking for them. Since they're just pet projects, I don’t really care as long as the end result is what I'm expecting, but I don’t think companies will be as flexible. I also don't think customers would like it if features changed or got added with every new fix or update.

So this could go in a bunch of different directions from here, but to summarize the current situation:

    1. A lot of companies are heading in this direction.
    2. Without proper engineering, AI will easily write more code and potentially change the application unintentionally.
    3. We will have fewer junior engineers entering the market because of fear around AI and reduced hiring.
    4. AI usage will hit a critical point where it is making massive amounts of changes, and the people "prompting" it might start getting overwhelmed.
    5. We will end up with more features that people have to keep in their heads. I don't think we can trust LLMs 100%, and because of that, developers will still need to know exactly what the application does.
    6. Eventually, there will be a lot of bugs, and developers will complain that we need additional human resources.
    7. Hiring starts again.

I think, right now, the toughest position is for new developers, and the best position is for people already in the market.
photochemsyn•28m ago
Aggressively red-teaming your own work with LLMs is a good habit to get into, with prompts like "I've been told to find the flaws in this argument/presentation/code file/etc." It doesn't save any time, but it's pretty educational, as long as you go back and forth a lot. It can fall into a style-disagreement loop between two equivalent code blocks, since it will try to find something wrong if instructed to do so, which is interesting.

If you don't do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that's a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another, slightly less harsh tactic: you have the LLM flip back and forth between defending and prosecuting your work.

I think this should be the default setting, but it doesn’t encourage engagement, the average customer will think the LLM is a mean jerk if it starts off like that.

winrid•21m ago
I don't feel this way. I have just been tackling more and larger problems, I think? One of several things this week, for example, is switching a multi-master KV store for tracking views on individual objects over to tiered HyperLogLogs that periodically merge. I could do this without AI, but it would take me a week instead of a day.

I think, if you're not feeling challenged, you're probably just doing the same work but faster. You should try to tackle harder problems, too!
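The structure that makes the tiered approach above work is that HyperLogLogs merge losslessly by taking the element-wise max of their registers. A minimal sketch, not taken from the commenter's system (the class and parameter names here are illustrative only):

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog cardinality sketch with register-wise merge."""

    def __init__(self, p=10):
        self.p = p                 # 2**p registers
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        # 64-bit hash; first p bits pick a register, the rest give the rank
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)
        rest = h & ((1 << (64 - self.p)) - 1)
        rank = (64 - self.p) - rest.bit_length() + 1  # leading zeros + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def merge(self, other):
        # merging two sketches = element-wise max; this is what lets
        # per-tier sketches combine into one without double counting
        assert self.p == other.p
        for i in range(self.m):
            self.registers[i] = max(self.registers[i], other.registers[i])

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)
        z = sum(2.0 ** -r for r in self.registers)
        est = alpha * self.m * self.m / z
        zeros = self.registers.count(0)
        if est <= 2.5 * self.m and zeros:  # small-range correction
            est = self.m * math.log(self.m / zeros)
        return int(est)
```

Because merge is just max over registers, sketches from different tiers (or different masters) can be combined in any order and any number of times, which is exactly what a multi-master, periodically merging design needs.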

nancyminusone•17m ago
Computers read my code, so I don't mind upsetting their feelings.

But why would anyone use AI to write documents or articles? Do you really respect your recipients so little that you can't be bothered to share your own thoughts?

I might as well get an AI to call my own mother on mother's day.

mjfisher•10m ago
I think the specific case of having a long conversation with an agent about what you're trying to achieve and why, and then having it update a README or a skill based on that conversation, is a useful thing to do. It captures the context of the conversation without you having to essentially write the same thing again.
h14h•16m ago
I'm worse at producing code by hand, but feel smarter overall.

I've learned an insane amount in a very short period of time, and have been engaging in much more challenging problems.

Instead of "what's the right syntax for this for loop again?" I'm asking "what's the business critical module in this system and how do I structure the test suite to prove it's working to spec?"

throw4847285•15m ago
We talk a lot about the risks of AI in schools, but those same risks apply in any learning environment.

I recently started a new job and I find that AI is making it so much harder for me to onboard. I am adjusting to my role much slower than my peers who are using AI less. I am coding in a language I am unfamiliar with, which makes the lure of vibe coding stronger. I am at least skilled enough to recognize when Claude gives me an answer that either makes no sense or is unnecessarily verbose. But the more time I spend asking Claude to write code, the less I feel like I'm developing the skills that the job requires. Plus, when I submit a PR, I lack the necessary confidence in my own work, which just feels bad.

Honestly, another part of this is that I'm asking Claude to search through Slack and docs for answers to questions when I should just ask another person. The AI is feeding my social anxiety, luring me into avoiding human contact that I know will be good for my understanding as well as my general need for social interaction.

That all sounds like I am absolving myself of responsibility, but I think it's important to point out how a given technology is especially addictive for a certain type of person, and traps them in a negative behavioral cycle. If I hold off on relying on AI now, I suspect I can grow in my skills to the point that I can delegate tasks to AI that are rote and easy for me to verify their results. It feels challenging, but it's necessary.

gralab•1m ago
We need to separate our emotions from these things. I understand why people don't like AI, or are fearful of it, but we need to have good faith arguments about it. Not this. These articles are just cope.
projektfu•1m ago
It is making me feel less dumb when I use it to get Linux admin things done, because 1) it gets things wrong and I have to help it, and 2) even though I would have gotten frustrated and given up without AI, it shows me that Linux has gotten way out of hand for administration. Wheels have been reinvented and conventions have been changed for no good reason, or because of https://xkcd.com/927/