One could argue that it was always like this. Low-level languages like C abstracted away assembly and CPU architecture. High-level languages abstracted away the low-level ones. Frameworks abstracted away some of the fundamentals. Every generation built new abstractions on top of old ones. But AI is different in one crucial way: until now, every abstraction was engineered and deterministic. You could reason about it and trace it. LLMs, on the other hand, are non-deterministic, so we cannot treat their outputs as just another layer of abstraction.
I am not saying we cannot use them. I am saying we cannot fully trust them. Yet everyone (or maybe just the bubble I am in) pushes the use of AI. For example, I genuinely want to invest time in learning Rust, but at the same time I am terrified that the skill will become obsolete and the effort will be wasted. And the reason may not be that the models become perfect and always produce high-quality code; it may simply be that, as an industry, we accept "good enough" and stop pushing for high quality. Models can already generate code of good-enough quality today.
Is it just me, or does it feel like there are half-baked features everywhere now? Every product ships faster, but with rough edges. Recently I saw Claude Code using 10 GiB of RAM. It is just a TUI app.
Don’t get me wrong: I use AI a lot myself, and I like how easily it lets us try out different things.
As a developer, I am confused and overwhelmed, and I want to hear what other developers think.