So, this sounds to me like an expanded version of that, more or less.
I think I'd prefer an AI future with lots of little focused models running locally like this rather than the "über models in the cloud" approach. Or at least having such options is nice.
[1] https://vim.fandom.com/wiki/Any_word_completion
There's also omni-completion, a bit more advanced: https://vim.fandom.com/wiki/Omni_completion
What you're describing is called: programming. This can't be serious. What about the cognitive overhead of writing a for loop? You have to remember what's in the array you're iterating over, how that array interacts with maybe other parts of the code base, and oh man, what about those pesky indices! Does it start at 0 or 1? I can't take it! AI save me!
Is that the part of programming that you enjoy? Remembering logger vs logging?
For me, I enjoyed the technical challenges, the design, solving customer problems, all of that.
But in the end, focus on the parts you love.
The user, in fact, has set up a tool for the task - an "AI model" - unless you're saying one tool is better than others.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.
- Your hot path functions get optimized, probabilistically
- Your requests to a webserver are probabilistic, and most of the systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.
Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
> Just because YOU don't deal with probabilistic events while programming in ...
Runtime events such as those you enumerate are unrelated to the "probabilistic codegen" the GP references: "codegen" is short for "code generation" and in this context refers to an implementation-time activity.
Again, the post to which you originally replied was about code generation when authoring solution source code.
This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.
0 - https://en.wikipedia.org/wiki/Real-time_operating_system
Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.
Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just... right?
I think you're clinging onto low-level thinking, whereas today you have tools at your disposal that allow you to easily focus on higher level details while eliminating the repetitive work required by, say, the shotgun surgery of adding individual log statements to a chain of function calls.
> Of course LLMs can do a lot more than variable autocomplete.
Yes, they can.
Managing log calls is just one of them. LLMs are a tool that you can use in many, many applications. And it's faster and more efficient than LSPs in accomplishing higher level tasks such as "add logs to this method/methods in this class/module". Why would anyone avoid using something that is just there?
You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool being showcased.
> You should be able to type log and have it tab complete because your editor should be aware of the context you're in.
...or, hear me out, you don't have to. Think about it. If you have a tool that you type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?
No, but I genuinely like writing informative logs. I have been in production support roles, and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.
Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.
If you're proficient in a programming language then you don't need to remember these things, you just do it, much like spoken language.
I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If a person cannot remember what to use in order to define their desired solution logic (how do I make a log statement again?), then they are unqualified to implement it.
> But in the end, focus on the parts you love.
Speaking only for myself, I love working with people who understand what they are doing when they do it.
You make my point for me.
When I wrote:
... I love working with people who understand what they
are doing when they do it.
This is not a judgement about coworker ability, skill, or integrity. It is instead a desire to work with people who ensure they have a reasonable understanding of what they are about to introduce into a system. This includes coworkers who reach out to team members in order to achieve said understanding.

You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.
This is the core part of what's changing - the most important people around me used to be "People who know how".
We're slowly shifting to "Knowing what you want" is beating the Know-how.
People without any know-how are able to experiment because they know what they want and can keep saying "No, that's not what I want" to a system which will listen to them without complaining while supplying the know-how.
From my perspective, my decades of accumulated know-how are entirely pointless, wiped away in the last 2 years.
Adapt or fall behind, there's no way to ignore AI and hope it passes by without a ripple.
So part of this is just another abstraction. But another part, which I agree with, is that abstracting how you learn shit is not good. For me, I use AI in a way that helps me learn more and accomplish more. I deliberately don’t cede my thinking process away, and I deliberately try to add more polish and quality since it helps me do it in less time. I don’t feel like my know-how is useless — instead, I’m seeing how valuable it is to know shit when a junior teammate is opening PRs with critical mistakes because they don’t know any better (and aren’t trying to learn)
But sure, vibe away.
in C this leads to remote code execution (%n and friends)
in Java (with log4j) this previously led to remote code execution (despite being memory safe)
why am I not surprised the slop generator suggests it
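For readers unfamiliar with the bug class: the %n exploit is specific to C's printf family, but the underlying mistake, letting untrusted data act as the format string, exists in Python too. A hedged sketch (variable names are made up for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

user_input = "uptime 99% %s"  # attacker-influenced text containing format directives

# Risky: making the untrusted value the format string itself. In C's printf
# family this is the %n remote-code-execution vector; in log4j, ${jndi:...}
# lookups in the message string led to Log4Shell. In Python it "only" corrupts
# formatting: "uptime 99% %s" % ("arg",) raises ValueError on the stray "%".

# Safe: a constant format string, with the data passed as an argument.
log.info("user said: %s", user_input)
```

The rule is the same in every language: the format string is code, the data is data, and the two should never be merged.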
Even better, you should be interpreting them at time of reading the log, not when writing it. Makes them a lot smaller.
Yes, f""-strings may be evaluated unnecessarily (perhaps, t-strings could solve it). But in practice they are too convenient. Unless profiler says otherwise, it may be ok to use them in many circumstances.
Wait a second... If I do ANY ACTUAL engineering and measure the time savings, they're completely negligible, and it just makes the code harder to read?
It is complete insanity to me that literally every piece of programming literature over the past sixty years has been drilling the concept of code readability over unnecessary optimization, and yet I still constantly read completely backwards takes like this.
- that the code itself is an interlinked structure (LSPs and code navigation),
- that the syntax is simple and repetitive (snippets and generators),
- that you are using a very limited set of symbols (grep, find and replace, contextual docs, completion)
- and that files are tools for organization (emacs and vim buffers, split layout in other editors)
Your editor should be a canvas for your thinking, not an assembly line workspace where you only type code out.
I thought “making tools to automate work” was one of the key uses of a computer but I might be wrong
When I was younger, I loved 'getting into the weeds'. 'Oh, the audio broke? That gives me a great chance to learn more about ALSA!'. Now that I'm older, I don't want to learn more about ALSA, I've seen enough. I'm now more in camp 'solving problems': I want the job done and the task successfully finished to a reasonable level of quality, and I don't care which particular library or data structure was used to get the job done. (Both camps obviously overlap; many issues require getting into the weeds.)
In this framework, the promise of AI is great for camp 'solving problems' (yes yes, hallucinations etc.), but horrible for camp 'getting into the weeds'. From your framing you sound like you're from camp 'getting into the weeds', and that's fine too. But I can't say camp 'solving problems' doesn't like programming. Lots of carpenters out there who like to build things without caring what their hammer does.
Though, doing this is still the right way to learn how to debug things!
NB: I actually just realized I never understood a specific bit of image processing math after working on ffmpeg for years, asked a random AI, and got a perfectly clear explanation of it.
Maybe because I don’t think in terms of code. I just have this mental image that is abstract, but is consistent. Code is just a tool to materialize it, just like words are a tool to tell a story. By the time I’m typing anything, I’m already fully aware of my goals. Designing and writing are two different activities.
Reading LLMs code is jarring because it changes the pattern midway. Like an author smashing the modern world and middle earth together. It’s like writing an urban fantasy and someone keeps interrupting you with some hard science-fiction ideas.
To split the definitions one step further: That actually sounds not like you 'enjoy solving problems'(the process), but rather you 'enjoy not having the problem anymore'(the result).
Meaning you don't like programming for itself(anymore?), but merely see it as a useful tool. Implying that your life would be no less rich if you had a magical button that completely bypassed the activity.
I don't think someone who would stop doing it if given the chance can be said to "like programming", and certainly not in the way GP means.
Disliking toil is not the same as disliking programming.
It is just a bunch of people that don't take pride in self-sufficiency. It is a muscle that has atrophied for them.
You're right about the pride of writing actually good code. I think a lot about why I'm still writing software, and while I don't have an answer, it feels like the root cause is that LLMs deprive us of thoughts and decisions, our humanity actually.
I have never felt threatened by an LSP or a text editor. But LLMs remove every joy, and their output is bad or may not be what you wanted. If I hated programming, I would just buy software, as I don't have such precise needs as to require tools perfectly suited to them.
No need to enjoy a good meal, AI will chew food for you and inject it in your bloodstream. No need to look at nature, AI will take pictures and write a PDF report.
Tools help because they are useful. AI is in the weird position of trying to replace every job, activity, and feeling. I don't know who enjoys that, but it's very strange. Do they think living in a lounge chair like in the Wall-E spaceship is good?
As for the article, it's yet another developer not using their tools properly. The free JetBrains code completion is bad, and using f-strings in logs is bad. I would reject that in a merge request, sorry. But thinking too much about it makes me sad about the state of software development, and sad about the pride and motivation of some (if not most) developers nowadays.
We live in a society. That means giving up self-sufficiency in exchange for bigger leverage in our chosen specialisation. I am 110% confident that when electric power became widespread, people were making the exact same argument against using it as you are making now.
All logs can be a message plus an object, with no need to format anything.
That said, AI saves typing time.
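One reading of "message, object" is structured logging: the call site passes a fixed message and a dict of fields, and the handler does the serialization. A minimal stdlib sketch (the `JsonFormatter` class here is hypothetical, not part of `logging`):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a JSON line: fixed message plus attached fields."""
    def format(self, record):
        payload = {"msg": record.getMessage(), **getattr(record, "fields", {})}
        return json.dumps(payload)

log = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log.addHandler(handler)
log.setLevel(logging.INFO)

# No string formatting at the call site: just a message and an object.
log.info("order placed", extra={"fields": {"order_id": 3027, "cords": 0.22}})
```

Because the message stays constant, downstream tools can group identical events without parsing interpolated strings.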
And once you have enough experience, you realize that maintaining your focus and managing your cognitive workload are the key levers that affect productivity.
But, it looks like you are still caught up with iterating over arrays, so this realization might still be a few years away for you.
By that standard, python is not real programming because you're not managing your own memory. Is python considered AI now?
In the old days, we used to do this by using static type inference. This is harder to do in dynamic languages (such as Python), so now we try to do it with LLMs.
It is not obvious to me that LLMs are a better solution; you may be able to do more, but you lose the predictability of the classic approach.
Does anyone else here dislike loguru on appearance? I don't have a well-articulated argument for why I don't like it, but it subconsciously feels like a tool that is not sharp enough.
Was looking for evidence, either way, honestly. The author is using loguru here and I've run into it for a number of production deployments.
Anyone have experiences to share?
For example, the backtraces try to look cooler for display but are awful, and totally inappropriate to pipe into monitoring systems like Sentry.
In the same way, as you can see in the article, it is the only logging library that doesn't accept the standard %-string syntax, using instead its own shitty syntax based on format().
# yes
logger.info("Super log %s", var)
# no
logger.info(f"Super log {var}")
I know, it's not as nice looking. But the advantage is that the logging system knows that, regardless of what values var takes, it's the same log. This is used by Sentry to aggregate the same logs together. Also, if the variable being logged happens to contain a %s, the f-string version will throw an exception. Performance doesn't matter much because f-strings are so fast, but the % method is also lazy and doesn't interpolate/format if the message isn't going to be logged. Maybe in the future we'll get to use template strings for this.

I ask the model to create tons of logs for a specific function and make sure there's an emoji at the beginning to make each one unique (I know HN hates emojis).
Best thing of all, I can just say "delete all logs with emojis" and my patch is ready. Magical usage of LLMs.
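The laziness claim about %-style logging a few comments up is easy to verify: a suppressed log never touches the value, while an f-string interpolates regardless. A small sketch (the `Expensive` class is a made-up probe):

```python
import logging

class Expensive:
    """Counts how many times it is actually rendered to a string."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        self.calls += 1
        return "expensive repr"

logging.basicConfig(level=logging.WARNING)  # INFO messages are suppressed
log = logging.getLogger("lazy-demo")
obj = Expensive()

log.info("state: %s", obj)   # dropped before formatting: __str__ never runs
assert obj.calls == 0

log.info(f"state: {obj}")    # the f-string interpolates eagerly, logged or not
assert obj.calls == 1
```

For cheap values this is noise, but for objects with costly reprs it is a real difference on hot paths.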
I do wonder if this is part of the divide in how useful one finds LLMs currently. If you're already using languages which can tell you what expressions are syntactically valid while you type rather than blowing up at runtime the idea a computer can take some of the cognitive overhead is less novel.
Then I learned why programs don't do that by default.
Not to be snarky, but when you become more experienced you will figure out that logging is just writing to permanent storage, one of the most basic building blocks of programming. You don't need a dependency for that; writing to disk should be as natural as breathing air. You can do print("var", var). That's it.
If you are really anal, you can force writing to a file with a print argument, or with a plain POSIX open and write call. No magic, no remembering, no blog post. Just done, and next.
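A sketch of what the parent means, using only builtins (the log file name is arbitrary):

```python
import os

var = 42

print("var", var)  # stdout is often enough

# Forcing it to a file via print's file= argument:
with open("app.log", "a") as f:
    print("var", var, file=f, flush=True)

# Or with bare POSIX-style open/write calls:
fd = os.open("app.log", os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
os.write(fd, f"var {var}\n".encode())
os.close(fd)
```

What a logging library adds on top of this is mostly levels, timestamps, and routing; whether that's worth a dependency is exactly the disagreement in this thread.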
You mean like this?
[2025-07-17 18:05] Pallet #3027 stacked: Coast‐live oak, 16" splits, 0.22 cord.
[2025-07-17 18:18] Moisture check → 14 % (prime burning condition).
[2025-07-17 18:34] Special request tagged: “Larina—aromatic madrone, please!”
[2025-07-17 18:59] Squirrel incident logged: one (1) cheeky Sciurus griseus absconded with wedge.
...or something more like this?

Base-10 (log10) handy values
--------------------------------
x log10(x)
------------------
2 0.3010
e 0.4343
10 1.0000
42 1.6232
1000 3.0000
1e6 6.0000
Do you know if there's a similar solution for vscode?
On the other hand, writing logs is a skill worth mastering. Write too many and, when a service crashes, you have to sift through lots of noise and potentially miss the signal. I once went down a rabbit hole trying to root-cause an issue in a Gunicorn application that had custom health-check logic in it. An admin worker would read health-check state from a file that worker threads wrote to. Sometimes a race condition would occur, in which case an error log would be emitted. The thing was, this error wasn't fatal and was a red herring for why the service actually crashed. If instead it had been logged at the debug level, a lot of time would have been saved.
Fine, let LLMs write code, but take logging seriously!!!
I agree with K&R about debuggers: when writing services that you need to debug in prod, you live and die by your logs. That said, sometimes an interactive debugger is faster than adding logs, like when you’re not sure about the precise call stack that leads to a given point, or there’s a bunch of stuff you need to keep track of and adding logs for all of it is tedious. But pretty quickly you can hit a point where you’re wasting more time in the debugger than it would’ve taken to add those logs…
Rust lets you write a default debug print for each struct, and will generate one if asked. So you don't have to write out all the fields yourself. That's enough to do the annoying part of the job.
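In the thread's main language, Python, dataclasses offer the analogous convenience to Rust's #[derive(Debug)]: a generated __repr__ dumps every field so you don't write them out by hand. A sketch with a made-up struct:

```python
from dataclasses import dataclass

@dataclass
class Request:
    # The auto-generated __repr__ lists every field, like #[derive(Debug)].
    method: str
    path: str
    status: int

req = Request("GET", "/health", 200)
print(f"handled: {req!r}")
# → handled: Request(method='GET', path='/health', status=200)
```

Either way, the point stands: if the language generates the field dump, logging whole objects stops being the annoying part of the job.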