So, this sounds to me like an expanded version of that, more or less.
I think I'd prefer an AI future with lots of little focused models running locally like this rather than the "über models in the cloud" approach. Or at least having such options is nice.
[1] https://vim.fandom.com/wiki/Any_word_completion
There's also omni-completion, a bit more advanced: https://vim.fandom.com/wiki/Omni_completion
What you're describing is called: programming. This can't be serious. What about the cognitive overhead of writing a for loop? You have to remember what's in the array you're iterating over, how that array interacts with maybe other parts of the code base, and oh man, what about those pesky indices! Does it start at 0 or 1? I can't take it! AI save me!
Is that the part of programming that you enjoy? Remembering logger vs logging?
For me, I enjoyed the technical challenges, the design, solving customer problems, all of that.
But in the end, focus on the parts you love.
The user, in fact, has set up a tool for the task - an "AI model" - unless you're saying one tool is better than others.
Of course LLMs can do a lot more than variable autocomplete. But all of the examples given are things that are removing cognitive overhead that probably won't exist after a little practice doing it yourself.
- Your hot path functions get optimized, probabilistically
- Your requests to a webserver are probabilistic, and most of the systems have retries built in.
- Heck, 1s and 0s operate in a range, with error bars built in. It isn't really 5V = 1 and 0V = 0.
Just because YOU don't deal with probabilistic events while programming in Rust or Python doesn't mean it is inherently bad. Embrace it.
> Just because YOU don't deal with probabilistic events while programming in ...
Runtime events such as what you enumerate are unrelated to "probabilistic codegen" the GP references, as "codegen" is short for "code generation" and in this context identifies an implementation activity.
Again, the post to which you originally replied was about code generation when authoring solution source code.
This has nothing to do with Linux, Linux process scheduling, RTOS[0], or any other runtime concern, be it operating system or otherwise.
0 - https://en.wikipedia.org/wiki/Real-time_operating_system
Apples and oranges. It's frankly nonsense to tell me to "embrace it" as a phantom / strawman rebuttal about a broader concept I never said was inherently bad or even avoidable. I was talking much more specifically about non-deterministic code generation during implementation / authoring phase.
Old heads cling to their tools and yell at kids walking on lawns, completely unaware that the world already changed right under their noses.
This change is good for some people, but it isn't good for us – and I suspect the problems we're raising the alarm about also affect you.
Still prefer my neovim, but it really made me realize how much cognitive load all the keyboard shortcuts and other features add, even if they feel like muscle memory at this point.
I mean, sure, yes, it can. But drastically less efficiently, and with the possibility of errors. Where the problem is easily soluble, why not pick the solution that's just... right?
I think you're clinging onto low-level thinking, whereas today you have tools at your disposal that allow you to easily focus on higher level details while eliminating the repetitive work required by, say, the shotgun surgery of adding individual log statements to a chain of function calls.
> Of course LLMs can do a lot more than variable autocomplete.
Yes, they can.
Managing log calls is just one of them. LLMs are a tool that you can use in many, many applications. And it's faster and more efficient than LSPs in accomplishing higher level tasks such as "add logs to this method/methods in this class/module". Why would anyone avoid using something that is just there?
You are commenting on a blog post about how a user set up his tools. It's just that it's not your tool that is being showcased.
> You should be able to type log and have it tab complete because your editor should be aware of the context you're in.
...or, hear me out, you don't have to. Think about it. If you have a tool where you type "add logs" and it's aware of best practices, context, and your own internal usage... I mean, why are you bothering with typing "log" at all?
I’m not saying not to use them, but you putting it like that is very dishonest and doesn’t represent the actual reality of it. It doesn’t serve anyone but the vendors to be a shill about LLMs.
There are many people out there that have absolutely no idea how horrible or great they have it.
It can infer the correct logging setup from the rest of the project and add the most logical values to it automatically
No, but I genuinely like writing informative logs. I have been in production support roles and boy does the lack of good logging (or barely any logs at all!) suck. I prefer print-style debugging and want my colleagues on the support side to have the same level of convenience.
Not to mention the advantages of being able to search through past logs for troubleshooting and analysis.
If you're proficient in a programming language then you don't need to remember these things, you just do it, much like spoken language.
I am using a much wider range of languages now that I have LLM assistance, because I am no longer incentivized to stick to a small number that are warm in my mental cache.
Granted, AI can definitely ease that ramp-up time at the cost of lengthening it.
Thank you for this analogy.
It's not the ramp up time. There's no problem with learning yet another one. There's a problem with remembering them all as you switch between the projects. Most of the time LLM will know exactly what to use, how, and what data I want to log. Which will take way less time than me rediscovering how a specific project I haven't seen in weeks does things.
> Is that the part of programming that you enjoy? Remembering logger vs logging?
If a person cannot remember what to use in order to define their desired solution logic (how do I make a log statement again?), then they are unqualified to implement same.
> But in the end, focus on the parts you love.
Speaking only for myself, I love working with people who understand what they are doing when they do it.
You make my point for me.
When I wrote:
... I love working with people who understand what they
are doing when they do it.
This is not a judgement about coworker ability, skill, or integrity. It is instead a desire to work with people who ensure they have a reasonable understanding of what they are about to introduce into a system. This includes coworkers who reach out to team members in order to achieve said understanding.

I can reasonably expect Python to be installed on every Linux system, the debugging experience is amazing (e.g. runtime evaluation of anything), there's a vast amount of libraries and bindings available, the amount of documentation is huge and it's probably the language LLMs know best.
If there were two languages I would suggest anyone start with, it would be C and Python. One gives a comprehensive overview of low-level stuff, the other gives you actual power to do stuff. From there on you can get fancy and upgrade to more advanced languages.
It's still important to know both, and especially when I began working on aspects like multithreading I found my basis in C helped me learn far more easily, but I'm definitely more supportive of the ship-it mindset.
It's better to have a bad side project online than have none - you learn far more creating things than never making things, and if you need LLMs and Python to do that, fine!
I think it depends on how you approach these tools; personally I still focus on learning general, repeatable concepts from LLMs, as I'm an idiot who needs different terms repeated 50 times in similar ways to understand them properly!
In many many cases Python will be perfectly fine until you hit absolutely massive loads, and even then you can optimise the hot path with something else while keeping the rest as is.
Why choose to use something unreliable instead of using or writing something reliable? Isn't reliability the whole point of automation in the first place?
And I think it's fairly widely accepted that you shouldn't check compiled binaries or generated code into your source control, you should check in the configuration that generates it and work with that. (Of course this presupposes that your generator is reliable)
You see a person whose conception of programming is different from yours; I see a person who's finding joy in the act of creating computer programs, and who will be able to bring even more of their ideas to life than they would have beforehand. That's something to celebrate, I think.
This is the core part of what's changing - the most important people around me used to be "People who know how".
We're slowly shifting to "Knowing what you want" is beating the Know-how.
People without any know-how are able to experiment because they know what they want and can keep saying "No, that's not what I want" to a system which will listen to them without complaining while supplying the know-how.
From my perspective, my decades of accumulating know-how is entirely pointless and wiped away in the last 2 years.
Adapt or fall behind, there's no way to ignore AI and hope it passes by without a ripple.
So part of this is just another abstraction. But another part, which I agree with, is that abstracting how you learn shit is not good. For me, I use AI in a way that helps me learn more and accomplish more. I deliberately don’t cede my thinking process away, and I deliberately try to add more polish and quality since it helps me do it in less time. I don’t feel like my know-how is useless — instead, I’m seeing how valuable it is to know shit when a junior teammate is opening PRs with critical mistakes because they don’t know any better (and aren’t trying to learn)
The thing about these layers of abstraction is that they add load and thus increase the demand for people and teams and organizations that command these lower levels. The idea that, on a systemic level, higher abstraction levels can diminish the importance, size, complexity or expertise needed overall or even keep it at current levels is entirely misguided.
As we add load on top, the base has to become stronger and becomes more important, not less.
It's not hopeless though, it feels like that in the past decade, some of the smartest minds working at the lower levels of abstractions have come up with great new technologies. New programming languages that push the envelope of performance and security while maintaining good developer experience, great advancements in microchip technologies, that kinda thing.
It's important to maintain access to universities and higher education, where people who have the interest and mindset can learn and become part of this base that powers the greater software market.
Sure, they will enable even more people proportionally to not think about those low level systems. But my argument is that the need for that low level expertise has always expanded and will keep expanding.
Automation entails tonnes of complexity that need to be managed. It doesn't just evaporate. More automatic systems will demand more people and teams to learn low level systems in great detail and at high levels of accuracy.
I find this very difficult to believe, but I have no idea what you do. I'm a generalist and this isn't even close to true for me with state of the art llms.
All the layers of abstraction are well intended and often useful. But they by no means eliminate the need to understand in detail the hard facts underlying computer engineering if you want to build performant and reliable software.
You can trade that off at different rates for different circumstances but the notion that you can do away entirely with the need to know these details has never been true.
More people being enabled to think less about these details necessitates more expertise to exist to support them, never less.
Agreed. A good abstraction usually doesn't obviate the need for understanding what's going on behind the scenes, it just means that I don't have to think about it all the time.
As a more extreme example, I don't usually think about the fact that the Java (or Kotlin, Scala, ...) compiler generates bytecode that runs in an interpreter that translates the bytecode to machine code on the fly. But sometimes it's useful to remember (e.g. when dealing with instrumentation).
Another example are things like databases, or concurrency constructs, etc. There it's usually good to know the properties they guarantee and one way to be able to reason through these is by having some understanding of how they're implemented under the hood.
Your decades of experience are probably a bit like mine: you sense the cause of a problem in an almost psychic way, based on knowledge of the existing codebase, the person who wrote the last update, the "smell" of the problem. I've walked into a major incident, looked at a single alert on a dashboard, and almost with a smile on my face identified the root cause immediately. It's decades of knowledge that allow to know what to ask for.
Same with vibe coding: I've been having tremendous fun with it but occasionally a totally weird failure will occur, something that "couldn't happen" if you wrote it yourself. Like, you realise that the AI didn't refactor your code when you added the last feature, so it added "some tiny thing" to three separate functions. Then over several refactors it only updated one of those "tiny things", so now that specific feature only breaks in very specific cases.
Or let's say you want to get it to write something that AI seems to have problems with. Example, assemble and manage a NAS array using mdadm. I've been messing with that recently and Google Gemini has lost the whole array twice, and utterly failed to figure out how to rename an array device name. It's a hoot. Just to see if it would ever figure it out I kept going. Pages and pages of over-and-back, repeating the same mistakes, missing the obvious. Maybe it's been trained on 10 years of Muppets online giving terrible advice on how to manage mdadm?
And this is the reason for why I think I am productive with LLMs, and why people who know nothing about the underlying concepts are not going to be as productive.
Both will never push back or say no and will just rush headlong into implementing - something. They will use 57 libraries when stdlib will do and make convoluted hierarchies when a simple functional program is enough.
But both can produce very good results if you have predetermined limits, acceptance criteria and a proper plan and spec.
Then you iterate in “sprints” and check the results after each one and challenge their output.
So yeah, it's absolutely possible. From personal experience, I was able to implement a basic scan and go application complete with payment integrations without going through a single piece of documentation.
As long as you're ready to jostle for a bit in frustration with an AI, you can make something monetizable once you've identified an underserved market.
Just last night I asked it to create a project plan as markdown files. It started writing to disk and I tabbed out to watch Squid Game.
When I came back it was 80% through implementation and was trying to fix some weird python async issue in a loop over and over again. I never told it to implement anything. But it will always - ALWAYS - rush into implementation unless you tell it not to with ALL CAPS.
Google really needs to add a Claude style explicit plan mode for it…
I interrupted it, gave the error to Claude Code, which fixed it in one go.
Your claim that LLMs do away entirely with accidental complexity and manage essential complexity for you is not supported by reality. Adding these tools to workflows adds a tonne of accidental complexity, and they still cannot shield you from all the essential complexity because they are often wrong.
There has been endless noise made over semantics, but the plain fact is that LLMs render output that is incongruent with reality very often. And now we are trying to remedy it with what amounts to expert systems.
There is no silver bullet. You have to painstakingly get rid of accidental complexity and tackle essential complexity in order to build complex and useful systems.
I don't understand what's so abhorrent about it that people invent layers and layers of accidental complexity trying to avoid facing simple facts. We need to understand computers and domains with high accuracy to build any useful software and that's how it's always been and how it's always gonna be.
There is no silver bullet.
But sure, vibe away.
in C this leads to remote code execution (%n and friends)
in Java (with log4j) this previously led to remote code execution (despite being memory safe)
why am I not surprised the slop generator suggests it
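To make the Python flavour of that concrete, here's a rough sketch (the strings and names are made up) of why the %-style convention matters:

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

user_input = "100% free"  # imagine this is user-controlled

# Risky: user data lands in the format-string position. The stray '%'
# makes the record fail to format (logging prints "--- Logging error ---"
# to stderr and the message is lost); in C the analogous printf/%n
# mistake can escalate to code execution.
log.info("coupon " + user_input + " applied to %s", "order-7")

# Safer: the format string stays constant; data is passed as arguments.
log.info("coupon %s applied to %s", user_input, "order-7")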
Even better, you should be interpreting them at time of reading the log, not when writing it. Makes them a lot smaller.
This reminds me of a fun interaction in browser devtools: you can log a complex struct to the console. The browser does not copy the entire nested structure when you do so immediately, meaning that if someone mutates that struct after it was logged, the value you see depends on when you expanded the nested struct.
Yes, f""-strings may be evaluated unnecessarily (perhaps, t-strings could solve it). But in practice they are too convenient. Unless profiler says otherwise, it may be ok to use them in many circumstances.
Wait a second... If I do ANY ACTUAL engineering and log out the time savings, it's completely negligible and just makes the code harder to read?
It is complete insanity to me that literally every piece of programming literature over the past sixty years has been drilling in the concept of code readability over unnecessary optimizations, and yet I still constantly read completely backwards takes like this.
The footgun is there. Hot-loop debug logging can cost a ton with f-strings, and an LLM can just as easily use the standard log formatting without this problem. Few people use a profiler prior to a production bug (or just end up with eye popping autoscaling bills).
Shrug. I guess everyone can learn for themselves if it’s a problem for them. But I’ve been there; it would be nice if the tools did better with best practices.
- that the code itself is an interlinked structure (LSPs and code navigation),
- that the syntax is simple and repetitive (snippets and generators),
- that you are using a very limited set of symbols (grep, find and replace, contextual docs, completion)
- and that files are a tool for organization (emacs and vim buffers, split layout in other editors)
Your editor should be a canvas for your thinking, not an assembly line workspace where you only type code out.
On the other hand, if I used an LLM to guess what I wanted to get logged (as the author did), I'd have to read and verify the LLMs output. That's cognitive overhead I normally wouldn't have.
Maybe I'm missing something, but I just don't find myself in situations where I only vaguely know what I want to debug and would benefit from an LLM guessing what that could be.
I thought “making tools to automate work” was one of the key uses of a computer but I might be wrong
When I was younger, I loved 'getting into the weeds'. 'Oh, the audio broke? That gives me a great chance to learn more about ALSA!'. Now that I'm older, I don't want to learn more about ALSA, I've seen enough. I'm now more in camp 'solving problems': I want the job done and the task successfully finished to a reasonable level of quality, and I don't care which library or data structure was used to get the job done. (Both camps obviously overlap; many issues require getting into the weeds.)
In this framework, the promise of AI is great for camp 'solving problems' (yes yes hallucinations etc.), but horrible for camp 'getting into the weeds'. From your framing you sound like you're from camp 'getting into the weeds', and that's fine too. But I can't say camp 'solving problems' doesn't like programming. Lot of carpenters out there who like to build things without caring what their hammer does.
Though, doing this is still the right way to learn how to debug things!
NB: I actually just realized I never understood a specific bit of image processing math after working on ffmpeg for years, asked a random AI, and got a perfectly clear explanation of it.
Maybe because I don’t think in terms of code. I just have this mental image that is abstract, but is consistent. Code is just a tool to materialize it, just like words are a tool to tell a story. By the time I’m typing anything, I’m already fully aware of my goals. Designing and writing are two different activities.
Reading LLMs code is jarring because it changes the pattern midway. Like an author smashing the modern world and middle earth together. It’s like writing an urban fantasy and someone keeps interrupting you with some hard science-fiction ideas.
To split the definitions one step further: that actually sounds not like you 'enjoy solving problems' (the process), but rather like you 'enjoy not having the problem anymore' (the result).
Meaning you don't like programming for itself (anymore?), but merely see it as a useful tool. Implying that your life would be no less rich if you had a magical button that completely bypassed the activity.
I don't think someone who would stop doing it if given the chance can be said to "like programming", and certainly not in the way GP means.
Someone may say, like in this post, "AI makes my work easier", and that may be a valid point.
Someone else may say "AI has made my work much harder" [1], and that's a valid point too.
It may very well be that we'll find a place in the middle, but in the meantime it seems disingenuous to me to accept one side without acknowledging that the other is making valid points too.
I personally love programming but I don't disrespect people who are in it because they need to be or even just feel like they need to be.
Disliking toil is not the same as disliking programming.
Most software developers never write software outside of education or employment. That's completely normal.
Even most recreational programming is to solve a problem. Very few developers write software for the pleasure of it and would happily use a magical "solve my problem" button instead.
This is also true for all employers and customers.
Honestly, it sounds to me that you fundamentally misunderstand the industry in which you work.
Pay and conditions in software are so much higher than any vocation that it attracts many people who would never program recreationally. If that's not you and you are overjoyed to be able to write software everyday, that's fine. But you should recognise that you're in a tiny minority of developers.
Money. Startup hype. Thinking they’ll be the next Zuckerberg (as if that’d be a good thing).
Do you like playing guitar or do you like putting pressure on the correct length of the string at the specific distance from the fret?
Do you like programming, or do you like manually typing out all the trivial details of each separate line? I've not found any pleasure in typing out the details of DTOs and their conversion functions repetitively by hand, for example.
It is just a bunch of people that don't take pride in self-sufficiency. It is a muscle that has atrophied for them.
You're right about the pride of writing actually good code. I think a lot about why I'm still writing software, and while I don't have an answer, it feels like the root cause is that LLMs deprive us of thoughts and decisions, our humanity actually.
I have never felt threatened by an LSP or a text editor. But LLMs remove every joy, and their output is bad or may not be what you wanted. If I hated programming, I would actually buy software, as I don't have needs so precise that they require tools tailored perfectly to them.
No need to enjoy a good meal, AI will chew food for you and inject it into your bloodstream. No need to look at nature, AI will take pictures and write a PDF report.
Tools help because they are useful. AI is in a weird position to replace every job, activity, and feeling. I don't know who enjoys that but it's very strange. Do they think living in a lounge chair like in the WALL-E spaceship is good?
As for the article, it's yet another developer not using their tools properly. The free JetBrains code completion is bad, and using f-strings in logs is bad. I would reject that in a merge request, sorry. But thinking too much about it makes me sad about the state of software development, and sad about the pride and motivation of some (if not most) developers nowadays.
We live in a society. That means giving up self-sufficiency in exchange for bigger leverage in our chosen specialisation. I am 110% confident that when electric power became widespread, people were making the exact same argument against using it as you are making now.
All logs can just be a message plus an object, with no need to format anything.
That said, AI saves typing time.
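If it helps, one minimal way to get that "message plus object" shape with just the stdlib (the JsonFormatter and field names here are my own sketch, not from the article):

import json
import logging

class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {"event": record.getMessage()}
        payload.update(getattr(record, "data", {}))   # the attached object
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("orders")
log.addHandler(handler)
log.setLevel(logging.INFO)

# No format string at all: a constant event name plus a dict of data.
log.info("order_created", extra={"data": {"order_id": 7, "total": 19.99}})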
And once you have enough experience, you realize that maintaining your focus and managing your cognitive workload are the key levers that affect productivity.
But, it looks like you are still caught up with iterating over arrays, so this realization might still be a few years away for you.
By that standard, python is not real programming because you're not managing your own memory. Is python considered AI now?
Formalism is so essential that we use it to create a spontaneous form of programming language (which we call pseudocode) to express some ideas clearly.
In the old days, we used to do this by using static type inference. This is harder to do in dynamic languages (such as Python), so now we try to do it with LLMs.
It is not obvious to me that LLMs are a better solution; you may be able to do more, but you lose the predictability of the classic approach.
What that statement is telling me is that the author at one point didn't consider logging to be a fully fledged formal non-functional requirement, more of an optional and/or informal side chore.
I used to work at a company that built software for mobile networks; their entire C codebase was full of logging statements, pretty much every step made in processing a signal was logged. Probably as a form of tracing, and automated instrumentation would possibly have been a better solution, but they were a bit old fashioned.
If you feel copying and pasting documentation is programming, you’re gonna have a really rough time.
Programming is solving a problem, not learning 20 different ways to say Log, logger, lumber, timber, and whatever other "creative" name the writer of a logging library chose to use.
I can't be arsed to remember how I add parameters, in which type, and whether this one took a context or not.
Just today I got bit (again) by the fact that the Go log library doesn’t have Debug as an option. And I’ve been writing Go for a decade. (log/slog does have levels though)
I am old enough to remember people complaining about having to write a semicolon at the end of each line. Or declare a variable...
Does anyone else here dislike loguru on appearance? I don't have a well-articulated argument for why I don't like it, but it subconsciously feels like a tool that is not sharp enough.
Was looking for evidence, either way, honestly. The author is using loguru here and I've run into it for a number of production deployments.
Anyone have experiences to share?
For example, the backtraces try to be cooler for display but are awful, and totally inappropriate to pipe into monitoring systems like Sentry.
In the same way, as you can see in the article, it is the only logging library that doesn't accept the standard %-string syntax, using instead its own shitty syntax based on format().
For GCP jobs you can just output the structured log to standard output, so I don't see the purpose here. There are other structured logging libraries if this is what you need.
For the backtrace, yes, because logging.exception will be broken by default, and so will all tools that process the traceback automatically, like Sentry. And again, for no valid reason other than "look, my traceback looks so much cooler on my screen because I added crap and newlines when printing it"...
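For anyone who hasn't used it, the syntax difference being complained about looks roughly like this (loguru takes str.format-style braces rather than %-placeholders):

import logging
from loguru import logger as loguru_logger

std = logging.getLogger(__name__)

# stdlib: %-style, formatted lazily; the constant template is also what
# tools like Sentry use to group identical logs.
std.info("cache miss for %s", "user:42")

# loguru: brace/str.format-style placeholders instead.
loguru_logger.info("cache miss for {}", "user:42")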
# yes
logger.info("Super log %s", var)
# no
logger.info(f"Super log {var}")
I know, it's not as nice looking. But the advantage is that the logging system knows that, regardless of what values var takes, it's the same log. This is used by Sentry to aggregate the same logs together. Also, if the variable being logged happens to contain a %s, then the f-string version will throw an exception. It doesn't matter much because f-strings are so fast, but the % method is also lazy and doesn't interpolate/format if it's not going to be logged. Maybe in the future we'll get to use template strings for this.

I ask the model to create tons of logs for a specific function and make sure there's an emoji in the beginning to make it unique (I know HN hates emojis).
Best thing of all, I can just say delete all logs with emojis and my patch is ready. Magical usage of LLMs
I do wonder if this is part of the divide in how useful one finds LLMs currently. If you're already using languages which can tell you what expressions are syntactically valid while you type rather than blowing up at runtime the idea a computer can take some of the cognitive overhead is less novel.
Typing "log" and tab to accept the auto complete is faster than writing the whole log yourself.
When you need to add a bunch of log statement, this tool makes the activity faster and less tedious.
Is it a technological breakthrough? No
Does it save hours of developer time each day? No
Is it nice to have? Hell yeah
Then I learned why programs don't do that by default.
Not to be snarky, but when you become more experienced you will figure out that logging is just writing to permanent storage, one of the most basic building blocks of programming. You don't need a dependency for that; writing to disk should be as natural as breathing air. You can do print("var", var). That's it.
If you are really anal, you can force writing to a file with a print argument or just a POSIX open and write call. No magic, no remembering, no blog post. Just done, and next.
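For completeness, both of those variants look roughly like this (the file name is arbitrary):

import os

var = {"user": "alice", "retries": 3}

print("var", var)                      # plain stdout

with open("debug.log", "a") as fh:     # print's file= argument
    print("var", var, file=fh)

# or the raw POSIX-style calls
fd = os.open("debug.log", os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
os.write(fd, f"var {var}\n".encode())
os.close(fd)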
You mean like this?
[2025-07-17 18:05] Pallet #3027 stacked: Coast‐live oak, 16" splits, 0.22 cord.
[2025-07-17 18:18] Moisture check → 14 % (prime burning condition).
[2025-07-17 18:34] Special request tagged: “Larina—aromatic madrone, please!”
[2025-07-17 18:59] Squirrel incident logged: one (1) cheeky Sciurus griseus absconded with wedge.
...or something more like this?

Base-10 (log10) handy values
----------------------------
   x     log10(x)
------------------
   2      0.3010
   e      0.4343
  10      1.0000
  42      1.6232
1000      3.0000
 1e6      6.0000

Do you know if there's a similar solution for vscode?
On the other hand, writing logs is a skill worth mastering. Write too many and, when a service crashes, you have to sift through lots of noise and potentially miss the signal. I once went down a rabbit hole trying to root cause an issue in a Gunicorn application that had custom health check logic in it. An admin worker would read health check state from a file that worker threads wrote to. Sometimes a race condition would occur, in which case an error log would be emitted. The thing was, this error wasn't fatal and was a red herring for why the service actually crashed. If it had instead been logged at the debug level, a lot of time would have been saved.
Fine let LLMs write code but take logging seriously!!!
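A minimal sketch of that point about levels, using a made-up version of the health-check reader: the known, non-fatal race goes to DEBUG so a crash investigation isn't drowned in red herrings.

import logging

logging.basicConfig(level=logging.WARNING)   # what prod actually ships with
log = logging.getLogger("healthcheck")

def read_health_state(path="/tmp/health_state"):   # hypothetical state file
    try:
        with open(path) as fh:
            return fh.read()
    except FileNotFoundError:
        # Known, non-fatal race with the worker that writes this file:
        # debug-level, so it doesn't masquerade as the cause of a crash.
        log.debug("health state not readable yet; worker hasn't written it")
        return None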
I agree with K&R about debuggers: when writing services that you need to debug in prod, you live and die by your logs. That said, sometimes an interactive debugger is faster than adding logs, like when you’re not sure about the precise call stack that leads to a given point, or there’s a bunch of stuff you need to keep track of and adding logs for all of it is tedious. But pretty quickly you can hit a point where you’re wasting more time in the debugger than it would’ve taken to add those logs…
Rust lets you write a default debug print for each struct, and will generate one if asked. So you don't have to write out all the fields yourself. That's enough to do the annoying part of the job.
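The rough Python analogue, since that's the language in this thread, is a dataclass's generated repr; this is just the same idea sketched out, not what Rust's derive gives you verbatim:

from dataclasses import dataclass

@dataclass
class Job:
    id: int
    state: str
    retries: int = 0

# The repr is generated from the fields; nothing to write by hand.
print(f"{Job(7, 'running')!r}")   # Job(id=7, state='running', retries=0)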
Remember, it's only AI until it works.
I'd rather simplify our software stacks than accept that an autocomplete that lies to me is the only tractable solution, though
The GoLand and PyCharm experience must be radically different from the Java experience then.
Normally I think it's a bit rude to criticize the code of blog posts, but I thought it was relevant here for these reasons:
"I often don’t even remove when I’m done debugging because they’re now valuable in prod" - think about where your production credentials end up. Most of the time, logging them won't hurt, just like keeping your password on a post-it doesn't hurt most of the time.
The argument about letting an AI reduce the mental overhead is compelling, but this shows one of the (often mentioned) risks: you didn't write it, so you didn't consider the implications.
Or maybe the author did consider it, and has a lot of good arguments for why logging it is perfectly safe. I often get pushback from other devs about stuff like this, for example:
- We're the only ones with access to the logs (still, no reason to store credentials in the logs)
- The Redis URL only has an IP, no credentials. (will we remember to update this log line when the settings.redis_url changes?)
- We only log warnings or higher in production (same argument as above)
Maybe I should stop worrying and learn to love AI? Human devs do the same thing, after all?
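One low-effort mitigation for the URL case, sketched with a made-up helper (not from the post): strip credentials before the value ever reaches a log line.

from urllib.parse import urlsplit, urlunsplit

def redact_url(url: str) -> str:
    parts = urlsplit(url)
    if parts.username or parts.password:
        host = parts.hostname or ""
        if parts.port:
            host = f"{host}:{parts.port}"
        parts = parts._replace(netloc=f"***@{host}")
    return urlunsplit(parts)

print(redact_url("redis://user:secret@10.0.0.5:6379/0"))
# -> redis://***@10.0.0.5:6379/0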
It kinda speaks badly of the auto-completion that it suggests such an anti-pattern.
While what you're doing consciously is something simple, you're simultaneously also mucking about in your codebase at the "spinal cord level", or spending quality time with your creation. It's times like those when bigger/other things often click together for the first time, all while you're doing something that's seemingly just grunt work.
I get that is most devs least favorite part, but logs and errors are supposed to be unshakable, ground-truth understandings, things that point you to the light in a dark room of broken spaghetti code and misunderstandings.
Think about this from a user's or a tester's perspective: can you imagine the compounding frustration you would experience? To be chasing a skein of understanding through a wall of text that you only mostly understand, only to find out that the hunch was based on a red herring by a dev who couldn't be bothered to help you in return.
Not to mention the amount of non-bugs you are generating for yourself in the future, we already have bug bounties being swarmed by LLM-gen faux-bugs, how is anyone supposed to reason about real bugs if the logs are only tangentially related to the truth?
This is the same complaint about all AI-generated code, the simple answer to which is, review the code yourself before committing. Or if it's a real project, it'll get reviewed by someone else anyway, same as any other code that could have a mistake in it.
I can throw up a basic sketch, focusing on the code, and get it to add the quality stuff after.