[0] https://chatgpt.com/share/68143a97-9424-800e-b43a-ea9690485b...
Plus, having used it in JetBrains IDEs, it makes me sad to see them ditching their refactoring tools for LLM refuctoring.
I have an on-again, off-again relationship with LLMs. I always walk away disappointed. Most recently it was a hobby project, around 1k lines so far, where it output bugs galore, made poor design decisions, etc.
It's OK for one-off scripts, but it rarely one-shots even those.
I can only assume people who find it useful are working on different things than I am.
Most people tell me I'm just not that good at prompting, which is probably true. But if I have to learn how to prompt, that's basically coding with more steps. At that point it's faster for me to write the code directly.
The one area where it actually has been successful is (unsurprisingly) translating code from one language to another. That's been a great help.
I get very similar code to what I would normally write but much faster and with comments.
Why are you so willing to teach a program how to do your job? Why are you so willing to give your information to an LLM that doesn't care about your privacy?
Teaching a program how to do your job has been part of the hacker mindset for many decades now; I don't think there's anything new to be said as to why. Anyone reading this on the internet long since decided they prefer technical automation over preserving traditional ways of completing work.
LLMs don't inherently imply anything about privacy handling; the service you select does (if you aren't just opting to self-host in the first place). On the hosted side there's everything from "free and sucks up all your data" to "business data-governance contracts specifying what data can be used and how".
https://terrytao.wordpress.com/2025/05/01/a-proof-of-concept...
esafak•17h ago
https://github.com/teorth/estimates/blob/main/src/estimates....