And I'm not even sure these 'template-adjacent' regurgitations are what the crude LLM is best at, as the output needs to clear some rigorous, inflexible test to 'pass'. Hallucinating a non-existent function in an API will be a hard fail.
LLMs have a far easier time in domains where failures are 'soft'. This is why ELIZA passed as a therapist in the '60s, long before auto-programmers were a thing.
Also, in 'academic' research, LLM use has reached nearly 100%, not just for embellishing write-ups to the expected 20 pages, but at each stage of the 'game', including 'ideation'.
And if, as a CIO, you believe that your prohibition on using LLMs for coding because of 'divulging company secrets' holds, you are either strip-searching your employees on the way in and out, or wilfully blind.
I'm not saying nobody exists who avoids AI in anything created on a computer, just as some woodworkers still handcraft exclusive bespoke furniture in an age of presses, glue, and CNC, but adoption is skyrocketing, and not just because the C-suite pressures its serfs into using the shiny new toy.
Right, so if you are in certain areas, you'll be legally required not to send your work to whatever third party promises to handle it the cheapest.
Also, since this is about actually "interesting" work: if you are doing cutting-edge research on, let's say, military or medical applications**, you definitely should take things like this seriously.
Obviously you can run LLMs locally if you don't feel like paying up for programmers who like to code and who want in-depth knowledge of whatever they are doing.
I suppose the problem isn't really the technology itself but rather the quality of the employees. There would've been plenty of people cheating the system before, say by copy-pasting or by tricking their coworkers into doing the work for them.
However, if you are working on something actually interesting, chances are that you're not working with disingenuous grifters or uneducated, lazy backstabbers, so that's less of a concern as well. If you are working on interesting projects, hopefully these people have been filtered out somewhere along the line.
You'd think there'd be someone nice enough to create a library or a framework that's well documented and popular enough to get support and updates. Maybe you should consider offloading the boring part to such a project, maybe even paying someone to do it?
This thesis only makes sense if the work is somehow interesting and you also have no desire to extend, expand, or enrich the work. That's not a plausible position.
Or your interesting work didn't appear in the training set often enough. I am currently writing a compiler and runtime for a niche modeling language, and every model I've poked for help has been rather useless, except for some obvious things I already knew.
But I do use LLMs/AI: as a rubber duck that talks back; as a Google on steroids, albeit one whose work needs double-checking; and as a domain-discovery tool when quickly trying to get a grasp of a new area.
It's just another tool in the toolbox for me. But the toolbox is like a box of chocolates: you never know what you're going to get.
“We thought we were getting an accountant, but we got a poet.”
Frederik Kulager: I got ChatGPT to write this segment and tested whether my editor-in-chief would notice. https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...
The thing is that since it can't think, it's absolutely useless when it comes to things that haven't been done before, because if you are creating something new, the software won't have had any chance to train on what you are doing.
So if you are in a situation in which it is a good idea to create a new DSL for your problem **, then the autocruise control magic won't work because it's a new language.
Now, if you're just mashing out propaganda like some brainwashed Soviet apparatchik, maybe it helps. So maybe people who write predictable slop like this Guardian article (https://archive.is/6hrKo) would be really grateful that their computer has a cruise control for their political spam.
**) if that's what you meant
*) which you, statistically speaking, might not want to do, but this is about actually interesting work, where it's more likely to happen
LLMs are not sentient. They are designed to make stuff up based on probability.
It's also definitely real that a lot of other smart productive people are more productive when they use it.
These sorts of articles and the comments here seem to be saying "I'm proof it can't be done", when really there's enough proof that it can be done that you're just proving you'll be left behind.
... said every grifter ever since the beginning of time.
So LLMs are awesome if I want to say "create a dashboard in Next.js and whatever visualization library you think is appropriate that will hit these endpoints [dumping some API specs in there] and display the results to a non-technical user", along with some other context here and there, and get a working first pass to hack on.
When they are not awesome is when I am working on adding a map visualization to that dashboard a year or two later. Then I need to talk to the team that handles some of the API endpoints to discuss how to feed me the map data. Then I need to figure out how to handle large map-pin datasets. Oh, and the map shows regions of activity that were clustered with DBSCAN, so I need to know that an alpha shape provides a generalization of a convex hull that will let me faithfully visualize the cluster regions, with the alpha parameter chosen to correspond to DBSCAN's epsilon parameter. Etc, etc, etc.
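To make that concrete, here's a minimal sketch of the DBSCAN-to-alpha-shape step. It's a rough illustration assuming the third-party alphashape package (pip install alphashape); the data, the eps value, and the 1/eps choice of alpha are all made up for the example, not something any model suggested.

    # Outline each DBSCAN cluster with an alpha shape instead of a convex
    # hull, so non-convex regions aren't over-covered on the map.
    # Assumes: pip install numpy scikit-learn alphashape
    import numpy as np
    from sklearn.cluster import DBSCAN
    import alphashape

    eps = 0.03                       # DBSCAN neighborhood radius
    points = np.random.rand(500, 2)  # stand-in for real map coordinates

    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(points)

    regions = {}
    for label in set(labels) - {-1}:  # -1 is DBSCAN's noise label
        cluster = points[labels == label]
        # alphashape's alpha behaves like an inverse radius, so tying it
        # to 1/eps keeps the outline's tightness at the clustering scale.
        regions[label] = alphashape.alphashape(
            [tuple(p) for p in cluster], 1.0 / eps)
        # Each value is a shapely Polygon/MultiPolygon, ready to be
        # serialized as GeoJSON for the map layer.

The 1/eps choice is just the scale-consistency argument above; in practice you'd tune alpha per dataset.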
I very rarely write code for greenfield projects these days, sadly. I can see why startup founders are head over heels over this stuff, because that's what their founding engineers are doing, and LLMs let them get it cranking very, very fast. You just have to hope that they are prudent enough to review and tweak what's written so that you're not saddled with tech debt. And when the inevitable tech debt needs paying (or working around) later, you have to hope that said founders aren't forcing their engineers to keep using LLMs for decisions that could cut across many different teams and systems.
That boilerplate-heavy, skill-less frontend stuff, like configuring a map control with something like react-leaflet, seems to be precisely what AI is good at.
edit: Just spot-checked it, and it thinks it's a good idea to use convex hulls.
One implication is that when AI providers claim that "AI can make a person TWICE as productive!"
... business owners seem to be hearing that as "Those users should cost me HALF as much!"
Either that, or they replace the time with slacking off, not even getting whatever benefits doing the easiest tasks might have had (learning, the feeling of accomplishing something), like what some teachers see with essays and homework in schools.
The tech has the potential to let us do less busywork (which is great; even regular codegen for boilerplate, ORM mappings, etc. can save time), it's just that it might take conscious effort not to be lazy with this freed-up time.
ssivark•5h ago
And I don't mean cutting-edge research like FunSearch discovering new algorithm implementations, but more like what the typical coder can now do with off-the-shelf LLM+ offerings.
NitpickLawyer•2h ago
Previously discussed on HN: the OAuth library at Cloudflare - https://news.ycombinator.com/item?id=44159166
n4r9•57m ago
Upshot: though it's possible to attempt this with (heavily supervised) LLMs, it's not recommended.
NitpickLawyer•27m ago
I'd be curious to see how the same exercise would go with Neil guiding Claude. There's no debating that LLMs + domain knowledge >>> vibe coding; I'd like to see how that would go, and how much time/effort an expert would "save" by using the latest models.