On the other hand, if you just tell it to do a thing, I could believe that it would just do the thing. It's pretty bad at high-level design judgment; human guidance on architecture choices results in much better output.
If you tell it to leverage dependencies, it will. If you (like me) prefer that it avoid dependencies, it will.
The tradeoffs are very different with AI code than human written code. There are still tradeoffs, but they are different now.
If you want a particular implementation approach, you need to specify not only the features you want, but the implementation strategy, at least at a high level. This could be as simple as adding "use pywikibot" or "use relevant packages from pypi" to the end of your prompt. Or you could seed your project with some manually written scaffolding, including a pyproject.toml.
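For instance, here's a minimal sketch of that kind of hand-written scaffolding — the project name and dependency list are hypothetical, not from the original comment; the point is that a reviewed manifest anchors the AI to an explicit dependency set:

```toml
# pyproject.toml — hypothetical scaffolding. Name and pins are examples;
# the idea is that the AI extends what you wrote instead of inventing its
# own dependency tree.
[project]
name = "wiki-tools"          # hypothetical project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "pywikibot>=9.0",        # the one external package you've approved
]
```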
While LLMs do tend to have NIH syndrome by default, I think this is a good default. I'd much rather have tight control over when and how to include external dependencies than let a prompt run for 40 minutes and come back to find 2 GB of newly installed node packages with a dependency tree 300 levels deep.
Then, you tell your AI to stick to that rule, and it will. There are tradeoffs to each choice, and people fall into different camps. Make your choice, write it down, and tell the AI to always follow that rule, and then you have it your way.
Tiberium•1h ago
https://www.pangram.com/history/dee030c0-0362-43d0-8fbd-bbab...