2. People are doubtful that the agent will be able to complete the task properly.
what do you say?
I like having conversations with my agent: asking questions so I know how things work, asking it to ask me questions, and so on. Personally, one benefit of AI coding for me is better quality and better understanding. If it's not clear how to do something with a certain library, for instance, I check it out of GitHub, point IntelliJ IDEA at it, and ask Junie.
Agents are also very rarely truly hands-off: most of my usage is walking through a pre-determined set of steps and course-correcting along the way. In my experience, having the agent run out of sight leads to heavier editing afterwards, since small realignments couldn't be applied along the way.
You answered your own question.
I do not trust an agent enough to give it unsupervised access to my systems.
If I had a completely local agent that was fully sandboxed, I would be willing to put data in the sandbox, give it a task, and come back later to see what it did.
Without similar restrictions, I would not trust agents to run unsupervised.
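The workflow described above (copy only chosen data into an isolated area, run the agent, inspect the results later) can be sketched roughly like this. This is a minimal illustration, not a real sandbox: `agent_cmd` stands in for whatever local agent CLI you'd use, and true isolation would also need network and filesystem restrictions (e.g. a container or VM).

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_in_sandbox(data_dir: str, agent_cmd: list[str]) -> Path:
    """Copy chosen data into a scratch directory, run a (hypothetical)
    local agent command against it, and return the sandbox path so the
    results can be reviewed before anything is merged back."""
    sandbox = Path(tempfile.mkdtemp(prefix="agent-sandbox-"))
    work = sandbox / "work"
    # Only the data explicitly placed here is visible to the agent.
    shutil.copytree(data_dir, work)
    # Placeholder invocation; a real setup would also drop privileges
    # and cut network access.
    subprocess.run(agent_cmd + [str(work)], cwd=sandbox, check=True)
    return sandbox  # inspect contents manually afterwards
```

The point of returning the sandbox path rather than copying results back automatically is exactly the "come back later to see what it did" step: nothing leaves the sandbox without human review.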
lompad•1h ago
Otherwise you'd always have to context switch, consider which git state it's actually working from, etc., rather than just watching the code change directly in front of you in your IDE.
It's significantly lower cognitive load and has a higher success rate, in my experience.
But of course, it highly depends on the software being written and the general code infrastructure.