This question was triggered by this post: https://news.ycombinator.com/item?id=44196417. I’ve noticed that this topic comes up a lot in discussions lately.
Most of the projects I’ve seen so far seem to be proof-of-concept experiments, which is understandable given the pace at which things are moving. As much as I’d like to adopt such tools, they often introduce dependencies on single-maintainer projects, which feels too risky for my larger codebases. Until things stabilize a bit, I’d rather set up a custom flow that gives me more control over the environment.
With that in mind, I’m curious how you approach this in your projects. Specifically:
- What approaches do you use to isolate AI agents? (e.g. container-based solutions like Docker/Kubernetes vs. cloud-based solutions)
- If you’re using the cloud: Are there good alternatives to GitHub Codespaces that offer less vendor lock-in but are still easy to manage?
- How do you balance a simple setup with the need for team collaboration?
- How do you handle large databases or datasets that AI agents might use or generate (e.g. storage, access, performance)?
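For context on the first point, this is roughly the level of container isolation I have in mind. A minimal sketch using plain Docker (flags and mount paths here are illustrative, not a recommendation): the agent gets a read-only root filesystem, no network, dropped capabilities, and resource limits, with only a workspace directory writable.

```shell
# Sketch: run an agent process in a locked-down container.
# "agent-image" and the /workspace layout are hypothetical placeholders.
docker run --rm \
  --network none \                        # no outbound/inbound network
  --read-only \                           # immutable root filesystem
  --cap-drop ALL \                        # drop all Linux capabilities
  --security-opt no-new-privileges \      # block privilege escalation
  --memory 2g --cpus 2 --pids-limit 256 \ # resource limits
  -v "$(pwd)/workspace:/workspace" \      # only writable mount
  agent-image
```

Whether this is enough depends on your threat model; if the agent needs to fetch packages or call an API, you end up punching holes in `--network none`, which is where it gets interesting.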
I’d really appreciate hearing your insights, best practices, or any pitfalls you’ve encountered!
Thanks!