So we started this because we needed a way to provision infrastructure for AI agents at scale. Turns out nobody had really solved this properly.
We wanted agents that could just... spin up a VM when they needed one, do their thing, and shut down. Simple idea, but getting it to actually work reliably was a whole journey.
What we ended up building:
- Agents that can deploy and control their own virtual machines
- An orchestration layer that doesn't fall over when you scale to 100+ agents
- Support for pretty much any LLM (GPT-5, Claude, local models, whatever)
- Real monitoring, because debugging invisible agents is a nightmare
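To make the lifecycle concrete, here's a rough sketch of the loop we mean: an agent asks an orchestrator for a VM, does its work, and the VM is torn down afterwards, even at 100+ concurrent agents. All names here (`VM`, `Orchestrator`, `run_agent`) are hypothetical stand-ins, not the project's actual API — check the repo for the real interfaces.

```python
import concurrent.futures
import itertools

class VM:
    """Stand-in for a provisioned virtual machine (hypothetical)."""
    _ids = itertools.count(1)

    def __init__(self):
        self.id = next(VM._ids)
        self.running = True

    def run(self, task):
        # In the real system this would execute the agent's actions
        # (shell commands, GUI control, etc.) inside the VM.
        return f"vm-{self.id}: {task} done"

    def shutdown(self):
        self.running = False

class Orchestrator:
    """Minimal sketch: provision on demand, always tear down after use."""
    def __init__(self):
        self.active = []

    def run_agent(self, task):
        vm = VM()                # spin up a VM when the agent needs one
        self.active.append(vm)
        try:
            return vm.run(task)  # agent does its thing
        finally:
            vm.shutdown()        # and shuts down, success or failure
            self.active.remove(vm)

# Scale check: 100 agents in parallel, no VMs left running afterwards.
orch = Orchestrator()
with concurrent.futures.ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(orch.run_agent, [f"task-{i}" for i in range(100)]))
```

The `try`/`finally` is the important part: teardown has to happen even when an agent crashes mid-task, or you leak VMs at scale.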
The whole thing is open source. Apache license. No strings attached. We're using it for automated testing and web scraping stuff, but honestly people are probably going to use it for things we haven't even thought of.
If you've ever tried to run computer-use agents in production, you know the pain points. That's what we tried to fix.
GitHub: https://github.com/LLmHub-dev/open-computer-use