> Runs in an isolated sandbox: Every task runs in a secure, isolated Daytona sandbox.
Oh, so fake open source? Daytona is an AGPL-licensed codebase that doesn't actually open-source the control plane, and the first instruction in the README is to sign up for their service.
> From the "open-swe" README:
Open SWE can be used in multiple ways:
* From the UI. You can create, manage and execute Open SWE tasks from the web application. See the 'From the UI' page in the docs for more information.
* From GitHub. You can start Open SWE tasks directly from GitHub issues simply by adding a label open-swe, or open-swe-auto (adding -auto will cause Open SWE to automatically accept the plan, requiring no intervention from you). For enhanced performance on complex tasks, use open-swe-max or open-swe-max-auto labels which utilize Claude Opus 4.1 for both planning and programming. See the 'From GitHub' page in the docs for more information.
* * *
The "from the UI" links to their hosted web interface. If I cannot run it myself it's fake open-source
How can it be AGPL and not provide full source? AGPL is like the most aggressive of the GPL license variants. If they somehow circumvented the intent behind this license that is a problem.
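For what it's worth, the GitHub trigger quoted above is just a plain issue label, so you can drive it without touching their hosted UI at all. A minimal sketch using the GitHub REST API; OWNER/REPO, the issue number, and the token variable are placeholders, and the label names come from the README:

```python
# Kick off an Open SWE run by adding the "open-swe-auto" label to an issue
# via the GitHub REST API. OWNER/REPO, the issue number, and GITHUB_TOKEN
# are placeholders; the label names are taken from the README quoted above.
import os
import requests

resp = requests.post(
    "https://api.github.com/repos/OWNER/REPO/issues/123/labels",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"labels": ["open-swe-auto"]},
)
resp.raise_for_status()
print([label["name"] for label in resp.json()])
```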
I hit an error that was not recoverable. I'd love to see functionality to bring all that context over to a new thread, or otherwise force it to attempt to recover.
dabockster•2h ago
> Run asynchronously in the cloud
> cloud
Reality check:
https://huggingface.co/Menlo/Jan-nano-128k-gguf
That model will run, with decent conversation quality, at roughly the memory footprint of a few Chrome tabs. It's only a matter of time until we get coding models that can do that, and then only a further matter of time until we see agentic capabilities at that memory footprint. I mean, I can already get agentic coding with one of the new Qwen3 models - super slowly, but it works. And the quality matches or even beats some of the cloud models and vibe coding apps.
And that model is just one example. Researchers all over the world are making new models almost daily that can run on an off-the-shelf gaming computer. If you have a modern Nvidia graphics card, you can run AI on your own computer totally offline. That's the reality.
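If anyone wants to sanity-check the memory claim, here's a rough sketch of pulling the linked Jan-nano GGUF and chatting with it locally via llama-cpp-python. The quant filename and context size are assumptions; check the Hugging Face repo for the exact files it ships:

```python
# Rough sketch: download a quantized Jan-nano 128k GGUF from the repo linked
# above and run it locally with llama-cpp-python. The filename glob and n_ctx
# are assumptions -- pick the quant and context size that fit your RAM/VRAM.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Menlo/Jan-nano-128k-gguf",
    filename="*Q4_K_M.gguf",  # assumed quant; the repo lists several
    n_ctx=32768,              # well under the full 128k to keep memory modest
    n_gpu_layers=-1,          # offload all layers if a GPU is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF quant is in two sentences."}]
)
print(out["choices"][0]["message"]["content"])
```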
toshinoriyagi•39m ago
It outperforms those other models, which aren't using tools, thanks to its tool use and narrow specialization.
Because it's only 4B parameters, it's naturally terrible at other things, I believe; it isn't designed for them and doesn't have enough parameters.
In hindsight, "MCP-based methodology" likely refers to its tool use.
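To make the tool-use point concrete, here's a minimal sketch of the loop: a small model served behind an OpenAI-compatible endpoint asks for a search, the client runs it, and the model answers from the retrieved text. The base URL, model name, and wiki_search stub are placeholders, not Jan-nano's actual MCP setup:

```python
# Minimal tool-calling loop: the small model requests a search, we run it,
# feed the result back, and let the model answer from the retrieved text.
# Base URL, model name, and the wiki_search stub are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

def wiki_search(query: str) -> str:
    # Stand-in for a real retrieval/MCP tool.
    return f"(search results for {query!r} would go here)"

tools = [{
    "type": "function",
    "function": {
        "name": "wiki_search",
        "description": "Search a local Wikipedia index and return snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "When was the first GGUF release?"}]
first = client.chat.completions.create(model="jan-nano-128k", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]

messages.append(first.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": wiki_search(**json.loads(call.function.arguments)),
})

final = client.chat.completions.create(model="jan-nano-128k", messages=messages, tools=tools)
print(final.choices[0].message.content)
```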
merelysounds•49m ago
Also, cloud storage use cases (like archiving or collaboration) have different usage patterns than AI does, at least so far.
prophesi•49m ago
Of course, the line will always be pushed back as frontier models incrementally improve, but the quality gap between the open models consumers can feasibly run and even the cheaper frontier models is night and day.
That said, I too have no interest in this if local models aren't supported, and I hope that's in the pipeline just so I can try tinkering with it. It does look like it uses multiple models for different tasks (planner, programmer, reviewer, router, and summarizer), which only compounds the VRAM bottleneck if you want to load a different model per task. So I think it makes sense for them to focus on just Claude for now to prove the concept.
edit: I personally use Qwen3 Coder 30B 4bit for both autocomplete and talking to an agent, and switch to a frontier model for the agent when Qwen3 starts running in circles.
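In case it helps anyone setting up the same thing: Ollama, llama.cpp, and LM Studio all expose an OpenAI-compatible endpoint, so the local/frontier switch can live entirely in the client. Model names, URLs, and the fallback check here are placeholders for whatever you actually run:

```python
# Local-first, frontier-fallback routing. The local client points at an
# OpenAI-compatible server (Ollama shown); model names and URLs are examples.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
frontier = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, use_frontier: bool = False) -> str:
    client, model = (frontier, "gpt-4.1") if use_frontier else (local, "qwen3-coder:30b")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

answer = ask("Refactor this function to avoid the N+1 query.")
if "I'm not sure" in answer:  # stand-in for your own "running in circles" check
    answer = ask("Refactor this function to avoid the N+1 query.", use_frontier=True)
print(answer)
```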