To be honest, it's sort of what I expected governments to be funding right now, but I suppose Chinese companies are a close second.
Is there a better agentic coding harness people are using for these models? Based on my experience, I can definitely believe the claims that these models are overfit to evals and not broadly capable.
They also struggle at translating very broad requirements to a set of steps that I find acceptable. Planning helps a lot.
Regarding the harness, I have no idea how much they differ but I seem to have more luck with https://pi.dev than OpenCode. I think the minimalism of Pi meshes better with the limited capabilities of open models.
I've been testing Qwen3.5-35B-A3B over the past couple of days and it's a very impressive model. It's the most capable agentic coding model I've tested at that size by far. I've had it writing Rust and Elixir via the Pi harness and found that it's very capable of handling well defined tasks with minimal steering from me. I tell it to write tests and it writes sane ones ensuring they pass without cheating. It handles the loop of responding to test and compiler errors while pushing towards its goal very well.
This terminology is still very much undefined though, so my version may not be the winning definition.
[0] https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/visuali...
It's also driving itself crazy with deadpool & deadpool-r2d2, which it chose during the planning phase.
That said, it does seem to be doing a very good job in general, the code it has created is mostly sane other than this fuss over the database layer, which I suspect I'll have to intervene on. It's certainly doing a better job than other models I'm able to self-host so far.
I think this is part of the model’s success. It’s cheap enough that we’re all willing to let it run for extremely long times. It takes advantage of that by being tenacious. In my experience it will just keep trying things relentlessly until eventually something works.
The downside is that it’s more likely to arrive at a solution that solves the problem I asked but does it in a terribly hacky way. It reminds me of some of the junior devs I’ve worked with who trial and error their way into tests passing.
I frequently have to reset it and start it over with extra guidance. It’s not going to be touching any of my serious projects for these reasons but it’s fun to play with on the side.
Qwen3.5-35B-A3B means that the model itself consists of 35 billion parameters (floating point numbers) - very roughly 35GB of data at 8-bit precision, twice that at 16-bit - which are all loaded into memory at once.
But... on any given pass through the model weights, only 3 billion of those parameters are "active", i.e. actually used in the matrix arithmetic.
This speeds up inference considerably because the computer has to do fewer operations for each token it processes. It still needs the full amount of memory, though, since the 3B active parameters are likely different on every iteration.
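As a rough illustration of that compute-vs-memory trade-off (toy sizes and a made-up random router, nothing to do with Qwen's actual architecture), a mixture-of-experts layer keeps every expert resident in memory but multiplies each token through only a top-k subset:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 32, 2  # toy sizes, not Qwen's real config
# All expert weight matrices sit in memory the whole time.
experts = rng.standard_normal((n_experts, d_model, d_model))
router = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route one token vector through only top_k of n_experts."""
    scores = x @ router                    # router scores every expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top-k experts
    w = np.exp(scores[chosen])
    w /= w.sum()                           # softmax over the chosen experts
    # Only top_k expert matrices do any arithmetic; the rest sit idle.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))

token = rng.standard_normal(d_model)
out = moe_layer(token)

# Parameters touched this pass vs. parameters held in memory:
active = top_k * d_model * d_model + d_model * n_experts
total = n_experts * d_model * d_model + d_model * n_experts
print(f"active params this pass: {active:,} of {total:,}")
```

Different tokens pick different experts, which is why the full set of weights has to stay loaded even though each forward pass only pays for the active slice.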
> Do you feel you could replace the frontier models with it for everyday coding? Would/will you?
Probably not yet, but it's really good at composing shell commands. For scripting or one-liner generation, the A3B is really good. The web development skills are markedly better than Qwen's prior models in this parameter range, too.
The main quirk I've found is that it has a tendency to decide halfway through following my detailed instructions that it would be "simpler" to just... not do what I asked, and I find it has stripped all the preliminary support infrastructure for the new feature out of the code.
Wild times!
I think Alibaba needs to just give these guys a blank check. Let them fill it in themselves. Absent that, I'm pretty sure they'll make their own startup.
I do think it'd be a big loss for the rest of the world though if they close whatever model their startup comes up with.
That's very likely to happen once the gap with OpenAI/Anthropic has been closed and they've managed to pop the bubble.
It'd be good if Congress could do something to remove the masks, put cameras on these agents, and for the local governments to stop fighting removal of all people who are here illegally so we can pretend we have borders again.
If they actually thoroughly evicted non-status migrant workers they'd have an outright revolt on their hands from farmers and other businesses that depend on them.
Instead those businesses can now take further advantage of the fear of harassment and/or deportation to drive down compensation and rights.
Contrast with countries like Canada that have a legal temporary foreign agriculture worker program that provides a regulated source of seasonal migrant farm worker labour under a non-citizen temporary status, but with some rights (still often abused). It's notable to me as a Canadian that I don't see this being advocated on any large scale by either party in the US.
Anyways, all this just to say that the jackboot clown theater is the point, not a side effect.
I don't think you're as right as you want to believe. Certainly not as right as I want you to believe.
I will say we are winning in accessibility. China doesn’t have much of a ramp game
I wonder if you max out your options in China. It seems the Party is suspicious of ambition and high profile winners. I'm sure you can live comfortably, but there's a ceiling.
Isn't it just straight-up illegal in China to refuse to let the government use your model? The USA isn't perfect, but at least it has active discourse.
I'm sure it's a very nice place to live if you're content to just stay quiet in society and never put a political sign in your yard or even just talk about the wrong thing with your friend in a WeChat.
Isn't it interesting that you never see someone say "I used this on my Mac and it was useful"?
Instead we get "you could put this on your Mac" or "I tried it, and it worked but it was too slow"
I feel like these people are performing an evil when they are making suggestions that cause a waste of money.
I am trying super hard to use cheap models, and outside SOTA models, they have been more trouble than they are worth.
Use case means everything. I doubt this model would fare well on a large codebase, but this thing is incredible.
Maybe Qwen3.5-35B-A3B is that model? This comment reports good results: https://news.ycombinator.com/item?id=47249343#47249782
I need to put that through its paces.
So far none of them have been useful enough at first glance with a local model for me to stick with them and dig in further.
It has been useful for education ("What does this Elixir code do? <Paste file>" ..... "<general explanation>", then "What does this line mean?")
as well as getting a few basic tests written when I'm unfamiliar with the syntax. ("In Elixir Phoenix, given <subject under test, paste entire module file> and <test helper module, paste entire file> and <existing tests, pasted in, used both for context and as examples> , what is one additional test you would write?")
This is useful in that I get a single test I can review, run, and paste in, and I'm not using any quota. Generally I have to fix it, but that's just a matter of reading the actual test and throwing the test failure output to the LLM to propose a fix. Some human judgement is required, but once I got going, adding a test took 10 minutes despite my being relatively unfamiliar with Elixir Phoenix.
It's a nice loop, I'm in the loop, and I'm learning Elixir and contributing a useful feature that has tests.
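That prompt loop is mostly string assembly, so it's easy to script. A minimal sketch (the wording and stub sources are illustrative, not a magic incantation, and the actual call to a local model is left out):

```python
def one_test_prompt(module_src: str, helper_src: str, existing_tests: str) -> str:
    """Assemble the 'write me one more test' prompt described above.

    The three arguments stand in for whole pasted files: the module under
    test, the test helper module, and the existing tests used as examples.
    """
    return (
        "In Elixir Phoenix, given this module under test:\n"
        f"{module_src}\n\n"
        "this test helper module:\n"
        f"{helper_src}\n\n"
        "and these existing tests (for context and as style examples):\n"
        f"{existing_tests}\n\n"
        "what is one additional test you would write? "
        "Reply with a single ExUnit test block only."
    )

# Stub sources stand in for real pasted files.
prompt = one_test_prompt(
    "defmodule MyApp.Thing do ... end",
    "defmodule MyApp.TestHelpers do ... end",
    'test "existing behaviour" do ... end',
)
print(len(prompt), "characters of prompt")
```

Asking for exactly one test at a time keeps each iteration small enough to review, run, and correct by hand, which is the whole point of the loop.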
the qwen is dead, long live the qwen.