This is important because their premium $50 plan (versus $20 for Claude Pro or ChatGPT Plus) should be justified by the speed. GLM 4.6 is fine, but I don't think it's quite at the GPT-5/Claude Sonnet 4.5 level, so if I'm paying $50 for it on Cerebras, it should be mainly for the speed.
What kind of workflow justifies this? I'm genuinely curious.
It's more expensive to get the same raw compute as from a cluster of Nvidia chips, but those clusters don't have the same peak throughput.
As far as price goes, as a coder I'm giving the $50 plan a shot for a month. I haven't figured out how to adapt my workflow to the faster speeds yet (I'm also still learning and setting up opencode).
Every local model I've used, and even most open-source ones, are just not good: quantization ruins models, and some models aren't that smart to begin with.
Think about waiting for compilation to complete: the difference between 5 minutes and 15 seconds is dramatic.
The same applies to AI-based code-wrangling tasks. The preserved concentration may well be worth the $50, especially when it's paid by your employer.
Any workflow where verification is faster/cheaper than generation. If you have a well-tested piece of code and want to "refactor it to use such-and-such paradigm", you can run n queries against a faster model and pick the best one.
My colleagues who do frontend use faster models (not this one specifically, but they did try fast-code-1) to build components. Someone worked out a workflow with worktrees where the model generates n variants of a component and displays them next to each other. A human can choose which one they like at a glance, and sometimes pick and choose from multiple variants (something like passing it to Claude and saying "keep the styling of component A but the data management of component B"). At the end of the day, it's faster/cheaper than having Claude Code do all that work.
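For the curious, here's a minimal sketch of that fan-out-and-verify idea, assuming an OpenAI-compatible chat endpoint; the URL, key, model name, and pytest-based check are all placeholders I've made up, not anything Cerebras-specific:

```python
import concurrent.futures
import subprocess
from pathlib import Path

import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-..."                                        # placeholder key
MODEL = "glm-4.6"

def generate_variant(prompt: str) -> str:
    """Ask the fast model for one candidate implementation."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": MODEL, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def passes_tests(code: str) -> bool:
    """Cheap verification: write the candidate out and run the existing test suite."""
    Path("candidate.py").write_text(code)
    return subprocess.run(["pytest", "-q"], capture_output=True).returncode == 0

prompt = "Refactor this module to use such-and-such paradigm: ..."

# Generation is fast and cheap, so fan out several attempts in parallel...
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    variants = list(pool.map(generate_variant, [prompt] * 4))

# ...and let the cheaper verification step pick the winner.
winner = next(v for v in variants if passes_tests(v))
```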
The takeaway is that this is a Sonnet-ish model at 10x the speed.
It's like saying Llama 3.2 3B and Gemma 4B are fine-tunes of each other because they run at similar speeds on Nvidia hardware.
Good luck. Maybe it’ll do well in some self-directed agent loop.
For reference, each new request needs to send all previous messages - tool calls force new requests too. So it's essentially cumulative when you're chatting with an agent - my opencode agent's context window is only 50% used at 72k tokens, but Cerebras's online tracking shows that I've already used 1M input tokens and 10k output tokens.
This is how every "chatbot" / "agentic flow" / etc works behind the scenes. That's why I liked that "you should build an agent" post a few days ago. It gets people to really understand what's behind the curtain. It's requests all the way down, sometimes with more context added, sometimes with less (subagents & co).
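A bare-bones sketch of that loop, assuming an OpenAI-compatible endpoint (URL, key, and model name are placeholders). Note how the whole history rides along on every request, which is why billed input tokens balloon: twenty turns averaging 50k tokens of history each is already roughly a million input tokens.

```python
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer sk-..."}              # placeholder key

messages = [{"role": "system", "content": "You are a coding agent."}]
total_input = 0

while True:
    messages.append({"role": "user", "content": input("> ")})
    # Every request re-sends the ENTIRE history; tool results would be
    # appended as messages too, each one triggering another full request.
    resp = requests.post(
        API_URL, headers=HEADERS,
        json={"model": "glm-4.6", "messages": messages},
    ).json()
    total_input += resp["usage"]["prompt_tokens"]
    reply = resp["choices"][0]["message"]
    messages.append(reply)  # the reply joins the history for the next turn
    print(reply["content"], f"\n(cumulative input tokens: {total_input})")
```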
Stalin used to say that in war, "quantity has a quality all its own". And I think that for coding agents, speed is a quality all its own too.
Maybe not for blind vibe coding, but if you are a developer and are able to understand and change the code the agent generates, the fast feedback of fast inference is a game changer. I don't care if Claude is better than GLM 4.6; fast iterations are king for me now.
It's like moving from DSL to gigabit fiber FTTH.
I've been using GLM 4.6 on Cerebras for the last week or so, since they began the transition, and I've been blown away.
I'm not a vibe coder; when I use AI coding tools, they're in the hot path. They save me time when I'm whipping up a bash script and can't remember the exact syntax, or when finding easily falsifiable answers that would otherwise take me a few minutes of reading. But even though GLM 4.6 is not as smart as Sonnet 4.5, it is smart enough. And because it is so fast on Cerebras, I genuinely feel that it augments my own ability and productivity; the raw speed has considerably shifted the tipping point of time-savings for me.
YMMV, of course. I'm very precise with the instructions I provide. And I'm constantly interleaving my own design choices into the process - I usually have a very clear idea in my mind of what the end result should look like - so, in the end, the code ends up how I would have written it without AI. But building happens much faster.
No affiliation with Cerebras, just a happy customer. I just upgraded to the $200/mo plan - and I'll admit that I was one of those who scoffed when folks jumped on the original $200/mo Claude plan. I think this particular way of working with LLMs just fits well with how I think and work.
This is clearly the future of software development, and the models are good enough at the moment that the future is possible now. I'm still getting used to it and having to rethink my entire dev workflow for maximum productivity, and while I wouldn't unleash AI agents on a decade-old code base, all my new web apps will likely end up AI-first unless there's a very good reason it wouldn't provide a net benefit.
Are you doing embedded development or anything else not as mainstream as web dev? LLMs are still useful, but no longer mind-blowing, and they often produce hallucinations. You need to read every line of their output. 1000 t/s is crazy, but no longer always in a good way.
Are you doing stuff the LLMs haven't seen yet? You're on your own. There's quite a bit of irony in the fact that the llama.cpp devs barely use AI - just have a look at the development of support for Qwen3-Next-80B [1].
Cerebras makes a giant chip that runs inference at unreal speeds. I suspect they run their cloud service more as an advertising mechanism for their core business: hardware. You can hear the founder describing their journey:
https://podcasts.apple.com/us/podcast/launching-the-fastest-...
They've claimed repeatedly in their Discord that they don't quantize models.
The speed does change how you interact with it, I think. I had this new GLM model hooked up to opencode as the harness on their $50/mo subscription plan. It was seriously fast at answering questions, although there are still big pauses in the workflow when the per-minute request cap is hit.
I got a meaningful refactor done, maybe a touch faster than I would have in Claude Code + Sonnet? But my human interaction with it felt like the slow part.