frontpage.

Prejudice Against Leprosy

https://text.npr.org/g-s1-108321
1•hi41•1m ago•0 comments

Slint: Cross Platform UI Library

https://slint.dev/
1•Palmik•5m ago•0 comments

AI and Education: Generative AI and the Future of Critical Thinking

https://www.youtube.com/watch?v=k7PvscqGD24
1•nyc111•5m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•6m ago•0 comments

Moltbook isn't real but it can still hurt you

https://12gramsofcarbon.com/p/tech-things-moltbook-isnt-real-but
1•theahura•9m ago•0 comments

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•10m ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•11m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•13m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•14m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•14m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•14m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•15m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•18m ago•0 comments

Show HN: MimiClaw, OpenClaw (Clawdbot) on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•18m ago•0 comments

How I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
3•jerpint•18m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•20m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•23m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•23m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•25m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•27m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•27m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•27m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
3•vkelk•28m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•28m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•29m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•31m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
3•ykdojo•34m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•34m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•36m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
3•mariuz•36m ago•0 comments

GPT-OSS 120B Runs at 3000 tokens/sec on Cerebras

https://www.cerebras.ai/blog/openai-gpt-oss-120b-runs-fastest-on-cerebras
48•samspenc•3mo ago

Comments

freak42•3mo ago
I absolutely hate it when a website says "try this" and, after you've gone through the trouble of writing something, comes up with a sign-up link first. Makes me leave instantly, never to come back.
schappim•3mo ago
I was doing a demo for my colleagues and ran into exactly that.
Alifatisk•3mo ago
Same with groq.com: there's a "try this", and after you enter the prompt it asks you to sign in. Closed the page.
traceroute66•3mo ago
Headline at the top of the Cerebras page linked to by the OP "Cerebras Raises $1.1B Series G at $8.1B Valuation".

If you're going after the AI money gravy train then you need to wave the "we have $n registered users" carrot on your PPT slides for the investors because registered user == monetization opportunity.

I'm not defending it. I hate being forced to register for shit when I just want to try it or use the free tier.

But it is what it is.

Saline9515•3mo ago
Well, if they give it out for free (i.e., they pay for it), asking you to register is a reasonable ask. It's not a public service funded by taxpayers.
freak42•3mo ago
Yes, they can ask, but do it at the beginning of the process, not the end. This is a dark pattern and fucking annoying.
magackame•3mo ago
Anyone remember those online psychological tests where you spend an hour on one and in the end you need to pay up to get the result?)))
traceroute66•3mo ago
> do it at the beginning not the end

Exactly this.

If you present me with a form and a submit button then I expect the input to go through and a result to be presented.

If you don't want to present me with results before login, then put the form behind the wall too.

Simple.

traceroute66•3mo ago
> Well if they give it out for free (aka they pay for it), asking you to register is a reasonable ask

They have other options... rate limiting, serving (more heavily) quantized models to non-registered users, etc.

Saline9515•3mo ago
Those options are still not free. And giving a degraded version of your product to free users is a bad way to acquire clients.
cyanydeez•3mo ago
Right, being proud of your money-making is not something I associate with a consumer-focused product, unless the customer is other money-seeking orgs, which, like cancer, often ends up in a bubble.
anonym29•3mo ago
This is like declaring that a Ferrari dealership offering you a free test drive in a million dollar art exhibit on wheels is evil for asking for your phone number before handing you the keys.

If this was some beat-to-hell, high-mileage used economy car, sure, that would be a pain in the ass, and not worth it. But it's a mistake to place Cerebras into that mental bucket.

You don't even need to use real information to create an account. Just grab a temp-mail disposable address and sign up as fred flintstone or mickey mouse.

If you're a heavy LLM inference user (i.e. if you've ever paid for a $200/mo sub from any of the big AI labs), I can damn near guarantee you will not regret trying out Cerebras.

freak42•3mo ago
You didn't get my point at all.
rpdillon•3mo ago
Would your expectations be more aligned if it said "free trial"? That might create an expectation of a sign-up where "try this" might not.
moralestapia•3mo ago
Off topic but related.

A week ago I went to a launch party for a product that's supposed to "revolutionize design" (a web app w/ an OAI prompt).

No demo, only like two pictures of the actual product. Founder spent like half an hour giving a speech about the future, etc...

"All of you here will get access to it in a couple weeks."

Couple weeks go by ... I "get access". It's a .dmg (what?). I open it, and it's not even an app, it's an installer ... I install it, the app opens up, and it's a giant red button that takes you to a website to create an account ...

These guys are completely lost.

petesergeant•3mo ago
It's an absolute beast. I run it via OpenRouter, where I have Groq and Cerebras as the providers. Cheap enough to be almost free, strong performance, and lightning fast.
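
A minimal sketch of that kind of setup, assuming OpenRouter's provider-routing request field; the "openai/gpt-oss-120b" slug and the "Cerebras"/"Groq" provider names are assumptions rather than details from the comment:

  # Sketch: pin OpenRouter to specific upstream providers for gpt-oss-120b.
  # The /chat/completions endpoint is OpenAI-compatible; the extra "provider"
  # object tells OpenRouter which backends it may route the request to.
  import os
  import requests

  resp = requests.post(
      "https://openrouter.ai/api/v1/chat/completions",
      headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
      json={
          "model": "openai/gpt-oss-120b",
          "messages": [{"role": "user", "content": "Say hello in five words."}],
          # Try Cerebras first, then Groq; don't fall back to other providers.
          "provider": {"order": ["Cerebras", "Groq"], "allow_fallbacks": False},
      },
      timeout=60,
  )
  print(resp.json()["choices"][0]["message"]["content"])
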
jsheard•3mo ago
Cheap enough for now, but of all the companies selling inference at a loss, Cerebras and Groq are probably losing the most per token. Their hardware is ungodly expensive, and its reliance on huge amounts of SRAM bottlenecks how much cheaper it can get, since SRAM density is improving at a snail's pace at this point.
petesergeant•3mo ago
Not doubting you, but anything to back that up? Either way, I'm happy enough for them to burn VC money until someone shows up who can run it without losing money.
rajman187•3mo ago
They filed an S-1 [1] last year when attempting to go public. It showed something like a $60M+ loss for the first 6 months of 2024. The IPO didn't happen because the CEO's past included some financial missteps and the banks didn't want to deal with that. At the time, the majority of their revenue also came from a single source in Abu Dhabi.

[1] https://www.sec.gov/Archives/edgar/data/2021728/000162828024...

petesergeant•3mo ago
> the majority of their revenue came from a single source in Abu Dhabi, as well

I live in the UAE, whose continuing enthusiasm for AI investment stretches well beyond short-term profit, so having Abu Dhabi on board seems like a plus, not a minus. I'm sure there are specific exceptions, but generally Emirati money has seemed like smart money.

rpdillon•3mo ago
You're pointing out a bunch of high capex costs (hardware, SRAM), but then concluding that their opex is greater than their revenue on a per-unit basis. Are they really losing money on every token? It seems that using hardware acceleration would decrease inference costs, and they could make it up on unit economics over time.

But I'm just reasoning from first principles. I don't have any specific data about them.

aurareturn•3mo ago
> It seems that using hardware acceleration would decrease inference costs and they could make it up on unit economics over time.

Nvidia GPUs are accelerators too. The reason they can do this so fast is because they're storing entire models in SRAM.
rpdillon•3mo ago
There are degrees of acceleration. My understanding, limited as it is, is that Groq and Cerebras are using highly optimized acceleration to achieve their token generation rates, far beyond that of a regular GPU, and that this leads to lower costs per token.

Is this incorrect?

aurareturn•2mo ago
Yes, they're ASICs on Groq. But Cerebras has more general cores that can do more complex things. Inference is mostly limited by bandwidth, though.
7thpower•3mo ago
Switching costs are low, so if that happens we’ll just switch.
KronisLV•3mo ago
The Cerebras GLM-4.6 post might also be of (some?/more?) interest to the people here, since it's more useful for programming: https://news.ycombinator.com/item?id=45852751

I don't think this is a dupe or anything, and 3000 t/s is really cool; the other post just has more discussion of Cerebras and people's experiences using GLM 4.6 for software development.

sunpazed•3mo ago
This is really impressive. At these speeds, it's possible to run agents with multi-tool turns within seconds. Consider it a feature-rich, "non-deterministic API" for your platform or business.
drewbitt•3mo ago
It's a decent general model too - I've had it plugged into llm and Raycast since August at great speeds. I wish Cerebras would do MiniMax M2, which should be an upgrade and replacement if it were just faster. It would never be as fast as gpt-oss-120b, though.
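
A minimal sketch of how tools like llm or Raycast can be pointed at Cerebras, assuming it exposes an OpenAI-compatible endpoint; the base URL and model name below are assumptions, not details from the comment:

  # Sketch: talk to Cerebras through the standard OpenAI client by swapping
  # the base URL. Any tool that accepts a custom OpenAI-compatible endpoint
  # can be configured the same way.
  import os
  from openai import OpenAI

  client = OpenAI(
      base_url="https://api.cerebras.ai/v1",  # assumed Cerebras endpoint
      api_key=os.environ["CEREBRAS_API_KEY"],
  )
  resp = client.chat.completions.create(
      model="gpt-oss-120b",  # assumed model name on Cerebras
      messages=[{"role": "user", "content": "Summarize this diff in one line."}],
  )
  print(resp.choices[0].message.content)
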
iFire•3mo ago
Does anyone know how much one system costs?