
Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
230•theblazehen•2d ago•66 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
694•klaussilveira•15h ago•206 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
962•xnx•20h ago•553 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
5•AlexeyBrin•58m ago•0 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
130•matheusalmeida•2d ago•35 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
66•videotopia•4d ago•6 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
53•jesperordrup•5h ago•24 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
36•kaonwarb•3d ago•27 comments

ga68, the GNU Algol 68 Compiler – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/PEXRTN-ga68-intro/
10•matt_d•3d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
236•isitcontent•15h ago•26 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
233•dmpetrov•16h ago•124 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
32•speckx•3d ago•21 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
335•vecti•17h ago•147 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
502•todsacerdoti•23h ago•244 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
385•ostacke•21h ago•97 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
300•eljojo•18h ago•186 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
361•aktau•22h ago•185 comments

UK infants ill after drinking contaminated baby formula of Nestle and Danone

https://www.bbc.com/news/articles/c931rxnwn3lo
8•__natty__•3h ago•0 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
422•lstoll•21h ago•282 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
68•kmm•5d ago•10 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
96•quibono•4d ago•22 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
21•bikenaga•3d ago•11 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
19•1vuio0pswjnm7•1h ago•5 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
264•i5heu•18h ago•215 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
33•romes•4d ago•3 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
63•gfortaine•13h ago•28 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1076•cdrnsf•1d ago•460 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
39•gmays•10h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
298•surprisetalk•3d ago•44 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
154•vmatsiiako•20h ago•72 comments

GPT-OSS 120B Runs at 3000 tokens/sec on Cerebras

https://www.cerebras.ai/blog/openai-gpt-oss-120b-runs-fastest-on-cerebras
48•samspenc•3mo ago

Comments

freak42•3mo ago
I absolutely hate it when a website says "try this" and then, after you've gone through the trouble of writing something, comes up with a sign-up link first. Makes me leave instantly, never to come back.
schappim•3mo ago
I was doing a demo for my colleagues and had exactly the above happen.
Alifatisk•3mo ago
Same with groq.com: there's a "try this", and after you enter the prompt it asks you to sign in. Closed the page.
traceroute66•3mo ago
Headline at the top of the Cerebras page linked to by the OP: "Cerebras Raises $1.1B Series G at $8.1B Valuation".

If you're going after the AI money gravy train, then you need to wave the "we have $n registered users" carrot on your PPT slides for the investors, because registered user == monetization opportunity.

I'm not defending it. I hate being forced to register for shit when I just want to try it or use the free tier.

But it is what it is.

Saline9515•3mo ago
Well, if they give it out for free (i.e., they pay for it), asking you to register is reasonable. It's not a public service funded by taxpayers.
freak42•3mo ago
Yes, they can ask, but do it at the beginning of the process, not the end. This is a dark pattern and fucking annoying.
magackame•3mo ago
Anyone remember those online psychological tests where you spend an hour on one and in the end you need to pay up to get the result?)))
traceroute66•3mo ago
> do it at the beginning not the end

Exactly this.

If you present me with a form and a submit button then I expect the input to go through and a result to be presented.

If you don't want to present me with results before login, then put the form behind the wall too.

Simple.

traceroute66•3mo ago
> Well if they give it out for free (aka they pay for it), asking you to register is a reasonable ask

They have other options... rate limiting, serving (more heavily) quantized models to non-registered users, etc.

Saline9515•3mo ago
Those options are still not free. And giving a degraded version of your product to free users is a bad way to acquire clients.
cyanydeez•3mo ago
Right, being proud of your money-making isn't the mark of a consumer-focused product, unless the customer is other money-seeking orgs, which, like cancer, often ends up in a bubble.
anonym29•3mo ago
This is like declaring that a Ferrari dealership offering you a free test drive in a million dollar art exhibit on wheels is evil for asking for your phone number before handing you the keys.

If this was some beat-to-hell, high-mileage used economy car, sure, that would be a pain in the ass, and not worth it. But it's a mistake to place Cerebras into that mental bucket.

You don't even need to use real information to create an account. Just grab a temp-mail disposable address and sign up as fred flintstone or mickey mouse.

If you're a heavy LLM inference user (i.e. if you've ever paid for a $200/mo sub from any of the big AI labs), I can damn near guarantee you will not regret trying out Cerebras.

freak42•3mo ago
You didn't get my point at all.
rpdillon•3mo ago
Would your expectations be more aligned if it said "free trial"? That might create an expectation of a sign-up where "try this" might not.
moralestapia•3mo ago
Off topic but related.

A week ago I went to a launch party for a product that's supposed to "revolutionize design" (a web app w/ an OAI prompt).

No demo, only like two pictures of the actual product. Founder spent like half an hour giving a speech about the future, etc...

"All of you here will get access to it in a couple weeks."

A couple weeks go by ... I "get access". It's a .dmg (what?). I open it, and it's not even an app, it's an installer ... I install it, the app opens up, and it's a giant red button that takes you to a website to create an account ...

These guys are completely lost.

petesergeant•3mo ago
It’s an absolute beast. I run it via OpenRouter, where I have Groq and Cerebras as the providers. Cheap enough to be almost free, strong performance, and lightning fast.
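
(For reference: pinning fast upstream providers through OpenRouter looks roughly like the sketch below. It assumes OpenRouter's documented chat-completions endpoint and provider-routing options, plus the openai/gpt-oss-120b model slug; verify all of these names against the current docs.)

  import os
  import requests

  # Rough sketch: send a gpt-oss-120b request through OpenRouter and prefer
  # the Cerebras/Groq providers, without falling back to slower ones.
  resp = requests.post(
      "https://openrouter.ai/api/v1/chat/completions",
      headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
      json={
          "model": "openai/gpt-oss-120b",
          "messages": [{"role": "user", "content": "Say hello in one sentence."}],
          "provider": {"order": ["Cerebras", "Groq"], "allow_fallbacks": False},
      },
      timeout=60,
  )
  print(resp.json()["choices"][0]["message"]["content"])
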
jsheard•3mo ago
Cheap enough for now, but of all the companies selling inference at a loss, Cerebras and Groq are probably losing the most per token. Their hardware is ungodly expensive, and its reliance on huge amounts of SRAM bottlenecks how much cheaper it can get, since SRAM density is improving at a snail's pace at this point.
petesergeant•3mo ago
Not doubting you, but anything to back that up? Either way, happy enough to let them burn VC money until someone shows up who can run it without losing money.
rajman187•3mo ago
They filed an S-1 [1] last year when attempting to go public. It showed something like a $60M+ loss for the first 6 months of 2024. The IPO didn’t happen because the CEO’s past included some financial missteps and the banks didn’t want to deal with that. At the time, the majority of their revenue came from a single source in Abu Dhabi, as well.

[1] https://www.sec.gov/Archives/edgar/data/2021728/000162828024...

petesergeant•3mo ago
> the majority of their revenue came from a single source in Abu Dhabi, as well

I live in the UAE, whose continuing enthusiasm for AI investment stretches well beyond short-term profit, so having AD on board seems like a plus, not a minus. I'm sure there are specific exceptions, but generally Emirati money has seemed like smart money.

rpdillon•3mo ago
You're pointing out a bunch of high capex costs (hardware, SRAM), but then concluding that their opex is greater than their revenue on a per-unit basis. Are they really losing money on every token? It seems that using hardware acceleration would decrease inference costs, and they could make it up on unit economics over time.

But I'm just reasoning from first principles. I don't have any specific data about them.

aurareturn•3mo ago

> It seems that using hardware acceleration would decrease inference costs and they could make it up on unit economics over time.

Nvidia GPUs are accelerators too. The reason they can do this so fast is because they're storing entire models in SRAM.
rpdillon•2mo ago
There are degrees of acceleration. My understanding, limited as it is, is that Groq and Cerebras are using highly optimized acceleration to achieve their token generation rates, far beyond that of a regular GPU, and this leads to lower costs per token.

Is this incorrect?

aurareturn•2mo ago
Yes, on Groq they're ASICs. But Cerebras has more general cores that can do more complex things. Inference is mostly limited by bandwidth, though.
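
(To put rough numbers on "limited by bandwidth": in single-stream decode, every generated token has to stream the active weights past the compute units, so tokens/sec is roughly bounded by memory bandwidth divided by bytes of weights read per token. The figures below are back-of-the-envelope assumptions, not vendor specs.)

  # Roofline-style upper bound for bandwidth-limited, single-stream decode:
  #   tokens/sec <= memory_bandwidth / bytes_of_active_weights_per_token
  def decode_tps_upper_bound(bandwidth_gb_s, active_params_billion, bytes_per_param):
      bytes_per_token = active_params_billion * 1e9 * bytes_per_param
      return bandwidth_gb_s * 1e9 / bytes_per_token

  # Assumed figures for a sparse MoE like gpt-oss-120b: a few billion active
  # params per token at ~4-bit (0.5 bytes/param) precision.
  active_b, bytes_pp = 5.1, 0.5

  print(round(decode_tps_upper_bound(3_000, active_b, bytes_pp)))      # ~3 TB/s HBM-class GPU -> roughly 1,200 tok/s ceiling
  print(round(decode_tps_upper_bound(1_000_000, active_b, bytes_pp)))  # ~1 PB/s on-chip SRAM  -> ceiling far above 3,000 tok/s
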
7thpower•3mo ago
Switching costs are low, so if that happens we’ll just switch.
KronisLV•3mo ago
The Cerebras GLM-4.6 post might also be of (some? more?) interest to people here, since it's more useful for programming: https://news.ycombinator.com/item?id=45852751

I don't think this is a dupe or anything, and 3000 t/s is really cool; the other post just has more discussion of Cerebras and people's experiences using GLM-4.6 for software development.

sunpazed•3mo ago
This is really impressive. At these speeds, it’s possible to run agents with multiple tool-calling turns within seconds. Consider it a feature-rich, “non-deterministic API” for your platform or business.
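
(A minimal multi-tool agent loop, for illustration: the sketch below assumes an OpenAI-compatible chat-completions endpoint at api.cerebras.ai and a gpt-oss-120b model id, and the get_time tool is a placeholder; check the provider's docs for the real base URL and model names. At thousands of tokens per second, several such round trips of model time fit comfortably inside a second or two.)

  from openai import OpenAI

  # Placeholder endpoint/model; swap in whatever the provider documents.
  client = OpenAI(base_url="https://api.cerebras.ai/v1", api_key="...")

  tools = [{
      "type": "function",
      "function": {
          "name": "get_time",  # hypothetical tool for this sketch
          "description": "Return the current UTC time.",
          "parameters": {"type": "object", "properties": {}},
      },
  }]

  messages = [{"role": "user", "content": "What time is it in UTC?"}]
  for _ in range(5):  # cap the number of tool turns
      resp = client.chat.completions.create(
          model="gpt-oss-120b", messages=messages, tools=tools
      )
      msg = resp.choices[0].message
      if not msg.tool_calls:
          print(msg.content)
          break
      messages.append(msg)
      for call in msg.tool_calls:
          # Dispatch to the real tool here; a hard-coded result keeps the sketch self-contained.
          messages.append({
              "role": "tool",
              "tool_call_id": call.id,
              "content": "2026-02-08T12:00:00Z",
          })
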
drewbitt•3mo ago
It's a decent general model too - I've had it plugged into llm and Raycast since August at great speeds. I wish Cerebras would do MiniMax M2, which would be an upgrade and replacement if it were just faster. It would never be as fast as gpt-oss-120b, though.
iFire•3mo ago
Does anyone know how much one system costs?