
Granite 4.1: IBM's 8B Model Matching 32B MoE

https://firethering.com/granite-4-1-ibm-open-source-model-family/
109•steveharing1•1h ago•46 comments

Mozilla's Opposition to Chrome's Prompt API

https://github.com/mozilla/standards-positions/issues/1213
194•jaffathecake•4h ago•86 comments

Where the goblins came from

https://openai.com/index/where-the-goblins-came-from/
743•ilreb•9h ago•437 comments

Noctua releases official 3D CAD models for its cooling fans

https://www.noctua.at/en/3d-cad-models
317•embedding-shape•2d ago•74 comments

Zed 1.0

https://zed.dev/blog/zed-1-0
1927•salkahfi•21h ago•609 comments

The Zig project's rationale for their anti-AI contribution policy

https://simonwillison.net/2026/Apr/30/zig-anti-ai/
379•lumpa•10h ago•190 comments

A Primer on Bézier Curves – So What Makes a Bézier Curve?

https://pomax.github.io/bezierinfo/
21•mostlyk•1d ago•3 comments

Copy Fail

https://copy.fail/
1085•unsnap_biceps•18h ago•387 comments

Craig Venter has died

https://www.jcvi.org/media-center/j-craig-venter-genomics-pioneer-and-founder-jcvi-and-diploid-ge...
255•rdl•10h ago•48 comments

GCC 16 has been released

https://gcc.gnu.org/gcc-16/changes.html
29•HeliumHydride•44m ago•0 comments

"Parse, don't validate" through the years with C++

https://derekrodriguez.dev/parse-dont-validate-through-the-years-with-c-/
39•dwrodri•2d ago•8 comments

How to Disable Firefox's New Emoji Picker

https://emsh.cat/en/how-to-disable-firefoxs-emoji-picker/
8•embedding-shape•1h ago•16 comments

Cursor Camp

https://neal.fun/cursor-camp/
1015•bpierre•20h ago•164 comments

Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs

https://github.com/cauchy221/Alignment-Whack-a-Mole-Code
155•reconnecting•9h ago•120 comments

Biology is a Burrito: A text- and visual-based journey through a living cell

https://burrito.bio/essays/biology-is-a-burrito
131•the-mitr•9h ago•18 comments

DataCenter.FM – background noise app featuring the sound of the AI bubble

https://datacenter.fm/
41•louisbarclay•4h ago•8 comments

London to Calcutta by Bus (2022)

https://www.amusingplanet.com/2022/08/london-to-calcutta-by-bus.html
80•CGMthrowaway•1d ago•26 comments

FastCGI: 30 years old and still the better protocol for reverse proxies

https://www.agwa.name/blog/post/fastcgi_is_the_better_protocol_for_reverse_proxies
360•agwa•20h ago•87 comments

The Duolingo taxi test–could being rude to the driver cost you your dream job?

https://phys.org/news/2026-04-duolingo-taxi-rude-driver-job.html
4•i7l•2d ago•1 comments

OpenTrafficMap

https://opentrafficmap.org/
294•moooo99•16h ago•78 comments

Mike: open-source legal AI

https://mikeoss.com/
121•noleary•11h ago•46 comments

Monad Tutorials Timeline

https://wiki.haskell.org/Monad_tutorials_timeline
48•brudgers•7h ago•21 comments

1.4 GW: battery storage at former Grohnde nuclear power plant

https://www.heise.de/en/news/1-4-GW-Huge-battery-storage-at-former-Grohnde-nuclear-power-plant-11...
16•pantalaimon•1h ago•3 comments

HERMES.md in commit messages causes requests to route to extra usage billing

https://github.com/anthropics/claude-code/issues/53262
1165•homebrewer•17h ago•493 comments

An open-source stethoscope that costs between $2.5 and $5 to produce

https://github.com/GliaX/Stethoscope
269•0x54MUR41•21h ago•114 comments

Laws of UX

https://lawsofux.com/
292•bobbiechen•19h ago•46 comments

Functional programmers need to take a look at Zig

https://pure-systems.org/posts/2026-04-29-functional-programmers-need-to-take-a-look-at-zig.html
151•xngbuilds•9h ago•107 comments

Why I still reach for Lisp and Scheme instead of Haskell

https://jointhefreeworld.org/blog/articles/lisps/why-i-still-reach-for-scheme-instead-of-haskell/...
239•jjba23•1d ago•129 comments

Copy-fail-destroyer: K8s remediation for CVE-2026-31431

https://github.com/NorskHelsenett/copy-fail-destroyer
7•evenh•2h ago•1 comments

Joby kicks off NYC electric air taxi demos with historic JFK flight

https://www.flyingmag.com/joby-nyc-electric-air-taxi-jfk-airport/
61•Jblx2•11h ago•156 comments

Granite 4.1: IBM's 8B Model Matching 32B MoE

https://firethering.com/granite-4-1-ibm-open-source-model-family/
108•steveharing1•1h ago

Comments

mdp2021•1h ago
Wish they also released an embedding model, along the lines of their previous ones: compact (while good)...
RugnirViking•1h ago
Sounds interesting. Here's hoping they release a 32B model; that's a pretty good sweet spot for feasibility of home setups.

edit: I just realised they do actually have a 30b release alongside this. Haven't tried it yet.

2ndorderthought•13m ago
Try Qwen 3.6. It will knock your socks off.
2ndorderthought•1h ago
I test drove it yesterday. It's pretty impressive at 8b. Runs on commodity hardware quickly.

Qwen3.6 35b a3b is still my local champion but I may use this for auto complete and small tasks. Granite has recent training data which is nice. If the other small models got fine tuned on recent data I don't know if I would use this at all, but that alone makes it pretty decent.

The 4b they released was not good for my needs but could probably handle tool calls or something

steveharing1•1h ago
Yeah, no doubt the Qwen 3.6 open weights are far stronger.
rnadomvirlabe•1h ago
Why no doubt?
steveharing1•54m ago
Because Qwen 3.6 pushes way above its weight. Granite 8B is impressive, but Qwen still wins on raw capability, especially for coding.
actionfromafar•44m ago
Way above its weights.
drittich•34m ago
Nanobanana for scale.
rnadomvirlabe•41m ago
You just asserted the same thing again. Why do you say this is the case?
noodletheworld•24m ago
Having tried it.

Qwen is really good.

Also, generally, it makes sense. 8B models are generally not very good^.

That this 8B model is decent is impressive, but that it could perform on par with a good model 4 times as large is a daydream.

^ - To be polite. The small models + tool use for coding agents are almost universally ass. Proof: my personal experience. I've tried many of them.

irishcoffee•14m ago
So it’s just like, your opinion, man?
2ndorderthought•16m ago
Qwen scores above sonnet in coding benchmarks. Runs locally. In personal use it's really good. Anecdotally others have used it to vibe code or agentic code successfully. Not toy problems. Not a toy model.

Qwen3.6 raises the bar for models of its size. There really isn't a comparison in my opinion.

captainbland•37m ago
The lack of comparison with any models other than the previous Granite version strongly implies that it does not compete well with other comparable models. At least that's the most reasonable assumption until data comes out to the contrary.
2ndorderthought•22m ago
Qwen 3.6 is effectively a pocket-sized frontier model. It's really surprising, to me anyway.
vessenes•32m ago
Have you tried the Gemma 4 series, out of curiosity? I haven’t run a local model in a while, but the benchmarks look good. I’d take a free local tool-use model if it was relatively consistent.
2ndorderthought•18m ago
I tried the Gemma 4, I think the 2B and 4B. The 2B was not useful for me at all; a little too weak for my use cases.

The 4B was okay. It didn't get all of my small math questions right and it didn't know about some of the libraries I use, but it was able to do some basic autocomplete-type stuff. For microscopic models I like the Llama 3.2 3B more right now for what I do; it's a little faster and seems a little stronger. But everyone is different, and I don't think I'll use it anymore; this past month has been crazy for local model releases.

Havoc•1h ago
Interesting to see a pivot away from MoE by both IBM and Mistral, while the larger classes of SOTA models all seem to be sticking with it.

Quick vibe check of it (8B @ Q6) seems promising. Bit of a clinical tone, but I can see that being useful for data processing and similar. Sometimes you don't really want an LLM that spams you with emojis...
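
For what it's worth, that kind of local vibe check is only a few lines of Python with llama-cpp-python. A minimal sketch, assuming a Q6_K GGUF of the 8B model has been downloaded; the file name below is hypothetical:

    # Minimal local test of an 8B model at Q6 quantization via llama-cpp-python.
    # Assumption: a Q6_K GGUF conversion of the model exists locally (path is hypothetical).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./granite-4.1-8b-instruct.Q6_K.gguf",  # hypothetical local file
        n_ctx=8192,        # context window; adjust to whatever the model supports
        n_gpu_layers=-1,   # offload all layers to the GPU if one is available
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Extract the date and amount from: 'Paid 42.50 EUR on 2026-04-12.'"}],
        max_tokens=128,
        temperature=0.2,   # low temperature suits the clinical data-processing use case
    )
    print(out["choices"][0]["message"]["content"])

The clinical tone is arguably a feature for that kind of structured, emoji-free output.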

embedding-shape•33m ago
Makes sense: dense for small models, dense or MoE for larger ones. They end up fitting various hardware setups pretty neatly; there's no need for MoE at smaller scale, and dense is too heavy at large scale.
npodbielski•6m ago
I never want an LLM to spam me with emojis. What is the use case for that? I find it highly annoying.
100ms•1h ago
> Full stop.

Why do people not edit out obvious sloppification and still expect to have readers left?

cbg0•1h ago
So are we saying it's fine that the article is written by an LLM as long as it doesn't have the tell-tale signs of LLMs?
ramon156•58m ago
It's more about curating the things you're publishing. Why would I bother reading what you couldn't bother to read?
100ms•55m ago
I don't really see a reason to complain about tool use, so long as the result is cohesive and accurate; that ultimately means a human has at least read their own output before publishing. It's a bit like receiving a supposedly personal letter that starts "Dear [INSERT_FIRST_NAME_FIELD]": are you really going to read such a thing?
HighGoldstein•36m ago
An article without telltale signs of an LLM is indistinguishable from an article written by a human, so yes.
spicyusername•28m ago
My opinion is that literature and art will continue pushing the envelope in the places they always pushed the envelope. LLMs will not change this, humans love making art, and they love doing it in new ways.

Corporate announcements were never the places that literature and art were pushing the envelope. They were slop before, and they're slop now.

wewewedxfgdf•54m ago
Third line in to the article: "But there’s one result in the benchmarks I keep coming back to."

I hear this sort of thing all the time now on YouTube from media/news personalities:

“And that’s the part nobody seems to be talking about.”

"And here's what keeps me up at night."

“This is where the story gets complicated.”

“Here’s the piece that doesn’t quite fit.”

“And this is where the usual explanation starts to break down.”

“Here’s what I can’t stop thinking about.”

“The part that should worry us is not the obvious one.”

“And that’s where the real problem begins.”

“But the more interesting question is the one no one is asking.”

“And this is where things stop being simple.”

It doesn't really worry me, but I think it's interesting that LLM-speak sounds so distinctive, and how willing these media personalities are to be so obvious in reading out on TV what the LLM spat out.

I've never studied what LLMs say in depth, but it is interesting that my brain recognises the speech pattern so easily.

bambax•37m ago
I notice this very often in LinkedIn posts, and it's annoying, but I had not realized it was LLM-speak? Isn't it possible that people write like this naturally?
trvz•31m ago
Yes. Some people are very trigger happy in attributing human slop to LLMs.
spicyusername•31m ago
Arguably it's exactly because it was used naturally so often that the LLMs parrot it so frequently.
wewewedxfgdf•30m ago
I think LLMs have that sort of "summarise, wrap it in a bow tie, give a little dramatic punch as a preview to the next few points" pattern baked in.
frereubu•36m ago
I think this kind of language predates widespread LLM use, and has been picked up from that kind of writing. It's a "and here's where it gets interesting" pattern that people like Malcolm Gladwell and Freakonomics have used, even if the same thing could be said in a way that makes it sound much less intriguing.
cwillu•34m ago
There's even a word for it: “cliché”
someguyiguess•22m ago
How banal
jmbwell•34m ago
The language of drama and import without meaningful substance. Words statistically likely to be used in a segue, regardless of the preceding or subsequent point. Particularly effective when it seems like you're getting let in on a secret. Really fatiguing to read.

A writing teacher once excoriated me for saying that something was important. “Don’t tell me it’s important, show me, and let me decide, and if you do your job I’ll agree”

I don't know how a completion can tell when it needs to do this. So far it mostly doesn't seem capable.

MarsIronPI•8m ago
[delayed]
Lerc•14m ago
Apparently John Oliver was an LLM before they were even invented.
MarsIronPI•9m ago
Ugh, you're making me remember the last time I listened to NPR. It's so bad.
crunis•54m ago
Are you referring to the literal use of the expression "full stop"? I don't see it anymore in the article, maybe they edited it out?
cbg0•1h ago
The real "sleeper" might be https://huggingface.co/ibm-granite/granite-vision-4.1-4b if the benchmarks hold up for such a small model against frontier models for table & semantic k:v extraction.
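
If it does hold up, the usage side is pretty simple. A rough sketch using the transformers image-text-to-text pipeline: the model id is taken from the link above, but the input image, prompt, and output handling are just my own assumptions:

    # Sketch of document key-value extraction with a small vision-language model.
    # Assumptions: recent transformers with the "image-text-to-text" pipeline, local image file.
    from transformers import pipeline

    vlm = pipeline("image-text-to-text", model="ibm-granite/granite-vision-4.1-4b")

    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "url": "invoice_page_1.png"},  # hypothetical scan/screenshot
            {"type": "text", "text": "Extract invoice number, date, and total amount as JSON key:value pairs."},
        ],
    }]

    out = vlm(text=messages, max_new_tokens=200)
    print(out[0]["generated_text"])  # conversation with the model's reply appended as the last turn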
tosh•50m ago
IBM announcement: https://research.ibm.com/blog/granite-4-1-ai-foundation-mode...
agunapal•50m ago
If you really think about why MoE came into existence, it's to save significant cost during training; I don't think there was any concrete evidence of performance gains for comparable MoE vs dense models. Over the years, I believe all the new techniques being employed in post-training have made the models better.
zozbot234•35m ago
MoE models will have far more world knowledge than dense models with the same amount of active parameters. MoE is a no-brainer if your inference setup is ultimately limited by compute or memory throughput - not total memory footprint - or alternately if it has fast, high-bandwidth access to lower-tier storage to fetch cold model weights from on demand.
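A minimal, illustrative top-k MoE layer in PyTorch (not any particular model's actual architecture) shows where that trade-off comes from: every expert adds to the total parameter count and world knowledge, but only k experts run per token, so active compute stays close to that of a small dense model.

    # Illustrative top-k routed MoE layer: 8 experts' worth of weights, only 2 active per token.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoE(nn.Module):
        def __init__(self, d_model=256, d_ff=1024, n_experts=8, k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )
            self.k = k

        def forward(self, x):                             # x: (tokens, d_model)
            weights, idx = self.router(x).topk(self.k, dim=-1)
            weights = F.softmax(weights, dim=-1)          # normalise over the chosen experts
            out = torch.zeros_like(x)
            for slot in range(self.k):                    # only k experts execute per token
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    y = TinyMoE()(torch.randn(4, 256))  # total params ~ 8 experts, active params ~ 2 per token

Total memory footprint scales with n_experts while per-token compute scales with k, which is exactly why the bottleneck (memory capacity vs. compute/bandwidth) decides whether MoE is worth it.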
vessenes•34m ago
I think you mean inference compute? I believe all expert weights are updated in each backward pass during MoE training. The first benefit was getting a sort of structured pruning of weights through the mechanism of expert selection so that the model didn’t need to go through ‘unnecessary’ parts of the model for a given token. This then let inference use memory more efficiently in memory constrained environments, where non-hot or less common experts could be put into slow RAM, or sometimes even streamed off storage.

But I don’t think it necessarily saved training cost; if it did, I’d be interested to learn how!