
CERN uses tiny AI models burned into silicon for real-time LHC data filtering

https://theopenreader.org/Journalism:CERN_Uses_Tiny_AI_Models_Burned_into_Silicon_for_Real-Time_LHC_Data_Filtering
33•TORcicada•1h ago

Comments

rakel_rakel•1h ago
Hey Siri, show me an example of an oxymoron!

> CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).

sh3rl0ck•1h ago
There's no mention of SLMs or LLMs, though.

> This work represents a compelling real-world demonstration of “tiny AI” — highly specialised, minimal-footprint neural networks

FPGAs for Neural Networks have been a thing since before the LLM era.

100721•1h ago
Huh? The first paragraph literally says they are using LLMs

> [ GENEVA, SWITZERLAND — March 28, 2026 ] — CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).

SiempreViernes•41m ago
The site might have fixed it; to me it says "artificial intelligence" instead of LLM. Still bad, but not "steaming pile of poo on your bank statement" bad.
msla•1h ago
Are they some ancient small-scale integration VLSI design? Do they broadcast on a low-frequency VHF band? Face it: Oxymorons like those are part of the technical world. "VLSI" was a current term back when whole CPUs were made out of fewer transistors than we use for register files now, and "VHF" is low frequency even by commercial broadcasting standards.
rakel_rakel•55m ago
haha, yea they are part of it for sure, and I'm not dunking on the use of them, but I do smile a bit when I stumble upon them.

Like (~9K) Jumbo Frames!

100721•1h ago
Does anyone know why they are using language models instead of a more purpose-built statistical model? My intuition is that a language model would either be overfit, or its training data would have a lot of noise unrelated to the application and significantly drive up costs.
kevmo314•1h ago
This might be some journalistic confusion. If you go to the CERN documentation at https://twiki.cern.ch/twiki/bin/view/CMSPublic/AXOL1TL2025 it states

> The AXOL1TL V5 architecture comprises a VICReg-trained feature extractor stacked on top of a VAE.
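
For intuition, assuming the filtering works the way the autoencoder-based anomaly-detection papers describe (reconstruction error as the trigger signal), the idea fits in a few lines of NumPy. The network shape, weights, and threshold below are made-up placeholders, not the real AXOL1TL parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny autoencoder: 8 inputs -> 3 latent -> 8 outputs.
# Real trigger weights are trained offline and synthesised into firmware;
# these random weights are placeholders for illustration only.
W_enc = rng.normal(size=(8, 3))
W_dec = rng.normal(size=(3, 8))

def anomaly_score(event):
    """Reconstruction error of one event (vector of 8 features)."""
    latent = np.maximum(event @ W_enc, 0.0)     # ReLU encoder
    recon = latent @ W_dec                      # linear decoder
    return float(np.sum((event - recon) ** 2))  # squared error

def trigger(event, threshold=50.0):
    """Keep an event only if it looks unlike the training background."""
    return anomaly_score(event) > threshold
```

Events the model reconstructs well (i.e. ones resembling the background it was trained on) score low and get dropped; poorly reconstructed events are kept as potentially interesting.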

LeoWattenberg•58m ago
It's not an LLM, it is a purpose built model. https://arxiv.org/html/2411.19506v1

5 years ago we would've called it a Machine Learning algorithm. 5 years before that, a Big Data algorithm.

t0lo•50m ago
i hate that we're in this linguistic soup when it comes to algorithmic intelligence now.
IanCal•41m ago
We’ve been calling neural nets AI for decades.

> 5 years before that, a Big Data algorithm.

The DNN part? Absolutely not.

I don’t know why people feel the need for such revisionism but AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.

magicalhippo•28m ago
> AI has been a field encompassing things far more basic than this for longer than most commenters have been alive.

When I was 13, having just started programming, I picked up a book from a "junk bin" at a book store on Artificial Intelligence. It must have been from the mid-80s if not older.

It had an entire chapter on syllogism[1] and how to implement a program to spit them out based on user input. As I recall it basically amounted to some string extraction, assuming the user followed a template, and string concatenation to generate the result. I distinctly recall not being impressed about such a trivial thing being part of a book on AI.

[1]: https://en.wikipedia.org/wiki/Syllogism

dmd•8m ago
… they’re not? Who said they are? The article even explicitly says they’re not?
serendipty01•58m ago
Might be related: https://www.youtube.com/watch?v=T8HT_XBGQUI (Big Data and AI at the CERN LHC by Dr. Thea Klaeboe Aarrestad)

https://www.youtube.com/watch?v=8IZwhbsjhvE (From Zettabytes to a Few Precious Events: Nanosecond AI at the Large Hadron Collider by Thea Aarrestad)

Page: https://www.scylladb.com/tech-talk/from-zettabytes-to-a-few-...

randomNumber7•57m ago
Does string theory finally make sense when we add AI hallucinations?
quijoteuniv•48m ago
A bit of hype in the AI wording here. This could be called a chip with hardcoded logic obtained with machine learning
killingtime74•42m ago
Is an LLM logic in weights derived from machine learning?
shlewis•40m ago
Well, yes. That's literally what it is.
dmd•9m ago
What what is? The article has nothing to do with LLMs. It even explicitly says they don’t use LLMs.
quijoteuniv•38m ago
Good one… but is a DB query filter AI? I forgot to say though, it sounds like a really cool thing to do.
stingraycharles•24m ago
Strictly speaking, expert systems are AI as well, as in, an expert comes up with a bunch of if/else rules. So yes technically speaking even if they didn’t acquire the weights using ML and hand-coded them, it could still be called AI.
FartyMcFarter•36m ago
AI is not a new thing, and machine learned logic definitely counts as AI.
seydor•27m ago
cern has been using neural networks for decades
intoXbox•26m ago
They used a custom neural net with autoencoders, which contain convolutional layers. They trained it on previous experiment data.

https://arxiv.org/html/2411.19506v1

Why is it so hard to elaborate on what AI algorithm/technique they integrated? It would have made this article much better.
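
The "burned into silicon" part usually means the trained network is quantised to fixed-point so the hardware only does integer arithmetic. A minimal sketch of symmetric 8-bit quantisation for one toy layer (sizes and weights are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4)).astype(np.float32)  # toy "trained" layer
x = rng.normal(size=4).astype(np.float32)       # toy input vector

def quantise(a):
    """Symmetric 8-bit: map [-max|a|, max|a|] onto [-127, 127]."""
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

Wq, w_scale = quantise(W)
xq, x_scale = quantise(x)

# Integer matmul (what the fixed-point hardware actually computes),
# then rescale back to real units.
y_int = Wq.astype(np.int32) @ xq.astype(np.int32)
y_approx = y_int * (w_scale * x_scale)

# The quantised result closely tracks the float one.
err = np.abs(y_approx - W @ x).max()
```

The integer path is what gets synthesised into gates; the float model only exists at training time.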

WhyNotHugo•24m ago
Intuitively, I’ve always had the impression that using an analogue circuit would be feasible for neural networks (they’re just matrix multiplication!). That should provide near-instantaneous output.

Isn’t this kind of approach feasible for something so purpose-built?
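
To illustrate the "just matrix multiplication" point: almost all the work in a small MLP's forward pass is matrix-vector products, which is exactly the operation an analogue crossbar computes in one shot (input voltages times conductances, currents summing per output). A toy sketch with made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-layer MLP: the entire forward pass is two
# matrix-vector products plus cheap elementwise nonlinearities.
# An analogue crossbar would produce each W @ x essentially
# instantaneously, as summed output currents.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(2, 16))

def forward(x):
    h = np.maximum(W1 @ x, 0.0)  # layer 1: matvec + ReLU
    return W2 @ h                # layer 2: matvec (outputs)

y = forward(rng.normal(size=8))
```

The nonlinearity is the only step that isn't a matmul, and it's elementwise, so it's cheap in either digital or analogue hardware.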

mentalgear•10m ago
That's what Groq did as well: burning the Transformer right onto a chip (I have to say I was impressed by the simplicity, but afterwards less so by their controversial Kushner/Saudi investment).

Distributed DuckDB-Native DataFrames for Elixir

https://dux.now/
1•gjmveloso•2m ago•0 comments

How to make programming terrible for everyone

https://jneen.ca/posts/2026-03-27-how-to-make-programming-terrible-for-everyone/
1•paroneayea•8m ago•0 comments

OpenTTD for Windows NT RISC

https://virtuallyfun.com/2026/03/28/openttd-windows-nt-risc/
1•jandeboevrie•8m ago•0 comments

Apple Says No iPhone in Lockdown Mode Has Ever Been Hacked

https://www.macrumors.com/2026/03/27/no-iphone-in-lockdown-mode-has-ever-been-hacked/
1•7777777phil•9m ago•0 comments

I left YouTube two years ago. Time to come back [video]

https://www.youtube.com/watch?v=Yz3lSKgz4q8
1•thunderbong•16m ago•1 comments

Nicholas Carlini – Black-hat LLMs – [un]prompted 2026 [video]

https://www.youtube.com/watch?v=1sd26pWhfmg
3•lmc•19m ago•0 comments

Building a guitar trainer with embedded Rust

https://blog.orhun.dev/introducing-tuitar/
1•orhunp_•20m ago•0 comments

Figuring out what to build in a world of agents

https://nickmorley.org/#/post/what-to-build-in-a-world-of-agents
1•cyclecycle•27m ago•1 comments

GIS Lidar mapping chain of custody

https://appliedsystemsinsight.com
1•wesley-Alan•30m ago•0 comments

Why your AI agents will turn against you

https://yoloai.dev/posts/ai-agent-threat-landscape/
2•kstenerud•33m ago•0 comments

134229 Hack Oto

2•winko•37m ago•0 comments

Best iPhone Alternative for Samsung DeX: External Display Browser

https://apps.apple.com/us/app/external-display-browser/id6758286241
2•marianf•37m ago•2 comments

Gridpaper (scientific plotting tool) reaches 1.0

https://gridpaper.org/examples/
3•hnarayanan•38m ago•0 comments

Does A.I. Need a Constitution?

https://www.newyorker.com/magazine/2026/03/30/does-ai-need-a-constitution
3•doe88•41m ago•1 comments

Show HN: Opnsense-filterlog, a TUI for analysing OPNsense firewall logs

https://gitlab.com/allddd/opnsense-filterlog
2•allddd•42m ago•0 comments

Qwen3 512k context via TurboQuant on Mac mini

https://twitter.com/powtac/status/2037813823571194078
3•pow-tac•48m ago•1 comments

Post-Productivity

https://www.generalistcareer.com/p/post-productivity
3•millytamati•54m ago•1 comments

The Adolescence of Technology

https://www.darioamodei.com/essay/the-adolescence-of-technology#fnref:1
2•dgellow•1h ago•0 comments

Claude Mythos: A Cyber Threat

https://www.youtube.com/watch?v=JGubyPD_EU0
5•danebalia•1h ago•0 comments

Prince of Arabia for the Flipper (Cf Prince of Persia)

https://lab.flipper.net/apps/princeofarabia
2•matthewsinclair•1h ago•0 comments

Reverse-Engineering the Apollo 11 Code with AI

https://www.airealist.ai/p/reverse-engineering-the-apollo-11
4•julsimon•1h ago•0 comments

ClickWar Game

https://clickwar.ultimateteam.hu/
2•agerivagyok•1h ago•0 comments

The Comforting Lie of SHA Pinning

https://www.vaines.org/posts/2026-03-24-the-comforting-lie-of-sha-pinning/
3•chillax•1h ago•0 comments

Show HN: Immutable – Audit logs with SHA-256

https://getimmutable.dev/
3•umarey•1h ago•0 comments

I need flash USDT(trc20/erc20/bep20)

2•timiti•1h ago•0 comments

Cat Itecture: Better Cat Window Boxes

https://gwern.net/catitecture
2•gggscript•1h ago•0 comments

One endpoint. Best model. Any task

https://www.codesota.com/api-landing
2•Brosper•1h ago•1 comments

AV1's open, royalty-free promise in question as Dolby sues Snapchat over codec

https://arstechnica.com/gadgets/2026/03/av1s-open-royalty-free-promise-in-question-as-dolby-sues-...
4•pjmlp•1h ago•0 comments

Adults Lose Skills to AI. Children Never Build Them

https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-chil...
5•ndr42•1h ago•2 comments

My heuristics are wrong. What now?

https://brooker.co.za/blog/2026/03/20/ic-leadership.html
2•r4um•1h ago•0 comments