Rare Viking Gold Arm-Ring Discovered on Isle of Man

https://allthathistory.com/archaeology-discoveries/viking-gold-arm-ring-isle-of-man/2711/
1•speckx•1m ago•0 comments

Global Greening from Higher CO2 Hits "Striking" New Heights – Modernity

https://modernity.news/2025/06/06/global-greening-from-higher-co2-hits-striking-new-heights/
1•bilsbie•1m ago•0 comments

Ask HN: Should I build a directory product?

1•alizaid•3m ago•0 comments

History and Disposition

https://v5.chriskrycho.com/notes/history-and-disposition/
1•mooreds•4m ago•0 comments

Should You "Rent" an Exec for Your Startup? A Fractional COO Weighs In

https://review.firstround.com/fractional-exec-hiring-guide/
1•mooreds•4m ago•0 comments

Miami's Drinking Water Is Threatened by a Florida Nuclear Plant

https://www.bloomberg.com/news/features/2025-06-05/bankrupt-upper-west-side-private-school-s-bold-growth-plan-sowed-its-demise
1•gametorch•10m ago•3 comments

The Computer Revolution of the 80s Told by a Pioneer: Lee Felsenstein [video]

https://www.youtube.com/watch?v=rASMe9FDjbg
2•ohjeez•13m ago•0 comments

Show HN: Use OpenAI's 4o to create game assets, publish to Creative Commons

https://gametorch.app/
2•gametorch•14m ago•0 comments

Orbital Ambush? Russian Satellites Move Like Predators Around U.S. Target

https://spacetechtimes.com/are-russian-satellites-creeping-toward-american/
1•nabla9•16m ago•0 comments

(How) One Ancient Language Went Global

https://www.bloomsbury.com/us/proto-9781639732586/
1•handfuloflight•18m ago•0 comments

Neven Mrgan on Why Skeuomorphism Is Like a Classic Car

https://appdocumentary.com/2015/01/08/neven-mrgan-on-why-skeuomorphism-is-like-a-classic-car/
2•tambourine_man•20m ago•0 comments

PhDs for Entrepreneurs

https://medium.com/@danzhang/phds-for-entrepreneurs-7cdbdd891ff3
1•jxmorris12•22m ago•0 comments

Human Vestigiality

https://en.wikipedia.org/wiki/Human_vestigiality
1•elsewhen•22m ago•0 comments

Mprocs written in Rust: Run multiple commands in parallel

https://github.com/pvolok/mprocs
1•behnamoh•23m ago•0 comments

JavaScript await was rogue all along

https://codedynasty.dev/posts/Javascript-await-was-rogue-all-along
1•66yatman•23m ago•0 comments

The Common Pile v0.1

https://blog.eleuther.ai/common-pile/
2•Philpax•23m ago•0 comments

Private Japanese spacecraft crashes into moon in 'hard landing,' ispace says

https://www.space.com/astronomy/moon/private-japanese-spacecraft-resilience-ispace-moon-landing-attempt
2•pseudolus•24m ago•0 comments

Food and beer are common perks in hospitality – but are they masking unfairness?

https://theconversation.com/free-food-and-beer-are-common-perks-for-hospitality-workers-but-are-they-masking-unfairness-256330
1•PaulHoule•25m ago•0 comments

New observatory is assembling most complete time-lapse record of night sky

https://phys.org/news/2025-06-observatory-lapse-night-sky.html
1•bookmtn•28m ago•0 comments

Supreme Court Rules 1964 Civil Rights Act Also Protects Whites

https://www.stevesailer.net/p/supreme-court-rules-1964-civil-rights
4•mpweiher•29m ago•2 comments

Using Proprietary Golinks in Firefox

https://www.dgt.is/blog/2025-06-04-proprietary-golinks-firefox/
1•speckx•29m ago•0 comments

Memory optimization is the best way to write high-performing CUDA kernels for AI

1•thecongluong•29m ago•0 comments

Exercise Is Great but It's Not a Cancer Drug

https://www.sensible-med.com/p/exercise-is-great-but-its-not-a-cancer
1•hilux•30m ago•0 comments

The Man Whose Weather Forecast Saved the World

https://www.nytimes.com/2025/06/05/weather/d-day-forecast-history-wwii.html
1•tusslewake•31m ago•0 comments

Private lunar lander from Japan crashes into moon in failed mission

https://phys.org/news/2025-06-private-lunar-lander-japan-falls.html
1•bookmtn•32m ago•0 comments

The Accountability Sink in AI Advertising

https://wordsandmoneyinmachines.substack.com/p/ais-advertising-accountability-sink
1•cahoots8727•34m ago•0 comments

What methylene blue can (and can’t) do for the brain

https://neurofrontiers.blog/what-methylene-blue-can-and-cant-do-for-the-brain/
2•wiry•34m ago•0 comments

Sipeed NanoCluster fits 7-node Pi cluster in 6cm

https://www.jeffgeerling.com/blog/2025/sipeed-nanocluster-fits-7-node-pi-cluster-6cm
1•mikece•35m ago•0 comments

The WeightWatcher tool for predicting the accuracy of Deep Neural Networks

https://github.com/CalculatedContent/WeightWatcher
1•jxmorris12•36m ago•0 comments

Don't Put All Your Juice in One Box

https://juicebox.xyz/blog/dont-put-all-your-juice-in-one-box
1•greysonp•38m ago•0 comments

From tokens to thoughts: How LLMs and humans trade compression for meaning

https://arxiv.org/abs/2505.17117
116•ggirelli•1d ago

Comments

valine•1d ago
>> For each LLM, we extract static, token-level embeddings from its input embedding layer (the 'E' matrix). This choice aligns our analysis with the context-free nature of stimuli typical in human categorization experiments, ensuring a comparable representational basis.

They're analyzing input embeddings, not LLMs. I'm not sure how the authors justify making claims about the inner workings of LLMs when they haven't actually computed a forward pass. The E matrix is not an LLM, it's a lookup table.

Just to highlight the ridiculousness of this research, no attention was computed! Not a single dot product between keys and queries. All of their conclusions are drawn from the output of an embedding lookup table.
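
For concreteness, here is a minimal sketch (assuming PyTorch and Hugging Face transformers, with bert-base-uncased standing in for any of the models) of what extracting from the E matrix amounts to, namely a table lookup with no layers run:

    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    # The "E matrix": one static vector per vocabulary entry.
    E = model.get_input_embeddings().weight  # shape: [vocab_size, hidden_dim]

    ids = tok("robin", add_special_tokens=False, return_tensors="pt").input_ids
    static_vecs = E[ids[0]]  # pure table lookup: no attention, no forward pass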

The figure correlating their alignment score with model size is particularly egregious. Model size is meaningless when you never activate any model parameters. If BERT is outperforming Qwen and Gemma, something is wrong with your methodology.

blackbear_•1d ago
Note that the token embeddings are also trained, so their values do give some hints about how a model organizes information.

They used token embeddings directly rather than intermediate representations because the latter depend on the specific sentence the model is processing. The human-judgment data, however, was collected without any context surrounding each word, so using the token embeddings seems to be the fairest comparison.

Otherwise, what sentence(s) would you have used to compute the intermediate representations? And how would you make sure that the results aren't biased by these sentences?

navar•23h ago
You can process a single word through a transformer and get the corresponding intermediate representations.

Though it sounds odd, there is no problem with it: it would indeed return the model's representation of that single word, as seen by the model without any additional context.
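
A rough sketch of what that looks like (again assuming Hugging Face transformers, with bert-base-uncased as an illustrative model):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased",
                                      output_hidden_states=True)

    inputs = tok("robin", return_tensors="pt")  # just the word, no sentence
    with torch.no_grad():
        out = model(**inputs)

    # out.hidden_states holds the embedding layer plus one tensor per
    # transformer layer, each of shape [1, seq_len, hidden_dim].
    contextual = out.hidden_states[-1]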

valine•20h ago
Embedding layers are not always trained with the rest of the model; that's the whole idea behind VLMs. First-layer embeddings are so interchangeable that you can literally feed in the output of other models through linear projection layers.

And like the other commenter said, you can absolutely feed single tokens through the model. Regardless, your point doesn't hold: how about priming the model with "You're a helpful assistant", just like everyone else does?
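
A sketch of that projection idea with made-up dimensions (this is the pattern vision-language models use to map an image encoder's features into an LLM's embedding space; the numbers here are hypothetical):

    import torch
    import torch.nn as nn

    vision_dim, llm_dim = 1024, 4096          # hypothetical encoder/LLM widths
    project = nn.Linear(vision_dim, llm_dim)  # the linear projection layer

    vision_features = torch.randn(1, 576, vision_dim)  # e.g. 576 image patches
    soft_tokens = project(vision_features)  # usable in place of token embeddings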

boroboro4•1d ago
It’s mind blowing LeCun is listed as one of the authors.

I would expect model size to correlate with alignment score, because model size usually correlates with hidden dimension. But the opposite can also be true: bigger models might shift more of the basic token-classification logic into the layers, and hence embedding alignment can go down. Regardless, this feels like pretty useless research.

danielbln•23h ago
Leaves a bit of a bad taste, considering LeCun's famously critical stance on auto-regressive transformer LLMs.
throwawaymaths•23h ago
The LLM is also a lookup table! But your point is correct: they should have looked at subsequent layers, which aggregate information over distance.
andoando•1d ago
Am I the only one who is lost on how the calculations are made?

From what I can tell, this is limited in scope to categorizing nouns (a robin is a bird).

fusionadvocate•1d ago
Open a bank account. Open your heart. Open a can. Open to new experiences.

Words are a tricky thing to handle.

an0malous•1d ago
OpenAI agrees
esafak•1d ago
And models since BERT and ELMo capture polysemy!

https://aclanthology.org/2020.blackboxnlp-1.15/
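
To make that concrete, a small sketch (assuming bert-base-uncased, where "open" is a single wordpiece) comparing contextual vectors for the same word in two senses:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def word_vec(sentence, word):
        enc = tok(sentence, return_tensors="pt")
        idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
        with torch.no_grad():
            return model(**enc).last_hidden_state[0, idx]

    a = word_vec("open a bank account", "open")
    b = word_vec("open your heart", "open")
    print(torch.cosine_similarity(a, b, dim=0))  # < 1: context shifts the vector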

bluefirebrand•1d ago
And that is just in English

Other languages have similar but fundamentally different oddities, which do not translate cleanly.

suddenlybananas•1d ago
Not sure how they're fundamentally different. What do you mean?
bluefirebrand•23h ago
Think about the work of localizing a joke that relies on wordplay or similar-sounding words to be funny. Or simply how words rhyme.

Try explaining why tough and rough rhyme but bough doesn't

You know? Language has a ton of idiosyncrasies.

Qworg•21h ago
To make it more concrete - here's an example in Chinese: https://en.wikipedia.org/wiki/Grass_Mud_Horse
Scarblac•20h ago
ChatGPT seems horrible at producing Dutch rhymes (for Sinterklaas poems), until you realize that the words it comes up with do rhyme when translated to English.
suddenlybananas•7h ago
Right but I wouldn't call those things fundamentally different. That's just having different words; the categories of idiosyncrasies are still the same.
thesz•17h ago
As most languages allow expressions of algorithms, they are all Turing complete and, thus, are not fundamentally different. The complexity of expressions of some concepts is different, though.

My favorite thing is a "square": my name for an enumeration that allows me to compare and contrast things along two different qualities, each expressed by two extremes.

One such square is "One can (not) do (not do) something." Each "not" can be present or absent, just like in a truth table.

"One can do something", "one can not do something", "one can do not do something" and, finally, "one can not help but do something."

Why should we use "help but" instead of "do not"?

While this does not preclude one from enumerating the possibilities while thinking in English, it makes that enumeration harder than it is in other languages. For example, in Russian the "square" is expressible directly.

Also, "help but" is not shorter than "do not," it is longer. Useful idioms usually expressed in shorter forms, thus, apparently, "one can not help but do something" is considered by Englishmen as not useful.

falcor84•23h ago
I agree in general, but I think that "open" is actually a pretty straightforward word.

As I see it, "Open your heart", "Open a can" and "Open to new experiences" have very similar meanings for "Open", being essentially "make a container available for external I/O", similar to the definition of an "open system" in thermodynamics. "Open a bank account" is a bit different, as it creates an entity that didn't exist before, but even then the focus is on having something that allows for external I/O - in this case deposits and withdrawals.

johnnyApplePRNG•23h ago
This paper is interesting, but ultimately it's just restating that LLMs are statistical tools and not cognitive systems. The information-theoretic framing doesn’t really change that.
Nevermark•21h ago
> LLMs are statistical tools and not cognitive systems

I have never understood broad statements that models are just (or mostly) statistical tools.

Certainly statistics apply: minimizing mismatches results in mean (or similar-measure) target predictions.

But the architecture of a model is the difference between compressed statistics and a model forced to translate information in a highly organized way, one reflecting the actual shape of the problem, just to get any accuracy at all.

In both cases statistics are relevant, but in the latter, statistics are not a particularly insightful way to talk about what a model has learned.

Statistical accuracy, prediction, etc. are the basic problems to solve: the training criteria being optimized. But they don't limit the nature of the solutions; they leave both problem difficulty and solution sophistication unbounded.

catchnear4321•19h ago
incomplete, inaccurate, off, misleading, meandering, not quite generation, prediction, removal of the superfluous, fast but spiky

this isn’t talking about that.

xwat•15h ago
Stochastic parrots