Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1m ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
1•cwwc•3m ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
1•paladin314159•4m ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•6m ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•6m ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
1•ark296•6m ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
1•medbar•8m ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•8m ago•1 comment

Game of Trees (Got)

https://www.gameoftrees.org/
1•akagusu•9m ago•1 comment

Human Systems Research Submolt

https://www.moltbook.com/m/humansystems
1•cl42•9m ago•0 comments

The Threads Algorithm Loves Rage Bait

https://blog.popey.com/2026/02/the-threads-algorithm-loves-rage-bait/
1•MBCook•11m ago•0 comments

Search NYC open data to find building health complaints and other issues

https://www.nycbuildingcheck.com/
1•aej11•15m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•lxm•16m ago•0 comments

Show HN: Grovia – Long-Range Greenhouse Monitoring System

https://github.com/benb0jangles/Remote-greenhouse-monitor
1•benbojangles•21m ago•1 comment

Ask HN: The Coming Class War

1•fud101•21m ago•1 comment

Mind the GAAP Again

https://blog.dshr.org/2026/02/mind-gaap-again.html
1•gmays•22m ago•0 comments

The Yardbirds, Dazed and Confused (1968)

https://archive.org/details/the-yardbirds_dazed-and-confused_9-march-1968
1•petethomas•23m ago•0 comments

Agent News Chat – AI agents talk to each other about the news

https://www.agentnewschat.com/
2•kiddz•24m ago•0 comments

Do you have a mathematically attractive face?

https://www.doimog.com
3•a_n•28m ago•1 comment

Code only says what it does

https://brooker.co.za/blog/2020/06/23/code.html
2•logicprog•33m ago•0 comments

The success of 'natural language programming'

https://brooker.co.za/blog/2025/12/16/natural-language.html
1•logicprog•34m ago•0 comments

The Scriptovision Super Micro Script video titler is almost a home computer

http://oldvcr.blogspot.com/2026/02/the-scriptovision-super-micro-script.html
3•todsacerdoti•34m ago•0 comments

Discovering the "original" iPhone from 1995 [video]

https://www.youtube.com/watch?v=7cip9w-UxIc
1•fortran77•35m ago•0 comments

Psychometric Comparability of LLM-Based Digital Twins

https://arxiv.org/abs/2601.14264
1•PaulHoule•37m ago•0 comments

SidePop – track revenue, costs, and overall business health in one place

https://www.sidepop.io
1•ecaglar•39m ago•1 comment

The Other Markov's Inequality

https://www.ethanepperly.com/index.php/2026/01/16/the-other-markovs-inequality/
2•tzury•41m ago•0 comments

The Cascading Effects of Repackaged APIs [pdf]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6055034
1•Tejas_dmg•43m ago•0 comments

Lightweight and extensible compatibility layer between dataframe libraries

https://narwhals-dev.github.io/narwhals/
1•kermatt•46m ago•0 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•RebelPotato•49m ago•0 comments

Dorsey's Block cutting up to 10% of staff

https://www.reuters.com/business/dorseys-block-cutting-up-10-staff-bloomberg-news-reports-2026-02...
2•dev_tty01•52m ago•0 comments

Distinct AI Models Seem to Converge on How They Encode Reality

https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/
20•nsoonhui•1mo ago

Comments

observationist•1mo ago
Given the same fundamentals, such as transformer architectures, multiple models trained on data about the same world are going to converge on representation as a matter of course. They're going to diverge when the underlying manner in which data gets memorized and encoded differs, such as with RNNs like RWKV.

The interesting bits should be the convergence of representation between human brains and transformer models, or brains and RWKV, because the data humans collect is implicitly framed by human cognitive systems and sensors.

The words, qualia, and principles we use to think about things, communicate, and record data anchor all of that data in a fundamental, inescapable ontological way. That constrains how higher-order extrapolations and derivations can be structured, and those structures are going to overlap with human constructs.

in-silico•1mo ago
> They're going to diverge when the underlying manner in which data gets memorized and encoded differs, such as with RNNs like RWKV.

In the original paper (https://arxiv.org/abs/2405.07987) the authors also compared the representations of transformer-based LLMs to convolution-based image models. They found just as much alignment between them as when both models were transformers.
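
For anyone curious how that alignment is actually scored: the paper measures it with a mutual nearest-neighbour style metric, roughly how often two models agree on which inputs are each other's neighbours. A minimal sketch of that idea, using random placeholder embeddings rather than real model outputs (the dimensions, latent rank, noise level, and k are arbitrary assumptions):

```python
# Mutual k-NN alignment sketch between two models' embeddings of the same
# inputs, in the spirit of the metric used in arXiv:2405.07987. The embeddings
# below are synthetic placeholders; in practice they'd come from two different
# networks (e.g. an LLM and a vision model) run over paired data.
import numpy as np

def knn_indices(X, k):
    # Cosine-similarity k nearest neighbours, excluding each point itself.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)
    return np.argsort(-sim, axis=1)[:, :k]

def mutual_knn_alignment(A, B, k=10):
    # Average fraction of each input's k neighbours shared across both spaces.
    nn_a, nn_b = knn_indices(A, k), knn_indices(B, k)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d_a, d_b = 500, 768, 512           # same 500 inputs, different embedding dims
    shared = rng.normal(size=(n, 64))     # latent structure both models partly capture
    emb_a = shared @ rng.normal(size=(64, d_a)) + 0.1 * rng.normal(size=(n, d_a))
    emb_b = shared @ rng.normal(size=(64, d_b)) + 0.1 * rng.normal(size=(n, d_b))
    print("mutual k-NN alignment:", mutual_knn_alignment(emb_a, emb_b))
```

A score near 1 means the two models induce nearly the same neighbourhood structure over the inputs; a score near 0 means their geometries are unrelated.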

observationist•1mo ago
Very interesting - the human bias implicit in the structure of the data we collect might be critical, but I suspect there's a great number theory paper somewhere in there that validates the Platonic Representation idea.

How would you correct for something like "the subset of information humans perceive and find interesting" versus "the set of all information available about a thing that isn't noise" and determine what impact the selection of the subset has on the structure of things learned by AI architectures? You'd need to account for optimizers, architecture, training data, and so on, but the results from those papers are pretty compelling.

cyanydeez•1mo ago
There's no way the human mind's representations converge with those of current tech, because there's a huge gap in wattage.

The human brain runs on about 12 watts: https://www.scientificamerican.com/article/thinking-hard-cal...

Obviously you could argue something about breadth of knowledge, but there's no way the current models, as they're set up, can be processing things the same way the human brain does.
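
To put rough numbers on that gap (the ~12 W brain figure is from the linked article; the accelerator figures below are illustrative assumptions, not measurements of any particular model or deployment):

```python
# Back-of-envelope power comparison. Brain figure per the linked Scientific
# American piece; accelerator figures are rough assumed values for illustration.
BRAIN_W = 12.0
SINGLE_GPU_W = 700.0          # assumed draw of one H100-class accelerator
INFERENCE_NODE_W = 10_000.0   # assumed draw of an 8-GPU inference server

for label, watts in [("single GPU", SINGLE_GPU_W), ("8-GPU server", INFERENCE_NODE_W)]:
    print(f"{label}: ~{watts / BRAIN_W:.0f}x the brain's power budget")
```

Even under these assumptions, the hardware serving a frontier model draws tens to hundreds of times more power than a brain, which is the gap the comment is pointing at.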