frontpage.

Imperative

https://pestlemortar.substack.com/p/imperative
1•mithradiumn•52s ago•0 comments

Show HN: I decomposed 87 tasks to find where AI agents structurally collapse

https://github.com/XxCotHGxX/Instruction_Entropy
1•XxCotHGxX•4m ago•1 comment

I went back to Linux and it was a mistake

https://www.theverge.com/report/875077/linux-was-a-mistake
1•timpera•5m ago•1 comment

Octrafic – open-source AI-assisted API testing from the CLI

https://github.com/Octrafic/octrafic-cli
1•mbadyl•7m ago•1 comment

US Accuses China of Secret Nuclear Testing

https://www.reuters.com/world/china/trump-has-been-clear-wanting-new-nuclear-arms-control-treaty-...
1•jandrewrogers•7m ago•1 comment

Peacock. A New Programming Language

1•hashhooshy•12m ago•1 comment

A postcard arrived: 'If you're reading this I'm dead, and I really liked you'

https://www.washingtonpost.com/lifestyle/2026/02/07/postcard-death-teacher-glickman/
2•bookofjoe•13m ago•1 comment

What to know about the software selloff

https://www.morningstar.com/markets/what-know-about-software-stock-selloff
2•RickJWagner•17m ago•0 comments

Show HN: Syntux – generative UI for websites, not agents

https://www.getsyntux.com/
3•Goose78•18m ago•0 comments

Microsoft appointed a quality czar. He has no direct reports and no budget

https://jpcaparas.medium.com/ab75cef97954
2•birdculture•18m ago•0 comments

AI overlay that reads anything on your screen (invisible to screen capture)

https://lowlighter.app/
1•andylytic•19m ago•1 comments

Show HN: Seafloor, be up and running with OpenClaw in 20 seconds

https://seafloor.bot/
1•k0mplex•20m ago•0 comments

Tesla turbine-inspired structure generates electricity using compressed air

https://techxplore.com/news/2026-01-tesla-turbine-generates-electricity-compressed.html
2•PaulHoule•21m ago•0 comments

State Department deleting 17 years of tweets (2009-2025); preservation needed

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•sleazylice•21m ago•1 comment

Learning to code, or building side projects with AI help? This one's for you

https://codeslick.dev/learn
1•vitorlourenco•22m ago•0 comments

Effulgence RPG Engine [video]

https://www.youtube.com/watch?v=xFQOUe9S7dU
1•msuniverse2026•23m ago•0 comments

Five disciplines discovered the same math independently – none of them knew

https://freethemath.org
4•energyscholar•24m ago•1 comment

We Scanned an AI Assistant for Security Issues: 12,465 Vulnerabilities

https://codeslick.dev/blog/openclaw-security-audit
1•vitorlourenco•25m ago•0 comments

Amazon no longer defends cloud customers against video patent infringement claims

https://ipfray.com/amazon-no-longer-defends-cloud-customers-against-video-patent-infringement-cla...
2•ffworld•25m ago•0 comments

Show HN: Medinilla – an OCPP compliant .NET back end (partially done)

https://github.com/eliodecolli/Medinilla
2•rhcm•28m ago•0 comments

How Does AI Distribute the Pie? Large Language Models and the Ultimatum Game

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6157066
1•dkga•29m ago•1 comment

Resistance Infrastructure

https://www.profgalloway.com/resistance-infrastructure/
3•samizdis•33m ago•1 comment

Fire-juggling unicyclist caught performing on crossing

https://news.sky.com/story/fire-juggling-unicyclist-caught-performing-on-crossing-13504459
1•austinallegro•33m ago•0 comments

Restoring a lost 1981 Unix roguelike (protoHack) and preserving Hack 1.0.3

https://github.com/Critlist/protoHack
2•Critlist•35m ago•0 comments

GPS and Time Dilation – Special and General Relativity

https://philosophersview.com/gps-and-time-dilation/
1•mistyvales•38m ago•0 comments

Show HN: Witnessd – Prove human authorship via hardware-bound jitter seals

https://github.com/writerslogic/witnessd
1•davidcondrey•38m ago•1 comment

Show HN: I built a clawdbot that texts like your crush

https://14.israelfirew.co
2•IsruAlpha•40m ago•2 comments

Scientists reverse Alzheimer's in mice and restore memory (2025)

https://www.sciencedaily.com/releases/2025/12/251224032354.htm
2•walterbell•44m ago•0 comments

Compiling Prolog to Forth [pdf]

https://vfxforth.com/flag/jfar/vol4/no4/article4.pdf
1•todsacerdoti•45m ago•0 comments

Show HN: Cymatica – an experimental, meditative audiovisual app

https://apps.apple.com/us/app/cymatica-sounds-visualizer/id6748863721
2•_august•46m ago•0 comments

Distinct AI Models Seem to Converge on How They Encode Reality

https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/
20•nsoonhui•1mo ago

Comments

observationist•1mo ago
Given the same fundamentals, such as a transformer architecture, multiple models trained on data about the same world are going to converge on similar representations as a matter of course. They're going to diverge when the underlying manner in which data gets memorized and encoded differs, as with RNNs like RWKV.

The interesting comparison would be the convergence of representations between human brains and transformer models, or between brains and RWKV, because the data humans collect is implicitly framed by human cognitive systems and sensors.

The words, qualia, and principles we use to think about things, communicate, and record data anchor everything we record in a fundamental and inescapable ontological way. That constrains how higher-order extrapolations and derivations can be structured, and those structures are going to overlap with human constructs.

in-silico•1mo ago
> They're going to diverge when the underlying manner in which data gets memorized and encoded differs, as with RNNs like RWKV.

In the original paper (https://arxiv.org/abs/2405.07987) the authors also compared the representations of transformer-based LLMs to those of convolution-based image models. They found just as much alignment between them as when both models were transformers.
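
For concreteness, the kind of cross-model alignment discussed there can be sketched as a mutual k-nearest-neighbor score: embed the same inputs with two different models and measure how much their neighborhood structure agrees. The snippet below is an illustrative reconstruction under that assumption, not the paper's actual code; the function names and the choice of k are made up for the example.

  import numpy as np

  def knn_indices(X, k):
      # Indices of the k nearest neighbors (excluding self) for each row of X.
      d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
      np.fill_diagonal(d2, np.inf)                         # exclude the point itself
      return np.argsort(d2, axis=1)[:, :k]

  def mutual_knn_alignment(A, B, k=10):
      # A: (n, d1) embeddings from model 1 (e.g., an LLM) for n inputs.
      # B: (n, d2) embeddings from model 2 (e.g., a vision model) for the same inputs.
      # Returns a score in [0, 1]; higher means the two models place the same
      # inputs near each other, i.e., more aligned representations.
      nA, nB = knn_indices(A, k), knn_indices(B, k)
      overlaps = [len(set(a) & set(b)) / k for a, b in zip(nA, nB)]
      return float(np.mean(overlaps))

  # Toy usage: two unrelated random "models" score near chance level (about k/n).
  rng = np.random.default_rng(0)
  print(mutual_knn_alignment(rng.normal(size=(200, 64)), rng.normal(size=(200, 32))))

Because only neighbor sets are compared, the two embedding spaces can have different dimensions, which is what makes a score like this usable across architectures.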

observationist•1mo ago
Very interesting - the human bias implicit in the structure of the data we collect might be critical, but I suspect there's a great number theory paper somewhere in there that validates the Platonic Representation idea.

How would you correct for something like "the subset of information humans perceive and find interesting" versus "the set of all information available about a thing that isn't noise" and determine what impact the selection of the subset has on the structure of things learned by AI architectures? You'd need to account for optimizers, architecture, training data, and so on, but the results from those papers are pretty compelling.

cyanydeez•1mo ago
There's no way the human mind converges with current tech because there's a huge gap in wattage.

The human brain runs on about 12 watts: https://www.scientificamerican.com/article/thinking-hard-cal...

Obviously you could argue something about breadth of knowledge, but there's no way the current models, as they're set up, are processing things the same way the human brain does.
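
As a rough scale check on that wattage gap, here is a back-of-the-envelope comparison; the 12 W figure is the one cited above, while the accelerator wattage and cluster size are assumed round numbers for illustration rather than measured training power.

  # Power-gap estimate; GPU wattage and cluster size are illustrative assumptions.
  BRAIN_WATTS = 12        # figure cited in the Scientific American link above
  GPU_WATTS = 400         # rough TDP of one datacenter accelerator (assumed)
  CLUSTER_GPUS = 10_000   # hypothetical training-cluster size

  print(f"single accelerator vs. brain: {GPU_WATTS / BRAIN_WATTS:.0f}x")                    # ~33x
  print(f"hypothetical cluster vs. brain: {CLUSTER_GPUS * GPU_WATTS / BRAIN_WATTS:,.0f}x")  # ~333,333x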