frontpage.

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•1m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
1•tosh•2m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•2m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•5m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
3•sakanakana00•8m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•10m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•11m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•12m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•13m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•16m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•19m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•22m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•23m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•28m ago•1 comment

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•30m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•32m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•32m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•33m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•39m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•44m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•46m ago•1 comment

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•50m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•52m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•58m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•1h ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•1h ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Distinct AI Models Seem to Converge on How They Encode Reality

https://www.quantamagazine.org/distinct-ai-models-seem-to-converge-on-how-they-encode-reality-20260107/
20•nsoonhui•1mo ago

Comments

observationist•4w ago
Given the same fundamentals, such as transformer-architecture networks, multiple models trained on data about the same world are going to converge on representation as a matter of course. They're going to diverge where the underlying manner in which data gets memorized and encoded differs, as with RNNs like RWKV.

The interesting bits should be the convergence of representation between human brains and transformer models, or brains and RWKV, because the data humans collect is implicitly framed by human cognitive systems and sensors.

The words, qualia, and principles we use in thinking about things, communicating, and recording data are going to anchor all data in an inescapable, fundamentally ontological way. That anchoring constrains how higher-order extrapolations and derivations can be structured, and those structures are going to overlap with human constructs.

in-silico•4w ago
> They're going to diverge where the underlying manner in which data gets memorized and encoded differs, as with RNNs like RWKV.

In the original paper (https://arxiv.org/abs/2405.07987) the authors also compared the representations of transformer-based LLMs to convolution-based image models. They found just as much alignment between them as when both models were transformers.
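
For concreteness, the alignment measure in that paper is a mutual nearest-neighbor score: embed the same inputs with both models, find each input's k nearest neighbors in each model's embedding space, and average the overlap between the two neighbor sets. Below is a minimal sketch of that idea; the array shapes, k, and the cosine-similarity choice are illustrative assumptions, not the paper's exact setup:

    import numpy as np

    def knn_indices(feats: np.ndarray, k: int) -> np.ndarray:
        """Indices of each row's k nearest neighbors (cosine similarity, self excluded)."""
        x = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        sim = x @ x.T
        np.fill_diagonal(sim, -np.inf)          # a point is not its own neighbor
        return np.argsort(-sim, axis=1)[:, :k]  # top-k most similar rows

    def mutual_knn_alignment(feats_a: np.ndarray, feats_b: np.ndarray, k: int = 10) -> float:
        """Mean fraction of k-NN sets shared between two embeddings of the same inputs."""
        nn_a, nn_b = knn_indices(feats_a, k), knn_indices(feats_b, k)
        return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nn_a, nn_b)]))

    # Toy usage: two unrelated random "models" score near chance (~k/n).
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=(500, 64)), rng.normal(size=(500, 128))
    print(mutual_knn_alignment(a, b))  # ~0.02 for unrelated representations

Note the score only depends on each model's pairwise neighborhood structure, which is what lets it compare a language model against a convolutional image model despite different architectures and embedding dimensions.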

observationist•4w ago
Very interesting. The human bias implicit in the structure of the data we collect might be critical, but I suspect there's a great number theory paper somewhere in there that validates the Platonic Representation idea.

How would you correct for something like "the subset of information humans perceive and find interesting" versus "the set of all information available about a thing that isn't noise" and determine what impact the selection of the subset has on the structure of things learned by AI architectures? You'd need to account for optimizers, architecture, training data, and so on, but the results from those papers are pretty compelling.

cyanydeez•4w ago
There's no way the human mind converges with current tech because there's a huge gap in wattage.

Human brain is about 12 watts: https://www.scientificamerican.com/article/thinking-hard-cal...

Obviously you could argue something about breadth of knowledge, but there's no way the current models, as they're set up now, are processing things the same way the human brain does.
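
For scale, a back-of-envelope version of that wattage gap (the 12 W figure is from the linked article; the GPU board power and cluster size are rough assumptions):

    BRAIN_W = 12          # ~12 W human brain, per the Scientific American piece
    GPU_W = 700           # assumed board power of one modern datacenter GPU
    N_GPUS = 10_000       # assumed size of a large training cluster

    print(GPU_W / BRAIN_W)           # one GPU draws ~58x a brain
    print(GPU_W * N_GPUS / BRAIN_W)  # a cluster draws ~583,000x a brain

Even granting big error bars on every number, the gap is several orders of magnitude, which is the commenter's point.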