
EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•1m ago•0 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•2m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•3m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•6m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•7m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•19m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•21m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•22m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•24m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•28m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•32m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•35m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•41m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•45m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•47m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•52m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•52m ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•56m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•56m ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•57m ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•58m ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comments

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comments

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

Show HN: Mapping AI narratives by M.I.N.D. structural alignment

https://nextarcresearch.com/misc/mind_ai_narratives.html
1•neilgsmith•1mo ago

Comments

neilgsmith•1mo ago
OP here: I built this chart as a way to stress-test AI narratives using a simple structural framework.

The “thing” I’m showing is the mapping itself. It separates two questions that often get conflated: (1) how structurally aligned a public entity is with long-horizon AI value creation, and (2) how much of that story already appears to be priced in.

The x-axis (M.I.N.D.) is a composite structural-alignment score (Material, Intelligence, Network, Diversification, inspired by the “Last Economy” framing). Scores are synthesized per entity after a skills/assets/capabilities analysis and a review of analyst research, using an LLM as a structured aggregation tool rather than an oracle. Roughly speaking: Material captures control over scarce physical inputs, Intelligence reflects leverage over computation and models, Network captures ecosystem and data flywheels, and Diversification reflects exposure across multiple AI value paths.

The y-axis (valuation tension) is a rough proxy for expectation saturation. I’m treating it as a secondary signal; the primary thing I’m testing is whether structural alignment and narrative intensity decouple in interesting ways.

One weakness I’m actively unsure about is the M.I.N.D. formulation itself. Multiplying the four dimensions strongly penalizes any missing leg, which may or may not reflect how value actually compounds in AI systems. If that assumption is wrong, the framework will systematically mislead.
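To make the "missing leg" concern concrete, here is a minimal sketch of the multiplicative composite. The function name, the [0, 1] normalization, and the absence of per-dimension weights are my assumptions for illustration; they are not specified in the chart or the book.

```python
def mind_score(material, intelligence, network, diversification):
    """Multiplicative M.I.N.D. composite (illustrative sketch).

    Assumes each dimension is pre-normalized to [0, 1]; that scale,
    and the lack of weights, are assumptions of this sketch.
    """
    scores = (material, intelligence, network, diversification)
    if not all(0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each dimension must be in [0, 1]")
    product = 1.0
    for s in scores:
        product *= s
    return product

# One weak leg caps the composite, however strong the others are:
mind_score(0.9, 0.9, 0.9, 0.9)  # ~0.6561
mind_score(0.9, 0.9, 0.9, 0.1)  # ~0.0729
# An additive mean would rate the second entity 0.7 -- far less penalized.
```

This is exactly the assumption I'm unsure about: the product drops an order of magnitude when one leg falls from 0.9 to 0.1, while an additive mean barely moves.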

I’m especially interested in:

- whether these four dimensions are the right ones
- whether multiplication is the right way to combine them
- where this framework would clearly fail

Happy to answer questions or clarify assumptions.

MrCoffee7•1mo ago
Do you have any references that further explain what MIND and the "Last Economy" concepts are? Also, any references on "valuation tension" or "expectation saturation" as I do not understand what you are trying to measure?
neilgsmith•1mo ago
Thanks for the question - in brief, I'm trying to gather opinions as to whether M.I.N.D. (see below) is truly an effective metric to evaluate: "if AI capabilities keep improving and diffusing, how well positioned is this entity to capture second-order value from that process?".

M.I.N.D. / "Last Economy"

The "Last Economy" framing comes from Emad Mostaque's book of the same name and is a way of thinking about where long-run value concentrates when intelligence becomes abundant. M.I.N.D. is the operationalization of that idea from the book and is positioned as a better "yardstick" than GDP and other traditional, scarcity-oriented financial metrics. For background on the broader thesis, Emad has written and spoken about it publicly here: https://ii.inc/web/the-last-economy. [It's a quick read for those familiar with the AI space and, IMHO, an important and relatively accessible read for anyone planning to live in the future.]

At a high level he outlines:

- Material: control over scarce physical inputs that AI depends on (energy, fabs, supply chains, hardware)

- Intelligence: leverage over computation, models, or inference at scale

- Network: data, ecosystems, distribution, or flywheels that compound usage

- Diversification: exposure across multiple AI value paths rather than a single bet

The specific choice to multiply the dimensions (rather than add them) is also from his formulation: it encodes the assumption that missing one leg meaningfully caps long-run alignment. That assumption is very much up for debate, but the idea is that the better an entity (country, company, person, etc.) scores along these dimensions, the better prepared it is for a Last Economy future, one governed more by physical than metabolic processes and by the ability to convert energy into computation.

I do want to stress that this chart is my interpretation, not an official formulation.

Valuation tension / expectation saturation

I'm not trying to introduce a standard valuation metric here, and there isn't a single reference I'd point to. The idea is closer to a sentiment / expectation proxy than intrinsic value. Concretely, I'm asking: how optimistic does current pricing appear relative to a longer-horizon narrative based on how well a company may thrive or suffer in The Last Economy scenario? To keep it interpretable, I approximate that using:

- a relative long-term opportunity estimate (2030 horizon, directionally based on a creative, scenario-driven process)

- divided by price position within the 52-week range as a proxy for how much optimism or skepticism is already expressed
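Under my reading, that ratio could be sketched as follows. The function name, the clamping of the range position to [0, 1], and the small epsilon guard are implementation choices of this sketch, not part of the chart.

```python
def valuation_tension(opportunity_2030, price, low_52w, high_52w, eps=1e-6):
    """Rough expectation-saturation proxy: a relative long-horizon
    opportunity estimate divided by where the price sits in its
    52-week range.

    `opportunity_2030` is a relative, scenario-based estimate, not a
    dollar figure; `eps` keeps the ratio defined at the 52-week low.
    """
    if high_52w <= low_52w:
        raise ValueError("52-week high must exceed the low")
    # 0.0 = trading at the 52-week low, 1.0 = at the high.
    position = (price - low_52w) / (high_52w - low_52w)
    position = min(max(position, 0.0), 1.0)
    return opportunity_2030 / (position + eps)

# A stock halfway up its range, with a 2x relative opportunity estimate:
valuation_tension(2.0, price=75, low_52w=50, high_52w=100)  # ~4.0
```

The effect is that a name trading near its 52-week high (optimism already expressed) scores much lower tension than the same opportunity trading near its low.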

It's intentionally blunt and debatable. I'm treating it as a secondary axis — useful for highlighting where narratives feel "fully priced" versus where they don't — not as a valuation model.

I realise there is a lot of context underlying my question. Thanks for your patience and interest.