
Effects of Zepbound on Stool Quality

https://twitter.com/ScottHickle/status/2020150085296775300
1•aloukissas•3m ago•0 comments

Show HN: Seedance 2.0 – The Most Powerful AI Video Generator

https://seedance.ai/
1•bigbromaker•6m ago•0 comments

Ask HN: Do we need "metadata in source code" syntax that LLMs will never delete?

1•andrewstuart•12m ago•1 comments

Pentagon cutting ties w/ "woke" Harvard, ending military training & fellowships

https://www.cbsnews.com/news/pentagon-says-its-cutting-ties-with-woke-harvard-discontinuing-milit...
2•alephnerd•15m ago•1 comments

Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? [pdf]

https://cds.cern.ch/record/405662/files/PhysRev.47.777.pdf
1•northlondoner•15m ago•1 comments

Kessler Syndrome Has Started [video]

https://www.tiktok.com/@cjtrowbridge/video/7602634355160206623
1•pbradv•18m ago•0 comments

Complex Heterodynes Explained

https://tomverbeure.github.io/2026/02/07/Complex-Heterodyne.html
3•hasheddan•18m ago•0 comments

EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
2•ArtemZ•30m ago•4 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•30m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
2•LiamPowell•32m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
3•duxup•35m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•36m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•48m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•50m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
3•savrajsingh•51m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•53m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•57m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•1h ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
2•g1raffe•1h ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•1h ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
3•rolph•1h ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•1h ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•1h ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•1h ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
2•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
40•chwtutha•1h ago•6 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Universe Simulation Now in Maintenance Mode (Post-Patch Hypothesis)

https://medium.com/@OverthinkingVoid/universe-simulation-now-in-maintenance-mode-c01bafd6e607
2•murugaviki•2mo ago

Comments

murugaviki•2mo ago
A speculative, semi-humorous model of the universe as a software project. From unstable alpha builds (dinosaurs, rogue asteroids) to the “first cognitive release” bugs (prophecies, visions), to a fully locked-down maintenance mode where consciousness is sandboxed. A thought experiment blending simulation theory with software development metaphors.

https://open.substack.com/pub/overthinkingvoid/p/universe-si...

andsoitis•2mo ago
> if this really is a simulation, why is it so polished? Why is there zero evidence of the underlying system?

> Then a thought hit me.

> What if our consciousness is running in a sandbox so isolated that we can never perceive anything outside it?

The simulation hypothesis runs into the Exponential Resource Problem:

To simulate a system with N states/particles at full fidelity, the simulator needs resources that scale with N (or worse, exponentially with N for quantum systems). This creates a hierarchy problem:

- Level 0 (base reality): has X computational resources

- Level 1 (first sim): would need ~X resources to match Level 0's fidelity, but exists within Level 0, so can only access some fraction of X

- Level 2: would need even more resources than Level 1 has available.

Each simulation layer must have fewer resources than the layer above it (since it is contained within it), but needs more resources to simulate that layer. This is mathematically impossible for high-fidelity simulations.
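
(For intuition only: a toy sketch of that scaling claim, assuming each layer can devote some fraction f < 1 of its own resources to the simulation it hosts. The fraction and the numbers are illustrative assumptions, not derived from physics.)

    # Toy illustration of the resource hierarchy argument (not a physical model).
    # Assumption: each layer can only devote a fraction f < 1 of its resources
    # to the simulation it hosts, while full fidelity would need >= 1x of them.
    def fidelity_per_level(f: float, depth: int) -> list[float]:
        """Fraction of base reality each nested level could represent at best."""
        return [f ** d for d in range(1, depth + 1)]

    print(fidelity_per_level(f=0.1, depth=3))  # ~[0.1, 0.01, 0.001]
    # Fidelity shrinks geometrically down the stack, so a full-fidelity
    # (fidelity == 1.0) nested simulation is already impossible at depth 1.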

This means either:

a) we're in base reality - there's no way to create a full-fidelity simulation without having more computational power than the universe you're simulating contains

b) simulations must be extremely "lossy" - using shortcuts, approximations, rendering only what's observed (like a video game), etc. But then you must answer: why do unobserved quantum experiments still produce consistent results? Why does the universe render distant galaxies we will never visit?

c) the simulation uses physics we don't understand - perhaps the base reality operates on completely different principles that are vastly more computationally efficient. But this is unfalsifiable speculation.

This is also known as the "substrate problem": you can't create something more complex than yourself using only your own resources.

Even more devastating is the CASCADING COMPUTATION PROBLEM.

Issue: it is not just that you need resources proportional to the simulated system's complexity; you also need resources to compute every state transition.

The cascade:

a) simulated universe at Time T: has N particles / states

b) to compute time T+1: the simulator must process all N states according to physics laws

c) that computation itself has states: the simulator's computation involves memory states, processor states, energy flows. Let's call that M computational states

d) but M > N: the simulator needs additional machinery beyond just representing the simulated states. It needs the computational apparatus to calculate state transitions, store intermediate values, handle the simulation logic itself.
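
(A back-of-the-envelope illustration of point d, under the assumption that the host needs scratch space and fixed machinery on top of the N simulated states; all counts are made up for illustration.)

    # Toy accounting for one state-transition step; every number is illustrative.
    def host_states_per_step(n_simulated: int,
                             scratch_per_state: int = 2,
                             fixed_overhead: int = 10_000) -> int:
        """Host states M needed to advance N simulated states by one tick."""
        return n_simulated + scratch_per_state * n_simulated + fixed_overhead

    n = 1_000_000
    print(host_states_per_step(n) > n)  # True: M > N whenever the overhead is positive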

The TIME PROBLEM

There's also a temporal dimension:

- one "tick" of simulated time requires many ticks of simulator time (to compute all the physics)

- if the simulator is itself simulated, its ticks require even more meta-simulator ticks

- time dilates exponentially down the simulation stack
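
(Same caveat: a toy calculation, assuming each simulated tick costs some factor k > 1 of host ticks and that this overhead compounds at every nesting level; k = 1000 is an arbitrary illustrative value.)

    # Toy arithmetic for the time problem.
    def base_ticks_per_simulated_tick(k: float, depth: int) -> float:
        """Base-reality ticks needed to advance one tick at a given nesting depth."""
        return k ** depth

    for depth in range(1, 5):
        print(depth, base_ticks_per_simulated_tick(k=1000, depth=depth))
    # depth 1 -> 10^3 ticks, depth 4 -> 10^12 ticks: time dilates exponentially
    # down the stack, which is why deep nesting looks implausible without shortcuts.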

So either:

a) we're in base reality, or

b) we're in a very shallow simulation (maybe 1-2 levels deep max), or

c) the sim uses radical shortcuts that should be observable

turtleyacht•2mo ago
What if the architecture is not von Neumann?
andsoitis•2mo ago
What’s your hypothesis?
turtleyacht•2mo ago
I don't know. This is the second reading of a similar comment of yours [1], and it sounds reasonable. Would like to hear your thoughts on the other reply [2] in this thread. They also mention von Neumann as a starting point.

[1] https://news.ycombinator.com/item?id=45780945

[2] https://news.ycombinator.com/item?id=45918288

murugaviki•2mo ago
I agree with your point about resource scaling if we assume a classical computing model. But the Von Neumann model is purely classical — it predates quantum computation entirely. So its scaling limits don’t apply to any hypothetical simulator capable of generating a quantum universe.

If our reality is simulated at “Level 0” (fully detailed, quantum-accurate), the simulator’s hardware must be at least quantum-native or beyond-quantum. That means it wouldn’t follow classical memory/clock constraints or the exponential resource blow-ups associated with Von Neumann machines.

In other words, using a 1940s classical architecture to evaluate the feasibility of a universe-scale simulator is like using abacus limitations to argue that supercomputers can’t exist.

Your Level 0 / Level 1 distinction is useful though — the essay is more of a conceptual metaphor than a literal computational model. But if someone did build a literal Level-0 universe simulator, it can’t logically be based on classical Von Neumann architecture.

beardyw•2mo ago
Am I going to need a subscription?
murugaviki•2mo ago
There's also a Substack version: https://open.substack.com/pub/overthinkingvoid/p/universe-si...