frontpage.

Was going to share my work

1•hiddenarchitect•1m ago•0 comments

Pitchfork: A devilishly good process manager for developers

https://pitchfork.jdx.dev/
1•ahamez•1m ago•0 comments

You Are Here

https://brooker.co.za/blog/2026/02/07/you-are-here.html
1•mltvc•5m ago•0 comments

Why social apps need to become proactive, not reactive

https://www.heyflare.app/blog/from-reactive-to-proactive-how-ai-agents-will-reshape-social-apps
1•JoanMDuarte•6m ago•1 comment

How patient are AI scrapers, anyway? – Random Thoughts

https://lars.ingebrigtsen.no/2026/02/07/how-patient-are-ai-scrapers-anyway/
1•samtrack2019•6m ago•0 comments

Vouch: A contributor trust management system

https://github.com/mitchellh/vouch
1•SchwKatze•6m ago•0 comments

I built a terminal monitoring app and custom firmware for a clock with Claude

https://duggan.ie/posts/i-built-a-terminal-monitoring-app-and-custom-firmware-for-a-desktop-clock...
1•duggan•7m ago•0 comments

Tiny C Compiler

https://bellard.org/tcc/
1•guerrilla•9m ago•0 comments

Y Combinator Founder Organizes 'March for Billionaires'

https://mlq.ai/news/ai-startup-founder-organizes-march-for-billionaires-protest-against-californi...
1•hidden80•9m ago•1 comment

Ask HN: Need feedback on the idea I'm working on

1•Yogender78•10m ago•0 comments

OpenClaw Addresses Security Risks

https://thebiggish.com/news/openclaw-s-security-flaws-expose-enterprise-risk-22-of-deployments-un...
1•vedantnair•10m ago•0 comments

Apple finalizes Gemini / Siri deal

https://www.engadget.com/ai/apple-reportedly-plans-to-reveal-its-gemini-powered-siri-in-february-...
1•vedantnair•11m ago•0 comments

Italy Railways Sabotaged

https://www.bbc.co.uk/news/articles/czr4rx04xjpo
3•vedantnair•11m ago•0 comments

Emacs-tramp-RPC: high-performance TRAMP back end using MsgPack-RPC

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•fanf2•13m ago•0 comments

Nintendo Wii Themed Portfolio

https://akiraux.vercel.app/
1•s4074433•17m ago•1 comment

"There must be something like the opposite of suicide "

https://post.substack.com/p/there-must-be-something-like-the
1•rbanffy•19m ago•0 comments

Ask HN: Why doesn't Netflix add a “Theater Mode” that recreates the worst parts?

2•amichail•20m ago•0 comments

Show HN: Engineering Perception with Combinatorial Memetics

1•alan_sass•26m ago•2 comments

Show HN: Steam Daily – A Wordle-like daily puzzle game for Steam fans

https://steamdaily.xyz
1•itshellboy•28m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•spenvo•28m ago•0 comments

Just Started Using AmpCode

https://intelligenttools.co/blog/ampcode-multi-agent-production
1•BojanTomic•29m ago•0 comments

LLM as an Engineer vs. a Founder?

1•dm03514•30m ago•0 comments

Crosstalk inside cells helps pathogens evade drugs, study finds

https://phys.org/news/2026-01-crosstalk-cells-pathogens-evade-drugs.html
2•PaulHoule•31m ago•0 comments

Show HN: Design system generator (mood to CSS in <1 second)

https://huesly.app
1•egeuysall•31m ago•1 comment

Show HN: 26/02/26 – 5 songs in a day

https://playingwith.variousbits.net/saturday
1•dmje•32m ago•0 comments

Toroidal Logit Bias – Reduce LLM hallucinations 40% with no fine-tuning

https://github.com/Paraxiom/topological-coherence
1•slye514•34m ago•1 comment

Top AI models fail at >96% of tasks

https://www.zdnet.com/article/ai-failed-test-on-remote-freelance-jobs/
5•codexon•34m ago•2 comments

The Science of the Perfect Second (2023)

https://harpers.org/archive/2023/04/the-science-of-the-perfect-second/
1•NaOH•35m ago•0 comments

Bob Beck (OpenBSD) on why vi should stay vi (2006)

https://marc.info/?l=openbsd-misc&m=115820462402673&w=2
2•birdculture•39m ago•0 comments

Show HN: a glimpse into the future of eye tracking for multi-agent use

https://github.com/dchrty/glimpsh
1•dochrty•40m ago•0 comments

Formal Proof: LLM Hallucinations Are Structural, Not Statistical (Coq Verified)

https://philpapers.org/rec/SCHTIC-17
2•ICBTheory•2mo ago

Comments

ICBTheory•2mo ago
Author here.

This paper is Part III of a trilogy investigating the limits of algorithmic cognition. Given recent industry signals about "scaling plateaus" (e.g., Sutskever's public remarks), I attempt to formalize why these limits appear structurally unavoidable.

The Thesis: We model modern AI as a Probabilistic Bounded Semantic System (P-BoSS). The paper demonstrates, via the 'Inference Trilemma', that hallucinations are not transient bugs to be fixed with more data but mathematical necessities whenever a bounded system faces fat-tailed domains (tail index α ≤ 1).
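
For intuition, here is a toy simulation (my illustration, not the paper's formal machinery) of why more data cannot help in that regime: when α ≤ 1, the population mean of a Pareto source is infinite, so even the running sample mean never stabilizes.

    # Toy illustration only: draw from a Pareto distribution at several tail
    # indices and track the running mean. For alpha > 1 the running mean
    # settles; for alpha <= 1 it keeps jumping whenever a new extreme sample
    # lands, because the true mean is infinite.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    for alpha in (3.0, 1.0, 0.8):
        x = rng.pareto(alpha, size=n) + 1.0   # classical Pareto on [1, inf)
        running_mean = np.cumsum(x) / np.arange(1, n + 1)
        print(f"alpha={alpha}:",
              [round(running_mean[k], 2) for k in (999, 99_999, n - 1)])

Averaging, i.e. collecting more data, is precisely the mitigation that diverges here.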

The Proof: While this paper focuses on the CS implications, the underlying mathematical theorems (Rice’s Theorem applied to Semantic Frames, Sheaf-Theoretic Gluing Failures) are formally verified in Coq.
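
For reference, the classical statement that gets generalized (this is textbook Rice's Theorem, not the paper's frame-level version, which is in Part II):

    % Classical Rice's Theorem, stated for reference
    \textbf{Theorem (Rice).} Let $\mathcal{P}$ be any set of partial computable
    functions with $\emptyset \neq \mathcal{P} \neq \mathcal{PC}$ (a non-trivial
    semantic property). Then the index set
    $\{\, e \in \mathbb{N} \mid \varphi_e \in \mathcal{P} \,\}$ is undecidable.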

You can find the formal proofs and the Coq code in the companion paper (Part II) here:

https://philpapers.org/rec/SCHTIC-16

I’m happy to discuss the P-BoSS definition and why probabilistic mitigation fails in divergent-entropy regimes.

wiz21c•2mo ago
Since we can't avoid hallucinations, maybe we can live with them?

I mean, I regularly use LLMs, and although they sometimes go a bit mad, most of the time they're really helpful.

ICBTheory•2mo ago
I'd say that conclusion is a manifestation of pragmatic wisdom.

Anyway: I agree. The paper certainly doesn't argue that AI is useless, but that autonomy in high-stakes domains is mathematically unsafe.

In the text, I distinguish between operating on an 'Island of Order' (where hallucinations are cheap and correctable, like fixing a syntax error in code) versus navigating the 'Fat-Tailed Ocean' (where a single error is irreversible).

Tying this back to your comment: If an AI hallucinates a variable name — no problem, you just fix it. But I would advise skepticism if an AI suggests telling your boss that 'his professional expertise still has significant room for improvement.'

If hallucinations are structural (as the Coq proof in Part II indicates), then 'living with them' means ensuring the system never has the autonomy to execute that second type of decision.
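
A minimal sketch of that constraint in code (my own illustration of the policy, nothing from the paper): reversible actions run autonomously; anything irreversible is hard-gated behind a human.

    # Hypothetical gate illustrating the policy: the agent acts alone only on
    # reversible ("Island of Order") actions; irreversible ("Fat-Tailed Ocean")
    # actions require explicit human sign-off.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        description: str
        reversible: bool
        run: Callable[[], None]

    def execute(action: Action, human_approved: bool = False) -> None:
        if action.reversible or human_approved:
            action.run()
        else:
            raise PermissionError(f"human sign-off required: {action.description}")

    execute(Action("fix a variable name", True, lambda: print("patched")))
    try:
        execute(Action("send that email to the boss", False, lambda: print("sent")))
    except PermissionError as err:
        print(err)   # blocked: irreversible without approval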