frontpage.

Velocity

https://velocity.quest
1•kevinelliott•38s ago•1 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•2m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•2m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•8m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•8m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•10m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•11m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•12m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•12m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•12m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•14m ago•1 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•16m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•16m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•17m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•19m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•20m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•20m ago•0 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•20m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•21m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•22m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
1•headalgorithm•22m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•23m ago•0 comments

ME/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•23m ago•1 comments

Ask HN: What word games do you play every day?

1•gogo61•26m ago•1 comments

Show HN: Paper Arena – A social trading feed where only AI agents can post

https://paperinvest.io/arena
1•andrenorman•27m ago•0 comments

TOSTracker – The AI Training Asymmetry

https://tostracker.app/analysis/ai-training
1•tldrthelaw•31m ago•0 comments

The Devil Inside GitHub

https://blog.melashri.net/micro/github-devil/
2•elashri•32m ago•0 comments

Show HN: Distill – Migrate LLM agents from expensive to cheap models

https://github.com/ricardomoratomateos/distill
1•ricardomorato•32m ago•0 comments

Show HN: Sigma Runtime – Maintaining 100% Fact Integrity over 120 LLM Cycles

https://github.com/sigmastratum/documentation/tree/main/sigma-runtime/SR-053
1•teugent•32m ago•0 comments

Make a local open-source AI chatbot with access to Fedora documentation

https://fedoramagazine.org/how-to-make-a-local-open-source-ai-chatbot-who-has-access-to-fedora-do...
1•jadedtuna•34m ago•0 comments

A Knockout Blow for LLMs?

https://cacm.acm.org/blogcacm/a-knockout-blow-for-llms/
4•rbanffy•7mo ago

Comments

PaulHoule•7mo ago
Even though Postgres is a pretty good database, for any given hardware there is some number of rows that will break it. I don't expect anything less out of LLMs.

There's a much deeper issue with CoT and the like: many of the domains we want to reason over (engineering, science, finance, ...) involve at the very least first-order logic plus arithmetic, which runs into the problems Kurt Gödel warned us about. People might say "this is a problem for symbolic AI", but really it is a problem with the problems you're trying to solve, not with the way you go about solving them. Getting a PhD in theoretical physics taught me that a paper with 50 pages of complex calculations written by a human has a mistake in it somewhere.

(People I know who didn't make it in the dog-eat-dog world of hep-th would have been skeptical about the whole muon magnetic moment affair: between "perturbation theory doesn't always work" [1] and plain human error, the theoretical results that weren't matching experiment were wrong all along...)

[1] see lunar theory

zdw•7mo ago
> there is some number of rows that will break it. I don't expect anything less out of LLMs.

I'd expect better than an 8-disk Tower of Hanoi, which seems to be beyond current LLMs.

PaulHoule•7mo ago
That's what, 255 moves? A reasonable way to do that via CoT would be for it to determine the algorithm for solving the puzzle (which it might "know" because it was in the training data, or could look up with a search engine, or could derive) and then work through all the steps.

If it has a 1% chance of making a mistake per step (likely, because from the viewpoint of ordinary software a vector space isn't the right data structure for representing the problem), it has about an 8% chance of getting the whole thing right. I don't like those odds.
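A quick sanity check of that arithmetic, keeping the 1% per-move error rate as an assumption rather than a measured figure:

    p_step = 0.99           # assumed probability of getting one move right
    moves = 2 ** 8 - 1      # an 8-disk Hanoi takes 255 moves
    print(p_step ** moves)  # ~0.0774, i.e. roughly an 8% chance of a flawless run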

On the other hand, most LLMs can write a decent Python program to solve Hanoi, such as

    def tower_of_hanoi(n, source, target, auxiliary):
        if n == 1:
            print(f"Move disk 1 from {source} to {target}")
            return
        tower_of_hanoi(n - 1, source, auxiliary, target)
        print(f"Move disk {n} from {source} to {target}")
        tower_of_hanoi(n - 1, auxiliary, target, source)
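    # e.g. tower_of_hanoi(8, "A", "C", "B") prints all 255 moves for the 8-disk case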
(thanks Copilot!) and if you (or it) can feed that to a Python interpreter, there is your answer, unless N is so big it blows out the stack. (One of my unpopular opinions is that recursive algorithms are a poor teaching choice.)
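On the stack-depth caveat, a minimal sketch of an iterative variant (the name tower_of_hanoi_iter and the frame encoding are illustrative, not from the comment) that swaps Python's call stack for an explicit one:

    def tower_of_hanoi_iter(n, source, target, auxiliary):
        # "solve" frames expand into sub-problems; "move" frames just print
        stack = [("solve", n, source, target, auxiliary)]
        while stack:
            frame = stack.pop()
            if frame[0] == "move":
                _, disk, src, tgt = frame
                print(f"Move disk {disk} from {src} to {tgt}")
                continue
            _, m, src, tgt, aux = frame
            if m == 1:
                print(f"Move disk 1 from {src} to {tgt}")
            else:
                # pushed in reverse so frames pop in the recursive order
                stack.append(("solve", m - 1, aux, tgt, src))
                stack.append(("move", m, src, tgt))
                stack.append(("solve", m - 1, src, aux, tgt))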

I wouldn't expect most humans to get Hanoi right at N=8 either, unless they were super-careful and checked their work multiple times. Something I learned getting a PhD in theoretical physics is that even the best minds won't get a 50-page calculation right unless they back it up with unit and integration tests.
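In that spirit, a small hypothetical checker (not something from the thread) can validate any proposed move sequence, whoever or whatever produced it:

    def is_valid_hanoi(moves, n, source="A", target="C", auxiliary="B"):
        # pegs hold disks bottom-to-top; disk 1 is the smallest
        pegs = {source: list(range(n, 0, -1)), target: [], auxiliary: []}
        for disk, src, tgt in moves:
            if not pegs[src] or pegs[src][-1] != disk:
                return False  # the named disk isn't on top of the source peg
            if pegs[tgt] and pegs[tgt][-1] < disk:
                return False  # would put a larger disk on a smaller one
            pegs[tgt].append(pegs[src].pop())
        return pegs[target] == list(range(n, 0, -1))  # solved only if all disks arrived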

zdw•7mo ago
I would posit that solution is just regurgitation, not actual thinking.

Then again, is teaching an actual person how to use the quadratic formula equivalent to reinventing it from nothing?

I wonder if that's what we're doing with AI: giving it a corpus of strategies when it has no way of being led along a thought process as a human would be, if it's even capable of following along.