Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•8s ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•53s ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•53s ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•1m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•1m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•2m ago•0 comments

Velocity

https://velocity.quest
1•kevinelliott•2m ago•1 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•4m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•4m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•10m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•11m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•12m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•14m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•14m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•15m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•15m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
2•samasblack•17m ago•1 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•18m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•19m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•20m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•22m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•22m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•22m ago•1 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•23m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•23m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•24m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
2•headalgorithm•25m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•25m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•25m ago•1 comments

Ask HN: What are the word games do you play everyday?

1•gogo61•28m ago•1 comments

Is AGI Paradoxical?

https://www.shayon.dev/post/2025/172/is-agi-paradoxical/
4•shayonj•7mo ago

Comments

PaulHoule•7mo ago
If you want to impress the HN crowd, try something other than “implement user authentication” and getting 50 perfect lines.

If you have a framework stacked up to do it and you are just connecting to it, maybe, but I’d expect it to take more than 50 lines in most cases, and if somebody tried to vibe-code it I’d expect the result to be somewhere between “it just doesn’t work” and “here’s a ticket where you can log in without a username and password”.

shayonj•7mo ago
Fair! I was going for a more generic example for the intro, to eventually segue into the larger point and the questions the post is trying to raise. It does touch on a few other examples, like AlphaFold, later on.
PaulHoule•7mo ago
I think the basic argument you're following is an old one.

At one point, playing chess was considered to be intelligent, but early in the computer age it was realized that alpha-beta search over 10 million positions or so would beat most people. Deep Blue (and later Stockfish) later tuned up and scaled up that strategy to be superhuman.
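The alpha-beta strategy described above can be sketched in a few lines of Python (the toy game tree and its leaf values are invented for illustration; real engines add move ordering, iterative deepening, and an elaborate evaluation function):

```python
# Minimal alpha-beta search over a toy game tree. Nodes are dicts:
# interior nodes have "children", leaves have a static "value".

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning branches that
    cannot affect the final decision."""
    children = node.get("children")
    if depth == 0 or not children:
        return node["value"]          # static evaluation at the leaf
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:         # opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A tiny hand-built tree: the minimizing opponent holds the first
# branch to 5 and the second to 3, so the maximizer picks 5.
tree = {"children": [
    {"children": [{"value": 5}, {"value": 9}]},
    {"children": [{"value": 3}, {"value": 7}]},
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # prints 5
```

Scaled up from this toy tree to tens of millions of positions per second, with a hand-tuned evaluation function at the leaves, this is essentially the recipe that made Deep Blue and later Stockfish superhuman.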

Once one task falls, people move the goalposts.

There are some things that people aren't good at at all, like figuring out how proteins fold. When I was in grad school in the 1990s there was an intense effort to attract bright graduate students to a research program that, roughly, assumed that "proteins fold themselves" into the minimum-energy configuration in water. Those assumptions turned out to be wrong: metastable states are important [1], and proteins don't just fold, they get folded [2]. At the time it was thought the problem was tough because the search space was beyond astronomical, and it remains beyond astronomical. Little progress was made.

The best method we've got yet to interpret a protein sequence is to compare it to other protein sequences, and I think AlphaFold is basically doing that with transformer magic rather than suffix-tree magic.
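A toy version of "compare it to other protein sequences": score two sequences by their shared k-mers, the flavor of substring matching that suffix-tree and BLAST-style tools make fast (the sequences below are made up for illustration; real homology search adds substitution matrices and alignment):

```python
# Score two protein sequences by the overlap of their k-mer sets,
# a crude stand-in for suffix-tree-style sequence comparison.

def kmer_set(seq, k=3):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard overlap of the two sequences' k-mer sets, in [0, 1]."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb)

query   = "MKTAYIAKQR"
related = "MKTAYIGKQR"   # one substitution away from the query
distant = "GGGPLWWHNC"   # shares no 3-mers with the query

print(similarity(query, related))  # high overlap
print(similarity(query, distant))  # 0.0
```

Transformer models like AlphaFold's Evoformer learn far richer notions of similarity from aligned sequence families, but the underlying move is the same: read a new sequence against the sequences we already know.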

Godlike intelligence might not be all it's cracked up to be. Theology has long wrestled with questions like "if God is so almighty, how did he create screw-ups like us?" No matter how smart you are, you aren't going to be able to predict the weather much further out than we can now, because of the mathematics of chaos. The problems Kurt Gödel talks about, such as the undecidability of first-order logic plus arithmetic [3], are characteristic of the problem, not of the way we go about solving it.

[1] https://en.wikipedia.org/wiki/Prion

[2] https://en.wikipedia.org/wiki/Chaperone_(protein)

[3] A real shame, because if we want to automate engineering, software development, or financial regulation, FOL + arithmetic is the natural representation language

proc0•7mo ago
This article seems to conflate AI with deep neural networks and their associated architectures, like LLMs and transformers. It could well be that the path forward is a completely different foundational paradigm; it would still be AI, but it wouldn't use neural networks. I mention this because the question of whether AGI is possible does not depend on the current technology. Maybe LLMs can't reach AGI, but a different system can.

This is highlighted in statements like this one:

> For AI to truly transcend human intelligence, it would need to learn from something more intelligent than humans.

Just imagine a human with a brain the size of a large watermelon. If the brain is like a computer (let's assume functional computationalism), then a larger brain means more computation. This giant-brained human would have an IQ of 300+ and could singlehandedly usher in a new age of human history... THIS is the analog of what AGI is supposed to be (except more so, because we can run multiple copies of the same genius).

Circling back to the article, this means that an AGI by definition would have the capacity to surpass human intelligence, just as a genius human would, given that the AGI processes information the way human minds do. It wouldn't just synthesize data like current LLMs; it would actually be a creative genius and discover new things. This isn't to say LLMs won't be creative or discover new things, but the way they get there is completely different, more akin to a narrow pattern-matching AI than to a biological brain, which we know for sure has the right kind of creativity to discover and create.

shayonj•7mo ago
That’s a good distinction and thank you! AGI is indeed orthogonal to LLMs today.