A broker-less distributed messaging system from the previous century

https://aivarsk.com/2025/06/22/brokerless-distributed-messaging/
1•aivarsk•3m ago•0 comments

Did Contexts Kill Phoenix?

https://arrowsmithlabs.com/blog/did-contexts-kill-phoenix
1•mitchbob•8m ago•1 comment

The Art of Hanakami, or Flower-Petal Folding

https://origamiusa.org/thefold/article/art-hanakami-or-flower-petal-folding
1•s4074433•9m ago•0 comments

The Fine Art of Nesting

https://roberthoward.com.au/fine-art-nesting/
1•s4074433•10m ago•0 comments

Ask HN: How is US entering war affecting your AGI timelines?

1•ozzyphantom•11m ago•1 comment

Taking the wind out of dangerous cyclones

https://reporter.anu.edu.au/all-stories/taking-the-wind-out-of-dangerous-cyclones
1•geox•15m ago•0 comments

Show HN: REPL is the memory layer for multi-agent AI apps – Sherlog‑MCP

https://github.com/GetSherlog/Sherlog-MCP
2•teenvan_1995•16m ago•0 comments

Children in England growing up 'sedentary, scrolling and alone', say experts

https://www.theguardian.com/society/2025/jun/11/children-sedentary-scrolling-alone-lack-of-play-england
2•PaulHoule•18m ago•0 comments

Conscience and the New Cartography of War

https://blogs.timesofisrael.com/conscience-and-the-new-cartography-of-war/
1•bryanrasmussen•19m ago•1 comment

Beach Boys Bassist Carol Kaye Refuses Rock Hall of Fame Induction

https://www.guitarplayer.com/guitarists/carol-kaye-on-rock-hall-of-fame-induction
1•bookofjoe•21m ago•0 comments

Bill Gates: 'Welcome to the next phase of the Alzheimer's fight'

https://www.gatesnotes.com/home/home-page-topic/reader/welcome-to-the-next-phase-of-the-alzheimers-fight
2•MilnerRoute•29m ago•0 comments

Edward Burra's tour of the 20th century

https://www.newstatesman.com/culture/2025/06/the-english-painters-relish-for-subcultures-took-him-across-genres-and-continents
1•prismatic•33m ago•0 comments

USAF B-2 Spirit Bombers Have Beds

https://simpleflying.com/usaf-b-2-spirit-bombers-beds/
3•neom•35m ago•0 comments

Radio Garden

https://radio.garden/?2025
2•LeoPanthera•40m ago•0 comments

Bluetooth Jammer

https://github.com/EmenstaNougat/ESP32-BlueJammer
1•yeknoda•40m ago•0 comments

Wait, Why Is Israel Allowed to Have Nukes?

https://www.currentaffairs.org/news/wait-why-is-israel-allowed-to-have-nukes
44•shinryudbz•41m ago•9 comments

CTOs at Meta, OpenAI, Palantir Became Lieutenant Colonels in the Army

https://americancitizen2025.substack.com/p/ctos-at-meta-open-ai-palantir-became
3•brie22•44m ago•0 comments

Claude's Token Efficient Tool Use on Amazon Bedrock

https://community.aws/content/2trguomubYb8f3JNzCeBgNvassc/claude-token-efficient-tool-use-on-amazon-bedrock
1•Topfi•45m ago•0 comments

Tell HN: Sam and Jony Announcement 404s

3•mellosouls•47m ago•2 comments

I built a CLI tool to scaffold Django apps like in NestJS or Laravel

https://github.com/nathanrenard3/django-smartcli
1•nathanrenard3•48m ago•1 comment

Call for more thatching courses to save 'uniquely Irish craft'

https://www.rte.ie/news/politics/2025/0618/1519182-thatching-courses/
2•austinallegro•49m ago•0 comments

The Void IDE, Open-Source Alternative to Cursor, Released in Beta

https://www.infoq.com/news/2025/06/void-ide-beta-release/
2•rmason•50m ago•0 comments

Show HN: I made a weekend project (Active Directory Security Assessment Tool)

https://adsecurityassessment.com
1•Shazeb•52m ago•0 comments

I Sing the Electric Body – On Syntax (2024)

https://hedgehogreview.com/issues/the-varieties-of-travel-experience/articles/i-sing-the-electric-body
1•rntn•55m ago•0 comments

I wrote my PhD Thesis in Typst

https://fransskarman.com/phd_thesis_in_typst.html
2•todsacerdoti•57m ago•0 comments

I Miss YC – Kanye East

https://twitter.com/jonan_zz/status/1935833004200673719
2•contingencies•58m ago•0 comments

Building my own paper tape punch

https://unimplementedtrap.com/paper-tape-punch
2•todsacerdoti•1h ago•0 comments

Otus Lisp

https://yuriy-chumak.github.io/ol/
2•smartmic•1h ago•0 comments

Owl Lisp

https://haltp.org/posts/owl.html
2•smartmic•1h ago•0 comments

Long-time rivals Bill Gates and Linus Torvalds meet

https://www.tomshardware.com/software/operating-systems/long-time-rivals-bill-gates-and-linus-torvalds-meet-for-the-first-time-have-dinner-no-major-kernel-decisions-were-made-but-maybe-next-dinner
4•LorenDB•1h ago•0 comments

Is AGI Paradoxical?

https://www.shayon.dev/post/2025/172/is-agi-paradoxical/
3•shayonj•4h ago

Comments

PaulHoule•4h ago
If you want to impress the HN crowd, try something other than "implement user authentication" and getting 50 perfect lines back.

If you have a framework stacked up to do it and you are just connecting to it, maybe, but I'd expect it to take more than 50 lines in most cases. And if somebody tried to vibe code it, I'd expect the result to land somewhere between "it just doesn't work" and "here's a ticket where you can log in without a username and password".

shayonj•4h ago
Fair! I was going for a more generic example for the intro, to eventually segue into the greater point and the questions the post is trying to raise. It does touch on a few other examples, like AlphaFold, later on.
PaulHoule•3h ago
I think the basic argument you're following is an old one.

At one point, playing chess was considered to be intelligent, but early in the computer age it was realized that alpha-beta search over 10 million positions or so would beat most people. Deep Blue (and later Stockfish) tuned up and scaled up that strategy to be superhuman.
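
As a rough sketch of the alpha-beta idea (a toy Python example over a hand-made tree; the tree, depth cap, and evaluation function are made up for illustration, not Deep Blue's or Stockfish's actual search):

    def alpha_beta(state, depth, alpha, beta, maximizing, children, evaluate):
        # Minimax value of `state`, skipping branches that cannot change the result.
        kids = children(state)
        if depth == 0 or not kids:
            return evaluate(state)
        if maximizing:
            value = float("-inf")
            for child in kids:
                value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                              False, children, evaluate))
                alpha = max(alpha, value)
                if alpha >= beta:  # opponent will never allow this line: prune
                    break
            return value
        value = float("inf")
        for child in kids:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:      # we will never choose this line: prune
                break
        return value

    # Toy game tree: leaves are scores, internal nodes are lists of children.
    tree = [[3, 5], [6, [9, 1]], [1, 2]]
    children = lambda s: s if isinstance(s, list) else []
    evaluate = lambda s: s  # a leaf's score is the leaf itself
    print(alpha_beta(tree, 8, float("-inf"), float("inf"), True, children, evaluate))  # 6

Searching millions of positions with a tuned evaluation function is, at heart, this same loop scaled up.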

Once one task falls, people move the goalposts.

There are some things that people aren't good at at all, like figuring out how proteins fold. When I was in grad school in the 1990s there was an intense effort to attract bright graduate students to a research program that, roughly, assumed that "proteins fold themselves" into the minimum-energy configuration in water. Those assumptions turned out to be wrong. Metastable states are important [1] and proteins don't just fold, they get folded [2]. At the time it was thought the problem was tough because the search space was beyond astronomical, and it remains beyond astronomical. Little progress was made.

The best method we've got so far for interpreting a protein sequence is to compare it to other protein sequences, and I think AlphaFold is basically doing that with transformer magic as opposed to suffix-tree magic.

Godlike intelligence might not be all it is cracked up to be. Theology has long wrestled with questions like "if God is so almighty, how did he create screw-ups like us?" No matter how smart you are, you aren't going to be able to predict the weather much further out than we can now, because of the mathematics of chaos. The problems Kurt Gödel talks about, such as the undecidability of first-order logic plus arithmetic [3], are characteristic of the problem, not of the way we go about solving it.

[1] https://en.wikipedia.org/wiki/Prion

[2] https://en.wikipedia.org/wiki/Chaperone_(protein)

[3] A real shame, because if we want to automate engineering, software development, or financial regulation, FOL + arithmetic is the natural representation language
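
To make the chaos point concrete, here is a tiny logistic-map demo (an illustrative toy with arbitrary parameters, not a weather model): two starting values that agree to nine decimal places end up on completely different trajectories within a few dozen iterations.

    r = 4.0                  # fully chaotic regime of the map x -> r*x*(1-x)
    x, y = 0.4, 0.4 + 1e-9   # two nearly identical initial conditions

    for step in range(1, 51):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

    # The gap grows roughly exponentially until it saturates, so even a
    # perfect model with a near-perfect measurement of the initial state
    # loses predictive power after enough steps.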

proc0•3h ago
This article seems to conflate AI with deep neural networks and their associated architectures, like LLMs and transformers. It could well be that the path forward is a completely different foundational paradigm. It would still be AI, but it wouldn't use neural networks. I mention this because the question of whether AGI is possible does not depend on the current technology. Maybe LLMs can't reach AGI but a different system could.

This is highlighted in statements like this one:

> For AI to truly transcend human intelligence, it would need to learn from something more intelligent than humans.

Just imagine a human with a brain the size of a large watermelon. If the brain is like a computer (let's assume functional computationalism), then a larger brain means more computation. This giant-brained human would have an IQ of 300+ and could singlehandedly usher in a new age in human history... THIS is the analog of what AGI is supposed to be (except a lot more, because we could run multiple copies of the same genius).

Circling back to the article, this means that an AGI by definition would have the capacity to surpass human intelligence just as a genius human would, given that the AGI processes information the way human minds process information. It wouldn't just synthesize data like current LLMs; it would actually be a creative genius and discover new things. This isn't to say LLMs won't be creative or discover new things, but the way they get there is completely different and more akin to a narrow AI for pattern matching than to a biological brain, which we know for sure has the right kind of creativity to discover and create.

shayonj•3h ago
That’s a good distinction and thank you! AGI is indeed orthogonal to LLMs today.