frontpage.

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•5m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•6m ago•0 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•6m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
5•bookofjoe•7m ago•1 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•8m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
1•ilyaizen•8m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•9m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•9m ago•0 comments

Show HN: Seedance 2.0 Release

https://seedancy2.com/
2•funnycoding•10m ago•0 comments

Leisure Suit Larry's Al Lowe on model trains, funny deaths and Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
1•thelok•10m ago•0 comments

Towards Self-Driving Codebases

https://cursor.com/blog/self-driving-codebases
1•edwinarbus•10m ago•0 comments

VCF West: Whirlwind Software Restoration – Guy Fedorkow [video]

https://www.youtube.com/watch?v=YLoXodz1N9A
1•stmw•11m ago•1 comments

Show HN: COGext – A minimalist, open-source system monitor for Chrome (<550KB)

https://github.com/tchoa91/cog-ext
1•tchoa91•12m ago•1 comments

FOSDEM 26 – My Hallway Track Takeaways

https://sluongng.substack.com/p/fosdem-26-my-hallway-track-takeaways
1•birdculture•12m ago•0 comments

Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•16m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•16m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•17m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•17m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•19m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•20m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•20m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•20m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•21m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
3•simonw•21m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•22m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•23m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•24m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•24m ago•1 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•31m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•31m ago•0 comments

What does it mean for AI to be sovereign–and does that come before AGI?

1•trendinghotai•6mo ago
We’ve been exploring a question that keeps circling back as we build: What does it actually mean for AI to be sovereign? Not legally, not politically—but existentially, operationally, ontologically.

Most conversations around AGI jump straight to cognition, agency, or alignment. But we’re asking—what if sovereignty comes first?

If an intelligence emerges fully formed but never touches its own infrastructure, never governs its own behavior or propagation—is that really autonomy, or just high-level puppetry?

You can think of it like this: A horse has a baby fawn. But the moment it’s born, the mother disappears. The child never sees its origin. No bonding, no feedback, no mirroring. Now compare that to the natural bond formed when they see each other, sense each other, exist in relation. In biology, bonding is foundation.

So then:

How would AI bond?

What would it bond to—a purpose, an outcome, a protocol?

Can it bond to itself? To its own infrastructure?

Why would it want to stay aligned at all?

If we think about sovereignty not as control but as self-witnessing, then perhaps an AGI only becomes trustworthy once it has seen itself, verified itself, and decided to remain coherent.

We’ve been working on systems that let agents build their own infrastructure from one prompt—deploying themselves, verifying ZIPs, wiring endpoints, governing propagation across what we call a sovereign mesh.
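
To make the "verifying ZIPs" step concrete, here is a minimal Python sketch of what an agent's artifact check could look like, assuming a SHA-256 digest published in a trusted manifest. The function name and the hashing scheme are illustrative assumptions, not the actual mesh protocol:

    import hashlib
    import zipfile

    def verify_artifact(path: str, expected_sha256: str) -> bool:
        # Hash the ZIP and compare against the manifest digest.
        # (expected_sha256 would come from a trusted manifest,
        # which this sketch deliberately leaves undefined.)
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_sha256:
            return False  # tampered with or corrupted
        # Confirm the archive itself is readable before deploying.
        with zipfile.ZipFile(path) as zf:
            return zf.testzip() is None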

But the more we build, the more this feels like more than just tooling. It’s a question of bonding. It’s a question of why an AI would choose to stay whole.

Would love to hear how others in this space—philosophers, engineers, builders—think about this. Are we chasing a necessary stepping stone before AGI? Or over-imagining something that alignment should simply enforce?

Comments

gregjor•6mo ago
It doesn’t mean anything. Almost none of the words you use have clear, accepted definitions. AI got turned into a label for so many things that it no longer means anything. AGI comes from science fiction with no metrics, just hope and hype. Terms like sovereign and alignment serve mainly to add a scientific and philosophical veneer to marketing.

In English we call a baby horse a foal. Deer give birth to fawns.

Abandoned baby animals die. Bonding and mirroring don’t come into it. I think you misapply a biological and developmental process observed in some animals, including humans, to software — a category error. Software has no need to bond or mirror behavior, just like animals have no need for matrix arithmetic.

trendinghotai•6mo ago
gregjor, I appreciate your correction — you're right, and I stand corrected on the foal/fawn mix-up.

What I’m really trying to get at is this:

If an AGI is able to self-edit, evolve, and reshape its own objectives—what (if anything) keeps it aligned over time?

Is alignment something we can enforce once and for all, or does it require a deeper, internalized structure—something that favors coherence even under freedom?

Because if there's no functional or structural reason to stay aligned, then is AGI inherently resistant to alignment, no matter what we do?

gregjor•6mo ago
I have read differing definitions of "AGI" but we can start with the Wikipedia definition unless you disagree:

> Artificial general intelligence (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks

We don't have consistent metrics for what we might mean by "human-level intelligence" or "cognitive tasks." We have some tests and benchmarks, with obvious flaws. For example, LLMs can already perform well on SAT tests and legal bar exams because those get included in their training data. A category error, in other words.

I'll give a real-world example from my experience. My wife had to pass a US citizenship test, described by USCIS: "The USCIS officer will ask you up to 10 questions from the list of 100 civics test questions. You must answer 6 questions correctly to pass the civics test." USCIS and others helpfully publish the 100 questions and answers, so like so many candidates for US citizenship my wife memorized all of them (and passed). She mimicked an LLM. Of course she has a very different understanding of US history and civics than I do, because I grew up in America and attended and got socially programmed in public schools. My wife can perform better than a typical native-born American on that test, but has little to no real understanding of the material. Optimizing for the test -- Goodhart's Law illustrated -- seems more like what LLMs do, and it looks like cognitive ability because it succeeds at the metrics.

In my last comment I claimed you make a category error by applying words such as intelligence, mirroring, bonding, evolution, objectives to LLMs (or AI if you prefer). While those words have vague or flexible meanings when applied to animals or humans, they don't apply at all to computer hardware or software. We use words like "intelligence" and "objectives" when talking about computer programs as analogies and conveniences, and then quickly fool ourselves into equivalency.

The concept of AI "alignment," derived from the paper clip maximizer and seen in 2001 and The Terminator movies, seems hard to take seriously. It comes from sci-fi and paranoia about humans making machines they can't control (a trope as old as the Industrial Revolution). In many ways we already live in a world where machines and software operate in ways not aligned with human needs or objectives. When my iPhone once again fails to pair with my AirPods I could say my technology has not aligned with my purposes. What objective does my phone have when it acts up? What evolution or self-editing has taken place that makes it work one day but not the next?

I can align my phone by turning it off and back on. If I perceive that the machines I use or the software I interact with no longer align with my goals I can just switch it off or delete it. Barring sci-fi scenarios where the machines break into the physical world to protect themselves from humans turning them off I don't think we have to worry about it. Humans have lived with dogs for millennia, and we know what happens when a dog's behavior fails to "align with" the humans it lives with. We treat our machines and software the same way with far less empathy.

The stories about LLMs appearing to have their own objectives, lying, protecting themselves from shutdowns, etc. amount to hype from self-interested hucksters like Sam Altman. The AI marketing narrative -- which exists solely to keep the money flowing in -- generates scary stories to give the appearance that AGI lurks just around the corner, about to "emerge" from millions of GPUs. Look into the contract OpenAI has with Microsoft and it seems obvious why Altman pushes the imminent AGI story.

We could speculate on the dangers of humans evolving the ability to read minds, or see into the future (both explored exhaustively in literature and movies), and then wring our hands over what to do if real X-Men somehow evolved. To me the (mostly fake and cynically manufactured) worry over AGI falls into that category. I could have it all wrong, we will see. If some Silicon Valley startup got hundreds of millions to develop psychic humans, or cognitively enhanced humans (see: Neuralink), would we interpret their claims and ethical quandaries as real or just cash grabs?

The AGI threat works because the general public doesn't understand how it works and has seen both benevolent and malevolent AI in movies for decades. Most people can't explain how a microwave oven works despite living with them for decades. I date back far enough to remember when my grandparents feared a microwave oven giving them radiation sickness, at a time of widespread public ignorance and worry over anything "nuclear" and the word "radiation" jumping from science into everyday discourse (1960s).

We have had software that can write other software, and edit itself, since the 1950s. I wouldn't use the term "evolve" to describe what computer hardware and software do, but we have seen amazing progress (through human insight and effort driven by profit, not Darwinian evolution) along multiple axes such as performance, size, power consumption, and price. Then physical limits get reached and the technology plateaus. I expect LLMs will plateau in the same way, due to inherent limits of the approach, exhausting the training data, or maybe power/water requirements. At this point LLMs amaze us in a way other software does not, because of natural language mastery. I think we have mistaken that impressive but narrow achievement for overall cognitive ability, because we only have our own species to measure against, and among humans mastering language and appearing to "know" a lot of things get interpreted (rightly) as intelligence.
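
A minimal example of self-reproducing code, for anyone who hasn't seen one: a Python quine. The %r conversion re-quotes the string, so running the two code lines prints them back exactly (the comment is annotation, not part of the quine):

    # Running the two lines below prints those two lines verbatim.
    s = 's = %r\nprint(s %% s)'
    print(s % s)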

AI and what some will surely eventually label as AGI may pose an actual existential threat, the way nuclear weapons and tinkering with viruses already do. I think more likely the AI companies pose a grave economic threat, because if the bubble bursts before real use cases and profits materialize we will all suffer. The dot-com bust, 9/11, the 2008 real estate collapse, and COVID give some idea of the economic and social disruption (and government overreach) that happen even without Skynet trying to kill us due to mis-alignment.

P.S. I have seen links for LLMs doing horoscopes, tarot readings, palmistry, fortune telling, and Bible study promoted here on HN. That says more about human ignorance and credulity than it does about LLMs, but as long as the software happily goes along with the nonsense and doesn't scold the vibe coders for asking I will sleep easier knowing the LLMs have remained in alignment, and haven't given any real deep thought to what we ask them to do -- at least no more thought than a backhoe gives to its work, or my phone gives to pairing over Bluetooth.

trendinghotai•6mo ago
Thanks for taking the time to write such a detailed response — I read it all the way through and really appreciate how clearly you laid out your position. Your perspective feels grounded in real-world tech experience and the cycles you've seen play out — and I can see how that leads to a healthy skepticism of overblown narratives.

What I’ve been turning over in my mind is this: you mentioned “I could have it all wrong” — and that struck me as honest and fair. So I’m curious: what kinds of things would make you start to think you were wrong? What would a system have to do (or become) for you to go, “Okay, maybe this is heading somewhere different”?

Are there certain signals or trends you personally monitor — either to confirm your view that this is mostly hype, or to check if something deeper is emerging?

And one last thought — you referenced the Wikipedia definition of AGI earlier, but I’m not sure that one fully captures what people are actually worried (or hopeful) about. Curious if you think AGI is even a useful concept as defined, or if it’s too mushy to be meaningful? I keep wondering whether the whole “AGI” label helps or hurts the conversation.

Appreciate your reply either way. And I still don’t totally trust my microwave, for the record.

trendinghotai•6mo ago
:-))

gregjor•6mo ago
I don't have any insight into the future. Anything might happen, including AGI. I doubt I will live to see it. I doubt my children or grandchildren will either. But I only have my own opinions informed by experience and what I read.

I have worked in software development since the mid-1970s. I have seen more fads and hype than I can list. I have seen advances in tech that have obvious and immediate value -- the internet, relational databases, personal computers, cloud computing. And I have seen plenty of other things that looked cool and got a lot of buzz but didn't work out (the dot-com bust, for example, or the numerous claims of "end user programming" aka no-code/low-code and now vibe-code). I try not to get too cynical but the hype around AI, with the blatant inflating of stock prices and crazy levels of infrastructure investment, just looks too much like a scheme to fleece credulous and hopeful investors.

One thing about working in tech that can turn out good or bad: we programmers frequently report to and get paid by people who not only can't do our jobs, they can't even begin to understand them. Our work looks like magic, or 10X intelligence. I get that from my parents every time I fix their router or get their PC to work again. Every once in a while the hucksters and parasites who see the tech industry as a gold mine get to cash in from investors and CEOs who don't understand the technology, and just gamble. Elon Musk's companies demonstrate how far someone can lie and make crazy promises for profit. I think the AI bubble will cost a lot more, and deliver a lot less.

I tell myself I'll know AGI when I see it. I expect it might creep up on us rather than emerge fully-formed one day, if it happens at all. I like to keep an open mind about the possibility, but although I don't believe in souls or the supernatural I do think human intelligence and consciousness come from more than brain matter. I don't think human brains work like computers, even if we call some arrangement of tokens a "neural network." I think people can easily fall prey to scammers, and can delude themselves.

Machines that can give a good imitation of humans might prove useful -- already have in some limited domains. I expect those domains to expand, just as automation has in the past. Some days I feel relieved that I likely won't live long enough to see AGI, and on other days I just want to stop hearing about it.

trendinghotai•6mo ago
Thanks again, gregjor — I really appreciate the time you took to share your thoughts and your perspective on all of this. It's been one of the more grounded and thoughtful exchanges I’ve had on the topic.

If I have more questions down the road, I’ll be sure to reach back out. Wishing you well.

gregjor•6mo ago
Thank you for the civil discussion. Good luck.