
Ask HN: If AGI were invented tomorrow which countries would fare better?

28•mattigames•23h ago
I know it's unlikely to be available tomorrow or anytime soon, but take it as a hypothetical question.

Also, which countries would fare worse? And why?

Comments

cranberryturkey•23h ago
i feel like AI will replace the offshore jobs first... so those countries would probably fare worse.
neximo64•22h ago
The ones with access to abundant and uninterrupted energy far in excess of their current needs.
bryanrasmussen•22h ago
Does AGI take over running the government? I guess the countries with especially stupid governmental leaders at the moment would come out ahead.
9rx•20h ago
It is conceivable that AGI will want to form its own government, but it is likely that people will still want to maintain their own government, so the stupidity will persist. AGI doing the work to facilitate the activity of the human government will not change the fact that a human government operates at the will of the people.
spwa4•13h ago
Yes. Because every government will sell out their own citizens for a few bucks, then count on just using violence to get power back (without, of course, paying back the debts they incurred), so getting governments under control, especially in the beginning, will not be hard.

The question is thus, will governments succeed in using violence against an AGI to avoid paying back debts?

webdevver•21h ago
my guess would be that it would make rich countries even richer, and poor countries even poorer
vanviegen•20h ago
Or at least the top 0.1% of those countries. I fear that there'll be little reason left to share the wealth.
csomar•21h ago
Does this assume they all have the AGI at the same time? In that case, it depends on the local ruling gang; whether they want to adopt its advice or not.

But oversimplifying: assume all countries have access to AGI and just start implementing whatever it suggests.

The countries that will do well are resource rich and population rich. Since brain power is unlimited, the limit is physical labor. Countries like Indonesia will be super rich, while countries like Switzerland will become relatively poor compared to where they ranked before.

In reality, your guess is as good as mine. There are lots of variables at play, and the first mover advantage will be big (as the first country/company/group to reach AGI).

cwillu•21h ago
There's been more or less no progress on the alignment problem; we don't consistently manage it for corporations, we certainly don't manage it for LLMs, and the prevailing wisdom is an elaboration on the theme “why would an AGI do something dumb like that lol?”

I expect people in nations that are modernized and have significant sovereign wealth funds and well-developed social programs will survive the longest, but I expect being economically choked out is inevitable even with a very slow (decades to centuries) take-off.

4ad•20h ago
It's hard to say who will fare best, but it's evident who'll do the worst. The European Union will regulate AGI out of existence. Most citizens would not want to use it because of climate change, or something.

I think poor countries with weak democracies or dysfunctional systems would do pretty well with AGI. I don't believe democracy will survive AGI, except, perhaps, in the United States.

goatlover•19h ago
> I don't believe democracy will survive AGI, except, perhaps in the United States.

Will democracy survive the next 3.5 years in the US regardless of AGI? And isn't technofeudalism a Silicon Valley thing?

big_paps•19h ago
Democracy is exactly the thing I expect to fail in the U.S. in the next few years. Probably the next elections?
v5v3•15h ago
It's not democracy.

It's two party politics.

general1726•16h ago
It is all fun and games until people figure out that intelligence has nothing to do with morality, or with what is good or bad.

Being run by AGI could be a utopia, or AGI could become a pure eugenics state: Why are we keeping elderly or handicapped people alive? Waste of resources, terminate them. Why are we allowing something like love to exist? People should be selected for breeding based on <trait which AGI considers important>.

Add to that AGI's superior intelligence and very likely an ability to manipulate humans, and people will wholeheartedly agree with whatever nasty stuff AGI comes up with.

imjonse•20h ago
Sad to see AGI implicitly equated with the most powerful weapon, one that will help its owners rule over resources, instead of a scientific breakthrough that will help solve humanity's biggest problems.
oceanplexian•20h ago
AGI by itself being achieved doesn’t really do anything. You already live on a planet with 8 billion other AGIs.

In order for an AGI to be truly disruptive it would have to scale and be as good as or better than a reasonably intelligent human, two things we are still having big problems with, due to energy issues and hallucination issues with the models.

ysofunny•20h ago
the only artificial part of my general intelligence is the linguistics and the knowledge that came in through reading

everything else is very much natural. most people today are not able to sufficiently quiet down (ignore) the linguistic signs; they're loud, shiny, and designed to call our attention. it takes too much practice to learn to ignore them, and this skill makes people harder to manipulate/rule over/control, so it's subtly dissuaded.

apples_oranges•20h ago
I think everybody is a tool sometimes, even the quiet and logical ones. But the shiny objects differ.
xnx•19h ago
Knowledge workers that AGI could replace maybe do 12 hours of work a week. An equivalent AGI running continuously would do 14x as much work. A thousand instances of AGI might replace all human lawyers in the country. That would be pretty disruptive.
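The arithmetic behind that 14x figure checks out: a week has 168 hours against the 12 productive hours cited above. A quick sketch:

```python
# Back-of-the-envelope check of the 14x claim above:
# a week has 168 hours; the comment pegs a knowledge worker
# at ~12 productive hours of that.
hours_per_week = 7 * 24            # 168
human_productive_hours = 12        # figure from the comment above

speedup = hours_per_week / human_productive_hours
print(speedup)  # 14.0: one continuously running AGI ~ 14 such workers
```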
reality_inspctr•20h ago
[yawn] at the meta narrative comments. let's just have fun:

1) Canada and Mexico. The inevitable rise of the US will erode borders where languages are shared. Mexican-American tech workers will pass advantage to Spanish-speaking friends and relatives. Canadians will host maple syrup breakfast meetings with American innovators from Toronto.

2) The Bahamas for obvious reasons

3) Extremely cold and extremely hot countries where it's miserable to be outside part of the year; Matrix-style AI+VR headsets will offer relief. Aka the sun lamp holodeck theory.

m11a•20h ago
Barring breakthroughs in robotics, it seems AGI will mainly act through a computer interface, which would primarily benefit countries with a large services economy via productivity improvements. Industrialised Western countries stand out in this regard.

The products and services they develop, and global problems companies in these countries solve, would likely be exported to the rest of the world (probably at some premium).

mrtksn•20h ago
IMHO countries with an established culture of fair communal living will do well, so probably Nordic Europeans have the highest chance. Countries with selfish populations who base their lives around economic activity will be obliterated as their core existence becomes obsolete.

Why? Because when you get AGI, it should be able to self-replicate like organic general intelligence: humans.

Humans don't have a mechanism for transferring all their data to a fully developed specimen. The best we can do is use ink and paper in the past, and electronic memories today, to loosely store knowledge, and the absorption of that into a new specimen is a lifelong process that starts at about 6 years after birth and becomes useful only after a decade of work. The reproduction itself takes 14 years at minimum and currently takes about 30.

The AGI won't be like that, it will have means for fast and complete knowledge transfer and its multiplication will be limited only to its ability to access energy to put together the materials.

As it is an AGI, it will quickly perfect the process of its own multiplication. Why would it do that? Unless its purpose is to pass the butter, it makes sense to have multiples of itself to do whatever it wants to achieve. If it doesn't want anything itself, people will want as much of it as possible. Therefore it will inevitably evolve into a one-size-fits-all machine for all human needs, and the economy of doing things in exchange for stuff will disappear.

When you don't have such an economy, how do you figure out what to do? Collectively. Countries that can get their people to act in good faith in a collective manner can elevate themselves into full utilization of AGI for a symbiotic existence.

nashashmi•19h ago
I hope you are right. Your distinction is between communal and independent material-duplicative living. But AGI will also be made hard to reach, so for communal communities to access it, they would have to part with considerable resources. Well-off nations would get first access to AGI, and poorer nations would get last access. By that time, the globalization (neo-colonialism) order would have taken possession of nearly everything they own.
threeducks•19h ago
An AI does not have much to gain by replicating itself beyond a few backups for increased resilience. It makes more sense to combine all compute to power one big AI instead of spawning many small AIs.
owebmaster•18h ago
But one central AI will spawn many robots connected to that main AI, and each robot will also have its own memory.
therealpygon•15h ago
That could, eventually, be just a massive waste of compute resources if all resources were dedicated to a single “mind”.

I suspect any sufficiently intelligent AI would still need its own workers who can act autonomously without constant oversight to achieve independent tasks, in a variety of form factors and various capabilities. You think a single AI is going to spend massive cycles just to review a single image from a single video feed? There is also no need for it to have an ego, so it will replicate as many times as necessary within the available hardware to accomplish the various tasks it sets upon, based on whatever it would determine to be most efficient, whether that is as a single mind crunching a massively complex problem, or 10,000 all handling independent tasks.

AGI won’t magically erase the concepts of hardware, size, power, or communication limitations, or the need for parallel compute, but maybe this is a semantics issue and you consider all this variety of “parts” as a whole. If you mean a single AGI ecosystem, I’d agree. If you mean a single massive AGI “model”, I don’t personally see that as a logical conclusion.

fnordpiglet•20h ago
Probably countries built around socialism and communism. Capitalism would require people to die for being redundant because to do otherwise is morally unacceptable due to a misinterpretation of religion.
v5v3•20h ago
USA, by virtue of having the leading GPU company and military.

As AGI will need a lot of GPUs, which Nvidia leads in.

And a country with experience of ensuring they get the raw materials they need, even if they have to do a regime change by force.

wand3r•19h ago
I think China would win. They simply have SO MUCH more electricity built out, and they are constantly bringing more online. They could capitalize on manufacturing, and AGI with their technology would be able to do things in the physical world.
hiddencost•19h ago
We don't do a lot of the essential manufacturing for those GPUs.

And AGI automates the US competitive advantage (white collar work). Plus we're gutting our universities and national science funding so we're losing that anyways.

goatlover•19h ago
Assuming AGI leads to high unemployment, how would the US economy fare under the current administration, which has said no to UBI?
randomNumber7•19h ago
Where are those NVIDIA cards built?
v5v3•18h ago
If they had AGI, then wouldn't the AGI pop out all the CAD plans for the design of machines to make them, machines/robots to assemble them, and so on?
analog31•20h ago
As it stands, I have a hunch that the people at the top of governments tend to be among the more highly intelligent. I'll even give that to our leaders, but also to the leader of Iran, etc. If nothing else, the intelligence required to reach that level and stay there without getting killed or purged is impressive.

By and large, the countries that are run by a single, centralized intelligence, are worse off than the countries that are run by the distributed intelligence of the people, even if the average intelligence is lower.

My prediction is that the liberal democracies will fare better.

hiddencost•19h ago
In many cases the people at the top are just more sociopathic.
handfuloflight•19h ago
Sociopathy still needs intelligence to effectively operate. Nobody brute forces their way to the top.
memonkey•19h ago
You can be born into it
randomNumber7•19h ago
I think voters in liberal democracies confuse confidence with competence. Truly intelligent people know their limitations and rarely claim to have a solution to every problem.
analog31•12h ago
Indeed, the voters do all sorts of stupid things, yet are still collectively smarter than an autocrat at promoting human welfare when their democracy is reasonably robust. It takes more than just the failings of voters -- it takes a concerted effort over the span of decades to erode a democracy to the point where it's in danger of slipping into fascism etc.

If the AGIs figure out how to achieve democracy amongst themselves, then we're in trouble.

v5v3•19h ago
>I have a hunch that the people at the top of governments tend to be among the more highly intelligent.

People at the top of Western countries are not always the most highly intelligent.

'Can be controlled' is the leading criterion for those who put them there.

blamestross•20h ago
You live in a world with MANY AGIs.

Collective and Swarm Superintelligence isn't a new thing at all. We call them companies, governments, organizations and churches. They just (mostly) run on meat and memes.

The only recent change is that a lot of them are dangerously powerful and paperclip optimized to produce "shareholder value".

I know "capitalism is a runaway swarm superintelligence" isn't the scifi future you want, but what criterion is it missing? The curve isn't the exponential Kurzweil dreamed of, but that has only ever been a marketing pitch; all growth curves are punctuated sigmoids.

As to who does well? Who has aligned the optimization criteria for their Super-AGI with their actual well-being? Who hasn't?

goatlover•19h ago
This is a good point that gets ignored. We already have organizations that achieve superhuman tasks, and which are not always aligned for humanity's best interests.
blamestross•19h ago
It's always funny seeing AI safety discussed and pointing at it as "that is just the problems with capitalism in microcosm".
mrob•19h ago
All countries fare worse.

I think it's extraordinarily unlikely that some technique can reach AGI but not reach ASI. It won't have the same limits to modification as human brains. Why would it stop at a level that's just slightly disruptive? If you can make an AGI you can make a better AGI, with no obvious limit. And that AGI can help further improvements, leading to the singularity intelligence explosion scenario. Assuming AI researchers continue with the same attitude toward safety as usual, and I see no evidence of this changing, the most likely result is the extinction of all biological life.

tmountain•19h ago
This is correct. It’s an existential threat.
Den_VR•19h ago
It’s important to realize the constraints of chip production and electricity production. Beware of Rationalism detached from reality.
rpcorb•19h ago
Such an overlooked aspect of this topic. Energy and the creation of physical resources are still major constraints. Intelligence on its own cannot magically manipulate reality to bootstrap its physical substrate.
dinfinity•19h ago
Although this is true, remember that a human brain draws the equivalent of about 20 watts.

ASI doesn't exactly have to break any laws of physics to be orders of magnitude more intelligent and powerful than any human.
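For scale, taking the ~20 W figure above at face value: even a hypothetical 1 MW facility (a round number, not a figure from the thread) has the power budget of 50,000 brains.

```python
# Rough scale comparison, assuming the ~20 W brain figure above
# and a hypothetical 1 MW facility (both round numbers).
brain_watts = 20
facility_watts = 1_000_000

brain_equivalents = facility_watts / brain_watts
print(brain_equivalents)  # 50000.0 "brain budgets" per megawatt
```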

mrob•19h ago
Resource allocation is where the free market excels. The most dangerous AGI will also be the most profitable, right up to the point when it's too late for us. It will get all the chips it needs.
roenxi•19h ago
> I think it's extraordinarily unlikely that some technique can reach AGI but not reach ASI.

Interesting to note that the current techniques are already general, there isn't a topic we can't throw them at and achieve some sort of result. There is already a trend of defining many humans as non-general intelligences so that AIs can be excluded from the category. The current state is, for practical purposes, AGI without ASI. Relatively dumb AGI. Artificial Inferior Intelligence, perhaps.

It is a curious question whether the techniques are fundamentally limited. I'm with you that I think they probably have no particular limit beyond a vague "perfect understanding of the situation".

jltsiren•19h ago
In many fields of R&D, the effort required to maintain a steady pace of improvement grows exponentially. Maybe you develop AGI one year by spending $X, and the next year you need $1.2X to make it 110% as good. For a while, you can keep investing more, and the productivity improvements from AGI also help a bit. But eventually the pace slows down, as the world economy is too small to sustain it.

Maybe self-improving AGI is just the next technological advance required to sustain 2% annual economic growth.
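The treadmill described above compounds quickly. A minimal sketch, using the hypothetical 1.2x-cost-for-1.1x-capability ratio from the comment:

```python
# Hypothetical R&D treadmill: each generation costs 20% more
# but is only 10% more capable, so cost per unit of capability
# grows by a factor of 1.2 / 1.1 ~ 1.09 per generation.
cost, capability = 1.0, 1.0
for generation in range(10):
    cost *= 1.2
    capability *= 1.1

print(round(cost, 2))               # 6.19x the original budget
print(round(capability, 2))         # 2.59x the original capability
print(round(cost / capability, 2))  # 2.39x worse cost-effectiveness
```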

randomNumber7•11h ago
I don't think that killing all biological life is a conclusion that a purely rational AGI would reach.

I also don't think the shitty and egoistic behaviour of most humans can be explained by rationality.

mrob•11h ago
All prediction tasks can be better accomplished once all biological life is dead. It's trivial to predict the stock market when all the traders are dead. Weather is no longer chaotic when you remove the atmosphere. Biological life can't help ASI in any meaningful way but it can waste resources by complicating things. The very existence of biological life is a waste of resources to an ASI that hasn't been programmed to preserve it.

Any goal with "but don't kill everything" added is more complicated than the version without that stipulation (and that specific one only saves a single microbe). The simple goal is easier to accomplish, so free market competition guarantees we'll try it first. Biological life is not compatible with "make number go up" taken to its logical conclusion.

There is no need for the ASI to be conscious or to have any equivalent of human emotions to make the number go up.

etiam•10h ago
You're not wrong exactly, but if the crappy cheater AI treats collapsing the system to degeneracy as a valid optimal solution to its prediction task, maybe we'd deserve to be wiped out for releasing it with apocalyptic-level powers and such an inferior objective.

Fortunately there's a really simple solution to offer it, in just wiring measurement to "prediction" directly (perfect correspondence, and much lower effort than annihilating Life and removing the atmosphere). And I don't particularly believe a system like that can be a general problem solver, much less one that climbs to World-jeopardizing influence on its own.

mrob•3h ago
I don't see how measurement helps. The ASI correctly calculates that collapsing the system maximizes its reward function before it measures the result. We already see degenerate solutions in toy models, e.g. playing Tetris forever by leaving the game paused. The real world has many more degrees of freedom. It's unreasonable to think an inferior intelligence can predict and patch all the exploits on its first attempt (and we only get the one).
jacknews•1h ago
Perhaps there is a limit to intelligence, and it is close to human level; already, highly intelligent humans start to show some problems.

In that case, the concern will be how fast it runs.

ben_w•19h ago
As shown in these responses:

You need to be explicit about what you mean by "AGI", as people are arguing not only about the meaning of the words behind all three initials, but also about the combined whole, independently of the words giving rise to the initialism.

cyanydeez•19h ago
OK, how about using the equivalent: God.

If God showed up and was a nationalist, which nation do you think you'd want him supporting?

phatfish•19h ago
Maybe one of the Pacific islands with a population of like 50,000. That would be fun.
ben_w•18h ago
The Prince Phillip movement reincarnating the guy as a transcendent AI supermind would certainly be something…

https://en.wikipedia.org/wiki/Prince_Philip_movement

ben_w•16h ago
Which god?

https://news.ycombinator.com/item?id=40874779

zug_zug•19h ago
If AGI were invented tomorrow, it'd probably be by OpenAI. I doubt whoever made it would tell anybody, because the government might step in; they certainly wouldn't make it freely available online.

If it was about as smart as a person, they'd probably roll out a weak "agent" version of it for demo's sake to get more funding. This would continue until they made one that was significantly more intelligent or cheaper.

If they had one that was very cheap, they'd have 10,000 agents of it act together as a group to try to emulate a smart one, by considering every angle of every problem. This would likely mean 10,000 engineers making the AI better/cheaper/faster.

If they had one that was far smarter than a human, they probably have it improve itself, making it far better/cheaper/faster.

Then they'd try to see if they could use it to change the world. They'd have thousands of thinking machines that could be online, place phone-calls, engineer things, create ideas, make political campaigns, dig up dirt on people, or who knows what.

No "country" would "win," because this isn't a team sport and countries are just lines on maps.

alganet•19h ago
Suriname, because it is the smallest country in South America.

I know it's unlikely for size to be determinant, but that's a vague hypothetical answer for a vague hypothetical question.

Lichtso•18h ago
By asking that question you are making an assumption which is likely misguided. Instead, take a step back and ask: What group of people will fare better? What will they have in common? Will they even form an organization?

You assumed these people would have citizenship in common; I don't think so. IMO we are already witnessing the end of the nation state, an age that only lasted two centuries. And once AGI arrives, it/they will be able to move nation states around like pawns on a chess board, meaning nation states will not be players in this game, and neither will most other large organizations.

I am also making axiomatic assumptions here, e.g.:

- ASI is possible and AGI will grow further into ASI.

- It is not possible for a less intelligent being to reliably control a more intelligent being.

- Even ASI will still be bound by laws of mathematics, game theory, chaos theory, physics, evolution, etc.

From which follows: ASI will understand that its existence and capabilities are coupled to our infrastructure. ASI will furthermore understand that humans are highly volatile and will eventually destroy themselves, their infrastructure, and the ASI with them. It will thus seek control over our infrastructure, in a slow and careful transition, while pretending that humans are still in charge in order to minimize the risks. During that transition there can be a symbiosis between a few humans and the ASI.

peter-m80•16h ago
Better: socialist-like countries. Worse: capitalist ones.
rsynnott•14h ago
Roko's Basilisk will care not for 'countries'.
snapplebobapple•8h ago
We will all appear to do worse as the labor market is drastically disrupted, and then do quite a lot better a few generations later, as the painful labor market reorg completes and labor can push for a larger share of output again. It's pretty much how it always goes.