
Show HN: I built a dream interpreter in JavaScript, no AI, no server, just logic

https://github.com/Dino-Nuggies45/Dream-Interpreter
5•DinoNuggies456•5m ago•0 comments

Traceable Randomness

https://random.colorado.edu/concepts/traceable-randomness
1•owl_vision•8m ago•0 comments

Tracking Protestware Spread: 28 NPM Packages Affected by Payload Targeting

https://socket.dev/blog/protestware-update-28-npm-packages-affected-by-payload-targeting-russian-language-users
1•feross•8m ago•0 comments

China's new digital ID system raises surveillance, censorship concerns

https://www.washingtonpost.com/world/2025/07/15/china-digital-id-internet-surveillance/
2•bookofjoe•10m ago•1 comment

The Tao of Christ, free eBook $19.99 on Amazon

https://drive.google.com/file/d/138RQId9iYs_fJi02hpCcmFla3IW6NwLI/view
2•douchecoded•11m ago•0 comments

How Elon Musk's X is fueling the MAGA-Trump split

https://www.politico.com/news/2025/07/15/elon-musk-x-maga-00455128
2•c420•11m ago•0 comments

Ask HN: A project isn't dead just because it's quiet – how to tell people that?

2•fernvenue•16m ago•0 comments

Turing Test – But with Social Deception

https://amonghumans.io
1•Kehvinbehvin•17m ago•0 comments

Asymmetry of Verification and Verifier's Law

https://www.jasonwei.net/blog/asymmetry-of-verification-and-verifiers-law
1•hasheddan•18m ago•0 comments

Grok's new porn companion is rated for kids 12 and older in the App Store

https://www.platformer.news/grok-ani-app-store-rating-nsfw-avatar-apple/
1•spenvo•18m ago•0 comments

GenAI-Powered Inference

https://arxiv.org/abs/2507.03897
1•JackeJR•45m ago•1 comment

UK fintech Curve in talks to be acquired by Lloyds

https://www.headforpoints.com/2025/07/13/lloyds-bank-in-talks-to-buy-curve/
1•gregorvand•45m ago•0 comments

AWS announced support for clusters with up to 100k nodes

https://aws.amazon.com/blogs/containers/under-the-hood-amazon-eks-ultra-scale-clusters/
3•dropbox_miner•49m ago•2 comments

World's 'oldest' marathon runner dies at 114 in hit-and-run

https://www.bbc.com/news/articles/cpqnppnx0z1o
1•layer8•50m ago•0 comments

Show HN: Tlsinfo.me – check your JA3/JA4 TLS fingerprints

https://tlsinfo.me/json
2•elpy1•51m ago•0 comments

Some Australian dolphins use sponges to hunt fish, but it's harder than it looks

https://apnews.com/article/dolphins-australia-sponge-noses-9ba412c3d0184ee84a66ec8b5a5b5319
1•c420•57m ago•1 comment

Sexting with Gemini

https://www.theatlantic.com/magazine/archive/2025/08/google-gemini-ai-sexting/683248/
1•JumpCrisscross•57m ago•2 comments

The AI That Broke the Internet's Back

https://medium.com/@th71852/the-ai-that-broke-the-internets-back-24c1bd2e825e
1•antiochIst•59m ago•0 comments

Retrieval Embedding Benchmark

https://huggingface.co/spaces/embedding-benchmark/RTEB
1•fzliu•1h ago•0 comments

Amazon S3 Vectors

https://aws.amazon.com/blogs/aws/introducing-amazon-s3-vectors-first-cloud-storage-with-native-vector-support-at-scale/
4•andrewbarba•1h ago•0 comments

Amazon S3 Vectors

https://aws.amazon.com/s3/features/vectors/
2•jonbaer•1h ago•0 comments

Amazon EKS now supports 100K nodes per cluster

https://aws.amazon.com/blogs/containers/amazon-eks-enables-ultra-scale-ai-ml-workloads-with-support-for-100k-nodes-per-cluster/
3•cmckn•1h ago•0 comments

Conversion of millimolar dissolved CO2 to fuels with molecular flux generation

https://www.nature.com/articles/s41467-025-56106-3
1•PaulHoule•1h ago•0 comments

Staff laid off at King will be replaced by AI tools they helped to create

https://www.gamesindustry.biz/sources-suggest-that-staff-laid-off-at-king-will-be-replaced-by-ai-tools-they-helped-to-create
4•teamspirit•1h ago•1 comment

The C3 Programming Language

https://c3-lang.org
1•0x54MUR41•1h ago•0 comments

The competition behind AlphaFold is at risk of shutting down

https://www.science.org/content/article/exclusive-famed-protein-structure-competition-nears-end-nih-grant-money-runs-out
1•elektor•1h ago•0 comments

Full Disclosure of Security Vulnerabilities a 'Damned Good Idea' (2007)

https://www.schneier.com/essays/archives/2007/01/schneier_full_disclo.html
2•greyface-•1h ago•0 comments

Uncontrolled File Write/Arbitrary File Creation

https://hackerone.com/reports/3250117
2•smartberry9•1h ago•0 comments

Show HN: Autopilot for Cursor IDE

https://github.com/hmldns/nautex
2•Homo__Ludens•1h ago•0 comments

Corkami/pics: File format dissections and more

https://github.com/corkami/pics
3•chubot•1h ago•0 comments

Underwriting Superintelligence

https://underwriting-superintelligence.com/
34•brdd•6h ago

Comments

brdd•6h ago
The "Incentive Flywheel" of AI: how insurance unlocks secure Al progress and enables faster AI adoption.
xmprt•5h ago
This only works if there are negative consequences faced by the insured parties when things go wrong. If all the negative consequences are faced by society and there are no regulations that impose that burden on the companies building AI, then we'll have unchecked development.
brdd•3h ago
We agree! Unchecked development could lead to disaster. Insurers can insist on adherence to best practices, incentivizing safe development. They can also clarify liability and cover most (but not all) of the risk, leaving the developer on the hook for a portion of it.
muskmusk•5h ago
I love it!

Finally some clear thinking on a very important topic.

blibble•4h ago
> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century.

I never understood this argument

as a non-USian: I'd prefer to be under the Chinese boot rather than having all of humanity under the boot of an AI

and it is certainly no reason to do everything we possibly can to try to summon a machine god

socalgal2•4h ago
> I'd rather be under the Chinese boot than having all of humanity under the boot of an AI

Those are not the options being offered. The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?

> certainly no reason to try to increase the chance of summoning a machine god

The argument is that this is inevitable. If it's possible to make AGI someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.

hiAndrewQuinn•4h ago
If you financially penalize AI researchers, either with a large lump sum or in a way which scales with their expected future earnings (take your pick), and pay the proceeds to the people who put together the very cases which led to the fines being levied, you can very effectively freeze AGI development.

If you don't think you can organize international cooperation around this, you can simply put such people on some equivalent of an FBI-type Most Wanted list and pay anyone who comes forward with information, or who maybe gets them within your borders as well. If a government chooses to wave its dick around like this it could easily cause other nations to copy the same law, thus instilling a new global Nash equilibrium where this kind of scientific frontier research is verboten.

There's nothing inevitable at all about that. I hesitate to even call such a system extreme, because we already employ systems like this to intercept e.g. high-level financial conspiracies via things like the False Claims Act.
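
As a rough sketch of the incentive structure (the share percentages below loosely mirror False Claims Act qui tam awards, but treat the exact numbers as assumptions):

    # Sketch of a qui-tam-style bounty: whoever builds the case that
    # leads to the fine gets a share of the recovery. The 15%/30%
    # split loosely mirrors False Claims Act awards; the exact numbers
    # here are assumptions for illustration.
    def bounty(recovery: float, government_intervened: bool = True) -> float:
        """Whistleblower share of a recovered fine."""
        share = 0.15 if government_intervened else 0.30
        return recovery * share

    # A hypothetical $100M fine against a frontier lab:
    print(bounty(100e6))         # 15000000.0 (with government help)
    print(bounty(100e6, False))  # 30000000.0 (building the case alone)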

socalgal2•3h ago
In my world there are multiple countries who each have an incentive to win this race. I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out. You're dreaming if you think you could actually get all the players to co-operate on this. It's like expecting the world to come together on climate change. It's not happening and it's not going to happen.

Further, it doesn't take a huge lab to do it. You can do it at home. It might take longer, but there's a 1.4 kg blob in everyone's head as proof of concept, and it doesn't take a data center.

blibble•3h ago
> I know of no world where you can penalize AI researchers across international boundaries, nor any reason to believe your scenario could ever play out.

mossad could certainly do it

blibble•3h ago
> The options are under the boot of a Western AI or a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?

given Elon's AI is already roleplaying as hitler, and constructing scenarios on how to rape people, how much worse could the Chinese one be?

> The argument is that this is inevitable.

which is just stupid

we have the agency to simply stop

and certainly the agency to not try and do it as fast as we possibly can

socalgal2•3h ago
"We" do not as you can not control 8 billion people
blibble•3h ago
it's certainly not that difficult to imagine international controls on fab/DC construction, enforced by the UN security council

there's even a previous example of controls of this sort at the nation state level: those for nuclear enrichment

(the cost to perform uranium enrichment is now less than building a state of the art fab...!)

as a nation state (not facebook): you're entitled to enrich, but only under the watchful eye of the IAEA

and if you violate, then the US tends to bunker-bust you

this paper has some ideas on how it might work: https://cdn.governance.ai/International_Governance_of_Civili...

mattnewton•3h ago
> we have the agency to simply stop

This is worse than the prisoner’s dilemma: the “we get there, they don’t” outcome is the highest payout for the decision makers who believe they will control the resulting super intelligence.
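
A toy payoff matrix makes the dominance concrete; the numbers below are purely illustrative assumptions, chosen only to reproduce that ordering:

    # Payoffs as perceived by decision makers who believe they would
    # control the resulting superintelligence. Entries are (us, them);
    # every number is a made-up assumption for illustration.
    PAYOFFS = {
        ("race", "race"): (-5, -5),   # both rush: shared catastrophe risk
        ("race", "stop"): (10, -10),  # "we get there, they don't": top payout
        ("stop", "race"): (-10, 10),  # they get there first: worst outcome
        ("stop", "stop"): (2, 2),     # coordinated restraint: modest and safe
    }

    def best_response(their_move):
        """Our payoff-maximizing move, given their move."""
        return max(("race", "stop"), key=lambda ours: PAYOFFS[(ours, their_move)][0])

    # Racing dominates whatever the other side does, even though
    # ("stop", "stop") beats ("race", "race") for both players.
    assert best_response("race") == "race"
    assert best_response("stop") == "race"

Under those assumed payoffs, unilateral restraint is never a best response, which is exactly the trap.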

MangoToupe•2h ago
> The options are under the boot of a Western AI or a Chinese AI.

This seems more like fear-mongering than anything based on reasoning I've been able to follow. China tends to keep control of its industry, unlike the US, where industry tends to control the state. I emphatically trust the Chinese state more than our own industry.

gwintrob•4h ago
I'm biased because my company (Newfront) is in insurance but there are a lot of great points here. This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."

There's a mega trend of value concentrating in AI (and all the companies that touch/integrate it). Makes a ton of sense that insurance premiums will flow that direction as well.

blibble•3h ago
> This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."

and by 2040 it will be $5000 trillion!

and by 2050 it will be $5000000 quadrillion!

gwintrob•3h ago
Ha, of course. A lot easier to forecast in a spreadsheet than actually make this happen. Based on the progress in AI in the past couple years and the capabilities of the current models, would you bet against that growth curve?
blibble•3h ago
yes, there's not $5 trillion of dumb money spare

(unless softbank has been hiding it under their mattress)

choeger•3h ago
Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?

So far, I have seen language models that, quite impressively, translate between different languages, including programming languages and natural-language specs. Yes, these models draw on vast (compressed) knowledge from pretty much all of the internet.

There are also chain-of-thought models, yes, but what kind of actual intelligence can they achieve? Can they formulate novel algorithms? Can they formulate new physics hypotheses? Can they write a novel work of fiction?

Or aren't they actually limited by the confines of what we as a species already know?

roenxi•3h ago
You seem to be part of a trend where most humans are defined as unintelligent - there are remarkably few people out there capable of formulating novel algorithms or physics hypotheses. Novelists are a little more common, if we count the unreadable slop produced by people who really should have chosen careers other than writing. It speaks to the progress machines have made that traditional tests of intelligence, like holding a conversation or doing well on an undergraduate-level university test, apparently no longer measure anything of importance related to intelligence.

If we admit that even relatively stupid humans show some level of intelligence, then as far as I can tell we've already achieved artificial intelligence.

yahoozoo•2h ago
> Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?

no

Animats•3h ago
For this to work, large class actions are needed. If companies are liable for large judgements, companies will insure against them. If not, companies will not try to avoid harms for which they need not pay.
janalsncm•3h ago
> As insurers accurately assess risk through technical testing

If that’s not “the rest of the owl” I don’t know what is.

Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.

1. The premium one should pay depends on the expected risk, which is damage from the event divided by the chance of event occurring. However, quantifying the numerator is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, damage might be destruction of all of humanity, if we believe the doomers.

2. Similarly, the denominator is basically impossible to quantify. What is the chance of an event which has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, causing companies to take even bigger risks.

3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.

brdd•3h ago
Thanks for the thoughtful response! Some replies:

1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets should price and carry the first $10B+ before the government backstop. That incentivizes them to price and manage it.

2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.

3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits/standards would help reduce catastrophes, in turn reducing many of the existential risks.
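
To make the layering in (1) and (2) concrete, here is a minimal sketch; the $10B attachment point is from point 1 above, while the 10% developer copay is an assumed illustrative figure:

    # Minimal sketch of layered coverage: developer copay first, then a
    # private insurance layer up to a $10B attachment point, then a
    # government backstop. The copay rate is an assumption.
    PRIVATE_LAYER_LIMIT = 10e9   # first $10B carried by private markets
    DEVELOPER_COPAY_RATE = 0.10  # assumed copay to manage moral hazard

    def allocate_loss(total_loss):
        """Split a realized loss between developer, insurers, and backstop."""
        copay = total_loss * DEVELOPER_COPAY_RATE
        remaining = total_loss - copay
        private = min(remaining, PRIVATE_LAYER_LIMIT)
        backstop = remaining - private
        return {"developer": copay, "private": private, "government": backstop}

    # A hypothetical $50B catastrophe under these assumptions:
    print(allocate_loss(50e9))
    # {'developer': 5000000000.0, 'private': 10000000000.0, 'government': 35000000000.0}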

janalsncm•33m ago
What you are saying makes sense for conventional harms like non-consensual deepfakes, hallucinations, Waymo running pedestrians over, etc.

However, those are a far cry from the much more severe damages that superintelligence could enable. All of the above are harms that could already occur with current technology. Are you saying we have superintelligence now?

If not, your idea of selling superintelligence insurance hinges on anyone's ability to price this kind of risk: an infinitely large number multiplied by an infinitely small one.

(I realize my explanation above was wrong; it should be the product of the two numbers.)

I think many readers will also take issue with your contention that the private market is able to price these kinds of existential risks. Theoretically, accurate pricing would enable bioweapons research. However, the potential fallout from a disaster is so catastrophic that the government simply bans the activity outright.

bvan•1m ago
Not to detract from your argument, but expected risk is the expectation of the loss: loss multiplied by the probability of said loss.
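
In symbols, summing over possible outcomes (just the standard definition, to pin down the correction):

    E[L] = \sum_i p_i \, \ell_i   % expected loss: probability times loss, summed
    E[L] = p \cdot \ell           % single catastrophic event with probability p

The two numbers multiply; nothing is divided.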
yahoozoo•3h ago
With no skin in the game, either superintelligence happens, which will be cool, or it doesn't and I just get to enjoy some schadenfreude. Either all of these people are geniuses or they're Jonestown members.
evertedsphere•2h ago
> But we don’t want medical device manufacturers or nuclear power plant operators to move fast and break things. AI will quickly get baked into critical infrastructure and could enable dangerous misuse.

nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers aware both of their responsibilities and of the long jail term that awaits them if they neglect them

this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant

sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever

but why is it a foregone conclusion that people are going to (implicitly rightly so given the framing lets it pass unquestioned!) put llms into things that materially affect my life on the level of it ending due to a stopped heart or a lethal dose of radiation

bwfan123•1h ago
> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century

I stopped reading after this. First, there is no evidence of Superintelligence nearing, or even any clear definition of what "Superintelligence nearing" means. This is the classic "assume the sale" gambit, with fear-mongering as its appeal.

lowsong•1h ago
This article is a bizarre mix of center-right economic ideas and completely unfounded assumptions about the nature of AI technology, to the point where I'm genuinely not sure if this is intended as parody or not.

> We’re navigating a tightrope as Superintelligence nears.

There is no evidence we're anywhere near "superintelligence" or AGI. There is no evidence any AI tools are intelligent in any sense, let alone "superintelligent". The only reference for this, given much later, is https://ai-2027.com/, which is no more than fan fiction. You might as well have cited Terminator or The Matrix as evidence.

The only people actually claiming any advancement towards "superintelligence" or "AGI" are those who directly financially gain from people thinking it's right around the corner.

> If the West slows down unilaterally, China could dominate the 21st century.

Is this casual sinophobia intended to appeal to a particular audience? I can't see what purpose this statement, and others like it, serves other than to try to frame this as "it's us or them".

> Faster than regulation: major pieces of regulation, created by bureaucrats without technical expertise, move at glacial pace.

This is a very common right-wing viewpoint: that regulation, government oversight, and "red tape" are unacceptable to business, forgetting that building codes, public safety regulations, and workers' rights all stem directly from government regulation. The article goes out of its way to frame this as obvious, like a simple fact unworthy of introspection.

> Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.

There is no evidence this is the case, and no citation is even attempted.

janalsncm•13m ago
> The only reference for this, given much later, is to https://ai-2027.com/ which is no more than fan fiction.

There are certainly pretty gaping holes in its logic, but it's more than a fanfic. I'm a bit confused about its authors' incentive to put their names on it: if they're wrong they lose credibility, and if they're right I'm not sure they'll be able to cash in on the upside.