frontpage.

Ask HN: Do you have advice for a new senior software engineer?

1•EchelonSix•2m ago•0 comments

The Hunt for Huntington's

https://nehalslearnings.substack.com/p/the-hunt-for-huntingtons
1•nehal96•5m ago•0 comments

Ex-CDC director talks about why she was fired

https://www.nature.com/articles/d41586-025-03179-1
2•rntn•6m ago•0 comments

First Commercial Tattoo Robot – Blackdot Aero Tested and Explained [video]

https://www.youtube.com/watch?v=jy7DiGKyU-s
1•CharlesW•8m ago•0 comments

Persona Injection: LLM context management experiment and model's self-analysis

2•sthf•9m ago•0 comments

US soccer is in the midst of a stadium boom, each with realistic ambitions

https://www.theguardian.com/football/2025/sep/24/us-soccer-is-in-the-midst-of-a-stadium-boom-each...
1•PaulHoule•10m ago•0 comments

Real Analysis

https://ocw.mit.edu/courses/18-100a-real-analysis-fall-2020/
1•ibobev•10m ago•0 comments

The Answer (1954)

https://sfshortstories.com/?p=5983
2•dash2•11m ago•0 comments

Show HN: Self-hosted API for CRUD-ing JSON data

https://timokats.xyz/pages/emmer.php
3•tiemster•12m ago•2 comments

Dhyacnolepichichi Gmail.com

https://blog.cloudflare.com/payload-cms-workers/
1•dhyacno•13m ago•0 comments

We can generate 15-hour training courses in under 20 minutes

1•ESibio•13m ago•1 comments

Dropping Trust in US Media

https://news.gallup.com/poll/695762/trust-media-new-low.aspx
7•DaveZale•14m ago•0 comments

Show HN: Tail, a terminal dashboard for your Pipecat voice agents

https://github.com/pipecat-ai/tail
1•aconchillo•14m ago•0 comments

Jules Tools: A Command Line Companion for Google's Async Coding Agent

https://developers.googleblog.com/en/meet-jules-tools-a-command-line-companion-for-googles-async-...
1•meetpateltech•15m ago•0 comments

Installing NixOS on Raspberry Pi 4

https://mtlynch.io/nixos-pi4/
1•welovebunnies•16m ago•1 comments

Signing Party – anecdotes about the making of the original Macintosh

https://www.folklore.org/Signing_Party.html?sort=date?sort=date
1•speckx•16m ago•0 comments

I Keep Blogging with Emacs

https://entropicthoughts.com/why-stick-to-emacs-blog
10•ibobev•17m ago•0 comments

The Pentagon Press Gears Up for a Fight

https://www.cjr.org/news/pentagon-press-corps-hegseth-pledge.php
3•aspenmayer•18m ago•2 comments

DOS Game Club

https://www.dosgameclub.com/
4•ibobev•20m ago•0 comments

Datafusion-Postgres: Postgres protocol adapter for datafusion query engine

https://github.com/datafusion-contrib/datafusion-postgres
1•todsacerdoti•21m ago•0 comments

In, Val, Out – I/O with a val in the middle

https://blog.val.town/in-val-out
1•stevekrouse•22m ago•0 comments

Steam on Linux Use Up 1% from Last September

https://www.phoronix.com/news/Steam-September-2925
6•Bender•22m ago•0 comments

Ask HN: Do you use and like any AI browser features?

1•cjbarber•23m ago•0 comments

Publishers with AI licensing deals have seven times the clickthrough rate

https://pressgazette.co.uk/comment-analysis/publishers-with-ai-licensing-deals-have-seven-times-t...
2•aspenmayer•23m ago•1 comments

Signed Programs and Other BPF Changes Merged for Linux 6.18

https://www.phoronix.com/news/Linux--6.18-BPF
1•rbanffy•23m ago•0 comments

Linux 6.18 Kbuild Brings an Optimization for Gen_init_CPIO on Btrfs or XFS

https://www.phoronix.com/news/Linux-6.18-Kbuild
1•Bender•23m ago•0 comments

How I Read

https://www.henrikkarlsson.xyz/p/how-i-read
3•Curiositry•23m ago•0 comments

Ask HN: How hard would it be to make all the Waymo cars in a city stop at once?

1•phoenixhaber•23m ago•1 comments

Satya Nadella appoints a new CEO to run Microsoft's biggest businesses

https://www.theverge.com/news/789558/microsoft-ceo-commercial-judson-althoff-internal-memo
1•thewebguyd•24m ago•1 comments

Email immutability matters more in a world with AI

https://www.fastmail.com/blog/not-written-with-ai/
2•brongondwana•24m ago•0 comments

Y'all are over-complicating these AI-risk arguments

https://dynomight.net/ai-risk/
32•bobbiechen•1h ago

Comments

baggachipz•1h ago
Good thing our current "AI" is a fancy guessing algorithm, and not an alien with 300 IQ. This renders the argument moot.
cwillu•1h ago
Nobody is seriously claiming that current language models are AI in the context of ai-risk; you can tell because the argument is always “In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive”
delusional•1h ago
Oh, but they very much are.
baggachipz•1h ago
username checks out
eatsyourtacos•1h ago
>is a fancy guessing algorithm

That's literally what our brains are so I'm not sure what argument you are actually trying to make..

sim7c00•1h ago
Guessing tokens (or something similar); I think humans grasp at more than one type of straw.

Edit: No, OK, I get you. Ensemble learning is a thing, of course. Maybe me and the other poster reasoned too much from AI == model, but of course you combine them these days, which gets closer to human-like levels of guessing. (Not nearly enough models now, of course.)

baggachipz•1h ago
We don't have IQs of 300. Would you seriously consider an LLM the same as a human brain?
esafak•1h ago
https://www.trackingai.org/home

Without doubt, LLMs know more than any human, and can act faster. They will soon be smarter than any human. Why does it have to be the same as a human brain? That is irrelevant.

ACCount37•37m ago
They are implemented on an entirely different substrate. But they are very similar in function.

The training process forces this outcome. By necessity, LLMs converge onto something of a similar shape to a human mind. LLMs use the same type of "abstract thinking" as humans do, and even their failures are amusingly humanlike.

RamtinJ95•1h ago
What? Who has made this claim, what is the evidence? I don’t think this is true at all.
mikestew•1h ago
I suspect you missed this part of the argument, then:

In the next few decades, it’s entirely possible that AI with an IQ of 300 will arrive. Really, that might actually happen.

If you want to argue against that point, feel free. But to ignore that is to be unnecessarily dismissive.

qsort•1h ago
Yeah, and?

There are two separate conversations, one about capabilities and one about what happens assuming a certain capability threshold is met. They are p(A) and p(B|A).

I myself don't fully buy the idea that you can just naively extrapolate, but mixing the two isn't good thinking.
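To make the decomposition above concrete (illustrative numbers only, not from the thread): P(catastrophe) = P(A) × P(B | A), where A is "the capability threshold is met" and B is "catastrophe follows". For example, P(A) = 0.1 and P(B | A) = 0.5 would give a combined risk of 0.05. Arguing about the first factor and arguing about the second are separate debates.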

dudeinjapan•1h ago
Bayes got us into this whole mess to begin with.
NoahZuniga•1h ago
I don't agree. Just because AI doesn't have 300 IQ now doesn't mean it's impossible that it will get there in 30 years.

Do you think there's at least a 1% chance that AI will get this smart in the next 30 years? If so, surely applying this analogy helps you think about the possible consequences.

gjsman-1000•1h ago
I think even this argument is over-complicated.

My primary argument is human nature: If you give people the lazy way to accomplish a goal, they will do it.

No amount of begging college students to use it wisely is going to convince them. No amount of begging corporate executives to use it wisely is going to convince them. No amount of begging governments to use it wisely is going to convince them. When this inevitably backfires with college students who know nothing, corporate leaders who have the worst code in history, and governments who spent billions on a cat video generator, only then will they learn.

Terr_•1h ago
I often compare the AI (esp. LLM) risks to asbestos.

There are a small set of situations where it is invaluable, but it's going to get misused and embedded in places where it causes subtle damage for years, and then it'll cost a lot to fix.

pixl97•38m ago
Again, this is not the particular problem space of arguments being addressed here. This is one possible outcome: the "AI never reaches 300 IQ" argument. That is a different argument with its own set of probabilities and outcomes. Out of all possible outcomes it's not actually a bad one. People do some dumb crap and we eventually get over it.

300 IQ AI is close to a worst possible scenario, especially if it's a fast takeoff. Humans, being lazy, will turn everything over to it, and the AI will likely handle it very well for some time. As long as the AI decides to keep us around as pets we'll probably be fine, but the moment it needs some extra space for solar panels we will find ourselves in trouble.

Spivak•1h ago
I like the distillation of aliens, but I think that undersells the risk, because aliens are individuals with their own goals and motivations. It's more like robots with 300 IQ who unquestionably obey the person or group that made them, even when they're serving others. And look, the 300 IQ thing isn't even a major point of the argument. The fact that the robots, by virtue of being machines, naturally have capabilities humans lack is enough. That they're smart enough to carry out complex tasks unattended is more than enough to cause harm on a massive scale.

The problem then isn't really the AI, the robots are morally and ethically neutral. It's the humans that control them who are the real risk.

pixl97•45m ago
No, what you're stating is actually a different but dangerous problem. That is the smart but subservient AI serving Dr. Evil problem.

The issue talked about here looks similar but is different.

That is the AI that is not subservient (or is only faking subservience) and has its own motivations. The fact that they are 300 IQ means you may very well not understand harm is occurring until it's far too late.

>The problem then isn't really the AI, the robots are morally and ethically neutral.

Again, no. This isn't about AI as a sub-agent. This is about AI becoming an agent itself capable of self learning and long term planning. No human controls them (or they have the false belief they control them).

Both problems are very harmful, but they are different issues.

BeetleB•1h ago
The whole article seems like a strawman.

I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs. And most people who were worried 2 years ago are much less worried.

And a better scenario is Aliens with IQ of 300 are coming. And they will all be controlled by the [US|Russian|Israeli|Hamas|Al-Qaeda|Chinese] government.

Edit: To be clear, I was referring to people I personally know. Sure, lots of people out there are terrified of lots of things - religious fanaticism, fluoride in the water, AI apocalypse.

And "huge economic disruption" is not "AI taking over humanity". I'm interpreting the article's take on AI doing damage as one where the AI is in control, and no human can stop it. Currently, for each LLM out there, there are humans controlling it.

reducesuffering•1h ago
Not one person?

Here's Sam Altman, Geoffrey Hinton, Yoshua Bengio, Bill Gates, Vitalik Buterin, Demis Hassabis, Ilya Sutskever, Peter Norvig, Ian Goodfellow, and Rob Pike:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

https://en.wikipedia.org/wiki/Statement_on_AI_Risk

hueho•1h ago
It's amusing that this is not a summary - it's the entire statement. Please trust these tech leaders, who may or may not have business interests in AI, that it can become evil or whatever, so that regulatory capture becomes easier, instead of pointing out the dozens of other issues about how AI can be (and is already being) negatively used in our current environment.
reducesuffering•1h ago
Bengio is a professor and Hinton quit Google so that he could make warnings like this.

And this is just to highlight that there are clearly many familiar people expressing "worry about AIs taking over humanity" as per GP.

There are much more in depth explanations from many of these people.

What's actually amusing is skeptics complaining about $-incentives to people warning about dangers as opposed to the trillion dollar AI industry: Google, Meta, Nvidia, Microsoft, all of VC, trying to bring it about. Honestly the $ is so lopsided in the other direction. Reminds me of climate change, all the "those people are just in the renewable energy industry lobby"...

hueho•22m ago
But the trillion dollar industry also signed this statement, that's the point - high-ranking researchers and executives from these companies signed the letter. Individually these people may have valid concerns, and I'm not saying all of them have financial self-interest, but the companies themselves would not support a statement that would strangle their efforts. What would strangle their efforts would be dealing with the other societal effects AI is causing, if not directly then by supercharging bad (human) actors.
BeetleB•50m ago
You really think this worries Sam Altman?

I actually agree that mitigation of AI risk should be studied and pursued. That's different from thinking the AIs will take over.

Most of the worries I've heard from Geoff (and admittedly it was in 1-2 interviews) are related to how AI will impact the economic workforce, and that the change may be so disruptive as to completely change our way of living, and that we are not prepared for it. That's much milder than "AI taking over humanity". And it's definitely not any of the following:

> Due to alignment difficulty and orthogonality, it will pursue dangerous convergent subgoals.

> These will give the AI a decisive strategic advantage, making it uncontainable and resulting in catastrophe.

The economic damage will not be due to AI, but due to the humans controlling it (OpenAI, Anthropic, etc), and due to capitalism and bad actors.

Even in the interview I heard from Geoff, he admitted that the probability he assigns to his fears coming true is entirely subjective. He said (paraphrased): "I know it's not 0%, and it's not 100%. It's somewhere in between. The number I picked is just how I feel about it."

Finally, that statement was in 2023. It's been 2 years. While in many ways AI has become much better, it has mostly only become better in the same ways. I wonder how worried those people are now.

To be clear, I'm not saying I think AI won't be a significant change, and it may well make things much worse. But "AI taking over humans"? Not seeing it from the current progress.

reducesuffering•20m ago
> You really think this worries Sam Altman?

Yes.

"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

He's had more recent statements along these lines. But personally, I believe his fault is that he thinks careening towards this is inevitable, and that the best thing to do, given the wildly diverging likely outcomes, is just to hope the emerging intelligence will come up with the alignment itself.

On Hinton: "I actually think the risk is more than 50%, of the existential threat."

https://www.reddit.com/r/singularity/comments/1dslspe/geoffr...

mmmore•1h ago
If you have not heard of one person worried about AIs taking over humanity, you're really not paying attention.

Geoff Hinton has been warning about that since he quit Google in 2023. Yoshua Bengio has talked about it, saying we should be concerned in the next 5-25 years. Multiple Congresspeople from both parties have mentioned the risk of "loss of control".

davidcbc•1h ago
There's a group of people who have reinvented religion because they're afraid of an AI torturing them for eternity if they don't work on AI hard enough. It's very silly but there are many people who actually believe this is a risk: https://en.wikipedia.org/wiki/Roko's_basilisk
hollerith•1h ago
You are cherry picking the single most absurd event in a history of over 20 years of public discussion of AI catastrophic risk.

Only about .0003 of all public discussion of AI catastrophic risk over those 20 years has involved Roko's basilisk in any way.

I don't know of anyone worried about AI who is worried mainly because of the basilisk.

Next you'll mention Pascal's Mugging, which likewise is the main worry of exactly zero of the sane people worried about AI -- and (despite the representations of at least one past comment on this site) was never even a component of any argument for the dangerousness of continued "progress" in AI.

davidcbc•58m ago
So you agree that there is more than "a single person worr[ied] about AIs taking over humanity."

I was specifically pointing out how absurd the most ridiculous people in that category are

hollerith•52m ago
There are literally tens of thousands of people -- many with Silicon Valley jobs -- many who are or have been machine-learning researchers -- worried about AIs' permanently dis-empowering or extincting humanity.

What is your purpose in cherry-picking the most absurd dialog between two of those people (Roko Mijic and Eliezer Yudkowsky), a dialog that happened 15 years ago? I hope it isn't because you are trying to prevent people from listening to the less absurd arguments for catastrophic risks from continued "progress" in AI?

davidcbc•36m ago
Because it is still relevant today: https://en.wikipedia.org/wiki/Zizians
hollerith•3m ago
Till now there was a decent chance that you're a "tourist" motivated by idle curiosity who took an interest in Roko's basilisk and maybe wants to discuss it a little. Your latest comment, though (since it comes after I asked you about it), makes it much more likely you're trying to shut down discussion of the entire topic of catastrophic risk from AI. Next you'll bring up that time 17 years ago when a really hot 17-year-old girl showed up at an AI-catastrophic-risks event and maybe had sex with one or maybe even two of the men there, which of course means that almost everybody at that event and probably most people in the entire community are pedophiles.
BeetleB•59m ago
I was referring to people I personally know.

The existence of crazy/anxious people in the world is well established, and not in dispute.

hollerith•1h ago
I am worried about AIs' taking over humanity.

In fact I think it is likely to happen absent some drastic curtailing of the freedoms of the AI labs, e.g., a government-enforced ban on all training of very large models and a ban on publication and discussion of algorithmic improvements.

tasuki•11m ago
> I have not yet heard one person worry about AIs taking over humanity. They're worried about their jobs.

We all live in our bubbles. In my bubble, people find it more interesting to talk about the bigger picture than about their job.

charcircuit•1h ago
An alien is not the same thing as a computer program which can be easily terminated.
ACCount37•1h ago
OpenAI couldn't even "terminate" GPT-4o, and that thing certainly wasn't superintelligent.

Don't expect an ASI to go down as easily as 4o did.

reducesuffering•1h ago
A 300 IQ alien will have outmaneuvered you so well, it wouldn't even let you get to a point where you think you might have to turn it off.
pixl97•53m ago
Think for at least 5 seconds before typing.

Humanity already is integrating these into systems where they cannot be easily terminated. Think infrastructure and military systems.

And in this case we're talking about a system that's smarter than you. It will become part of vital systems like electricity generation and distribution, where deciding to shut it off means making a trade-off about how much of your economy, and how many of the people in it, you're going to kill.

And that's not even taking into account future miniaturization, where we could end up with ASI on small/portable devices.

AlexandrB•47m ago
It's worse than that for AI. Like crypto before it, it depends on an entire planet of complex and vulnerable infrastructure that makes it possible. Everything from semiconductor manufacturing to power generation to intercontinental internet links are required for AI to "exist" in a given geography. That's not to mention the huge power requirements and all the infrastructure just to get that. Dark ages style societal collapse or a world war would end AI in a matter of months.
pixl97•36m ago
>Dark ages style societal collapse or a world war would end AI in a matter of months.

I mean, in that case 9/10ths of humanity is likely dead too. The 20th century broke the rather anti-fragile setup that humanity had and set up a situation where our daily living requires working transportation networks for almost everything.

nemomarx•1h ago
Am I missing something, or is the alien thought experiment not obviously scary? There's only 30 of them, they don't reproduce much faster, and they don't have a tech advantage. This seems like it has some potential concerns, but not existential, civilization- and humanity-ending problems, and I'm not even worried those aliens will take my job.

AI is riskier than that in a lot of ways, so it doesn't scan to me as a good thought experiment.

Joker_vD•1h ago
Surely those 30 in a single ship are not their entire species and technology? Right? So yeah, kinda alarming.
nemomarx•1h ago
But it says more ships aren't coming. I would definitely be worried if more were coming and we might get outcompeted or turned into a colony or whatever, but the author goes out of their way to cut off those parts, so what's scary?
alwa•57m ago
Ah, but what happens when somebody rich and powerful decides the aliens’ advantages will serve their purposes, and stands up an industrial-scale cloning operation?

There are only so many base models to date, right? With limited and somewhat ambiguous utility, and no real reason to impute intention to them.

Still, in the short time since they’ve arrived, their existence has inspired the people with money and power to geopolitical jousting, massive financial investment, and spectacular industrial enterprise—a nuclear renaissance and a “network of data centers the size of Manhattan” if I remember correctly?

The models might well turn out to be just, you know—30 kinda alien but basically banal digital humanoids, with a meaningful edge on only a few dimensions of human endeavor—summarization, persuasion, retrieval, sheer volume of output.

Dynomight’s metaphor seems to me like a useful way to think about how a lot of the dangers lie in the higher-order effects: the human behavior the technology enables, rather than the LLM itself exercising comprehensive intelligence or agency or villainy or whatever.

nemomarx•48m ago
Oh, if you include the aliens being cloned by the rich instead of reproducing at a basically human rate, then yeah, I agree that is scarier
ACCount37•18m ago
They will have a tech advantage.

You fast forward 10 years and find that your new laptop is Alienware. Because, it turns out, the super smart aliens are damn good at both running businesses and designing semiconductors, and, after a series of events, the aliens run Dell. They have their own Alien Silicon (trademarked), and they're crushing their competitors on price-performance.

And that's not the weirdest thing that could have happened. Corporate aliens are weird enough, but they could have gotten themselves involved in world politics instead!

softwaredoug•1h ago
I think we should 5% be worried about AI safety, and 95% be worried about climate change. Despite all the progress in green energy, every year brings record high carbon emissions and record temperatures. It's possible we'll upset planetary systems and create millions (billions?) of migrants, upending global politics, and driving more countries to ethno-nationalist authoritarianism.

Not saying AI safety issues won't happen, but I just think we have far bigger fish to fry. To me, AI power consumption is more worrisome than safety per se.

NoahZuniga•1h ago
So if we should spend (as in actually spend, or as in not choosing the cheapest option but instead choosing the green option) say $300 billion on climate change, we should spend $15 billion on AI-risk?
ACCount37•1h ago
The other way around. Climate change can have its 5%.

The reason is that climate change is simply not an extinction risk.

It has a considerable death and suffering potential - but nowhere near the ridiculous lethality of "we fucked up and now there's a brand new nonhuman civilization of weird immaterial beings rising to power right here on Earth".

If climate change were the biggest risk humanity was facing, things would be pretty chill. Unfortunately, it isn't.

softwaredoug•1h ago
IMO, realistically, AI safety isn't about killer robots. It's about another Therac-25 incident because someone vibe-coded a radiology machine and didn't know how the code worked.

Or someone gave an agent insane levels of permissions to use a tool that impacts the physical world, and the agent started pressing dangerous buttons during a reasoning loop (not because it has intent to kill humans)

There are a bunch of mundane AI Safety risks that don't have to do with robots taking over.

ACCount37•48m ago
"Boring" AI safety is not a major risk. It's basically an extension of "humans can be reckless and incompetent" - a threat faced by human civilization since before recorded history. There's a very limited amount of people a Therac 25 can kill. Even Bhopal disaster has only caused this much harm.

Now, an AI that can play the game of human politics and always win, the way a skilled human can always win against the dumb AI bots in Civilization V? There is no upper bound on how bad that can go.

catapart•1h ago
> your argument is overcomplicated

> lets open the can of worms that is "IQ"

Like...is this a bit? I'm missing a joke, right?

NoahZuniga•1h ago
You shouldn't think of these aliens as literally having 300 IQ, the author is just using that as a simple way to communicate the idea that these beings are really smart. You might prefer reading this article by replacing 300 IQ with "2x as smart as the smartest human".

Sidenote: Personally I don't like that you're using > ... with text that does not actually appear in the article.

breppp•1h ago
This metaphor is infected with irrelevant symbolism.

Alien invasion is linked to mass slaughter in human culture, while aliens are non-human creatures with some monster-like qualities.

The author takes all that symbolic load and adds it to something completely unrelated. That's unconvincing as an argument

ACCount37•40m ago
Aliens don't have to be hostile. They might be benign.

Would you stake the entirety of humankind on that "might"?

breppp•29m ago
Yes; however, that metaphor is uninteresting because aliens are not AI. Aliens were not introduced to Earth by humans or created by humans.

There might be real risk with AI, but taking the symbolism of an event that never happened does not help with understanding it.

If you want a more similar example: what if I told you humans had the power to destroy the entire planet and have given that power to popularly elected politicians? That's pretty alarming, and now that's something to compare to AI (in my opinion AI is less risky)

gregates•1h ago
Anyone who's watched an episode of Star Trek can recognize that, if you give non-zero probability to the chance that we develop artificial intelligence smarter than us, that carries some risk.

The part of the argument that people disagree with is what we should do about that, and there it can actually matter what numbers you put on the likelihoods of different outcomes.

It's the conclusion "We should dump trillions of dollars into AI research, something something, less risk" that people disagree with. Not the premise.

reducesuffering•56m ago
> Not the premise.

Literally this thread shows that there are many people who refuse to accept the premise of any risk.

simpaticoder•1h ago
The current risk of AI is the elimination of any (screen) job that requires a person of average intelligence. Given that LLMs are the sum-total of human output, it makes sense that they might behave much like an average person. (Better since they do not sleep or get bored; worse because they are not embodied; unclear because they have no emotions or consciousness.) But what we have today will undermine the (screen-based) job opportunities for everyone at or below average intelligence, which is 50% of the human population. This is not the existential risk of super-AGI but it's here now and will hurt a lot of people. This lesser but more real risk is of much higher priority than unbounded, self-improving AGI. The OP's metaphor might be extended to be 1 billion immortal aliens with 100 IQ but who have no sense of self or personal autonomy and willingly work as slaves.
dinobones•1h ago
You are still overcomplicating it.

300 IQ in a vacuum gets you nothing. You need some type of status/power/influence in the world to have impact.

I think the previous "world record" holder for IQ is actually just a pretty normal guy: https://en.wikipedia.org/wiki/Christopher_Langan.

Just because AI is/can be super intelligent ("300 IQ"), doesn't mean it can impact or change the world.

Most startups are made of "high IQ" intelligent people trying very hard to sell basic $20/month SaaS subscriptions, and yet they can't even achieve that and most fail.

My biggest counterargument to AI safety risk is that it's not the AI that will be the issue. It will be the applied use of AI by humans. Do I think GPT6 will be mostly harmless? Yeah. Do I think GPT6 embodied as a robo cop would be mostly harmless? No.

Instead of making these silly arguments, we should be policing the humans that try to weaponize AI, and not stagnate the development of it.

ed•1h ago
Nit: if you read the Wikipedia link it’s clear that guy has no claim to the high IQ record.

> It later transpired that Langan, among others, had taken the Mega Test more than once by using a pseudonym. His first test score, under the name of Langan, was 42 out of 48 and his second attempt, as Hart, was 47.[12] The Mega Test was designed only to be taken once.[14][15] Membership of the Mega Society was meant to be for those with scores of 43 and upwards.

NoahZuniga•57m ago
From the wikipedia article:

> Asked what he would do if he were in charge, Langan stated his first priority would be to set up an "anti-dysgenics" project, and would prevent people from "breeding as incontinently as they like."[26]: 18:45 He argues that this would be to practice "genetic hygiene to prevent genomic degradation and reverse evolution" owing to technological advances suspending the process of natural selection

> just a pretty normal guy

... that also believes in eugenics?

Edit:

Oh also:

> Langan's support of conspiracy theories, including the 9/11 Truther movement, as well as his opposition to interracial relationships, have contributed to his gaining a following among members of the alt-right and others on the far right.[27][28] Langan has claimed that the George W. Bush administration staged the 9/11 attacks in order to distract the public from learning about the CTMU, and journalists have described some of Langan's Internet posts as containing "thinly veiled" antisemitism,[27] making antisemitic "dog whistles",[28] and being "incredibly racist".[29]

ACCount37•29m ago
Today's AI systems are deployed in a way that allows them to directly access millions of users.

If you think that's not enough of an "in" to obtain status, power and influence, you aren't thinking about it long enough.

GPT-4o has managed to get enough users to defend it that OpenAI had to bring it back after shutting it down. And 4o wasn't IQ 300, or coordinating its actions across all the instances, or even aiming for that specific outcome. All the raw power and influence, with no superintelligence to wield it.

dinobones•10m ago
I think your anthropomorphization of GPT-4o is pretty generous.

Vanilla WoW was also discontinued in 2006, and somehow players got Blizzard to bring it back in 2019.

Does that mean that vanilla WoW is a 300 IQ AGI?

To be more charitable, I get it: 4o is engaging, and lonely people like talking to it. But that doesn't actually mean that those people will carry out its will in the real world. Nor does it have the capabilities of coordinating that across conversations. Nor does it have a singular agentic drive/ambition. Because it's a piece of software.

LegionMammal978•1h ago
It would be nice if people had any better terminology than "an IQ of 300". IQ is a relative measure: it currently peaks at ~196, based solely on the current human population. Any 200+ scores you see in headlines are just numbers spat out by IQ tests with wacky tail behavior.
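As a rough sanity check on that ~196 ceiling (a minimal sketch, assuming IQ is normed to a N(100, 15) distribution and treating the "peak" as the single highest score expected among roughly 8 billion people; both assumptions are mine, not from the comment):

```python
from scipy.stats import norm

POPULATION = 8_000_000_000  # assumed current world population

# z-score whose upper-tail probability is 1 in 8 billion
z_max = norm.isf(1 / POPULATION)

# convert to the conventional IQ scale: mean 100, standard deviation 15
max_iq = 100 + 15 * z_max
print(f"z ≈ {z_max:.2f}, implied ceiling ≈ IQ {max_iq:.0f}")  # roughly IQ 195-196
```

Any score far beyond that range says more about a test's tail behavior than about the population it was normed on.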
thaumaturgy•1h ago
I was really hoping this would at last be a treatment of the most realistic risk for AI, but no.

The real risk -- and all indicators are that this is already underway -- is that OpenAI and a few others are going to position themselves to be the brokers for most of human creative output, and everyone's going to enthusiastically sign up for it.

Centralization and a maniacal focus on market capture and dominance have been the trends in business for the last few decades. Along the way they have added enormous pressures on the working classes, increasing performance expectations even as they extract even more money from employees' work product.

As it stands now, more and more tech firms are expecting developers to use AI tools -- always one of the commercial ones -- in their daily workflows. Developers who don't do this are disadvantaged in a competitive job market. Journalism, blogging, marketing, illustration -- all are competing to integrate commercial AI services into their processes.

The overwhelming volume of slop produced by all this will pollute our thinking and cripple the creative abilities of the next generation of people, all the while giving these handful of companies a percentage cut of global GDP.

I'm not even bearish on the idea of integrating AI tooling into creative processes. I think there are healthy ways to do it that will stimulate creativity and enrich both the creators and the consumers. But that's not what's happening.

Ajedi32•1h ago
> So I conjecture that this is the crux of the issue with AI-risk. People who truly accept that AI with an IQ of 300 and all human capabilities may appear are almost always at least somewhat worried about AI-risk. And people who are not worried about AI-risk almost always don’t truly accept that AI with an IQ of 300 could appear.

Correct. I think a lot of people are highly skeptical that there's any significant chance of modern LLMs developing into a superintelligent agent that "wants things and does stuff and has relationships and makes long-term plans".

But even if you accept there's a small chance that might happen, what exactly do you propose we do to "prepare" for a hypothetical that may or may not arrive and which has no concrete risks or mitigations associated with it, just a vague idea that it might somehow be dangerous in unspecified abstract ways?

There are already lots of people working on the alignment problem. Making LLMs serve human interests is big business, regardless of whether they ever develop into anything qualitatively greater than what they are. Any other currently-existing concrete problems with LLMs (hallucination, disinformation, etc) are also getting significant attention and resources focused on them. Is there anything beyond that you care to suggest, given that you yourself admit any risks associated with superintelligent AI are highly speculative?

Aunche•1h ago
I agree that the risk of 300 IQ aliens landing on Earth is roughly equivalent to the risk of 300 IQ AIs. The obvious conclusion from this is that you should take neither of those risks seriously. We have no idea what intelligent aliens are actually like; there is no way to prepare for them outside of deciding to kill them on sight (which would be incredibly difficult since they can land anywhere on Earth). We could prepare if we had a blueprint of their biology or of a related species with 50 IQ. We don't have the equivalent blueprint for true AI, so obsessing over longtermist-style "AI risk" is a waste of time.