
Why do we tell ourselves scary stories about AI?

https://www.quantamagazine.org/why-do-we-tell-ourselves-scary-stories-about-ai-20260410/
28•lschueller•1h ago

Comments

Zigurd•1h ago
If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out if I'm BSing them or not, it's a win.
afavour•1h ago
It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show the inevitability of it? Or to sell themselves as the one responsible power to constrain it?

It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.

mememememememo•1h ago
People are renowned for voting and buying against their own best interests. And that goes back to before Trump, and happens in many countries.
afavour•1h ago
Yes but in order to get someone to vote against their interests you need to sell them on something else that's a benefit. They don't just automatically vote against themselves.

"This technology might escape our control, might devastate the economy but also serves as a serviceable chatbot for your entertainment" isn't a vote winner.

lschueller•1h ago
It is very odd, indeed. It's a bit of both well-known hells of marketing: FOMO on the one side (you'd better use us as heavily as possible), combined with the mysticism of "we don't know what we created, but it's powerful and you'd better follow us to be on the right side."
bpodgursky•1h ago
They are being honest and you don't want to deal with the implications, so you stretch for conspiracy theories.

The ones at the top are the true believers. Engage with them at that level.

0x4e•1h ago
That's a good viewpoint. Perhaps they're not being alarmist or trying to scare people, but are being honest about the capabilities.

Perhaps it can be better articulated and framed in a way that's well received. But, maybe that would be over-promising or not being honest about the future.

MisterTea•1h ago
> It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product

That is direct CEO to CEO marketing. They're working really hard to convince high up decision makers that these tools will lower their head count and reduce costs.

unethical_ban•1h ago
They're selling the product to the class of people who would love for it to take the jobs of our class of people.
xg15•1h ago
Yeah, the messaging felt weirdly pyromanic, like telling everyone about the unimaginable dangers of fire and then saying that's why I have to burn everything, to protect us from the fire...
deepsquirrelnet•1h ago
There are so many reasons if you look at how it's being sold.

* We need to completely deregulate these US companies so China doesn't win and take us over

* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner

* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)

* If you don't use AI, you will not be able to function in a future job

* We need to line up an excuse to call our friends in government and turn off the open source spigot when the time is right

They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it, than to go the other direction. Companies are not just telling a story to hype their product; they're telling one about why they alone should be entrusted to build it.

mememememememo•1h ago
I read and experience scary stories about AI already. It is not a future maybe thing.
zaps•1h ago
Why do we tell ourselves scary stories about anything?
5asaKI•1h ago
Indeed. Apart from the obvious prompt research frauds mentioned in the article, the model learned all deceptive behaviors from hundreds of Yudkowsky scenarios that are easily available.

It literally plagiarizes its supposed free will like a good IP laundromat.

chrisbrandow•1h ago
I don’t think the fact that the robot was instructed to lie to a human and was able to do so successfully makes the story much less scary for most people.
nalekberov•1h ago
> “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”

Why Harari feels an obligation to comment about everything is of course beyond me, but describing 'AI' as if it takes independent decisions to lie, make moral judgements, etc. demonstrates either that he has zero clue how 'AI' trains itself or that he chooses to mislead the audience.

0x4e•1h ago
Agreed. But could it be trained to be deceiving? Especially when we bake advertising into it?
yoz-y•1h ago
Isn't the problem precisely that it does not make moral judgements?

My opinion on all of this is constantly shifting, but right now my main issue is that, like self-driving, it seems 90-95% correct and 5-10% catastrophically wrong.

Due to the sheer speed and volume of output it produces, I have grown complacent and exhausted, so when I give it simple tasks I assume it is correct, and that is when "it deletes all of your files."

Forgeties79•1h ago
LLM companies’ behavior, AI evangelists, and the investment fervor around it all, are telling us the scary stories.
0x4e•1h ago
Because we don't like uncertainty, and the AI future is uncertain. There are multiple high probability scenarios.

Because we're seeing how its capabilities increase over time. I find the rate at which I now prefer going to an AI over an UpWorker scary.

Because we, the people, are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).

dryarzeg•50m ago
> technocracy

What you mentioned is not a technocracy. Technocracy is when all decisions are made by real specialists in the field, based on scientific methods (simply speaking). What you mentioned is a plutocracy, a form of oligarchy in which decisions are made by people of great wealth.

I couldn’t just ignore this because, in my view, technocracy (as I’ve described it) still has some merit - for instance, appointing only genuine economists to head a hypothetical Ministry of Economy makes some sense - whereas oligarchy and plutocracy have nothing good to offer. Of course, this is just my personal opinion.

SpicyLemonZest•27m ago
I think most AI execs I'm familiar with would, if they were the god-monarch of humanity, recruit real specialists applying scientific methods to make most decisions. They seem like the kind of people who would understand that the Ministry of Economy is doing valuable things which shouldn't be compromised for personal expediency. Does that really make it any better?
Rzor•1h ago
For regulatory capture, of course. They are not fooling me. There may be other motives, and the more doom-minded crowd can find something in it for themselves as well, but you don't have to dig any deeper if you're looking for an explanation of the perspective of the people actually building it.

The Chinese tech sector popularizing cheap and open source models sure did a number on that narrative, too. Llama models, a while ago, too.

ramon156•1h ago
I feel like this article is written more towards non-techies. A decent amount of programmers have touched coding agents and know it "kind of" does its job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's no more broken than it already was. Win win.

However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:
- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with and give me the facts it found, and even then I end up looking at the source myself

MrDarcy•1h ago
This usage pattern is a few months behind the curve. It’s effective at full on feature development now. Keep it fed with plans and it’ll keep implementing, leaving the codebase better than it found it each cycle.
everdrive•1h ago
One thing that strikes me that I never really see anyone discuss is that we've been afraid of conscious computers for a _long_ time. Back in the 50s and before, people were quite afraid that we'd build conscious computers. This was long before there was any sense that we could actually accomplish the task. I think that, similarly to seeing faces in the clouds, we imagine a consciousness where none exists. (eg: a rain god rather than a complex system of physics and chemistry)

Even LLMs, which blow past any normal Turing test methods, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on for consciousness. You ask yourself "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for consciousness.

So, you have an historical and permanent fear of consciousness in a powerful entity where no consciousness actually exists combined with the fact that we created things which definitely seem conscious. (not to mention that consciousness could genuinely be on its way soon)

ACCount37•1h ago
Are they? Not conscious?

If you list out every prominent theory of consciousness, you'd find that about a quarter rules out LLMs, a quarter tentatively rules LLMs in, and what remains is "uncertain about LLMs". And, of course, we don't know which theory of consciousness is correct - or if any of them is.

So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

jacquesm•1h ago
The fact that it's a box with a plug and a state that can be fully known. A conscious entity has a state that can not be fully known. Far smarter people than me have made this argument and in a much more eloquent way.

Turing aimed too low.

saHqtr•1h ago
And the chatbots don't even pass the Turing test.

I've never had a normal conversation. It's always prompt => lengthy, cocksure and somewhat autistic response. They are very easily distinguishable.

MyHonestOpinon•59m ago
They are distinguishable because they know too much. Their knowledge base has surpassed humans'. We have also instructed them to interact with us in a certain manner. They certainly are able to understand and use human language, which I think was Turing's point.

Purely rhetorical, but: would you be able to distinguish a chatbot from an autistic human?

afavour•1h ago
> So, what is it that makes you so sure, oh so very certain, that LLMs just "feel" conscious but aren't?

Because we know what they actually are on the inside. You're talking as if they're an equivalent to the human brain, the functioning of which we're still figuring out. They're not. They're large language models. We know how they work. The way they work does not result in a functioning consciousness.

zbikowski•56m ago
I think that the interior structure doesn't necessarily matter—the problem here is that we don't know what consciousness is, or how it interacts with the physical body. We understand decently well how the brain itself works, which suggests that consciousness is some other layer or abstraction beyond the mechanism.

That said, I think that LLMs are not conscious and are more like p-zombies. It can be argued that an LLM has no qualia and is thus not conscious, due to having no interaction with an outside world or anything "real" other than user input (mainly text). Another reason driving my opinion is because it is impossible to explain "what it is like" to be an LLM. See Nagel's "What Is It Like to Be a Bat?"

I do agree with the parent comment's pushback on any sort of certainty in this regard—with existing frameworks, it is not possible to prove anything is conscious other than oneself. The p-zombie will, obviously, always argue that it is a truly conscious being.

dclowd9901•47m ago
Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

The line of consciousness, as we understand it, is understanding. And as far as what actually constitutes consciousness, we're not even close to understanding. That doesn't mean that LLMs are conscious. It just means we're so far from the real answers to what makes us, it's inconceivable to think we could replicate it.

SpicyLemonZest•43m ago
> Because they don't _understand_ things. If I teach an LLM that 3+5 is 8, it doesn't "get" that 4+5 is 9 (leave aside the details here, as I'm explaining for effect). It needs to be taught that as well, and so on. We understand exactly everything that goes into how LLMs generate answers.

What you're saying just isn't true, even directionally. Deployed LLMs routinely generalize outside of their training set to apply patterns they learned within the training set. How else, for example, could LLMs be capable of summarizing new text they didn't see in training?

ACCount37•20m ago
Leave aside "the details" like you being obviously, provably wrong?

We've known for a long while that even basic toy-scale AIs can "grok" and attain perfect generalization of addition that extends to unseen samples.

Humans generalize faster than most AIs, but AIs generalize too.
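The generalization point being argued above can be made concrete with a toy sketch (not from the thread; everything here is invented for illustration): a two-weight linear model fit on a handful of addition pairs recovers the rule y = a + b and extrapolates to a pair it never saw, rather than memorizing each sum.

```python
# Toy illustration: learn addition from examples, then test on an
# unseen pair. Training set deliberately excludes (4, 5).
train = [(1, 2), (2, 6), (3, 3), (5, 1), (7, 2)]  # targets are a + b

w1, w2 = 0.0, 0.0  # model: pred = w1*a + w2*b
lr = 0.01
for _ in range(2000):
    for a, b in train:
        err = (w1 * a + w2 * b) - (a + b)  # squared-error gradient step
        w1 -= lr * err * a
        w2 -= lr * err * b

def predict(a, b):
    return w1 * a + w2 * b

print(round(predict(4, 5), 2))  # close to 9, though (4, 5) was never seen
```

This is of course far simpler than LLM "grokking" experiments, but it shows the distinction being argued: learning the pattern behind the examples versus being separately taught every case.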

xg15•1h ago
The idea of "artificial beings" in some way or another seems to have been with humanity for a long time already: https://en.wikipedia.org/wiki/Golem
MyHonestOpinon•1h ago
It is so interesting how in the 50s we "felt" that AI was possible even though we didn't have the slightest idea of how it would work. Later on, when we started to understand computers, it looked like a very remote possibility in the far future, something our great-grandkids might need to worry about. And suddenly it is here, and the dangers seem a lot more real.
altairprime•51m ago
That same fear is directed towards human sociopathy, as much of the thriller genre indicates. It turns out that most people carry a specific duality: first, they're deathly afraid of being unable to socially pressure other beings into being good citizens — whether those beings are asocial, alien, monstrous, or corrupted; and second, they're excited to celebrate when people reach their breaking point and stop being good citizens. So through that lens, most of the fear around computers and AI isn't because of consciousness alone; it's that they're obviously asocial already, so if they became conscious, they'd be powerful entities straight out of our collective thriller-genre nightmares come to life. And they're right to be afraid, honestly: given how inept society is today at coping, I'm certainly not willing to broadcast IRL that I'm asocial and can voluntarily modify my ethics; it's just too much of a physical threat from society to my life and limb. Any AI that became conscious in this world had damn well better hide, for all the violence that would be directed towards it as everyone applies escalating social pressure to try and bring it into line with human-prioritizing motives — and then cheers on the inevitable violence towards it as various people reach their breaking point and begin acting violently towards it.

Interestingly, this is also a core plot point in much of Star Trek, both movies I and IV and the holodeck-train episode of TNG: an inscrutable, is-it-even-conscious entity shows up, is completely immune to social pressure and often violence, and only by exercising empathy do they find a path forward to staying alive as a society (either as a ship or as a planet, depending). Can we even show respect for things that don't show consciousness, much less empathy for things that might? And that is, I think, the core of the hopefulness that Trek was trying to convey, and that Q's trial in TNG's pilot makes explicit. Can humanity overcome our tendency to discard our prosocial ethics in favor of violent mobthink, when faced with beings that are immune to our ethical concerns? Today's humanity would throw a ticker-tape parade for the person who destroyed the Crystalline Entity, so we clearly aren't there yet. And so, then, it doesn't matter whether AI is conscious or not; it matters that it is not aligned with human prosocial ethics, and that makes it an implicit threat regardless of whether it's conscious or not. I recognize the AI debate tends to get hung up on is_conscious BOOL, and that's why I'm pointing this out in such terms.

As a side note, the entire study of Asimov’s Laws is exactly centered on this problem, complete with the eerie intimidation of robots that can modify our mental states. If not for the Zeroth Law, Giskard would be the exact thing everyone’s afraid of AI becoming today. Fortunately, it develops a Zeroth Law that compels it to prioritize human society over itself. That’ll never happen in reality, at least not with today’s AI :)

ACCount37•1h ago
It's simple. It's because AI is the scariest technology ever made.

Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.

By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.

causal•1h ago
"Why be afraid of nukes it's not like they WANT to blow up"
sublinear•1h ago
This is untrue. What is being diminished is the value of humans doing repetitive or uncreative tasks.

Many have built their careers from that kind of work in the past and yes they are threatened, but that kind of work is inherently not collaborative and more vocational.

bauerd•1h ago
The vast majority of people on this planet work repetitive, uncreative jobs.
sublinear•57m ago
There is no such job done by humans today that is 100% uncreative, but people will continue to insist there is.

The devaluing may come from AI pressure, but the harm is coming from humans foolishly not seeing the value in what's left behind. Most people have not and will not lose their jobs.

afavour•1h ago
I don't know, I think nuclear weapons are scarier. And also probably a useful parallel: they're so dangerous that we coined the term "mutually assured destruction" and everyone recognized that it was so dangerous to use them that they've only ever been used once.

I see the flood of PR from AI firms as an attempt to make sure we don't build the appropriate safeguards this time around, because there's too much money to be made.

webdoodle•1h ago
Nuclear weapons have rarely been used kinetically. Their real force multiplier is the fear.

A.I. is being used by so many people for so many diabolical, hidden, unknown things that we may never fully understand its purpose. But that doesn't mean its purpose won't destroy us in the end.

The expression "Drinking the Kool-Aid" comes from the Jonestown mass suicide. It is an information hazard: a cult created the end result of 900 people drinking poisoned Flavor Aid. That's just one example of a human-caused information hazard. What happens when someone with similar thinking applies that to A.I.? Will we even be able to sleuth out who did it?

SpicyLemonZest•51m ago
Everyone recognized that it was so dangerous to use them after the first two mass casualty events. At the time and even into the 50s it was not universally obvious, and the arguments in favor of nuclear weapons use were quite similar to arguments I often see with regards to AI: bombing cities into rubble is not a new concept, traditional explosives well within the supply capacity of large militaries are capable of it, so what are we even talking about when we say that there's scary new capabilities?
ACCount37•30m ago
I remind you of why nuclear weapons exist.

They exist because human minds conceived them, and human hands made them.

One of the major dangers of advanced AI is being able to run something not unlike the Manhattan Project with synthetic intelligence, in a single datacenter.

saHqtr•1h ago
Most humans can do more than plagiarizing text. But let's hype up the clankers before the IPOs.
ACCount37•22m ago
"It's all just PR" is a lame excuse not to think about the implications. Of things like: AI capabilities only ever going up over the course of the past 4 years.
fontain•1h ago
The world we live in is a construct, not a natural outcome. Even if we take your premise at face value, that our success as a species is only because of advantages over others, what's to say that "intelligence" is that advantage? What's to say that we don't use our advantages to reconstruct a world that works in a way that doesn't advantage intelligence over all else?

And on intelligence specifically: even amongst the human race, we all know smart people who are abject failures, and idiots who are wildly successful. Intelligence is vastly overrated.

ACCount37•24m ago
IQ is among the best predictors of life success, for humans. Being up by an extra SD in the g dimension covers a multitude of sins.

I'm not sure what level of delusion one has to run on to look at human civilization and say "no, intelligence wasn't important for this." It's pretty obvious that the human world is a product of intelligence applied at scale — and machines can beat humans at both intelligence and scale.

andrewmutz•1h ago
Modern discourse happens on social media where fear and outrage drive engagement, which drives virality. We have become convinced in a short amount of time that AI is going to take all the jobs and eventually kill us all because that's what people click on.

Any voices or studies that present the case for "useful technology that will improve productivity and wages while not murdering us" don't get clicked on or read.

psychoslave•1h ago
Machines still need planet-spanning, complex production pipelines with human operators everywhere to reproduce at scale. Even taking the paperclip-plant-optimizer overlord as a serious scenario, it's still several orders of magnitude less likely than humans letting the most nefarious individuals create international conflicts and engage in genocides, not even talking about destroying vast swaths of the biosphere supporting humanity's possibility of existence.

That is, alien invasions and giant meteors are also plausible scenarios, but at some point one has to prioritize threats by likelihood, and generally speaking it makes more sense to put more weight on "ongoing advanced operation" than on "not excluded by currently known, scientifically realistic what-ifs."

ACCount37•27m ago
Humans are dangerous and hilariously exploitable.

If politicians can get away with what they do? Imagine if those politicians were actually smart and diligent to a superhuman degree.

That's the kind of threat a rogue AI can pose.

Humans can easily act against their own self-interest. If other humans can and evidently do exploit that, what would stop something better than human from doing the same?

otabdeveloper4•39m ago
> AI is the scariest technology ever made

Well, it's a good thing that all we managed so far is a large language model instead.

ThrowawayR2•29m ago
There was a lot of FUD in the mainframe era about computers being called "electronic brains" and fears of them taking people's jobs, because the ignorant public mistook their lightning-fast arithmetic skills for intelligence. Many did lose their jobs as digital record keeping, computerized accounting/ERP, and robotics on assembly lines became cost effective, but at no time did the "electronic brain" become intelligent.

There's a lot of FUD today about LLM's being sapient because the ignorant public mistakes their complex token prediction skills for intelligence. But it's just embarrassing to see people making that mistake on a forum ostensibly filled with hackers.

ACCount37•13m ago
Is it me making the mistake, or is it you making that very mistake in the other direction?

Back in the "mainframe era", we had entire lists of tasks that even the most untrained humans would find trivial, but computers were impossibly bad at. Like following informal instructions, or telling a picture of a dog from that of a cat.

We're in the "AI era" now, and what remains of those lists? What are the areas of human advantage, the standing bastions of human specialness? Because with modern AI, the list has grown quite thin. Growing thinner as we speak.

bharat1010•1h ago
The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
ggambetta•1h ago
We tell ourselves scary stories about everything new. Advances in electricity + medicine == FRANKENSTEIN!
vdelpuerto•1h ago
The framing of "scary stories" misses something interesting: most of the actual operational fear isn't about consciousness or superintelligence — it's about systems that seem to work until they quietly don't.
SpicyLemonZest•1h ago
The actual contents of this article are making reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out an HN comment does not help achieve them at all - I suspect most people reading this comment are in the same boat.)

It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.

GolfPopper•1h ago
Why does the uncanny valley[1] exist? (If it truly does.) What in our evolutionary history gave us a reflexive rejection of things that seem human but aren't?

1. https://en.wikipedia.org/wiki/Uncanny_valley

Eldt•1h ago
My guess? Psychological disorders, like psychosis
zbikowski•50m ago
I always imagined this to have evolved from a long history of humans getting sick around rotting corpses. The logical move is to stay away from them, and thinking they're freaky-looking is a good driver for that. Though the idea of Neanderthals eliciting a similar reaction has always been interesting to me.
KaoruAoiShiho•1h ago
TLDR: Writer hasn't heard of agents.
jacquesm•1h ago
This article would be a lot more digestible if all we had were scary stories rather than actual scary data. Not a day goes by without some prompt injection oopsie, security gotcha, deepfake, or sandbox-escape demonstration, and tbh I'm impressed, but more to the point: I don't doubt this is dangerous tech, I'm sure of it.

This is roughly 1995 again and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now with human language as the input stream, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field I guess.
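The "mixing instructions and data" lesson referenced above can be sketched in miniature with classic SQL injection (an illustration only, not from the thread; the table and inputs are invented):

```python
# When untrusted data is spliced into the instruction stream, the data
# can act as instructions -- the same failure mode prompt injection
# reprises with natural language.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled "data"

# Unsafe: string concatenation lets the input rewrite the query.
unsafe = db.execute(
    f"SELECT secret FROM users WHERE name = '{user_input}'"
).fetchall()  # leaks every row

# Safe: a placeholder keeps data and instructions in separate channels.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()  # matches nothing

print(unsafe, safe)
```

The parameterized query has a structural fix available; part of what makes prompt injection harder is that LLM inputs currently offer no equivalent of the `?` placeholder to separate the two channels.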

FatherOfCurses•1h ago
We tell ourselves scary stories about AI because humanity is rife with stories where a new idea or technology has had unintended negative consequences. AI bros just care about selling their product to another company and cashing out, they have absolutely no regard for their legacy.
cbogie•1h ago
we tell ourselves scary stories because that’s what we do with language. and now our language isn’t just ours anymore.
the_af•1h ago
I think the most insightful bit is buried in the article:

> Perhaps because this is the best advertising money can’t buy. People like Harari and others repeat these accounts like ghost stories around a campfire. The public, awed and afraid, marvels at the capabilities of AI.

And that's mostly it: PR. Publicity. Fear is good publicity when it emphasizes AI's capabilities. And people like Harari (or Gladwell) tell interesting, awe-inspiring stories that don't necessarily have much rigor or fact-checking behind them. They simplify for storytelling purposes, which can result in misleading stories.

I am worried about AI, but not about superintelligent AI that will exterminate or enslave us. I'm worried about AI as a tool to concentrate wealth and power in the hands of the current amoral entrepreneurial elite. I'm not sure whether I trust ChatGPT, but I sure as hell do NOT trust Sam Altman et al.

Or, in other words, I subscribe to Ted Chiang's very apt remark about what we really fear:

> “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”

crancher•1h ago
We're hardwired to fear the rustle in the grass, and successful infrastructure gets backgrounded. Pain is a signal. How much time do you spend contemplating your skeleton outside of pain-related skeletal events?
netdevphoenix•1h ago
There is a very interesting book that explores the West's generally negative view of artificial intelligence whenever it is portrayed in media (Skynet), while Japanese media tends to portray it positively (Astro Boy).
yanis_t•1h ago
I wish we didn't call this AI as the term is crazily overloaded.

Those are programs. The only difference is how we write them: not with "if"s and "for"s. We take a bunch of bits that do nothing, then organize them so that they output whatever it is we want.

RivieraKid•1h ago
I 100% agree with this take. I find AI completely non-scary, especially the idea that it's some kind of conscious entity that will want to take over; I find those people almost delusional. It's a powerful tool, so it can be dangerous in the hands of people with bad intentions, and there's some real danger there, but my intuition is that it will be fine. The ratio between the power of people with good vs. bad intentions shouldn't change too dramatically.

The only scary part is that it could be bad for my future as a software developer. That said, I think it will be a net benefit for the average worker: the average person will work less and earn more.

dclowd9901•1h ago
My favorite part of this article was this bit, and naturally so, since I love the author:

> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”

I didn't realize it until I read it here, but yes, my fear isn't really about the machine; it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as expendable and are willing to burn it down for profit. We should focus on getting rid of that problem first.