frontpage.

MiniMax M2.7 Is Now Open Source

https://firethering.com/minimax-m2-7-agentic-model/
43•steveharing1•1h ago•10 comments

An Interview with Pat Gelsinger

https://morethanmoore.substack.com/p/an-interview-with-pat-gelsinger-2026
40•zdw•2d ago•18 comments

The Miller Principle

https://puredanger.github.io/tech.puredanger.com/2007/07/11/miller-principle/
32•FelipeCortez•4d ago•22 comments

JVM Options Explorer

https://chriswhocodes.com/vm-options-explorer.html
4•0x54MUR41•50m ago•0 comments

Phyphox – Physical Experiments Using a Smartphone

https://phyphox.org/
26•_Microft•2h ago•6 comments

Apple update looks like Czech mate for locked-out iPhone user

https://www.theregister.com/2026/04/12/ios_passcode_bug/
140•OuterVale•2h ago•66 comments

Toffoli gates are all you need

https://www.johndcook.com/blog/2026/04/06/tofolli-gates/
68•ibobev•5d ago•11 comments

How We Broke Top AI Agent Benchmarks: And What Comes Next

https://rdi.berkeley.edu/blog/trustworthy-benchmarks-cont/
394•Anon84•16h ago•99 comments

Stewart Brand on How Progress Happens

https://www.newyorker.com/books/book-currents/stewart-brand-on-how-progress-happens
9•bookofjoe•4d ago•1 comment

The End of Eleventy

https://brennan.day/the-end-of-eleventy/
174•ValentineC•9h ago•125 comments

Small models also found the vulnerabilities that Mythos found

https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
1123•dominicq•18h ago•298 comments

How Complex is my Code?

https://philodev.one/posts/2026-04-code-complexity/
132•speckx•5d ago•31 comments

US appeals court declares 158-year-old home distilling ban unconstitutional

https://www.theguardian.com/law/2026/apr/11/appeals-court-ruling-home-distilling-ban-unconstituti...
177•Jimmc414•6h ago•134 comments

I run multiple $10K MRR companies on a $20/month tech stack

https://stevehanov.ca/blog/how-i-run-multiple-10k-mrr-companies-on-a-20month-tech-stack
261•tradertef•5h ago•189 comments

Dark Castle

https://darkcastle.co.uk/
191•evo_9•15h ago•24 comments

447 TB/cm² at zero retention energy – atomic-scale memory on fluorographane

https://zenodo.org/records/19513269
219•iliatoli•15h ago•113 comments

Pijul, a FOSS distributed version control system

https://pijul.org/
159•kouosi•5d ago•24 comments

Apple Silicon and Virtual Machines: Beating the 2 VM Limit (2023)

https://khronokernel.com/macos/2023/08/08/AS-VM.html
204•krackers•14h ago•142 comments

Network Flow Algorithms

https://www.networkflowalgs.com/
23•teleforce•5d ago•0 comments

Advanced Mac Substitute is an API-level reimplementation of 1980s-era Mac OS

https://www.v68k.org/advanced-mac-substitute/
246•zdw•19h ago•62 comments

Cirrus Labs to join OpenAI

https://cirruslabs.org/
267•seekdeep•22h ago•128 comments

Show HN: Pardonned.com – A searchable database of US Pardons

439•vidluther•1d ago•241 comments

Surelock: Deadlock-Free Mutexes for Rust

https://notes.brooklynzelenka.com/Blog/Surelock
218•codetheweb•3d ago•71 comments

How a dancer with ALS used brainwaves to perform live

https://www.electronicspecifier.com/products/sensors/how-a-dancer-with-als-used-brainwaves-to-per...
48•1659447091•9h ago•8 comments

How to build a `Git diff` driver

https://www.jvt.me/posts/2026/04/11/how-git-diff-driver/
119•zdw•17h ago•12 comments

Why meaningful days look like nothing while you are living them

https://pilgrima.ge/p/the-grand-line
51•momentmaker•8h ago•34 comments

Optimal Strategy for Connect 4

https://2swap.github.io/WeakC4/explanation/
296•marvinborner•3d ago•32 comments

What is a property?

https://alperenkeles.com/posts/what-is-a-property/
80•alpaylan•4d ago•23 comments

The Soul of an Old Machine

https://skalski.dev/the-soul-of-an-old-machine/
57•mskalski•4d ago•12 comments

Software Preservation Group: C++ History Collection

https://softwarepreservation.computerhistory.org/c_plus_plus/
26•quuxplusone•9h ago•2 comments

AI Will Be Met with Violence, and Nothing Good Will Come of It

https://www.thealgorithmicbridge.com/p/ai-will-be-met-with-violence-and
70•gHeadphone•2h ago

Comments

jstanley•1h ago
> Every time I hear from Amodei or Altman that I could lose my job, I don’t think “oh, ok, then allow me pay you $20/month so that I can adapt to these uncertain times that have fallen upon my destiny by chance.” I think: “you, for fuck’s sake, you are doing this.” And I consider myself a pretty levelheaded guy, so imagine what not-so-levelheaded people think.

Conversely, The Loudest Alarm Is Probably False[0]. If the idea that you are a pretty levelheaded guy pops up so frequently, consider that it might be wrong. Especially if you are motivated to write blog posts about violence in response to technology you don't like. Maybe you're just not as levelheaded as you think and that could explain the whole thing?

[0] https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudes...

A_D_E_P_T•1h ago
Hah. Yes, and especially as “you, for fuck’s sake, you are doing this” should be, upon reflection, entirely and trivially false. You could remove those two figureheads from the equation and absolutely nothing would change. If violence were ever the answer, I think you'd need to go back in time like the Terminator and whack some academics and Google researchers.
balamatom•1h ago
Plural you.

As in, "all of you".

Including its users.

rowanG077•1h ago
I also find it so weird to pin this on the person of Altman or Amodei. These are basically fungible public faces. If they died this very moment, AI progress wouldn't halt. I don't think it would even be impacted. If anything, you should be mad at governments for not legislating, if you are anti-AI.
tux3•1h ago
The ship is going where it is because of the captain. If they die this very moment, the ship will not go back.

And yet,

CoastalCoder•1h ago
Related, I've been surprised that we haven't had more violence against corporations and/or their leadership in the vein of Luigi Mangione.

E.g., suppose that 1,000,000 persons believe that a corporation's evil acts destroyed their happiness [0]. I would have guessed that at least 1 person in that crowd would be so unhinged by the experience that they'd make a viable attempt at vengeance.

But I'm just not hearing of that happening, at least not nearly to the extent I would have guessed. I'm curious where my thinking is wrong.

[0] E.g., big tobacco, the Sacklers with Oxycontin, insurance companies delaying lifesaving treatment, or the Bhopal disaster.
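
The implicit arithmetic in the parent comment can be made concrete. As a toy check (the one-in-a-million per-person probability is a made-up figure, chosen only to match the 1,000,000-person framing):

```python
# Chance that at least one of N independent people, each with a small
# probability p of acting, actually acts: 1 - (1 - p)^N.
# Both N and p are hypothetical numbers for illustration only.
N = 1_000_000
p = 1e-6  # assumed one-in-a-million chance per person

p_at_least_one = 1 - (1 - p) ** N
print(f"{p_at_least_one:.2%}")  # about 63%, roughly 1 - 1/e
```

So even a vanishingly small per-person propensity makes "at least one attempt" more likely than not at that scale, which is what makes the absence of such attempts surprising.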

peyton•49m ago
Litigation—the hope or fantasy to make a buck—soaks up a lot of the million-man animus I’d guess.

If that’s accurate, Luigi Mangione would be the exception that proves the rule. The “unwashed masses” generally want money more than they want to effect change in the world.

A lot of people spend mental energy fantasizing about getting rich off lawsuits. Like, a lot.

strangegecko•41m ago
Those unhinged people might be busy in social media bubbles, fighting endless pointless battles (or simply doom scrolling) until they're too exhausted to do anything.
Jtarii•1h ago
Especially considering Amodei and Altman will be little more than footnotes in 50 years' time. They seem important now, but they are just the people who happened to be in charge at the moment AI happened to happen. There is more going on than a couple of billionaires taking your job away.
andrepd•54m ago
Reading comprehension is tough, I know

> I don’t want to trivialize the grievances of the people who fear for their futures. I don’t want to defend Altman’s decisions. But this is not the way. This is how things devolve into chaos.

If I had a cent for every time a lesswrong link was posted alongside a profoundly obtuse comment...

tao_oat•1h ago
The author seems to have some cognitive dissonance. For a piece saying that you cannot justify violence, there sure seems to be an awful lot of justifying violence in here.
trvz•1h ago
You may not be able to justify violence, but sometimes you can understand it.
thrance•1h ago
You should probably read up on cognitive dissonance, because this ain't it. Here's what the author actually wrote:

> Nothing that Altman could say justifies violence against him. This is an undeniable truth. But unfortunately, violence might still ensue. I hope not, but I guess we are seeing what appears to be the first cases.

redsocksfan45•50m ago
I think you're going to be killed for the side you've taken here. No no, I'm not saying you deserve it! In fact, I actually agree with you, you said nothing wrong. I'm just speculating on outcomes I think are likely and I think it's likely that somebody will look you up and track you down and take out their unjustifiable but completely understandable frustration on you. Please understand, I don't support this, I'm just talking about the possibility!

Of course, by talking about the possibility, despite asserting my disapproval of it, I am sowing seeds, but I assure you that's certainly not my intention!

tao_oat•29m ago
And a few paragraphs before:

> And then, and I’m sorry to be so blunt, then it’s die or kill.

oytis•1h ago
History is just full of emotional contradictions, I guess. The French and Russian revolutions were terrible bloodbaths; smaller violent movements like the Luddites caused deaths and achieved nothing. It would be stupid to approve of any of these. But you can also see why this violence happened, and assign an appropriate share of blame to those who held the power to resolve social contradictions in a more equitable way and decided not to.
dwb•43m ago
I don't see any justification; the article is quite clear that it is anti-violence. Explanation and analysis are not, on their own, justification. This is one of the discursive patterns that most infuriates me: any attempt to analyse something can be read as promotion or justification. Some of us want to figure out how things work and chart a course through; we are not trying to push an agenda in every single sentence.
ArchieScrivener•1h ago
This is nonsense, promoted to the top of the front page without any comments. How about all the rock stars killed over the years, or grocery store clerks shot and stabbed to death? EVERYTHING is met with violence, because that's the nature of aggression, no matter the impetus; it doesn't require a justifiable reason, only belief in the outcome of its use.

Sam Altman having a Molotov cocktail thrown at his house after Ronan wrote a very long and detailed report on his shady personality isn't just coincidence, and it's likely not organic. Sam needs to be viewed as sympathetic; thank goodness for such a moment where no one was hurt and nothing was actually damaged.

balamatom•1h ago
It is very sympathetic to brandish your child as a human shield, yes. Many parental units will sympathize.
inglor_cz•1h ago
People here are extra anxious about the impact of AI on their lives, so I am not surprised that any text which touches the topic gets upvoted.

We are a somewhat violent species, so I agree that almost every significant economic and societal development has the potential to trigger some violence. That said, the jobs potentially threatened by AI are nowadays usually done by fairly sedentary people, so I wouldn't expect any large-scale violence, the occasional Ted Kaczynski notwithstanding. Programmers, translators, and painters just aren't used to destroying things in the real world.

It would have been different if AI started to replace drug dealers or the mob.

TMWNN•1h ago
>How about all the rock stars killed over the years

With the exception of rappers, most musicians who die early die from overdoses, suicides, and such (the "27 club" <https://en.wikipedia.org/wiki/27_Club>), as opposed to being murdered.

ArchieScrivener•1h ago
That's why I said killed, not died.
TMWNN•1h ago
Then your point doesn't make sense. As I said, musicians who die early (again, excepting rappers) usually die from self-inflicted causes, not violence from others. What is the connection between this and violent attacks on AI and/or AI people?
ArchieScrivener•1h ago
No, my dear, your comment doesn't make sense, because you weren't reading to understand, only to contest. Go to Reddit for that.

I'm not going to hold your hand and explain it to you since you've already engaged your ego and shut your mind.

jacquesm•1h ago
Pet peeve of mine: accounts less than 3 months old telling people to go to reddit.
tsunamifury•1h ago
We are in an inverse innovator's dilemma.

Automater's dilemma: the labor that is removed from production by automation can no longer sustain the markets that the automater was trying to make more efficient.

By optimizing just the production half of the economy and not the consumption half, you end up breaking the market.
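
That feedback loop can be sketched as a toy simulation (a purely illustrative model with invented parameters; the real economy has many more moving parts):

```python
# Toy model of the "automater's dilemma": firms automate away labor to
# cut costs, but wages fund the consumption they sell into, so revenue
# shrinks along with employment. All numbers are invented.

def simulate(periods=10, automation_rate=0.15):
    workers = 100.0  # employed labor
    wage = 1.0       # income per worker
    history = []
    for _ in range(periods):
        income = workers * wage  # wages fund consumption
        revenue = income         # firms sell into a wage-funded market
        history.append((workers, revenue))
        workers *= 1 - automation_rate  # automation displaces labor
    return history

for w, r in simulate():
    print(f"workers={w:6.1f}  revenue={r:6.1f}")
```

Revenue falls in lockstep with employment: optimizing production while ignoring consumption shrinks the very market being optimized.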

mindok•1h ago
I’m convinced that 70% of the workforce of some large organisations is just white-collar welfare / adult day care already. Maybe that goes to 80+% as a result of “AI”, but that doesn’t fundamentally change the model.
spaceman_2020•1h ago
The worst part is that AI's first casualties are jobs that nobody asked it to kill.

AI is killing writing, music, art, and coding. I've done all of these voluntarily, because I simply enjoyed them.

Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI.

Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?

Seems like a complete misallocation of capital if I'm perfectly honest

ArchieScrivener•1h ago
It's not a misallocation of capital; it's an investment in media control. You don't know how all this works yet, do you? Your job is to be frustrated and desperate, indulging in vice and convenience so that others can profit while making your confines smaller and smaller.
balamatom•1h ago
Correct. Thank you for not serving. <3
sleepingreset•1h ago
imagine if producing music was as easy as using claude code or something
azan_•1h ago
I'm sure people working in customer support, or tax advisors, would have a different take on what should be killed by AI and what should be spared.
raincole•1h ago
> dealing with customer support

This is one of the first things LLMs tried to automate. They were literally released in the form of a chatbot. Whether it succeeded is another question.

> Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?

I'm not sure about musicians specifically, but for the whole past decade studios have been complaining about how costly it is to make AAA games. And the cost mostly came from the art asset side.

adrian_b•22m ago
I do not know how much of an outlier I am, because when I reach out to technical support the problems are rather difficult; if they were easy, I would solve them myself, without needing the official technical support.

In any case, across perhaps hundreds of interactions with chatbots accumulated over many years, I have never encountered even one where the chatbot was useful; they were always just hard-to-pass obstacles on the way to reaching a human who could actually solve the problem.

To be honest, even when some services still had humans answering the calls, those were never more helpful than the chatbots, but at least with humans it was much easier to convince them to transfer the call to a competent person, which with chatbots may be completely impossible.

izucken•1h ago
Because the elites hate you more than the downtrodden (they love miserable people, in a sense). You are an independent agent with your own ideas; worst case, you are completely orthogonal to the hierarchy, and that is something that breaks the intended world order.
jollymonATX•1h ago
One of those 4 reliably makes enough to be a target just in how much is spent on it.
aerhardt•1h ago
AI cannot write for shit; it's not even a fraction of a millimeter of the way there compared to the work of Thomas Mann or Dostoevsky or Cervantes.

The fact that people are using it to flood the world with slop is a hyperscaled continuation of the overabundance and discovery problems we already had, but that doesn’t mean that writing is dead or dying.

The technology simply doesn’t have the capabilities right now, and even if it develops them, what will be put to the test is whether literature is about the artifact or the connection between the author and other humans.

philwelch•1h ago
Coding is one thing that is genuinely more enjoyable with AI than without it. It’s a different (but overlapping) skill set, but my median AI sessions remind me of the most exhilarating design discussions I’ve had with colleagues, and I get a lot more done more quickly than I used to.

Customer support is kind of something you can use AI for; most companies will foist you off to some system of exchanging written messages, which is annoying, but then you can use an AI to write your side of the conversation. It’s ill-mannered to do this when you’re interacting with actual people, but customer support is another story.

> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it? Did the world really wake up one day and conclude, "wait, we're spending too much on musicians"?

People didn’t know what LLMs would be capable of until after they were invented. Cheap music generation turned out to be easy once we had cheap text generation, and cheap text generation turned out to be a tractable problem.

armchairhacker•55m ago
> AI is killing writing, music, art, and coding.

At least today, LLMs make bad creative writing, music, and art. They’re automating sweatshop work that, in an alternative timeline, goes to Fiverr-esque contractors who accept the lowest wages and sacrifice quality for efficiency in every way.

LLMs make developers more efficient but can’t fully replace them. This reduces jobs, but so did better IDEs, open-source libraries, and other developer improvements.

> Meanwhile, the parts of my existence that I actually hate - dealing with customer support, handling government forms, dealing with taxes - are far from being automated by AI

LLMs can at least theoretically do these things. I've heard of people using them to mass-apply to apartments and jobs, and to send written customer complaints and then handle the responses.

> Look at Suno. Fantastic tool, but where was the capital need to make music generation so cheap that no musician could ever compete with it?

There’s no “capital need”, but a benefit of Suno is that it lets individuals who otherwise don’t have the skill make catchy songs with silly lyrics or try out interesting genres. And the vast majority of top artists are still human, although most streaming revenue has already gone to a few celebrities who seem to rely on looks and connections more than musical talent.

ben8bit•1h ago
A lot of the magic of LLMs, I think, has been tarnished by these CEOs and other FAANG companies. It might have been a far more interesting world if they didn't bring "AI" or "AGI" into the conversation in such a politicized way.
keiferski•1h ago
It’s the inevitable result of valuations based on hype and future potential, not business fundamentals. It incentivizes companies to be as hyperbolic as possible with their pitches and marketing.

Cryptocurrency is an interesting technology with some niche use cases, but it was pitched as replacing the entire money system. LLMs are extremely useful for certain types of work, but are pitched as AGI ending all work. Etc.

sigmoid10•1h ago
Unfortunately, this is the only way to get enough venture capital to support the compute needs of this kind of technology. Who is going to spend hundreds of billions on a vague idea without regular claims that it will upend the existing economy in six to twelve months and make whoever owns it unfathomably rich? And despite all the actual developments going against that idea, investors keep falling for it. This will continue until it crashes, one way or another. The question is how long it can build up and how deep the fall will be. LLMs will certainly change the economy in the end, but so did mortgage-backed securities.
pydry•1h ago
It's a sad indictment of our society that there is always a shortage of money for medical care, infrastructure, housing, food stamps and space exploration but always a surplus of cash for war and tools that purport to replace the workforce.
block_dagger•1h ago
War accelerated evolution, it’s why it exists.
jacquesm•1h ago
You have cause and effect mixed up.
bregma•1h ago
So did compassion, probably in greater measure. And yet the greater share of resources goes into war at the expense of compassion.

Humanity has taken control of its own evolution and no longer relies on natural selection as the driving force for change. Using evolution as an excuse to make bad and immoral choices is a poor argument and should be left back in the stone age.

gmerc•56m ago
The opportunity cost to society of performative model training is stunning: 400M for a Grok training run to dominate the charts for two weeks.
roenxi•47m ago
> It's a sad indictment of our society that there is always a shortage of money for medical care...

It has nothing to do with society; there is infinite demand for medical care. The upper limit is whatever it takes to live until the universe's heat death in good health. That takes a lot of resources.

However much society spends on medical care, there is always more that could be spent. The modern era has the best, most affordable medical care in history and people are showing no signs of being satisfied at all.

While war spending generally just causes pain for no gain, that doesn't change the fact that there will never be enough available to satisfy people's demand for medical care. Every single time people get what they want, they just come up with a new aspirational minimum standard.

philwelch•40m ago
There isn’t really a shortage of money for those things, just rampant levels of fraud, corruption, and incompetence in the government to make those things artificially expensive. California spends so much money on high speed rail and gets 0 feet of track because they’re not paying for track; the whole thing is a scam where the politicians give taxpayer money to their political supporters in exchange for political support. Defense isn’t immune to this either; Boeing, which builds a shitty heavy lift rocket out of Space Shuttle spare parts and delivers it late and over budget, pulls the exact same bullshit with their defense contracts, and there’s always some shitty Senator siding with them against the American people whenever anyone gets upset.
bluegatty•1h ago
It'd be nice if they didn't use the terms at all, because I don't think they're useful, relevant, or real.

If we thought of all of this as 'stochastic data systems', our heads would be in the right place: we'd see it as 'powerful software' that can be used for good or bad purposes, with the negative externalities deriving from our use of it, not from some inherent property.

dr_dshiv•1h ago
On the other hand, "magical new systems that provide almost unlimited capacity for intelligent work" is probably a more functional mental model. Genie can give you 1000 wishes till you reach your session limit.
mindok•1h ago
Not quite 1000 on Codex as of the last day or two!
jacquesm•1h ago
It would have been better if they didn't bootstrap it off the outright theft of a very large amount of IP only to lock it behind a paywall.
djtango•1h ago
Magic or no, ultimately "AI" leads to labour displacement, and it's just a continuation of the much broader trend of automation driven by computers.

Labour displacement leads to an erosion of standards of living, and in a world that ties purpose to work it is an existential threat on a very practical level.

It was always going to be met with violence once it became more than a curiosity for tinkerers.

georgemcbay•48m ago
> in a world that ties purpose to work is an existential threat on a very practical level.

I don't disagree that we tie purpose to work and severing that tie will have negative societal consequences, but it is far more impactful that we tie the ability to continue to exist to work (for anyone not lucky enough to already be wealthy).

If I suddenly became unemployable tomorrow I'm positive I could find alternate purpose in my life to fill that gap, I already volunteer for various causes and could happily do more of the same to fill in the gaps left by lack of work. What I couldn't do is feed myself, keep myself housed, and get medical care (especially in the US, where this is very directly tied to work).

The really big fuckup we are committing as a society in the US (may or may not apply to each person's country individually) isn't just this looming threat of massive labor displacement due to AI, it is that instead of planning for any sort of soft landing we are continually slashing what few social safety nets already exist. We are creating the conditions for desperation that likely will result in increasing violence as outlined in the linked post.

yfw•37m ago
If AI benefitted everyone and not just the billionaires, we would be viewing it differently.
quantummagic•17m ago
That's a truism. But it ignores the Iron Law of Oligarchy, the Pareto Principle, and dozens of other observations that remind us that power tends toward centralization. It's currently fashionable to call out the billionaires, but if you removed them, they'd just be replaced by corrupt government officials, or something else.

That's not to say we should just throw up our hands and accept every social injustice. But IMHO we shouldn't go around simplistically implying that all social ills will be solved by neutering the billionaire class.

hackrmn•57m ago
I don't want to stir up the hornet's nest here, but in my humble opinion the entire problem rests on the unabated and unchecked model of modern, "late-stage" capitalism, championed by the U.S. and since exported everywhere else, where it has taken firm root -- even in Europe, which for now retains a few more checks and balances (something that unsurprisingly draws a lot of ire from the model's acolytes and priests across the Atlantic).

The Soviet Union lost due to an inferior societal model, but this one, too, has strayed far from what was once a relatively sustainable path. The American dream is now a parody of itself, as it takes ever more just to end up like everyone else. I could go on about the irony of wanting to escape the pit while refusing to acknowledge that the pit is the 99% of the U.S. -- not the Altmans, Bezoses, Musks, or Trumps, or their hordes of peripheral elites.

Point being, the model doesn't work _today_, with its cancerous appetite and correspondingly absurd neglect of the human, _any_ human. We can't have both humanism and the kind of AI we're about to "enjoy".

The acceleration of wealth disparity may prove to be nearly geometric, as the common man is further stripped of any capacity to inflict change on the "system". I hope I am wrong, but for all their crimes, anarchy, and -- in a twist of irony -- inhumane treatment of opponents, the October revolutionaries in Russia, yes, the Bolsheviks, were merely a natural response to a similar atmosphere in Russia at the turn of the previous century. It's just that they didn't face mass surveillance in the same capacity our gadgets allow the "governments" of today, nor was it aided by AI, which is _also_ something that can be used against an entire slice of the populace (a perfect application of general principles put into action). So although the situation may become similar, we're increasingly in no position to change it. The difference may be counted in _generations_: it will take multiple generations to dismantle the power structures we allow to be put in place now, with the Altmans etc. These people may not be evil, but history shows they only have to be short-sighted enough for evil to take root and thrive.

Sorry for the wall of text, but I do agree with the point of the blog post in a way: demanding that people stay civilised and refrain from throwing eggs (or Molotovs) at celebrities who are about to swing _entire governments_ is not seeing the forest for the trees.

There's also no precedent, in a way: the historical cataclysms we have created ourselves have been on a smaller scale, so we're spiraling outwards, and not all of the tools we think we have are going to deliver the effect required to enact the change we want. In the worst case, of course.

threethirtytwo•49m ago
No, it's tarnished by becoming too popular. Just like how people hated Nickelback, if you remember.
roschdal•1h ago
Yes. AI is evil.
MrOrelliOReilly•1h ago
> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

A pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe this misses a broader point: people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.

Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job, will I be compensated; will we protect ourselves against x-risk; etc.).

psychoslave•1h ago
I doubt there is a single profile of person behind "don't accelerate adoption blindly everywhere".

On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly an LLM issue, though; data centers are not new, and neither are the concerns about people keeping a leverageable level of control through distributed power.

amelius•1h ago
Yes, the moment they put 8-foot-tall robots in the streets, I am fetching my black spray paint can.
jacquesm•1h ago
7.5' tall robots it is then.
bluegatty•1h ago
'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro-AI hype.

AI will be 'dangerous' because humans will use it irresponsibly, and that's all of the risk:

- giving it too much trust, being lazy, improper guards, and accidents

- leveraging it for negative things (black hats, military targeting)

- states and governments using it as an instrument of control, etc.

That's it.

Stop worrying about the ghost in the machine and start worrying about crappy and evil businesses and governing institutions.

Democracy, vigilance, laws, and responsibility are what we need, in all things.

psychoslave•1h ago
Exactly what I tried to articulate yesterday in https://news.ycombinator.com/item?id=47718812#47719503
fleebee•1h ago
> 'Rogue super intelligence' is the most ridiculous sci-fi nonsense of the AI hype, worse than the pro AI hype.

In my view, that line of argument is pro-AI hype. It's the Big Tech CEOs themselves who often share their predictions of the end of the world as we know it, caused by AI. It's FUD that makes the technology sound more powerful and important than it is.

philwelch•1h ago
What a load of pointless handwringing.
dwroberts•1h ago
> But this is not the way. This is how things devolve into chaos.

Meanwhile

https://www.reuters.com/world/middle-east/how-many-people-ha...

> U.S.-based rights group HRANA said 3,636 people have been killed since the war erupted. It said 1,701 of those were civilians, including at least 254 children.

(Mentioning this specifically because we know the DoD is using AI)

nacozarina•1h ago
Humans have been successfully using violence for conflict-resolution for tens of thousands of years. We’ll be fine, it’s not our first rodeo.
fcantournet•53m ago
Successfully? Maybe the OG survivorship bias here...
hackrmn•50m ago
The ugly truth indeed. It sucks to die for a world you won't get to enjoy, but sometimes it's the only viable solution. Much of our progress has gone into minimising casualties and human suffering in order to sustain the world most can agree is better (than the alternatives). But it seems the wave just hits its troughs farther apart now, and when it does it's like taking a breath before the water swallows you; without training, it's quite the panic and suffering (and prospect of death). We know it's in our bones, but we want to forget, because our bodies are made to interpret pain in the most direct and literal sense -- re-conditioning is always painful too. Strong people create weak people who create strong people, etc.

So yeah _we_ will be fine, but some of us definitely won't, and with the growth in our numbers on Earth, the proportion of martyrs may be growing. Quantifying personal suffering is not possible, especially if the prospect is death.

lapcat•39m ago
> We’ll be fine, it’s not our first rodeo.

Because World War I was fine, World War II finer....

JCattheATM•28m ago
Well sure, the conflicts were resolved.
jollymonATX•1h ago
Such a cowardly way to write, really. Just own your intentions and direction. No need for handwaving theater and CYA when the spooky superintelligence LLM is in the room with you.
mrweasel•1h ago
I really should have gone into sewage work.
Kye•52m ago
An aligned AI will at least tell someone that a "flushable" wipe shouldn't be flushed if they think to ask.
balamatom•1h ago
>And then, and I’m sorry to be so blunt, then it’s die or kill.

The people ready to die or kill for the AI, do you already imagine what they are going to be like?

deyiao•1h ago
They say cars replaced carriages but created drivers, so no net job loss. They say AI will do the same—destroy some jobs, create others. But bro, the automobile wiped out 95% of the world's horses. And this time, what AI is replacing is humans.
ares623•1h ago
All this, so people like us can have an easier time doing a job that wasn’t that hard in the first place, and in reality was actually quite comfortable, for employers who are promising to lay us off, for productivity gains that aren’t even measurable.
taffydavid•1h ago
> It hit Horsfall in the groin, who, nominative-deterministically, fell from his horse.

Lovely writing. I once knew someone whose surname was HorsFELL, and now I wonder if they were related.

tokioyoyo•52m ago
A bit of a tangent, but is there anyone working on something for a "what if AI pans out?" world? I'm not sure how to explain it, but if in the next 5 years a lot of jobs get displaced because of AI, obviously we'll have big problems. Is there anyone working on analysis, outcomes, strategies, etc.? I think about it a lot, and it would be cool to help and contribute.
direwolf20•38m ago
The most important question is how to prevent the starving workers from banding together and attacking the dragon hoards of food and other wealth. I think the plan is automated drones with machine guns, and mass surveillance from Flock and Ring to determine who to target. Requiring ID for all online interaction will also improve targeting accuracy as we'll be able to target them based on their social media posts. Robot dogs from Boston Dynamics (armed with machine guns) are a secondary enforcement mechanism indoors in places drones can't reach. So they're working on it, and they have been for a while.
groundhogstate•37m ago
Many. 80,000 Hours has been on the topic for a long while. Agree with the EA crowd or not, they have some thought-provoking analyses and a decent newsletter. The Future of Humanity Institute has also been vocal on the topic for some time. Both have a lot of material you could get acquainted with. I know of at least one professional union in my country that is dedicating time and talking to political figures. I'm sure there is one you could contribute to. Or try starting one.

Plus the labs themselves, of course.

lps41•33m ago
EA?
tao_oat•25m ago
Effective Altruism: https://en.wikipedia.org/wiki/Effective_altruism
tokioyoyo•22m ago
I believe they meant Effective Altruism, pieces from LessWrong, etc.
apothegm•18m ago
“Effective altruism”. (Recommended to be researched with a healthy dose of skepticism.)
tokioyoyo•28m ago
Thank you. I’ve seen/read a bunch from the EA crowd, and think pieces from different contributors/labs, but most of what I’ve seen sounded very hypothetical: “yeah, big bad stuff might happen, we don’t have a solution yet.”

And the other side, “pause/ban AI” crowd, also sounded impractical, as the vested interests from governments and private industries will not really let it happen.

Sorry for yapping, it might be that I’m looking at the wrong sources.

forgetfreeman•36m ago
Yes: https://www.youtube.com/watch?v=3lJif2LX3bA
JCattheATM•29m ago
It's not complicated. Just tax the corporations and billionaires a fair share and setup UBI.
tokioyoyo•23m ago
It is very much complicated, though. Conversations about UBI on the internet have been around for as long as I’ve been online. And in all that time, there hasn’t been a single large-scale test of the system to see if it can be compatible with the current version of capitalism that’s run in most of the world.

Even if I support UBI morally, there isn’t even local appetite for it, let alone a global one. And you’ll run into quick questions about inflation, every chart from the UBI-lite era of COVID, and so on.

Hamuko•44m ago
One thing I'm kinda worried about is what happens to social trust once we have more and more LLMs flooding the Internet. Division in society, particularly in the United States, already seemed to be increasing at a rapid pace as social media became more and more relevant, and I'm afraid that LLMs are just going to add more fuel to the already burning fire.

I'm less concerned about AI becoming Skynet and killing humans and more concerned about AI making the world so miserable that we'll be killing ourselves and each other.

lapcat•38m ago
> Perhaps the most serious mistake that the AI industry made after creating a technology that will transversally disrupt the entire white-collar workforce before ensuring a safe transition

This was not an oversight. To the contrary, it was the goal. Technological feudalism, with people like Altman and Musk becoming the Lords of the world.

> Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.

This illustrates my previous point. What they're doing is not a mistake.

> For what it’s worth, the New Yorker piece I’m referring to, which Altman also referred to in his blog post, made me see him more as a flawed human rather than a sociopathic strategist. My sympathy for him will probably never be very high, but it grew after reading it.

It feels like we read two different articles.