
Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
1•myk-e•56s ago•0 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•1m ago•0 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•3m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•5m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•7m ago•0 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•10m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•15m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•16m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•20m ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•32m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•34m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•34m ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•47m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•50m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•53m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comment

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comment

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Why aren't more people here worried about AI exceeding our capabilities?

5•hollerith•8mo ago
I'm one of those people who keep saying that no one knows how to control an AI that is much more all-around capable than (organized groups of) people are, and that we should stop AI research until this is figured out. (People can keep using the models that have already been released or extensively deployed.)

But even if you don't believe me that no one knows how to control a super-capable AI, why is no one worried about some nation or disaffected group intentionally creating an AI to kill us all as a kind of doomsday weapon? Every year the craft of creating powerful AIs becomes better understood, and researchers (recklessly, IMHO) publish this better understanding for anyone to see. We don't know whether all the knowledge needed to create an AI more capable than people will be published this year or 25 years from now, but as soon as it happens, any actor on earth capable of reading and understanding machine-learning papers and in possession of the necessary GPUs and electricity-generating capacity can destroy the world, or at least the human species. Why are so many of you so complacent about that risk?

In the news recently was a young man who killed some people at a fertility clinic. He was a "promortalist": someone who believes that there is so much suffering in the world that the only moral response is to help all the people die (so they cannot suffer any more). Eventually, the craft of machine learning will become so well understood, and access to compute resources so widespread and affordable, that anyone (e.g., some troubled soul living in a damp basement somewhere who happens to inherit $66 million from an eccentric uncle or to win a big personal-injury lawsuit against some rich corporation) will have the means to end the human experiment.

He will not have to figure out how to stay in control of the AI he unleashes. Any AI (just like any human being) will have some system of preferences: there will be some ways the future might unfold that the AI will prefer to other ways. And if you put enough optimization pressure behind almost any system of preferences, what happens strongly tends to be incompatible with continued human survival unless the AI has been correctly programmed to care whether the humans survive. Our troubled soul bent on ending the human experiment can simply rely on this thorny property shared by all really powerful optimizing processes.
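To make that point concrete, here is a toy sketch in Python. It is my own illustration, not a model of any real system: the names, the numbers, and the "paperclips" objective are all made-up stand-ins for an arbitrary goal. An optimizer indifferent to humans converts everything it can reach, including the resources humans need, into its objective; one explicitly programmed to care stops short:

    # Toy model: an optimizer pursuing an arbitrary goal ("paperclips")
    # over a shared resource pool. "human_share" stands in for whatever
    # fraction of resources humans need to survive. All numbers made up.
    def optimize(steps, cares_about_humans):
        resources = 100.0      # total resources (includes the humans' share)
        human_share = 20.0     # resources currently keeping humans alive
        paperclips = 0.0
        for _ in range(steps):
            free = resources - human_share
            if free > 0:
                take = min(free, 1.0)          # consume unclaimed resources first
            elif not cares_about_humans:
                take = min(human_share, 1.0)   # indifferent: eat the humans' share too
                human_share -= take
            else:
                take = 0.0                     # aligned: stop at the humans' share
            resources -= take
            paperclips += take
        return paperclips, human_share

    print(optimize(200, cares_about_humans=False))  # (100.0, 0.0): humans' share gone
    print(optimize(200, cares_about_humans=True))   # (80.0, 20.0): humans survive

The only point of the toy is that "caring whether the humans survive" has to appear in the objective explicitly; it does not fall out of optimizing hard for something else.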

In summary: even if you don't believe me that no one knows how to create an AI that will keep on caring what happens to the people (and no one is likely to find out in time if AI research is not stopped), aren't you worried about a human actor who need not bother making sure that the AI cares what happens to the people, because this actor is troubled and wants all the people to die?

I mean, yes, some of you genuinely disbelieve that AI can or will get good enough to wrest control of the future out of the hands of humankind. But many of you consider it likely that AI technology will continue to improve (or else people wouldn't have invested so much in AI and driven the market cap of Nvidia to $3 trillion). Why so little worry?

Comments

pvg•8mo ago
You're better off not loading the question with lines like "Do you simply consider it someone else's job to worry about risks like that?". Who would want to talk to you when it sounds like you're not asking but looking to berate?
hollerith•8mo ago
I removed that sentence (from the end of my post). Thanks for the feedback. I'll try to calm myself down now.
bigyabai•8mo ago
Your question still implies a hysterical interpretation of a nonexistent featureset. I think you will struggle to foster a serious discussion without actually describing what you're worried about. "AI kills people" is not any more of a serious concern than household furniture becoming sentient and resolving to form an army that challenges humankind.

You have to describe what the actual threat is for us to treat it as an imperative issue. 99% of the time, these hypotheticals end with human error, not rogue AI.

bigyabai•8mo ago
1. If AI is latently capable of killing people using just computing power, then it was going to happen regardless. If the AI requires assistance from human actors, then it's basically indistinguishable from human actors acting alone without AI. If you are a human who puts AI in charge of a human life, you are liable for criminal negligence.

2. You cannot stop AI research over a bunch of unknowns. People will not fear an immaterial threat that has no plausible way to harm them beyond generating text. Even if that text has access to the internet, the worst that can happen has probably already been explored by human actors: no AI was needed to perpetrate catastrophes like Stuxnet, sarin gas attacks, or 9/11.

3. Some people (like myself) have been following this space since Google published BERT. In that time, I have watched LLMs go from "absolutely dogshit text generator" to "slightly less dogshit text generator". It sounds to me like you've drunk Sam Altman's Kool-Aid without realizing that Sam is bullshitting too.

philipkglass•8mo ago
Robotics progress is a lot slower than progress in disembodied AI, and disembodied AI trying to kill humanity is like a naked John von Neumann trying to kill a tiger in an arena. IMO we need to figure out AI safety before physically embodied AI (smart robots) becomes routine, but to me safety in that context looks more like traditional safety-critical and security-critical software development.

I'm aware of the argument that smart enough AI can rapidly bootstrap itself to catastrophically affect the material world:

https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...

"It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a file. Like, smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second."

As someone with a strong background in chemistry this just makes me skeptical of Yudkowsky's groundedness as a prognosticator. Biological life is not compatible with known synthesis conditions for diamond, and even superintelligence may not discover workarounds. I am even more skeptical that AI can make such advances and turn them into working devices purely by pondering/simulation, i.e. without iterative laboratory experiments.