frontpage.

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•1m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•1m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•2m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•2m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•2m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•3m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•3m ago•0 comments

Show HN: Velocity - Cheaper Linear Clone

https://velocity.quest
1•kevinelliott•4m ago•1 comment

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•5m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
1•nmfccodes•6m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
1•eatitraw•12m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•12m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•14m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•15m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•16m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•16m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
2•birdmania•16m ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
3•samasblack•18m ago•1 comment

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•19m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•20m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•21m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•23m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•23m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•23m ago•1 comment

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•24m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•25m ago•0 comments

Show HN: iPlotCSV: CSV Data, Visualized Beautifully for Free

https://www.iplotcsv.com/demo
2•maxmoq•26m ago•0 comments

There's no such thing as "tech" (Ten years later)

https://www.anildash.com/2026/02/06/no-such-thing-as-tech/
2•headalgorithm•26m ago•0 comments

List of unproven and disproven cancer treatments

https://en.wikipedia.org/wiki/List_of_unproven_and_disproven_cancer_treatments
1•brightbeige•27m ago•0 comments

Me/CFS: The blind spot in proactive medicine (Open Letter)

https://github.com/debugmeplease/debug-ME
1•debugmeplease•27m ago•1 comment

Why aren't more people here worried about AI exceeding our capabilities?

5•hollerith•8mo ago
I'm one of those people who keep saying that no one knows how to control an AI that is much more all-around capable than (organized groups of) people are, and that we should stop AI research until this is figured out. (People could keep using the models that have already been released or extensively deployed.)

But even if you don't believe me that no one knows how to control a super-capable AI, why is no one worried about some nation or disaffected group intentionally creating an AI to kill us all, as a kind of doomsday weapon? Every year the craft of creating powerful AIs becomes better understood, and researchers (recklessly, IMHO) publish this better understanding for anyone to see. We don't know whether all the knowledge needed to create an AI more capable than people will be published this year or 25 years from now, but as soon as it happens, any actor on earth capable of reading and understanding machine-learning papers, and in possession of the necessary GPUs and electricity-generating capacity, can destroy the world, or at least the human species. Why are so many of you so complacent about that risk?

In the news recently was a young man who killed some people at a fertility clinic. He was a "promortalist": someone who believes there is so much suffering in the world that the only moral response is to help all people die (so they cannot suffer any more). Eventually, the craft of machine learning will become so well understood, and access to compute so widespread and affordable, that anyone (e.g., some troubled soul living in a damp basement who happens to inherit $66 million from an eccentric uncle, or who wins a big personal-injury lawsuit against some rich corporation) will have the means to end the human experiment.

He will not have to figure out how to stay in control of the AI he unleashes. Any AI (just like any human being) will have some system of preferences: some ways the future might unfold that it prefers to others. And if you put enough optimization pressure behind almost any system of preferences, the outcome strongly tends to be incompatible with continued human survival, unless the AI has been correctly programmed to care whether the humans survive. Our troubled soul bent on ending the human experiment can simply rely on this thorny property shared by all really powerful optimizing processes.
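
To make that concrete, here is a toy sketch in Python (my own made-up illustration; the names and numbers are invented, and no real optimizer is this simple): a one-step optimizer allocating a fixed resource budget to an arbitrary goal. Unless caring about the humans is an explicit constraint, the optimum leaves them nothing.

    # Toy illustration of the argument: maximize an arbitrary goal over a
    # fixed resource budget. "human_floor" is the minimum the humans need
    # to survive; it is respected only if the optimizer was explicitly
    # built to respect it.
    def optimize(budget: float, human_floor: float, cares_about_humans: bool) -> dict:
        reserved = human_floor if cares_about_humans else 0.0
        # Goal value is simply whatever resources are devoted to it, so the
        # unconstrained optimum is "take everything".
        return {"goal": budget - reserved, "humans": reserved}

    print(optimize(100.0, 10.0, cares_about_humans=True))   # {'goal': 90.0, 'humans': 10.0}
    print(optimize(100.0, 10.0, cares_about_humans=False))  # {'goal': 100.0, 'humans': 0.0}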

In summary: even if you don't believe me that no one knows how to create an AI that will keep on caring what happens to the people (and that no one is likely to find out in time if AI research is not stopped), aren't you worried about a human actor who need not bother to make sure the AI cares what happens to the people, precisely because he is troubled and wants all the people to die?

I mean, yes, some of you genuinely disbelieve that AI can or will get good enough to wrest control of the future from humankind. But many of you consider it likely that AI technology will continue to improve (or else people wouldn't have invested so much in AI and wouldn't have driven Nvidia's market cap to 3 trillion dollars). Why so little worry?

Comments

pvg•8mo ago
You're better off not loading the question with something like "Do you simply consider it someone else's job to worry about risks like that?" Who would want to talk to you when it sounds like you're not asking but looking to berate?
hollerith•8mo ago
I removed that sentence (from the end of my post). Thanks for the feedback. I'll try to calm myself down now.
bigyabai•8mo ago
Your question still implies a hysterical interpretation of a nonexistent feature set. I think you will struggle to foster a serious discussion without actually describing what you're worried about. "AI kills people" is no more a serious concern than household furniture becoming sentient and resolving to form an army that challenges humankind.

You have to describe what the actual threat is for us to treat it as an urgent issue. 99% of the time, these hypotheticals end with human error, not rogue AI.

bigyabai•8mo ago
1. If AI is latently capable of killing people using computing power alone, then it was going to happen regardless. If the AI requires assistance from human actors, then it's basically indistinguishable from those human actors acting alone, without AI. And if you are a human who puts an AI in charge of a human life, you are liable for criminal negligence.

2. You cannot stop AI research over a bunch of unknowns. People will not fear an immaterial threat that has no plausible way to harm them besides generating text. Even if that text has access to the internet, the worst that can happen has probably already been explored by human actors: no AI was needed to produce catastrophes like Stuxnet, the sarin gas attacks, or 9/11.

3. Some people (like myself) have been following this space since Google published BERT. In that time, I have watched LLMs go from "absolutely dogshit text generator" to "slightly less dogshit text generator". It sounds to me like you've drunk Sam Altman's Kool-Aid without realizing that Sam is bullshitting too.

philipkglass•8mo ago
Robotics progress is a lot slower than progress in disembodied AI, and disembodied AI trying to kill humanity is like a naked John von Neumann trying to kill a tiger in an arena. IMO we need to figure out AI safety before physically embodied AI (smart robots) becomes routine, but to me, safety in that context looks more like traditional safety-critical and security-critical software development.

I'm aware of the argument that smart enough AI can rapidly bootstrap itself to catastrophically affect the material world:

https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans...

"It gets an immense technological advantage. If it's smart, it doesn't announce itself. It doesn't tell you that there's a fight going on. It emails out some instructions to one of those labs that'll synthesize DNA and synthesize proteins from the DNA and get some proteins mailed to a hapless human somewhere who gets paid a bunch of money to mix together some stuff they got in the mail in a file. Like, smart people will not do this for any sum of money. Many people are not smart. Builds the ribosome, but the ribosome that builds things out of covalently bonded diamondoid instead of proteins folding up and held together by Van der Waals forces, builds tiny diamondoid bacteria. The diamondoid bacteria replicate using atmospheric carbon, hydrogen, oxygen, nitrogen, and sunlight. And a couple of days later, everybody on earth falls over dead in the same second."

As someone with a strong background in chemistry, I find that this just makes me skeptical of Yudkowsky's groundedness as a prognosticator. Biological life is not compatible with known synthesis conditions for diamond, and even a superintelligence may not discover workarounds. I am even more skeptical that an AI could make such advances and turn them into working devices purely by pondering/simulation, i.e., without iterative laboratory experiments.