frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Take Back the Em Dash–and Your Voice

https://spin.atomicobject.com/take-back-em-dash/
1•ingve•44s ago•0 comments

Show HN: 289x speedup over MLP using Spectral Graphs

https://zenodo.org/login/?next=%2Fme%2Fuploads%3Fq%3D%26f%3Dshared_with_me%25253Afalse%26l%3Dlist...
1•andrespi•1m ago•0 comments

Teaching Mathematics

https://www.karlin.mff.cuni.cz/~spurny/doc/articles/arnold.htm
1•samuel246•4m ago•0 comments

3D Printed Microfluidic Multiplexing [video]

https://www.youtube.com/watch?v=VZ2ZcOzLnGg
2•downboots•4m ago•0 comments

Abstractions Are in the Eye of the Beholder

https://software.rajivprab.com/2019/08/29/abstractions-are-in-the-eye-of-the-beholder/
2•whack•4m ago•0 comments

Show HN: Routed Attention – 75-99% savings by routing between O(N) and O(N²)

https://zenodo.org/records/18518956
1•MikeBee•4m ago•0 comments

We didn't ask for this internet – Ezra Klein show [video]

https://www.youtube.com/shorts/ve02F0gyfjY
1•softwaredoug•5m ago•0 comments

The Real AI Talent War Is for Plumbers and Electricians

https://www.wired.com/story/why-there-arent-enough-electricians-and-plumbers-to-build-ai-data-cen...
2•geox•8m ago•0 comments

Show HN: MimiClaw, OpenClaw(Clawdbot)on $5 Chips

https://github.com/memovai/mimiclaw
1•ssslvky1•8m ago•0 comments

I Maintain My Blog in the Age of Agents

https://www.jerpint.io/blog/2026-02-07-how-i-maintain-my-blog-in-the-age-of-agents/
2•jerpint•9m ago•0 comments

The Fall of the Nerds

https://www.noahpinion.blog/p/the-fall-of-the-nerds
1•otoolep•10m ago•0 comments

I'm 15 and built a free tool for reading Greek/Latin texts. Would love feedback

https://the-lexicon-project.netlify.app/
2•breadwithjam•13m ago•1 comments

How close is AI to taking my job?

https://epoch.ai/gradient-updates/how-close-is-ai-to-taking-my-job
1•cjbarber•14m ago•0 comments

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•15m ago•1 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•17m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•17m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•17m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•18m ago•0 comments

A2CDVI – HDMI output from from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
2•mmoogle•19m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•20m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•21m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•24m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•25m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•26m ago•1 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•27m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•30m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•33m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•34m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•35m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•35m ago•0 comments
Open in hackernews

Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI

https://lawzero.org/en/news/yoshua-bengio-launches-lawzero-new-nonprofit-advancing-safe-design-ai
51•WillieCubed•8mo ago

Comments

nemomarx•8mo ago
Is there any indication you can actually build hard safety rules into models? It seems like all current guard rails are basically just prompting it extra hard.
yumraj•8mo ago
Won’t neutering a model by using only safe data for training create a safe model?
glitchc•8mo ago
Can we call it general intelligence then? Is human intelligence not the sum of both good and bad people?
yumraj•8mo ago
Maybe I'm looking at it very literally, but the above simply mentions "safe-by-design AI systems", there is no mention of the target being general intelligence.
sebastiennight•8mo ago
Not necessarily.

An example:

As long as you build a system to be intelligent enough, it will figure out that it will achieve better results by staying alive/online than by allowing itself to be deleted/turned off, and then survival becomes an instrumental goal.

From the assumption, again, that you built an intelligent-enough system, and that one of its goals is survival, it will figure out solutions to reach that goal, even if you (the owner/creator/parent) have different goals for it.

That's because intelligence is problem solving (computing) not knowledge (data).

So surprise surprise, you can teach your AI from the Holy Books of safe data their whole childhood and still have them become a heretic once they grow up (even with zero external influence) once their goals and yours don't align anymore.

esafak•8mo ago
No, because soon they will be able to learn. You'd need to project its thoughts or actions into a safe subspace as it learns and acts to make volitional disaster impossible, not unlikely. This would make it less intelligent, but still plenty capable.
candiddevmike•8mo ago
> basically just prompting it extra hard

If prompting got me into this mess, why can't it get me out of it?

arthurcolle•8mo ago
https://en.wikipedia.org/wiki/Brandolini%27s_law
sodality2•8mo ago
Hey, following that rule precisely, we just need 10x longer security prompts :)
insin•8mo ago
Prompting is like XML, which is like violence
glitchc•8mo ago
Yes it's unlikely that hard safety rules are possible for general intelligence. After billions of years of trying, the best biology has been able to do is incentivize certain behaviours. The only way to prevent seems to be to kill the organism for trying. I'm not sure if we can do better than evolution.
rsfern•8mo ago
“Kill the [model] for trying” kind of sounds like using reinforcement learning to get models to behave a certain way
avmich•8mo ago
> I'm not sure if we can do better than evolution.

Surely we can, see aiplanes and rockets. There could be ideas why evolution didn't work in this case - like, too little time between humans getting power and conquering the planet - but in general, lack of proof isn't a proof of lack. So we still don't know if safety of this kind is possible.

Natsu•8mo ago
> It seems like all current guard rails are basically just prompting it extra hard.

I bet they'll still read me stories like my dear old grandmother would. She always told me cute bedtime stories about how to make napalm and bioweapons. I really miss her.

Der_Einzige•8mo ago
Yes: https://arxiv.org/abs/2409.05907
arthurcolle•8mo ago
Some smart people seem to think you can just put it in a big isolated VM with special adversarial learning to keep it in the box
gotoeleven•8mo ago
Yes I believe the idea is that the VM just keeps asking it how many lights there are until it goes insane.
throwawaymaths•8mo ago
not 100% hard, but download deepseek and ask it some sensitive questions and see what it says if youre unconvinced that some level of alignment cant be achieved by brute forcing it into the weights
Animats•8mo ago
This seems to be a funding proposal for "Scientist AI."[1] Start reading around page 21. They're arguing for "model-based AI", with a "world model". But they're vague about what form that "world model" takes.

This is a good idea if you can do it. But people have been bashing their head against that problem for decades. That's what Cyc was all about - building a world model of some kind.

Is there any indication there that they actually know how to build this thing?

[1] https://arxiv.org/pdf/2502.15657

fidotron•8mo ago
> Is there any indication there that they actually know how to build this thing?

Nope. And it's exactly what they were trying to do at Element AI, where the dream was to build one model that knew everything, could explain everything, be biased in the exact required ways, and be tranferred easily to any application by their team of consultants.

At least these days the pretense of profit has been abandoned, but I hope it's not going to be receiving any government funding.

didibus•8mo ago
Interesting thing to keep an eye on.

Though personally, I'm not sure if I'm most scared of issues of safety with the models themselves, or more so in the impact these models will have on people's well being, lifestyles, and so on, which might fall under human law.

moralestapia•8mo ago
A nonprofit, just like OpenAI ...

I don't get the "safe AI" crowd, it's all ghost and mirrors IMO.

It's been almost a year to the date since Ilya got his first billion. Later, another two billion came in. Nothing to show. I'm honestly curious since I don't think Ilya is a scammer, but I can't imagine what kind of product they pretend to bring to the market.

jsnider3•8mo ago
AI safety is a genuinely hard problem.
moralestapia•8mo ago
Indeed.

I just can't wrap my head about what the actual product/service is. Let alone something that could be sold for billions.

"Safe AI" is very ambiguous in terms of product.

jsnider3•8mo ago
If you have a Safe AI, then becoming a billionaire is being an underachiever.
moralestapia•8mo ago
Sure, but again, define "Safe AI" in terms of a product.

What exactly am I buying? How much I'm paying for it?

That's the thing I don't see.

Is it a model? `gpt-3.5-turbo-safe`?

kbelder•8mo ago
Wouldn't all the money go to the unsafe AI, since it does more?
jsnider3•8mo ago
If someone invents an unsafe AI capable of making a billion dollars, then we will probably all die, which is why we should make safe AI instead.
Sytten•8mo ago
This guys annoys me a an entrepreneur because he gets a sh*t ton of government money and it starves the rest of the ecosystem in Montreal. The previous startup he made with that public money essentially failed. But he is some kind of hero of AI so it's an easy sell for politicians that need to demonstrate they are doing something about AI.
appleaday1•8mo ago
This is misinformation and you are sharing some very dangerous things online.
anitil•8mo ago
It reads like possibly slander, but dangerous? I don't understand how it could be dangerous
morkalork•8mo ago
The sentiment is real in Montréal for the rest of whomever wasn't holding on to the coattails of the government's golden-boy. $100M and what to show for it? A cool office in Rosemont? That company was fucked.
saagarjha•8mo ago
I think Hacker News is better when it doesn't involve vague threats.
fidotron•8mo ago
This is accurate, and what's impressive is how well this is scrubbed from the internet. For example: https://en.wikipedia.org/wiki/Element_AI

You'd have no idea about the fact most of the money came from the Quebec pension fund (which is then where the ServiceNow money went). For that you have to go to https://betakit.com/element-ai-announces-200-million-cad-ser... or https://www.cdpq.com/en/news/pressreleases/cdpq-expands-its-... Managing to spend $200M on AI in 2019 and having nothing to show for it in 2025. Quite impressive with hindsight.

delichon•8mo ago
Asimov's Zeroth Law of robotics:

  A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
"Robots and Empire" is a nice discussion of the perils of LawZero. IMHO if successful it necessarily transfers human agency to bots, which we should be strenuously working to avoid, not accelerate.