frontpage.

You are the reason I am not reviewing this PR

https://github.com/NixOS/nixpkgs/pull/479442
2•midzer•1m ago•0 comments

Show HN: FamilyMemories.video – Turn static old photos into 5s AI videos

https://familymemories.video
1•tareq_•3m ago•0 comments

How Meta Made Linux a Planet-Scale Load Balancer

https://softwarefrontier.substack.com/p/how-meta-turned-the-linux-kernel
1•CortexFlow•3m ago•0 comments

A Turing Test for AI Coding

https://t-cadet.github.io/programming-wisdom/#2026-02-06-a-turing-test-for-ai-coding
2•phi-system•3m ago•0 comments

How to Identify and Eliminate Unused AWS Resources

https://medium.com/@vkelk/how-to-identify-and-eliminate-unused-aws-resources-b0e2040b4de8
2•vkelk•4m ago•0 comments

A2CDVI – HDMI output from the Apple IIc's digital video output connector

https://github.com/MrTechGadget/A2C_DVI_SMD
1•mmoogle•5m ago•0 comments

CLI for Common Playwright Actions

https://github.com/microsoft/playwright-cli
3•saikatsg•6m ago•0 comments

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
2•HamoodBahzar•7m ago•1 comment

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•11m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
3•gmays•11m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•13m ago•1 comment

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•13m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•16m ago•1 comment

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•20m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•21m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•21m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•21m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•22m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•25m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•25m ago•1 comment

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•27m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•28m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•29m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•29m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
3•Brajeshwar•29m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•29m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•30m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•31m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•32m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•38m ago•0 comments

General principles for the use of AI at CERN

https://home.web.cern.ch/news/official-news/knowledge-sharing/general-principles-use-ai-cern
104•singiamtel•2mo ago

Comments

singiamtel•2mo ago
I found this principle particularly interesting:

    Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
conartist6•2mo ago
It's still just a platitude. Being somewhat critical is still giving some implicit trust. If you didn't give it any trust at all, you wouldn't use it at all! So they endorse trusting it is my read, exactly the opposite of what they appear to say!

It's funny how many official policies leave me thinking that they're corporate cover-your-ass policies, and that if they really meant it they would have found a much stronger and plainer way to say it.

miningape•2mo ago
I think you're reading what you want to read into that - but that's the problem, it's too ambiguous to be useful.
hgomersall•2mo ago
That doesn't follow. Say you write a proof for something I request; I can then check that proof. That doesn't mean I don't derive any value from being given the proof. A lack of trust does not imply no use.
MaybiusStrip•2mo ago
"You can use AI but you are responsible for and must validate its output" is a completely reasonable and coherent policy. I'm sure they stated exactly what they intended to.
geokon•2mo ago
If you have a program that looks at CCTV footage and IDs animals that go by, is a human supposed to validate every single output? What if it's thousands of hours of footage?

I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up to actual use cases.

pu_pe•2mo ago
I don't see it so bleakly. Using your analogy, it would simply mean that if the program underperforms compared to humans and starts making a large number of errors, the human who set up the pipeline will be held accountable. If the program is responsible for a critical task (i.e. the animal will be shot depending on the classification) then yes, a human should validate every output or be held accountable in case of a mistake.
mattkrause•2mo ago
Exactly.

If some dogs chew up an important component, the CERN dog-catcher won't avoid responsibility just by saying "Well, the computer said there weren't any dogs inside the fence, so I believed it."

Instead, they should be taking proactive steps: testing and evaluating the AI, adding manual patrols, etc.

conartist6•2mo ago
I take an interest in plane crashes and human factors in digital systems. We understand that there's a very human aspect of complacency that is often read about in reports of true disasters, well after that complacency has crept deep into an organization.

When you put something on autopilot, you also massively accelerate your process of becoming complacent about it -- which is normal, it is the process of building trust.

When that trust is given but not deserved, problems develop. Often the system affected by complacency drifts. Nobody is looking closely enough to notice the problems until they become proto-disasters. When the human finally is put back in control, it may be to discover that the equilibrium of the system is approaching catastrophe too rapidly for humans to catch up on the situation and intercede appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance on the side of the road was a) driven by an AI, b) that the driver had learned to trust, while c) the driver - though theoretically responsible - had become complacent.

pu_pe•2mo ago
Sure, but not every application has dramatic consequences such as plane or car crashes. I mean, we are talking about theoretical physics here.
oasisaimlessly•2mo ago
Half-Life showed a plausible story of how high energy physics could have unforeseen consequences.
conartist6•2mo ago
Theoretical? I don't see any reason why complacency would be fine in science. If it's a high school science project and you don't actually care at all about the results, sure.
geokon•2mo ago
The problem is that the original statement is too black and white. We make tradeoffs based on costs and feasibility

"if the program underperforms compared to humans and starts making a large amount of errors, the human who set up the pipeline will be held accountable"

Like.. compared to one human? Or an army of a thousand humans tracking animals? There is no nuance at all. It's just unreasonable to make a blanket statement that humans always have to be accountable.

"If the program is responsible for a critical task .."

See how your statement has some nuance, and recognizes that some situations require more accountability and validation than others?

SiempreViernes•2mo ago
> So they endorse trusting it is my read, exactly the opposite of what they appear to say!

They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.

Sharlin•2mo ago
Interesting in what sense? Isn't it just stating something plainly obvious?
jacquesm•2mo ago
It is, but unfortunately the fact that to you - and me - it is obvious does not mean it is obvious to everybody.
Sharlin•2mo ago
Quite. One would hope, though, that it would be clear to prestigious scientific research organizations in particular, just like everything else related to source criticism and proper academic conduct.
SiempreViernes•2mo ago
Did you forget the entire DOGE episode where every government worker in the US had to send a weekly email to an LLM to justify their existence?
Sharlin•2mo ago
I'd hold CERN to a slightly higher standard than DOGE when it comes to what's considered plainly obvious.
SiempreViernes•2mo ago
Sure, but the way you maintain this standard is by codifying rules that are distinct from the "lower" practices you find elsewhere.

In other words, because the huge DOGE clusterfuck demonstrated what horrible practices people will actually enact, you need to put this into the principles.

piokoch•2mo ago
Oddly enough, nowadays CERN is very much like a big corpo: yes, they do science, but there is a huge overhead of corpo-like people who run CERN as an enterprise that should bring "income".
elashri•2mo ago
Can you elaborate on this, hopefully with details and sources, including the revenue stream that CERN is getting as a corporation?
mk89•2mo ago
I want to see how obvious this becomes when you start to add agents left and right that make decisions automagically...
Sharlin•2mo ago
Right. It should be obvious that at an organization like CERN you're not supposed to start adding autonomous agents left and right.
xtiansimon•2mo ago
Where is “human oversight” in an automated workflow? I noticed the quote didn’t say “inputs”.

And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?

SiempreViernes•2mo ago
Someone's inputs are someone else's outputs; I don't think you have spotted an interesting gap. Certainly just looking at the dials will do for monitoring functioning, but it falls well short of validating system performance.
monkeydust•2mo ago
The really interesting thing is how that principle interplays with their pillars and goals, i.e. if the goal is to "optimize workflow and resource usage", then having a human in the loop at all points might limit or fully erode this ambition. Obviously it's not that black and white: certain tasks could be fully autonomous while others require human validation, and you could still be net positive - but this challenge is not exclusive to CERN, that's for sure.
contrarian1234•2mo ago
Do they hold the CERN Roomba to the same standard? If it cleans the same section of carpet twice is someone going to have to do a review?
conartist6•2mo ago
Feels like the useless kind of corporate policy, expressed in terms of the loftiest ideals instead of how to make real trade-offs with costs.
jordanpg•2mo ago
Organizations above a certain size absolutely cannot help themselves but publish this stuff. It is the work of senior middle managers. Ark Fleet Ship B.

I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoint, and god knows how much money to consultants, I still have no idea what any of this has to do with my work.

alkonaut•2mo ago
99% of corporate policies exist so that you can point to a document and say "well it's not my fault, I made the policy and someone didn't follow it".
marginalia_nu•2mo ago
You don't even need to go as far as saying someone didn't follow the policy, you can just say you need to review the policies. That way, conveniently enough, nobody is really ever at fault!
SiempreViernes•2mo ago
It is an organisation-wide document of "General principles"; how could it possibly have anything more specific to say about the inherently context-specific trade-offs of each specific use of AI?
mariusor•2mo ago
Well, CERN is not a corporation, it can afford not to optimize for "costs", whatever you mean by that in this context.
oytis•2mo ago
What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
LudwigNagasena•2mo ago
CERN is in principle opposed to military research. That and stuff like lawfulness, fairness, sustainability, privacy are just general CERN principles restated for fluff.
oblio•2mo ago
> CERN’s convention states: “The Organization shall have no concern with work for military requirements and the results of its experimental and theoretical work shall be published or otherwise made generally available.”

CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.

oytis•2mo ago
Sorry, looks like I misunderstood what "having no concern" means.
danparsonson•2mo ago
Yeah it's written as in, "we don't concern ourselves with that", i.e. it's none of their business
jacquesm•2mo ago
It's a bit of a fig leaf though, any high energy physics has military implications.
tempay•2mo ago
What does the LHC physics program have to do with military applications?
miningape•2mo ago
You'd be surprised how creative the military can be when there's demand
oskarkk•2mo ago
Research on interactions between particles can probably be helpful for nuclear weapons R&D.
fainpul•2mo ago
Doesn't all of physics have some military implications?

But at least they make everything public knowledge, instead of keeping it secret and only selling it to one nation.

oblio•2mo ago
> any physics has military implications.

Fixed that for you. That's been the case since we discovered sticks and stones, but it doesn't mean that CERN is lying when they say they want to focus on non-military areas.

Let's not assume the worst of an institution that's been fairly good for the world so far.

jacquesm•2mo ago
> Fixed that for you.

You didn't fix anything.

> Let's not assume the worst of an institution that's been fairly good for the world so far.

I'm not assuming the worst. I'm just being realistic, and I think it would be nice if CERN explicitly acknowledged the fact that what they do there could have serious implications for weapons technology.

oblio•2mo ago
By that logic a tire manufacturer should do the same.

You're really grasping at straws here. CERN doesn't need to do anything. Nor do universities, for example.

jacquesm•2mo ago
CERN is explicit about something they know isn't true. They could just say nothing.

I'm fine with CERN, its scientific mission, and whatever they come up with there, and I have contributed to their cause in a minor way, so I can do without the lecturing.

If you do research it is easy to stick your head in the sand and pretend that as an academic you have no responsibility for the outcome. But that's roughly analogous to a gun manufacturer pushing the 'guns don't kill people, people do' angle. CERN has a number of projects on the go whose only possible outcome will be more powerful or more compact weapons.

For instance, anti-matter research. If and when we manage to create anti-matter in larger quantities and to do so more easily, it will have a potentially massive impact on the kinds of threats societies have to deal with. To pretend that this is just abstract research is willfully abdicating responsibility.

Once it can be done it will be done, and once it is done it is only a matter of time before it is used. Knowledge, once gained, cannot be unlearned. See also: the atomic bomb. Now, CERN isn't the only facility where such research takes place, and I'm well aware of the geopolitical impact of being 'late' when it comes to such research. I would just like them to be upfront about it. There is a reason why most particle accelerators and associated goodies are funded by the various departments of defense.

Your typical university research lab is not doing stuff with such impact, though the biology departments of some of them are investigating things that can easily be weaponized, and those should come with similar transparency about possible uses.

oblio•2mo ago
Antimatter would also revolutionize energy production...
jacquesm•2mo ago
Not necessarily. Making something go boom is a lot easier than making that same thing produce controlled energy over a longer period of time.
SideburnsOfDoom•2mo ago
Sure, though "have no concern with" comes across to me less like "we avoid building anything that could conceivably be used as a weapon by anyone", and more like "we're not in that business, but it's not our concern if you manage to stab yourself with it. It's not secret".
GuB-42•2mo ago
One reason I can think of is with regard to confidentiality. A lot of AI services are controlled by companies in the US or China, and they may not want military research to leak to these countries.

Classified projects obviously have stricter rules, such as air gaps, but sometimes the limits are a bit fuzzy, like a non-classified project that supports a classified one. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets or who see the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So no AI for military projects could be a step in that direction.

Temporary_31337•2mo ago
blah, blah, people will simply use it as they see fit
Schlagbohrer•2mo ago
It's about as detailed and helpful as saying, "Don't be an asshole"
elashri•2mo ago
In such a scientific environment, there are gentlemen's agreements about many things that boil down to "Don't be an asshole" or "Be considerate of others", with some hard requirements at this or that point for things that are very serious.
blitzar•2mo ago
"Don't be an asshole" could solve world peace.
eisbaw•2mo ago
So general that it says nothing. Very corporate.
DisjointedHunt•2mo ago
This corporate crap makes me want to puke. It is a consequence of the forced bureaucracy from European regulations, particularly the EU AI Act, which is not well thought out and actively adds liability and risk to anyone on the continent touching AI, including old-school methods such as bank credit scoring systems.
fsh•2mo ago
CERN is neither corporate, nor in the EU.
DisjointedHunt•2mo ago
The content is corporate. The EU AI Act is extraterritorial in its reach. You don't have to be in the EU to adopt this very set of "AI Principles", but if you don't, you carry liability.
GranularRecipe•2mo ago
What I find interesting is the implicit prioritisation: explainability, (human) accountability, lawfulness, fairness, safety, sustainability, data privacy and non-military use.
peepee1982•2mo ago
Might be implicit prioritization, but I don't think it's prioritized by importance so much as by likelihood of being a problem.
annjose•2mo ago
I agree, though I would prefer to highlight the first half of the first item - transparency. Also, perhaps make Safety an independent principle rather than combining it with Security.

These are a good set of principles that any company (or individual) can follow to guide how they use AI.

macleginn•2mo ago
‘Sustainability: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.’ [1]

‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]

I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.

[1] https://home.web.cern.ch/news/official-news/knowledge-sharin... [2] https://home.cern/science/engineering/powering-cern

hengheng•2mo ago
That is equivalent to a continuous draw of 150 MW. Not great, not terrible.

Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.
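As a quick sanity check of those figures, here is a minimal sketch in Python; the 8,760 hours per year and the roughly 4,300 kWh/year average UK household consumption are assumed values, not numbers from the thread:

    # Convert CERN's quoted 1.3 TWh/year into an average continuous draw.
    annual_energy_twh = 1.3
    hours_per_year = 8760
    continuous_draw_mw = annual_energy_twh * 1_000_000 / hours_per_year  # TWh -> MWh, then divide by hours
    print(round(continuous_draw_mw))   # ~148 MW, in line with the "150 MW" above

    # Cross-check the "300,000 UK homes" comparison, assuming ~4,300 kWh per home per year.
    homes_powered = annual_energy_twh * 1_000_000_000 / 4300  # TWh -> kWh
    print(round(homes_powered))        # ~302,000 homes

Both results are consistent with the figures quoted above.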

macleginn•2mo ago
I presume that this policy is not about building data centres but about the use of AI by CERN employees, so essentially about the marginal cost of generating an additional Python script, or something. I don't know if this calculation ever makes sense on the global scale, but if one's job is to literally spend energy to produce knowledge, it becomes even less straightforward.
tempfile•2mo ago
How did that turn into "not great, not terrible"? That's still 300,000 homes that could otherwise be powered. It's an enormous amount of electricity!
ceejayoz•2mo ago
And all we get out of CERN is… the entire modern economy.

Their ledgers are balanced just fine for a while.

tempfile•2mo ago
This is a very silly argument. The energy expended should be justified on its own (scientific!) merits. The fact that the web happened to be invented at CERN has almost nothing to do with the fact that they burn through terajoules of electricity every year.
ceejayoz•2mo ago
> The energy expended should be justified on its own (scientific!) merits.

Is the scientific merit of such a thing always immediately apparent?

hengheng•2mo ago
In your opinion, what would instead justify the total cost of devoting 10'000 people's lives to basic research?
Jean-Papoulos•2mo ago
Humans have poured resources into the pursuit of largely impractical pure knowledge for millennia. This has been said of an incredible number of human scientific endeavors, before they found use in other domains.

Also, the web was invented at CERN.

piokoch•2mo ago
All this impractical knowledge people accumulated over centuries gave you cars, planes, computers, air conditioning, antibiotics, iPhones, and, in fact, everything you have gained since humankind left the trees. So I would rather burn these 1.3 terawatt-hours on this than on, say, running Facebook or mining bitcoin.
hexo•2mo ago
From that picture it looks like they want to do everything with AI. This is very sad.
dude250711•2mo ago
> Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.

This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day 1 that you are using AI as mandated, and that it is not increasing productivity as mandated. Play it dumb, and protect yourself from "if it's not working out then you are using it wrong" attacks.

mark_l_watson•2mo ago
Good guidelines. My primary principle for using AI is that it should be used as a tool under my control to make me better, by making it easier to learn new things and by offering alternative viewpoints. Sadly, AI training seems headed towards producing 'averaged behaviors', while in my career the best I had to offer employers was an ability to think outside the box and have different perspectives.

How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or the lack thereof, should guide proper principles for using AI.

nathan_compton•2mo ago
I'm not optimistic about this in the short term. Creative and diverse viewpoints seem to come from diverse life experiences, which AI does not have and which, if they are present in the training data, are mostly washed out. Statistical models are like that. The objective function is to predict close to the average output, after all.

In the long term I am at least certain that AI can emulate anything humans do en masse where there is training data, but without unguided self-evolution I don't see them solving truly novel problems. They still fail to write coherent code if you go a little outside the training distribution, in my experience, and that is a pretty easy domain, all things considered.

bryanlarsen•2mo ago
The vast majority of advances seem to be of the form "do X for Y", where neither X nor Y is novel but the combination is. I have no idea whether AI is going to be better than humans at this, but it seems like it could be.