frontpage.

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•1m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•1m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•2m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•2m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•5m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•9m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•9m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
1•tosh•14m ago•0 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•15m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•16m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•19m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•21m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•21m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•22m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•22m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•24m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•25m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•28m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•30m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•30m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•30m ago•1 comments

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•33m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
5•sakanakana00•37m ago•1 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•39m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•39m ago•1 comments

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•41m ago•1 comments

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
4•Nive11•41m ago•6 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•45m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
3•chartscout•47m ago•1 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•50m ago•0 comments

UK threatens action against X over sexualised AI images of women and children

https://www.theguardian.com/technology/2026/jan/12/uk-threatens-action-against-x-over-sexualised-ai-images-of-women-and-children
35•chrisjj•3w ago

Comments

Festro•3w ago
This further exposes just how pointless and ill-thought-out the Online Safety Act was in the UK. It does nothing to actually limit harm at the source, nor does it empower the UK's public bodies to take immediate action.

Ironic that the minister who spearheaded that awful bill as Tech minister (Peter Kyle) is now acting as the government spokesperson for this debacle as Business Minister. The UK needs someone who knows how tech and business work to tackle this, and that's not Peter Kyle.

A platform suspension in the UK should have been swift, with clear terms for how X can be reinstated. As much as it appears Musk is doubling down on letting Grok produce CSAM as some form of free speech, the UK government should treat it as a limited breach or bug that the vendor needs to resolve, whilst taking action to block the site causing harm until they've fixed it.

Letting X and Grok continue to do harm, and get free PR, is just the worst case scenario for all involved.

roryirvine•3w ago
The draft Online Safety Bill was first published in 2021, was substantially re-written and re-introduced in early 2022, was amended over the course of the next 18 months, and eventually passed into law as the Online Safety Act in October 2023.

Peter Kyle was in opposition until July 2024, so how could he have spearheaded it?

Festro•3w ago
Because he implemented it during his tenure, in July 2025. He didn't come up with it; he spearheaded its actual implementation. Sorry if that wasn't clear.
tonyedgecombe•3w ago
The first conviction under the bill was in March 2024, so that makes no sense.
Festro•3w ago
Why would it make no sense? Like many bills/acts, it came into effect in stages. You're referring to new laws/crimes that came into effect in January 2024.

In my original comment I'm referring to the Act's powers to compel companies to actually do things. I don't know exactly when the various parts that would constitute that came into effect, but for the purposes of my post I'm going by the dated reference on Peter Kyle's own website to holding companies accountable.

"As of the 24th July 2025, platforms now have a legal duty to protect children"

https://www.peterkyle.co.uk/blog/2025/07/25/online-safety-ac...

I don't understand why people are taking issue with that. Peter Kyle is the minister who delivered the measures from the bill that a lot of people are angry about, and this latest issue on X is just another red flag that the bill is poorly worded and thought out, putting too much emphasis on ID age checks for citizens rather than on actually stopping any abuse. Peter Kyle is the one who called out objections to the bill as being on the "side of predators". Peter Kyle is now the one, despite having moved department, who is somehow commenting on this issue.

I'm totally happy to call out the Tories and the prior ministers who worked on the Bill/Act, but Kyle implemented it, made reckless comments about it, and is now trying to look proactive about an issue the Act covers but addresses so ineffectively.

blitzar•3w ago
Partisan politics has rotted people's brains. I wonder if it is by design, to lower people's critical thinking skills, or if it is just a fringe benefit of the tribalism it creates.
PearlRiver•3w ago
Reminds me of something emperor Trump said. "I can shoot somebody on Fifth Avenue and they will still vote for me".

Sometimes people just dig themselves into a hole and they start going off the deep end. Why did it take until 1944 for someone to blow up Hitler?

chrisjj•3w ago
> this latest issue on X is just another red flag that the bill is poorly worded and thought out, putting too much emphasis on ID age checks for citizens rather than on actually stopping any abuse.

The bill is designed to protect children from extreme sexual violence, extreme pornography and CSAM.

Not to protect adults from bikinification.

It is working as designed.

chrisjj•3w ago
The OSA very much does empower action e.g. against images of extreme sexual violence and extreme pornography.

It does not empower platform suspension for bikinification.

And there's as yet no substantiation of your claim Grok produces CSAM.

0xy•3w ago
This is a purely political move to censor dissent by a government that polls like a minor party and is slated for electoral wipeout next election. If it were not, they'd issue the same threats to Gemini and ChatGPT.

https://www.digitaltrends.com/computing/googles-gemini-deeme...

bmacho•3w ago
Flagged for the title implying men have no rights. That's totally uncalled for and I hope such submission titles are not allowed here.
graemep•3w ago
I agree with the underlying point and the social bias it reflects (which I have experienced myself), but the title here is (as usual) just the article's title, so The Guardian is to blame rather than HN.

I think the solution is not to disallow the titles, but to comment on them and draw attention to the sexism in the article.

bmacho•3w ago
The solution is absolutely to disallow offending titles. The same principle that would make HN moderators take down a "Kill all the Jews" title from the front page should apply to this one too.

Submission titles should be the original article titles, as long as those aren't problematic.

graemep•3w ago
It's not as offensive.

I agree this is problematic, but I am inclined to see it as an opportunity to discuss the problem and illustrate how widespread it is. We can also mention real issues, such as the fact that about half of all domestic abuse victims in the UK are male (if you count emotional abuse; otherwise it's still 40%), that more than half of rapes in the US are of men (because of the huge number of prison rapes), etc.

jraph•3w ago
I don't think this title implies this. The title says "There were sexualised AI images of women and children, and the UK threatens X over this". What more than this do you read in this title?

Is there actually a significant number of problematic sexualised AI images of men on X that the title fails to mention? If not, the follow-up question would be: what are you actually complaining about, exactly?

Women are often sexualised, way more than men. Would it be more comfortable for you if this fact were invisibilized?

moralestapia•3w ago
Agree 100% and thanks for bringing this up.

Sexual abuse towards men is as prevalent as it is towards women.

chrisjj•3w ago
The HN title is true to the source and should not be flagged.
saaaaaam•3w ago
The title is the title of the article published by the Guardian. It has not been editorialised by the person submitting the article. If you have an issue with the title of the article, flagging the submission is not hugely useful. Email the editor.
leobg•3w ago
So I guess in the 90s they would’ve sued Adobe for not putting spyware into Photoshop?

If you believe in democracy, and the rule of law, and citizenship, then the responsibility obviously lies with people who create and publish pictures, not the makers of tools.

Think of it. You can use a phone camera to produce illegal pictures. What kind of world would we live in if Apple were required to run an AI filter on your pics to determine whether they comply with the law?

A different question is whether X actually hosts generated pictures that are illegal in the UK. In that case, X acts as a publisher, and you can sue them along with the creator for removal.

Symbiote•3w ago
Photoshop has had algorithms since the late 1990s or so to detect and prevent the editing of images of currency.

The power of the AI tools is so great in comparison to a non-AI image editor that there's probably debate on who -- the user, or the operator of the AI -- is creating the image.

https://en.wikipedia.org/wiki/EURion_constellation

chrisjj•3w ago
> The power of the AI tools is so great in comparison to a non-AI image editor that there's probably debate on who -- the user, or the operator of the AI -- is creating the image.

Compute power is irrelevant. What's relevant in law is who is causing the generation, and that's obviously the operator.

chrisjj•3w ago
Sorry. Correction: What's relevant in law is who is causing the generation, and that's obviously the user.
graemep•3w ago
There is a big difference between running spyware on things running locally, and monitoring how people use a service running on your own computers. The former means you have to exfiltrate data, the latter is monitoring data you already have.

Photoshop in the 90s was the former, Grok is the latter.

SteveMqz•3w ago
Apple does run software for detecting CSAM on pictures users store to the cloud.
chrisjj•3w ago
That's to ensure Apple compliance, not user compliance.
chrisjj•3w ago
> A different question is if X actually hosts generated pictures that are illegal in the UK.

If the answer was Yes, these Govt. complaints would claim so. They don't.

The Govt's problem is imagery it calls 'barely legal'. I.e. "legal but we wish it wasn't." https://www.theguardian.com/society/2025/aug/03/uk-pornograp...

dunhuang_nomad•3w ago
This move makes perfect sense to me. I think people are a bit too online-pilled to think about this as they would about any other product.

If you produce a product that causes harm, and there are steps that could be taken to prevent that harm, you should be held responsible for it. Before the Trump admin dropped the Boeing case, Boeing was going to be held liable for design defects in its Max planes that caused crashes. The government wasn't going after Boeing because a plane crashed, but because Boeing did not take adequate steps to prevent that from happening.

chrisjj•3w ago
> If you produce a product that causes harm, and there are steps that could be taken to prevent that harm, you should be held responsible for it.

This is wholly unrealistic. Any product can be used to cause harm and there are always steps that could be taken to prevent that. E.g. ceasing sales. But that would often do more harm than it prevents.

dunhuang_nomad•3w ago
I appreciate the pushback. I read your argument as saying that every product can be used to cause harm, and I agree with that take. The question is: did the manufacturer do everything reasonable to limit the harm caused?

You can't go after a company that makes kitchen knives if those are used to harm someone, because there's nothing reasonable it could have done to prevent that harm, and there's a legitimate use case for knives in cooking.

In this case, my understanding is that other companies (OpenAI and Anthropic) have done more to limit harm, whereas xAI hasn't.

Nasrudith•3w ago
Personally I can't help but think that 'reasonable' is a dangerous legal standard due to its unpredictability, subjectivity and assumed values and knowledge. Is it reasonable to put powdered aluminum and iron oxide into paint? What about when the paint is going onto a Zeppelin? Oh wait, those are thermite's ingredients. Oops. Is it reasonable for the paint seller to be held liable for selling paint with common reagents?
dunhuang_nomad•3w ago
Things aren't black and white, and that's why we have humans and the law. There's no clear definition of what probable cause means in search warrants, but its "subjectivity" doesn't mean you should have no searches.

But in this case it's pretty easy: other model providers have in fact limited harm better than Grok. So you don't even need something arbitrary; just do it as well as your competitors.

chrisjj•3w ago
> The question is did the manufacturer do everything reasonable to limit the harm caused?

OK. A different and better question.

The problem is, would it be considered reasonable to avoid harm to the mental wellbeing of bikinified persons at the cost of harm to all the users enjoying a service supported by bikinification earnings?

moralestapia•3w ago
I can use Photoshop to create a sexualized image of someone irl.

How's that any different?

chrisjj•3w ago
In UK law, it isn't.

The practical difference is simply that now it is happening far more frequently.

edgineer•3w ago
And people do do this, and have been making crazy and creepy pictures online since the internet's inception. It's never been that much of an issue until now.
roryirvine•3w ago
If you were to provide Photoshop as a Service on a sufficiently large scale, you would also be expected to take all reasonable measures to prevent it being used to disseminate CSAM and other abusive material.

So, no different to the standard that X should be held to.

saaaaaam•3w ago
I guess the difference is that it requires a certain amount of time, ability and skill to make creepy pictures using Photoshop - which limits the number of people who will actually do that.

Photoshop also does not have a direct distribution channel built in, which means that even if you had the wherewithal, the knowledge, and nothing better to do, your creepy images would likely stay on your computer and never see the light of day.

As I understand it, with Grok you simply give it a sexualised prompt and it does it for you in seconds, and immediately distributes the results to a potential audience of hundreds, thousands or even millions of people, where they will likely stay for a long time.

To my mind that's definitely rather different.

ulfw•3w ago
Don't threaten. Do it.

Indonesia has. Malaysia has. Why not you?

https://www.bbc.com/news/articles/cg7y10xm4x2o

chrisjj•3w ago
> Indonesia has. Malaysia has.

They banned porn sites too.

> Why not you?

The UK Govt has no power to ban it, since it is legal.

ulfw•3w ago
So do many US states.
chrisjj•3w ago
So perhaps ask why those US states allow it (Grok+X).
ulfw•3w ago
Two things really: a) I have given up all hope for the US. b) You do realise the title of this is about "UK threatens action...", right?
thw_9a83c•3w ago
I think this is a lost cause. Even if the mainstream services are blocked or forced to comply, there will always be hundreds of lesser-known tools and services offering the same features. At this point, nobody has the power to close this can of worms.

Besides, who is going to decide when people's images are sexualized enough? Are images of Elon Musk in a bikini alright because he's not a woman or a child?

chrisjj•3w ago
> Are images of Elon Musk in a bikini alright because he's not a woman or a child?

Didn't he consent?