
Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•2m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
1•Brajeshwar•6m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
2•Brajeshwar•6m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
1•Brajeshwar•6m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•9m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•12m ago•0 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•14m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•14m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
2•vinhnx•15m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•19m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•24m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•28m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•29m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•30m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•37m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•40m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•40m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•41m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•42m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•42m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•43m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
3•pseudolus•43m ago•1 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•47m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•48m ago•0 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•49m ago•0 comments

I Built a Movie Recommendation Agent to Solve Movie Nights with My Wife

https://rokn.io/posts/building-movie-recommendation-agent
4•roknovosel•49m ago•0 comments

What were the first animals? The fierce sponge–jelly battle that just won't end

https://www.nature.com/articles/d41586-026-00238-z
2•beardyw•57m ago•0 comments

Sidestepping Evaluation Awareness and Anticipating Misalignment

https://alignment.openai.com/prod-evals/
1•taubek•57m ago•0 comments

OldMapsOnline

https://www.oldmapsonline.org/en
2•surprisetalk•1h ago•0 comments

What It's Like to Be a Worm

https://www.asimov.press/p/sentience
3•surprisetalk•1h ago•0 comments

Grok Sexual Images Draw Rebuke, France Flags Content as Illegal

https://finance.yahoo.com/news/grok-sexual-images-draw-rebuke-180354505.html
58•akutlay•1mo ago

Comments

akutlay•1mo ago
It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
zajio1am•1mo ago
This is already possible, just download open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services and even more that people on Hacker News advocate for that.
nozzlegear•1mo ago
Why does that seem absurd to you?
7bit•1mo ago
Don't feed the troll
nutjob2•1mo ago
Safety isn't just implemented via system prompts, it's also a matter of training and fine tuning, so what you're saying is incorrect.

If you think people here think that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

More broadly, if you don't reasonably regulate your own models and related work, you attract government regulation.

zajio1am•1mo ago
When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children. There is a level of generalization in these models. So it seems to me that arguing for restrictions that could be effectively implemented only via prompt validation is just indirect argumentation against open-weight models.
chrisjj•1mo ago
> When these models are fine tuned to allow any kind of nudity

If you're suggesting Grok is fine-tuned to allow any kind of nudity, some evidence would be in order.

The article suggests otherwise: "The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute."

Ancapistani•1mo ago
I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, I once attempted to use ChatGPT to translate a journal, handwritten in Russian, that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something written by someone who had already committed a violent act.

> If you think people here think that models should enable CSAM you're out of your mind.

Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.

> There is such thing as reasonable safety, it not all or nothing. You also don't understand the diversity of opinion here.

I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.

wolvoleo•1mo ago
True, CSAM should be blocked by all means. That's clear as day.

However I think for Europe the regular sexual content moderation (even in text chat) is way over the top. I know the US is very prudish but here most people aren't.

If you mention something erotic to a mainstream AI, it will immediately shut the conversation down, which is super annoying because it blocks using it for those discussion topics. It feels a bit like foreign morals are being forced upon us.

Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standards - similar to the way I can switch off safe search in Google.

However CSAM generation should obviously be blocked and it's very illegal here too.

sam_lowry_•1mo ago
Funnily enough, Mistral is as censored as ChatGPT.

You have to search Hugging Face for role-playing models to get a decent level of erotic content, and even that doesn't guarantee a pleasant experience.

chrisjj•1mo ago
Some misunderstanding here. This article makes absolutely no mention of CSAM. The objection is to "sexual content on X without people’s consent".
johneth•1mo ago
It's nonconsensual generation of sexual content of real people that is breaking the law. And things like CSAM generation which is obviously illegal.

> It feels a bit like foreign morals are being forced upon us.

Welcome to the rest of the world, where US morals have been forced upon us for decades. You should probably get used to it.

judahmeek•1mo ago
Whether it was the "first" definitely depends on your standards & focus: https://cloudsecurityalliance.org/blog/2025/02/19/deepseek-r...
akutlay•1mo ago
Also see: https://timesofindia.indiatimes.com/technology/tech-news/it-...
chrisjj•1mo ago
“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

Not possible.

SpicyLemonZest•1mo ago
It's extremely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, which won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.
ben_w•1mo ago
> Not possible.

To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

Also… I think they probably could solve this. AI image analysis is a thing, and AI that estimates age from an image has been a thing for ages. The idea of throwing the entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter isn't even ridiculous compared to the scale of all the other things going on right now.
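The gate ben_w describes could be sketched roughly as below. This is a minimal illustration, not any provider's actual pipeline: `minor_likelihood` is a hypothetical stand-in for a real vision classifier (age estimation, NSFW detection), stubbed here only so the gating logic is runnable.

```python
# Sketch of a post-generation "allowed/forbidden" filter: every generated
# image is scored by a classifier before it may be released.

from dataclasses import dataclass

@dataclass
class GeneratedImage:
    prompt: str
    pixels: bytes  # placeholder for real image data

def minor_likelihood(image: GeneratedImage) -> float:
    """Stub classifier. A real system would score the pixels with a
    trained vision model; this stub keys off the prompt for illustration."""
    return 0.9 if "child" in image.prompt.lower() else 0.05

def release_gate(image: GeneratedImage, threshold: float = 0.1) -> bool:
    """Allow distribution only when the classifier's score is below threshold."""
    return minor_likelihood(image) < threshold
```

The design choice being debated in the thread is essentially where `threshold` sits and how often a real classifier gets it wrong at the tails, not whether such a gate can exist.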

ls612•1mo ago
These models generate probably a billion images a day. If getting it wrong for even one of those images is enough to get the entire model banned, then it probably isn't possible, and this de facto outlaws all image models. That may precisely be the point of this, tbh.
lokar•1mo ago
If they can't prevent child porn, then it should be banned.
ls612•1mo ago
Should photoshop be outlawed? What about MS Paint? Both of them I’m pretty sure are capable of creating this stuff.

Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict-liability crime, so do you support prosecuting whichever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?

lokar•1mo ago
Photoshop is fine, running a business where you produce CSAM for people with photoshop is not. And this has been very clear for a while now.

I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.

That is just what the law says today (AIUI), and is consistent with how it has been applied.

ls612•1mo ago
Somehow I doubt the prosecutor will apply the same standard to the other image generation models, which I bet (obviously without evidence given the nature of this discussion) can be convinced by a motivated adversary to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the west and pointing that out gets you nothing but downvotes. patio11 once put it that pointing out how power is exercised is the first thing that those who wield power prohibit when they gain it.
lokar•1mo ago
You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.

I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement make it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and they are not showing that they are fixing it (even if that means some degradation in the capability of the system/service), then they will be charged.

One of the strengths of the western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not Code. This makes it flexible enough to deal with new situations, and it is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.

ls612•1mo ago
When the GP said “not possible” they were referring to the strict letter of the law that I was, not to your lower standard of “make a good effort to fix it”. Law is not code because that gives the lawgivers discretion to exercise power arbitrarily while convincing the citizens that they live under the “rule of law”. At least the Chinese for all their faults don’t bother with the pretense.
lokar•1mo ago
If you reject the foundation of liberal western civilization I don’t know what to tell you.

Move to china?

ls612•1mo ago
I’m just pointing out how the world works in real life not saying that it is desirable. Thinking in terms of that distinction is very useful.
chrisjj•1mo ago
> When the GP said “not possible” they were referring to the strict letter of the law that I was

I, the GP, was referring to what I quoted:

“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material [CSAM],” and I agree this is in effect the law at least here in UK.

zajio1am•1mo ago
> Photoshop is fine, running a business where you produce CSAM for people with photoshop is not. And this has been very clear for a while now.

What if Photoshop is provided as a web service? That would be analogous to running image generation as a service. In both cases the provider takes input from the user (in one case a textual description, in the other a sequence of mouse events) and generates an image through an automated process, without specific intentional input from the provider.

Note that in this case, using them to produce CSAM was against the terms of service, so the business was tricked into producing CSAM.

And there are other automated services that could be used for CSAM generation, for example automated photo booths. Should their operators be held liable if someone uses them to produce CSAM?

ben_w•1mo ago
If you really care, ask a lawyer, not a tech forum.

I anticipate there is already case law/precedent showing the shape of what is allowed/forbidden, and most of us won't know the legal jargon necessary to understand the answer.

Or answers, plural, because laws vary by jurisdiction.

Most of us here are likely to be worse at drawing such boundaries than an LLM. LLMs can pass at least one of the bar exams; most of us probably cannot.

chrisjj•1mo ago
> Note that in this case using them for producing CSAM

There's no such report in this article.

chrisjj•1mo ago
> Photoshop is fine, running a business where you produce CSAM for people with photoshop is not.

The law disagrees - at least in UK. CSAM is illegal regardless of tool used.

> I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created

The article makes no report that happened. And it does report that is prohibited by the tool in question. But it does then quote a child safety advocate saying tools should not be allowed to "generate this material", so is misleading in the extreme.

nl•1mo ago
Even the OP's quote made it clear this isn't the case. Companies need to show they rigorously tested that the model doesn't do this.

It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.

ben_w•1mo ago
> These models generate probably a billion images a day.

Collectively, probably more. Grok? Not unless you count each frame of a video, I think.

> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.

If the threshold is one in a billion… well, the risk is from adversarial outcomes, so you can't just toss a billion attempts at it and see what pops out. But a billion images is affordable: if it's anything like Stable Diffusion you can stop early, and my experiments with SD suggested the energy cost even for a full generation is only $0.0001/image*, so a billion is merely $100k.
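The back-of-envelope figure checks out; a quick sanity check of the arithmetic, using ben_w's assumed $0.0001 per image (integer micro-dollars to keep it exact):

```python
# ben_w's estimate: $0.0001 of energy per full Stable Diffusion generation.
cost_per_image_microdollars = 100        # $0.0001 expressed in micro-dollars
images = 1_000_000_000                   # one billion generations
total_usd = cost_per_image_microdollars * images // 1_000_000
print(total_usd)  # 100000 → $100k
```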

Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there; then all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.

> That may precisely be the point of this tbh.

Perhaps. But I don't think we need that excuse if this was the goal, and I am not convinced this is the goal in the EU for other reasons besides.

* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html

krapp•1mo ago
>To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.

ben_w•1mo ago
> No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

Not in Europe it hasn't, and definitely not for specifically image generation, where it seems to be filling the same role as clipart, stock photos, and style transfer that can be done in other ways.

Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far around the economy, only with weird toys like this one currently causing drama.

> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

I wish I could've shown this kind of message to people 3.5 years ago, or even 2 years ago, saying that AI will never take over because we can always just switch it off.

Mind you, two years ago I did, and they still didn't like it.

pureagave•1mo ago
I'm sorry to tell you this, but the EU has already been lost.
wolvoleo•1mo ago
Because we're not at the forefront of AI development? It also means we have less to lose when the bubble bursts. I'm quite happy with the policies here. And we will become more independent from US tech. It'll just take time.
GolfPopper•1mo ago
>No, they likely won't. AI has become far too big to fail at this point.

Things that cannot happen will not happen. "AI" (a.k.a. LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.

nozzlegear•1mo ago
> AI has become far too big to fail at this point.

This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. It's a novelty at best, something people try a couple times and then forget about.

krapp•1mo ago
Every industry that uses images and art in any way - entertainment, publishing, science, advertising, you name it - is already investing in image and video generation. If any business in these fields isn't already exclusively using LLMs to generate their content, I promise you they're working on it as aggressively as they can afford to.

Grok is a novelty, but that's Grok.

nozzlegear•1mo ago
Meh, I don't buy it. People dislike AI generated images and art more than they dislike AI generated, well, anything. AI images adorning an article, blog post, announcement or product listing is the hallmark of a cheap, bottom of the barrel product these days, if not an outright scam.
SpicyLemonZest•1mo ago
People dislike AI generated art in the same way that they dislike cheap injection molded plastic. When they inspect it in detail, they wish it were something more expensive and artisan, but most of the time they barely notice it and just see that the world is a bit more colorful than a blank page or unfinished metal panel would be.

For context, the top 5 HN links as of this comment contain one attributed (https://xeiaso.net/notes/2026/year-linux-desktop/, characters page discloses Stable Diffusion usage) and one likely (https://www.madebywindmill.com/tempi/blog/hbfs-bpm/, high-context unattributed image with no Tineye results) AI generated image.

xena•1mo ago
FWIW, replacing that is on my TODO list, but my TODO list is long.
SpicyLemonZest•1mo ago
Entirely reasonable if you ask me!
krapp•1mo ago
Businesses don't care, it's more important to the bottom line to use AI than not.

And they know that eventually people will just learn to accept it.

ben_w•1mo ago
I am uncertain about this.

Yes, GenAI content is cheap.

But a business whose output is identical to everyone else's, because everyone is using the same models to solve the same problems, has no USP and no signal to customers to say why they're different.

The meme a while back about OpenAI having no moat? That's just as true for businesses depending on any public AI tool. If you can't find something that AI fails at, and also show this off to potential customers, then your business is just a lottery ticket with extra steps.

krapp•1mo ago
Most businesses don't compete on difference - most competitors are virtually indistinguishable from one another. Rather they tend to compete on brand identity and loyalty.

I think businesses assume the output of AI can be the same as with their current workflow, just with the benefit of cutting their workforce, so all upside and no downside.

I also suspect that a lot of businesses (at least the biggest ones) are looking into hosting their own LLM infrastructure rather than depending on third-party services, but even if not, there are plenty of "indispensable" services that businesses rely on already. Look at AWS.

chrisjj•1mo ago
> Most businesses don't compete on difference - ... Rather they tend to compete on brand identity and loyalty.

Without a difference, brand identity and loyalty are impossible to build.

chrisjj•1mo ago
> We don't want models that do this.

But plenty enough people do want them. Grok is meeting demand.

ben_w•1mo ago
"We the people" in aggregate.

"Many individuals" != democratic majority.

To argue otherwise is to claim that the ~1% of the population who are into this are going to sway the governments or the people they represent.

chrisjj•1mo ago
If we're talking about undressing, there is no aggregate. Some people want something; others want them not to have it. Simple.

What the former want is not illegal, so the fact that they are a minority is irrelevant. Minorities have rights too.

If we're talking about genuine CSAM, that's very different, and not even limited to undressing.

ben_w•1mo ago
> If we're talking about genuine CSAM, that very different and not even limited to undressing.

Why would you think I was talking about anything else?

Also, "subset" != "very different"

> What the former want is not illegal. So the fact they are a minority is irrelevent. Minorites have rights too.

This is newsworthy because non-consensual undressing of images of a minor, even by an AI, already passes the requisite threshold in law and by broad social agreement.

This is not a protected minority.

chrisjj•1mo ago
> Why would you think I was talking about anything else?

Because this thread shows CSAM confused with other, e.g. simple child pornography.

And even the source of the quote isn't helping. Clicking its https://www.iwf.org.uk/ "Find out why we use the term ‘child sexual abuse’ instead of ‘child pornography’." gives 403 - Forbidden: Access is denied.

Fortunately a good explanation of the difference can be found here: https://www.thorn.org/blog/ai-generated-child-sexual-abuse-t...

> This is newsworthy because non-consensual undressing of images of a minor, even by an AI

That's not the usage in question. The usage is "generate realistic pictures of undressed minors". Undressing images of real people is prohibited.

BigTTYGothGF•1mo ago
Then maybe they shouldn't go to market.
pureagave•1mo ago
AI is a national defense issue. No nation has the luxury of stopping its AI companies without risking the loss of national sovereignty.
belter•1mo ago
So child porn is now a national security issue?
squigz•1mo ago
Lumping image gen models, LLMs, and other forms of recent machine learning altogether and dressing it up in the "National Defence" ribbon doesn't seem like a great idea.

I don't think the ability for citizens to make deep fake porn of whoever they want is the same as a country not investing in practical defensive applications of AI.

dragonwriter•1mo ago
> AI is a national defense issue.

AI image editors attached to social media networks with a design that allows producing AI edits (including, but not limited to, nonconsensual intimate images and child pornography) of other users' media without consent are not a national defense issue, and, even to the extent that AI arguably is a national defense issue, those particular applications can be curtailed entirely by a nation without any adverse impact on national defense.

You can distort any issue by zooming out to orbital level and ignoring the salient details.

UncleMeat•1mo ago
"We have to make the revenge porn machine for national defense" is the sort of thing that makes people light Bay Area tech buses on fire.
ben_w•1mo ago
I'm 90% sure LLMs are, just from how important code is, but image generators? Nah. They're as relevant to national sovereignty as having a local film industry: more than zero, because money is fungible, but still really really low.
lokar•1mo ago
Then your business can fairly be ruled illegal.

You don't have the right to act in violation of the law merely because it's the only way to make a buck.

kelseyfrog•1mo ago
In practice, once a business reaches a size threshold, the law is creatively decided to preserve its existence rather than terminate it. Legality is a function of economics.
lokar•1mo ago
Until people have had enough and push back

And if you want to change the law to allow the business, go for it. But until then, we must follow the law.

ben_w•1mo ago
> Legality is a function of economics.

Sometimes it is. Sometimes "democracy" isn't just a buzzword.

X.com has been blocked by poorer nations than France (specifically, Brazil) for not following local law.

belter•1mo ago
Possible or not, what about starting with a criminal investigation, to force disclosure and find out whether Musk's company had child porn in the training data?
fragmede•1mo ago
It probably doesn't have pictures of fishes driving cybertrucks, but it's able to generate those, so I doubt there'd need to be CSAM in the database, but maybe I don't know how these things really work.
belter•1mo ago
AI generates child porn, HN downvotes a proposal for an investigation...
dragonwriter•1mo ago
> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

> Not possible.

Note that the description of the accusation earlier in the article is:

> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.

It may be impossible to perfectly regulate what content the model can create, but it is quite practical for the Grok product to enforce the consent of the user whose content is being operated on, both before content can be generated based on it and, after the content is generated, before it can be viewed by or distributed to anyone else.

chrisjj•1mo ago
> it is quite practical for the Grok product to enforce consent of the user whose content is being operated on

No, because it cannot even ID that user.

xenospn•1mo ago
If it's possible to create a model that generates photorealistic images based on a single line of text, it is 100% possible to restrict the output.
wolvoleo•1mo ago
I'm sure it's possible. If anything, they can just run an AI check after generation, similar to the way Google makes sure it doesn't return CSAM in its results. If they can filter that, the AI providers can check their own output too.
chrisjj•1mo ago
I think you've mistaken CSAM for child pornography.
maplethorpe•1mo ago
Sure it is. Forbid training models on images of humans, humanoids, or living creatures, and they won't be able to generate images of those things. It's not like AI is some uncontrollable magic force that hatched out of an egg. It can only output what you put in.
ChrisArchitect•1mo ago
Earlier:

https://news.ycombinator.com/item?id=46460880

https://news.ycombinator.com/item?id=46466099

https://news.ycombinator.com/item?id=46468414

josefritzishere•1mo ago
It would be Musk automating CSAM. This is how we're starting 2026?
chrisjj•1mo ago
The article doesn't mention CSAM. It is about "created sexualized images of people including minors" and CSAM is not that.
TheAlchemist•1mo ago
What's amazing to me is that this is silenced by HN. It should be a major topic of discussion here.
chrisjj•1mo ago
What makes you say it is silenced by HN?
JKCalhoun•1mo ago
This seems like it should be on the HN front page.

And yesterday.

chrisjj•1mo ago
Surely it is missing just because many have flagged it. But that's far short of silencing it.
JKCalhoun•1mo ago
I imagine—but then that is just the HN community silencing it. Maybe flagging should have some different kind of weighting?
chrisjj•1mo ago
It is not silenced.
ndsipa_pomu•1mo ago
I think discussions should be made immune to user flagging once they have more than a certain number of comments (50? 100?). If it's a problematic topic, then the administrator(s) should be able to flag it if they deem it necessary.
chrisjj•1mo ago
That would surely suffer the cobra effect.
akutlay•1mo ago
This was in fact flagged (though there was no indication in the title) yesterday, approximately two hours after it reached the second page.
TheAlchemist•1mo ago
It got upvoted quite quickly, then flagged. The way the algorithm works, if a hot topic is flagged for some time, the story will never show up on the front page.
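The dynamic described above can be illustrated with the widely circulated approximation of HN's gravity-based ranking. Caveat: the real algorithm and its flag penalties are not public, so the formula is an approximation and the penalty multiplier below is purely an assumption.

```python
# Approximate HN ranking: score ~ (points - 1)^0.8 / (age_hours + 2)^1.8,
# with flags modeled as a multiplicative penalty (< 1). Because the age
# term in the denominator compounds with any penalty, a story flagged
# while hot sinks and does not recover even as votes accumulate.

def rank_score(points: int, age_hours: float, flag_penalty: float = 1.0) -> float:
    gravity = (age_hours + 2) ** 1.8   # decay outpaces vote growth over time
    return ((points - 1) ** 0.8 / gravity) * flag_penalty

hot = rank_score(points=50, age_hours=1)
flagged = rank_score(points=50, age_hours=1, flag_penalty=0.2)
```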
thrance•1mo ago
Grok breaks France's hate-speech laws all the time but they're only going after it because it can create images of naked people? Musk's propaganda nexus should have been banned years ago here, but not for this stupid reason.
johneth•1mo ago
It makes sexual images of real people without their consent. That's what's breaking the law.
chrisjj•1mo ago
Is an image of someone wearing only a bikini seriously claimed to be sexual here?

Not by this article, for sure.

"The service prohibits pornography involving real people’s likenesses and sexual content involving minors, which is illegal to create or distribute.

Still, users have prompted Grok to digitally remove clothing from photos — mostly of women — so the subjects appeared to be wearing only underwear or bikinis."

archagon•1mo ago
Try doing that to your coworker and report back on how HR describes it in your offboarding meeting.
chrisjj•1mo ago
The question is about an image not an action.
tukarsdev•1mo ago
Removing people's clothes without their consent is assault; it doesn't matter that in another setting, where they did consent, it would be fine. It is obviously sexual if you look at the intent of the people doing it, not at the clothing itself.
chrisjj•1mo ago
> Removing people's clothes without their consent is assault

Didn't you know? Grok does not actually remove people's clothes. Instead it pastes from photos of /other people who are already naked/.

tukarsdev•1mo ago
It makes it look realistic with their likeness and body shape, though, so it's not merely "pasting" from photos of other people. And quite honestly, I find it morally objectionable to have a tool that makes violating consent and bodily autonomy so trivial. Filters exist; they should be used. It's nothing like Photoshop: it runs on their servers, using their software, and is then uploaded, by them, onto their website. Yes, I definitely hold X and Grok accountable for the harm it causes. It's nothing like offline software.