frontpage.

New reporting rules end crypto’s tax secrecy era

https://www.pymnts.com/cryptocurrency/2026/new-reporting-rules-end-cryptos-tax-secrecy-era/
1•hhs•53s ago•0 comments

Show HN: Browser in C and Lua for the Playdate Console

https://github.com/remysucre/ORBIT
1•remywang•1m ago•0 comments

NumPy Enhancement Proposal 21: Simplified and explicit advanced indexing

https://numpy.org/neps/nep-0021-advanced-indexing.html
1•dynm•1m ago•0 comments

Life and Death at the County Fair

https://bittersoutherner.com/issue-no-12/life-and-death-at-the-county-fair
1•noleary•1m ago•0 comments

Codex Front end Skill: Unique Designs within one shot

https://github.com/vipulgupta2048/codex-skills
1•vipulgupta2048_•6m ago•1 comment

Grok Blames 'Lapses in Safeguards' After AI Chatbot Posts Sexual Images of Kids

https://www.forbes.com/sites/tylerroush/2026/01/02/grok-blames-lapses-in-safeguards-after-ai-chat...
2•randycupertino•7m ago•1 comment

AI Maestro Agent Orchestration

https://github.com/23blocks-OS/ai-maestro
1•RyanShook•9m ago•0 comments

TIL: I am an open-source contributor

https://beasthacker.com/til/i-am-an-open-source-contributor.html
1•oumua_don17•9m ago•0 comments

Spotify Wrapped season, don't outsource your love of music to AI

https://www.theguardian.com/music/2025/dec/03/spotify-wrapped-ai-create-your-own-playlists
1•cdrnsf•14m ago•0 comments

Solving Agent Context Loss: A Beads and Claude Code Workflow for Large Features

https://jx0.ca/solving-agent-context-loss/
1•jarredkenny•18m ago•1 comment

IMS Toucan – Text-to-Speech for over 7000 Languages

https://github.com/DigitalPhonetics/IMS-Toucan
1•punnerud•18m ago•0 comments

Show HN: A flight simulator for difficult leadership conversations

https://shadowscoping.com/
2•rezat•19m ago•0 comments

Self-driving cars aren't nearly a solved problem

https://strangecosmos.substack.com/p/self-driving-cars-arent-nearly-a
1•el_nahual•23m ago•0 comments

Show HN: Snowflake Emulator – Local Snowflake Development with Go and DuckDB

https://github.com/nnnkkk7/snowflake-emulator
1•sr-white•23m ago•2 comments

Roundup of Events for Bootstrappers in January 2026

https://bootstrappersbreakfast.com/2025/12/23/roundup-of-january-2026-bootstrappers-events/
1•skmurphy•24m ago•1 comment

Lynkr – Multi-Provider LLM Proxy

https://github.com/Fast-Editor/Lynkr
1•vishalveera•24m ago•1 comment

How to read more? We might take instruction from a more leisurely age

https://www.historytoday.com/archive/out-margins/new-year-readers-resolutions
3•hhs•24m ago•0 comments

Common prefix skipping, adaptive sort

http://smalldatum.blogspot.com/2026/01/common-prefix-skipping-adaptive-sort.html
2•coffepot77•24m ago•0 comments

Home Assistant

https://www.home-assistant.io/
1•elsewhen•25m ago•0 comments

2026 will be my year of the Linux desktop

https://xeiaso.net/notes/2026/year-linux-desktop/
70•todsacerdoti•30m ago•36 comments

The Physics of Ideas: Reality as a Coordination Problem

https://fuck.fail
2•shoes_for_thee•31m ago•1 comment

Veil: Client-Side Steganography

https://veil.offseq.com/
2•jonbaer•31m ago•0 comments

Carbon Costs Quantified

https://www.astralcodexten.com/p/carbon-costs-quantified
1•thelastgallon•31m ago•0 comments

Show HN: Website that plays the lottery every second

https://lotteryeverysecond.lffl.me/
2•Loeffelmann•33m ago•0 comments

Wayfarer Labs is about to be OverWorld

https://wayfarerlabs.ai
3•overworld•35m ago•1 comment

Software Error Will Force 325,000 Californians to Replace Real IDs

https://www.nytimes.com/2026/01/02/us/california-real-id-dmv-error.html
3•bookofjoe•37m ago•1 comment

EmacsConf 2025 Notes

https://sachachua.com/blog/2026/01/emacsconf-2025-notes/
3•JNRowe•39m ago•0 comments

Tell HN: I shipped a script-based language filter and an onboarding tour

1•rankiwiki•39m ago•0 comments

One line, one agent: LLM-native language NERD goes agent-first

https://www.nerd-lang.org/agent-first
1•gnanagurusrgs•42m ago•1 comment

Show HN: Wip – Watch and reload any process using pluggable hooks

https://github.com/system32-ai/wip
2•debarshri•44m ago•0 comments

Grok Sexual Images Draw Rebuke, France Flags Content as Illegal

https://finance.yahoo.com/news/grok-sexual-images-draw-rebuke-180354505.html
26•akutlay•2h ago

Comments

akutlay•2h ago
It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
zajio1am•1h ago
This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more absurd that people on Hacker News advocate for that.
nozzlegear•45m ago
Why does that seem absurd to you?
7bit•24m ago
Don't feed the troll
nutjob2•42m ago
Safety isn't just implemented via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.

If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.

More broadly, if you don't reasonably regulate your own models and related work, then it attracts government regulation.

wolvoleo•6m ago
True, CSAM should be blocked by all means. That's clear as day.

However, I think that for Europe the regular sexual content moderation is way over the top. I know the US is very prudish, but here most people aren't. As an example, I was at a NYE dinner party where we had a lengthy discussion about female ejaculation (trying to avoid using the more common slang word for it :) and several of my friends commented on their experiences. I imagine this isn't done in the US, but here it's quite normal among friends, just less so in a work setting (hence the NSFW label being pretty appropriate).

If you mention something like that to a mainstream AI, it will immediately shut down, which is super annoying because it blocks using it for such discussion topics.

Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standards.

akutlay•2h ago
Also see: https://timesofindia.indiatimes.com/technology/tech-news/it-...
chrisjj•2h ago
“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

Not possible.

SpicyLemonZest•1h ago
It's extremely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, whose models won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.
ben_w•1h ago
> Not possible.

To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

Also… I think they probably could solve this. AI image analysis is a thing. AI that estimates age from an image has been a thing for ages. The idea of throwing the entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter isn't even ridiculous compared to the scale of everything else going on right now.
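
Something like the following is the shape of the gate being described: a separately trained classifier that every generated image must pass before it leaves the service. This is only a sketch; the classify() interface, its scores, and the thresholds are hypothetical stand-ins, not any real provider's API.

    # Hypothetical post-generation safety gate. classify() stands in for a
    # separately trained vision model (NSFW score + apparent-age estimate).
    from dataclasses import dataclass

    @dataclass
    class SafetyVerdict:
        nsfw_score: float        # 0.0 (benign) .. 1.0 (explicit)
        min_apparent_age: float  # youngest-looking person detected; inf if none

    def classify(image_bytes: bytes) -> SafetyVerdict:
        # Placeholder: a real deployment would run the trained
        # allowed/forbidden filter here.
        return SafetyVerdict(nsfw_score=0.0, min_apparent_age=float("inf"))

    def gate(image_bytes: bytes) -> bytes | None:
        # Return the image only if it passes the filter; otherwise block it.
        v = classify(image_bytes)
        if v.nsfw_score > 0.8:  # explicit content: always block
            return None
        if v.nsfw_score > 0.3 and v.min_apparent_age < 18.0:
            return None  # borderline content depicting a possible minor
        return image_bytes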

ls612•1h ago
These models generate probably a billion images a day. If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models. That may precisely be the point of this tbh.
lokar•1h ago
If they can't prevent child porn, then it should be banned.
ls612•1h ago
Should Photoshop be outlawed? What about MS Paint? I'm pretty sure both of them are capable of creating this stuff.

Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict liability crime, so do you support prosecuting whichever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?

lokar•59m ago
Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.

I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.

That is just what the law says today (AIUI), and is consistent with how it has been applied.

ls612•42m ago
Somehow I doubt the prosecutor will apply the same standard to the other image generation models, which I bet (obviously without evidence, given the nature of this discussion) can be convinced by a motivated adversary to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the West, and pointing that out gets you nothing but downvotes. As patio11 once put it, pointing out how power is exercised is the first thing that those who wield power prohibit once they gain it.
lokar•33m ago
You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.

I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement makes it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and they are not showing that they are fixing it (even if that means some degradation in the capability of the system/service), then they will be charged.

One of the strengths of the Western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not Code. This makes it flexible enough to deal with new situations, and it is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.

ls612•12m ago
When the GP said “not possible”, they were referring to the strict letter of the law, as I was, not to your lower standard of “make a good effort to fix it”. Law is not code because that gives lawgivers discretion to exercise power arbitrarily while convincing citizens that they live under the “rule of law”. At least the Chinese, for all their faults, don't bother with the pretense.
nl•1h ago
Even the OP's quote made it clear this isn't the case. Companies need to show they rigorously tested that the model doesn't do this.

It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.

ben_w•59m ago
> These models generate probably a billion images a day.

Collectively, probably more. Grok? Not unless you count each frame of a video, I think.

> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.

If the threshold is one in a billion… well, the risk is from adversarial outcomes, so you can't just toss a billion attempts at it and see what pops out. But as for the cost of a billion images: if it's anything like Stable Diffusion you can stop early, and my experiments with SD suggested the energy cost even for a full generation is only $0.0001/image*, so a billion is merely $100k.
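
Multiplying out the figures above, just to make the arithmetic explicit (the numbers are the comment's own estimates, not independent measurements):

    cost_per_image = 1e-4           # USD per full generation, per the SD estimate
    images = 1_000_000_000          # one billion test generations
    print(images * cost_per_image)  # 100000.0, i.e. about $100k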

Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there; then all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.

> That may precisely be the point of this tbh.

Perhaps. But I don't think we need that excuse if this was the goal, and I am not convinced this is the goal in the EU for other reasons besides.

* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html

krapp•1h ago
>To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."

No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.

ben_w•54m ago
> No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.

Not in Europe it hasn't, and definitely not specifically for image generation, which seems to be filling the same role as clipart, stock photos, and style transfer that can be done in other ways.

Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far through the economy, except in weird toys like this one that are currently causing drama.

> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.

I wish I could've shown this kind of message to the people who, 3.5 years ago, or even 2 years ago, were saying that AI will never take over because we can always just switch it off.

Mind you, 2 years ago I did, and they still didn't like it.

pureagave•20m ago
I'm sorry to tell you this, but the EU has already been lost.
wolvoleo•1m ago
Because we're not at the forefront of AI development? It also means we have less to lose when the bubble bursts. I'm quite happy with the policies here. And we will become more independent from US tech; it'll just take time.
GolfPopper•48m ago
>No, they likely won't. AI has become far too big to fail at this point.

Things that cannot happen will not happen. "AI" (aka LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.

nozzlegear•48m ago
> AI has become far too big to fail at this point.

This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. That's a novelty at best, something people try a couple of times and then forget about.

krapp•6m ago
Every industry that uses images and art in any way - entertainment, publishing, science, advertising, you name it - is already investing in image and video generation. If any business in these fields isn't already exclusively using LLMs to generate their content, I promise you they're working on it as aggressively as they can afford to.

Grok is a novelty, but that's Grok.

BigTTYGothGF•1h ago
Then maybe they shouldn't go to market.
pureagave•22m ago
AI is a national defense issue. No nation has the luxury of stopping its AI companies without risking the loss of national sovereignty.
belter•20m ago
So child porn is now a national security issue?
squigz•11m ago
Lumping image-gen models, LLMs, and other forms of recent machine learning all together and dressing it up in the "National Defence" ribbon doesn't seem like a great idea.

I don't think citizens' ability to make deepfake porn of whomever they want is the same thing as a country not investing in practical defensive applications of AI.

lokar•1h ago
Then your business can fairly be ruled illegal.

You don't have the right to act in violation of the law merely because it's the only way to make a buck.

kelseyfrog•1h ago
In practice, once a business reaches a certain size threshold, the law is creatively interpreted to preserve its existence rather than terminate it. Legality is a function of economics.
lokar•58m ago
Until people have had enough and push back

And if you want to change the law to allow the business, go for it. But until then, we must follow the law.

ben_w•43m ago
> Legality is a function of economics.

Sometimes it is. Sometimes "democracy" isn't just a buzzword.

X.com has been blocked by poorer nations than France (specifically, Brazil) for not following local law.

belter•1h ago
Possible or not, what about starting with a criminal investigation to force disclosure and find out whether Musk's company had child porn in the training data?
fragmede•56m ago
It probably doesn't have pictures of fish driving Cybertrucks in its training data, but it's able to generate those, so I doubt there'd need to be CSAM in the dataset. But maybe I don't know how these things really work.
belter•18m ago
AI generates child porn, HN downvotes a proposal for an investigation...
dragonwriter•48m ago
> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”

> Not possible.

Note that the description of the accusation earlier in the article is:

> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.

While it may be impossible to perfectly regulate what content the model can create, it is quite practical for the Grok product to enforce the consent of the user whose content is being operated on, both before content can be generated based on it and, after the content is generated, before it can be viewed by or distributed to anyone else.
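
A minimal sketch of that two-stage consent gate (every name here is illustrative, not an actual X or Grok API):

    # Illustrative two-stage consent gate: the depicted user must approve
    # before generation, and again before the result becomes distributable.
    class ConsentStore:
        def __init__(self):
            self.approvals: dict[str, set[str]] = {}  # user -> approved request ids

        def approve(self, user: str, request: str) -> None:
            self.approvals.setdefault(user, set()).add(request)

        def has_consent(self, user: str, request: str) -> bool:
            return request in self.approvals.get(user, set())

    def run_model(request: str) -> str:
        return f"generated:{request}"  # stand-in for the actual model call

    def generate_edit(depicted: str, request: str, store: ConsentStore) -> str:
        # Gate 1: no generation at all without the depicted user's consent.
        if not store.has_consent(depicted, request):
            return "blocked: no consent to generate"
        result = run_model(request)
        # Gate 2: hold the output privately until consent to distribute.
        if not store.has_consent(depicted, request + ":publish"):
            return "held: generated but not viewable by anyone else"
        return result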

jsheard•26m ago
> it is quite practical for the Grok product to enforce the consent of the user whose content is being operated on

It could enforce the consent of the user who posted the source image, but anyone can post an image of anyone else, so that wouldn't count for much.

xenospn•3m ago
If it's possible to create a model that generates photorealistic images based on a single line of text, it is 100% possible to restrict the output.
ChrisArchitect•1h ago
Earlier:

https://news.ycombinator.com/item?id=46460880

https://news.ycombinator.com/item?id=46466099

https://news.ycombinator.com/item?id=46468414

josefritzishere•5m ago
It would be Musk automating CSAM. This is how we're starting 2026?