
Grok turns off image generator for most after outcry over sexualised AI imagery

https://www.theguardian.com/technology/2026/jan/09/grok-image-generator-outcry-sexualised-ai-imagery
74•beardyw•16h ago

Comments

westpfelia•15h ago
Now only paying Grok subscribers can make CSAM. Super cool.
nickmyersdt•15h ago
Therefore we know that a proportion of paying grok subscribers will cause harm to real victims. This isn't an abstract debate about free expression.

Non-consensual intimate imagery harms real people.

CSAM normalizes and facilitates abuse of real children.

Grok, and everyone involved in it or similar endeavours, facilitate abuse.

literalAardvark•14h ago
Paying subscribers are trivial to track down and convict if they're making CSAM.

In a way, leaving it open as a honeypot is the best action.

janice1999•14h ago
Doubtful. The first thing Musk did was fire the safety team at Twitter.
Hamuko•14h ago
Safety people are also quitting Xitter themselves.

https://bsky.app/profile/caseynewton.bsky.social/post/3mbwqh...

Hamuko•14h ago
Makes perfect business sense. Where else would these users go for their CSAM-generation needs? They have no other option but to pay!
Urahandystar•15h ago
Took them long enough. This was predictable and dangerous. It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble even if the execution is haphazard and horrendous. The combination of X's userbase and that technology made this almost inevitable.
nickmyersdt•15h ago
The goal itself is flawed, not just the execution.

If you build a system explicitly designed to have no content boundaries, and it produces CSAM, that's not a failure of execution - that's the system working as designed. You don't get credit for noble intentions when the outcome was entirely foreseeable.

Deciding to place no limits on what an AI will generate is itself a value judgment. It's choosing to enable every possible use, including the worst ones. That's not principled neutrality; it's moral abdication dressed up as libertarianism.

maplethorpe•14h ago
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.

Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human in some way (an ability that we have all possessed for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).

bakies•10h ago
Just like his guise of "Platform of Free Speech" this is an intentional marketing tool and not at all his nobility.
ben_w•8h ago
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

Are those goals noble? This is the same guy who also said "with AI we are summoning the demon" and whose self-justification for getting a trillion dollar Tesla bonus deal involved the phrase "robot army"?

Havoc•15h ago
Probably one of the most weak-ass responses to a crisis ever. How was this not done within hours? Or, if they can’t manage that, at least within hours of it hitting mainstream news?
pjc50•15h ago
Crisis? It was an intentional product launch. They assumed they'd be able to "get away with it" and that media outrage would not translate into effective legal action.
Havoc•14h ago
That does seem plausible given how blatant it was

CaaS

soco•14h ago
Move fast and break things? Or, innovation at all costs? Or, business value here and now? Or... (add more marketing buzzwords)
praptak•15h ago
They first tried to manage it by putting the blame 100% on their pedophile users and obviously absolving themselves of any responsibility (cue tired analogies with knife makers not responsible for stabbings).

Fortunately this narrative did not gain traction.

close04•14h ago
> cue tired analogies with knife makers not responsible for stabbings

The knife maker will be in hot water if you ask them for a knife and you're very specific about how you'll break the law with it, and they just give it to you and do nothing else about it (equivalent to the prompt telling the LLM exactly the illegal thing you want).

Even more if the knife they made is illegal, an automatic knife or a dagger (equivalent to the model containing the necessary information to create CSAM).

rsynnott•9h ago
It's hard to believe that they didn't know that they had this problem before launching; given the volume of material, it's not like it can be difficult to drag out of the offending magic robot.

I'd assume they were just blindsided by the response; they're likely in real danger of getting either DNS-delisted or outright banned in several jurisdictions.

pjc50•15h ago
Presumably in response to https://www.telegraph.co.uk/business/2026/01/08/musks-x-coul... and others. I've seen a claim that Spain is referring X for prosecution over this as well.

It's just been restricted to paying customers, and that decision could be driven as much by cost as by outrage.

Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

richsouth•14h ago
So only PAYING customers can make CSAM and distribute it openly. Nice one.
rsynnott•13h ago
The dreaded bluetick becomes a shade ickier.
xiphias2•8h ago
No one can, but it's much easier to verify / prosecute people using credit cards (especially as credit card companies take it very seriously as well).
rsynnott•13h ago
> Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

Bloody hell, what the hell is wrong with people?

pjc50•10h ago
Culture war.
kyleee•9h ago
People have been this way since the dawn of time...
DataDaemon•15h ago
Too late, let's wait for another 120M from EU.
usrnm•15h ago
I understand that it's a very controversial take, but I don't really understand what's so terrible about computer-generated images depicting something like this. I mean, it's clearly wrong when it concerns actual children, but this is just pixels on the screen and bytes on disks. No living creature was actually hurt when producing these images and cannot be hurt by them. We as a society are totally ok with images depicting various horrible and outrageous stuff, why is this example suddenly such a big issue?

Edit: I'm not talking about deepfakes of real people

DANmode•14h ago
1) Gateway-drug theory.

2) Inability to differentiate between real and fake - less theoretical.

usrnm•14h ago
That's the same argument used for banning video games. By killing monsters on the screen you somehow become more violent in real life. Which is complete bullshit.
admash•14h ago
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not.

Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.

scotty79•13h ago
Availability of pornography correlates with people having less sex not more on average.
mrbombastic•11h ago
Sure but there have also been numerous studies that it does affect sexual behavior and expectations for those that do have sex
scotty79•11h ago
Monkey see. Monkey do. Nothing surprising here. Once people decide to do something, they model their actions on what they've seen. Even for such innate and strong desires. So a completely hands-off approach, leaving it to market forces, might not be the best course of action. But banning doesn't seem like a silver bullet either.
scns•14h ago
Those monsters are virtual ie not real. The people harmed by this are real breathing human beings.
usrnm•14h ago
I stated it already several times in this thread: my question is not about deepfakes, I can clearly see how those can be harmful. I'm talking about pure computer-generated content, not based on existing people
pjc50•13h ago
> Inability to differentiate between real and fake - less theoretical.

This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

palmotea•5h ago
>> Inability to differentiate between real and fake - less theoretical.

> This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

I don't think he's talking about "media ecosystems" but rather enforcement. If fake CSAM proliferates (that can be confused for the real thing), it would create problems for investigating real cases. Investigative resources would be wasted investigating fake CSAM, and people may not report real CSAM because they falsely assume it's fake.

So it probably makes sense to make fake CSAM illegal, not because of the direct harm it does, but for the same reasons it's illegal to lie to a police officer or make a false crime report.

gbil•14h ago
Is this some kind of a joke? People, children, are getting bullied every day by fake nude images of them, some even driven to suicide, and you are asking what is "so terrible" ???

EDIT: op seems to have clarified in another reply that they are talking about fully computer-generated images, but this is out of context; the outcry here is that Grok generated fake images of actual people to start with!

Handy-Man•14h ago
> No living creature was actually hurt when producing these images and cannot be hurt by them

Huh? People have been editing images of real women to depict them in bikinis (and that's the least offensive).

usrnm•14h ago
I'm not talking about deepfakes of real people, I'm talking about computer-generated CSAM. Should've made myself clearer
Hamuko•14h ago
This is sorta the wrong context to have this debate, since in Grok's case, most of them are edits of real pictures of real people. It's not just Grok generating some lolicon out of thin air.
defrost•14h ago
Is that trained on real CSAM just as computer generated art is trained on real images?
raincole•14h ago
The argument is that virtual porn normalizes actual porn and virtual abuse normalizes actual abuse. You know, like how the Bible normalized burning people alive.
7bit•14h ago
I agree, let's finally ban the fucking Bible.
cosmicgadget•4h ago
The kama sutra?
Hamuko•14h ago
I would've never cast my first stone had it not been for the Bible.
wrecked_em•14h ago
Back that up with verse and context, please.
willmarch•10h ago
There actually are explicit cases: Leviticus 20:14 and Leviticus 21:9 prescribe burning as a punishment in certain scenarios (ancient Israelite legal code).

Leviticus 20:14 (KJV)

“And if a man take a wife and her mother, it is wickedness: they shall be burnt with fire, both he and they; that there be no wickedness among you.”

Leviticus 21:9 (KJV)

“And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”

admash•14h ago
Because it creates an appetite for that type of content which is expected to grow to include real images with real harm.
nicbou•14h ago
I agree, but then again didn’t we have the same debate about violent video games? I don’t know why I am okay with simulated violence but repulsed by simulated sexuality.
admash•14h ago
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not. Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.
Jigsy•5h ago
> whilst the desire to commit violence is not.

Considering how many people want to murder people (or justify murdering people) simply for being attracted to children, are you categorically sure about that?

Jigsy•5h ago
People say the same thing about anime artwork and manga, which is an excuse I don't buy into.

The only reason I personally can't condone photorealistic AI images is because they're indistinguishable from photographs.

And in this case, it's required the exploitation of another human being (their photograph, or a photograph of them) in order to undress/manipulate the image.

oktoberpaard•14h ago
I’d say it’s very easy to hurt someone with pixels on the screen by spreading these generated images of actual people online.
nicbou•14h ago
A woman posts on twitter. In the replies, people ask Grok to remove her clothes. Deepfakes are proliferating, sometimes out of personal interest, sometimes as a form of bullying. Fake images can and do hurt real people.

There is also a greater debate about giving people with harmful deviances a “victimless” outlet versus normalising their deviance. Sure there are no children harmed in making the images, but what if they generate images of child actors, or normalise conversation about child sexual abuse?

scotty79•14h ago
That is super interesting culturally. Once video hosting became feasible to offer for free, people came up with the idea that posting their recordings online is a good idea. Which is fine because to the casual observer their face is as anonymous as their online handle. You can extract value from millions of your viewers with them knowing about you only as much as you've told and shown them.

Publishing is just the first part. The other part is reactions. Most platforms let you disable them so your viewers don't see the disgusting things people say about you right there along your content. Yet many people who publish themselves decide to leave them on. Because the vile comments actually help them exploit their viewers. There's a value to being told to kill yourself in a comment on your post.

If the capability to let users generate fake porn in the comments were left to creators, many of them would leave that option on. For the same reason they leave the comments on. Ben Shapiro could benefit a lot if some terrible person commented on his video with a deepfake of him sucking someone off. Both because of the outrage and because his viewer base is more homophobic, and homophobia correlates with homosexual arousal.

pjc50•13h ago
> There's a value to being told to kill yourself in a comment in your post.

At this stage it might just be easier to seek out Satan directly, you'll probably get a better deal for your soul.

scotty79•11h ago
Welcome to media economy of 2026 where a prolific UK white supremacist is actually an Indian living in Dubai.

Have an interesting stay.

_petronius•14h ago
Not so much controversial, as evidence that you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

People are being hurt by this, because "just pixels on a screen and bytes on a disk" can constitute harm due to the social function that information serves.

It's like calling hurling insults at someone as "just words" because no physical violence has occurred yet. The words themselves absolutely can be harm because of the effect they have, and also create an environment that leads to further, physical violence. Anyone who has experienced even mild bullying can attest to that.

Furthermore, women and girls are often subject to online harassment and humiliation. This is of course part of that -- we aren't talking about fictional images here, we are talking about photos of real people, many of whom are children, being manipulated to shame, humiliate, and sexually harass them, overwhelmingly targeting women and girls.

Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

drcongo•14h ago
The fact that this has downvotes speaks volumes about HN users.
nomel•13h ago
Or, it's that

> we aren't talking about fictional images here, we are talking about photos of real people, many who are children,

is not compatible with what GP actually said

> it's clearly wrong when it concerns actual children

> No living creature was actually hurt when producing these images and cannot be hurt by them.

making these overly dramatic character attacks seem mostly silly

> you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

> Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

And everyone clapped.

drcongo•8h ago
> is not compatible with what GP actually said

GP edited their post to add that after everyone pointed out that that's what the entire thread is actually about and GP realised how disgusting they looked. Keep up.

nomel•27m ago
> GP edited their post to add that

This is false. They only added the "edit: " text, not anything I quoted. I know because I quoted the same in a now-deleted reply before the "edit: " text was added.

scotty79•11h ago
Who knew users of HACKER news wouldn't be in favor of suppressing technology and the exchange of information, whatever it might be, for mainstream morality reasons.
exodust•11h ago
It's just manipulated photos. No need to panic.

Everyone knows photos can be easily faked. The alarmist response serves no purpose. AI itself can be tasked to make and publish fake photos. Will you point pitchforks at the generators of the generators of the generators?

Fake content has a momentary fizz followed by a sharp drop-off and demotion to yesterday's sloppy meme. Fading to nothing more than a cartoon you don't like. Let's not, I hope, go after "cartoons" or their publishers.

w4rh4wk5•14h ago
I think this got blown up right now due to how accessible it was.

However, I think it's clear that Pandora's box is now wide open and that you cannot close it. Sure, you can turn off that Grok integration, but the AI image generation capabilities are now widely available for basically anyone to use.

I wonder whether it'd be better to just "accept and live with it". I agree that this can cause a lot of harm, but I don't see a way where this can be outlawed and prosecuted in such a way that there's a net benefit for society. In the EU many have been battling proposals like Chat Control for the past decades not because they want to protect sex offenders, but because backdooring society's privacy on a grand scale is likely far more detrimental than the impact of sex offenders. (And here we aren't even talking about "real" CSAM content.)

pjc50•13h ago
> I wonder whether it'd be better to just "accept and live with it".

I don't think a world where every female public figure gets nonconsensual porn of themselves shared _publicly_ is better.

Private is a separate matter, but only if it stays truly private.

soraminazuki•8h ago
> I wonder whether it'd be better to just "accept and live with it".

Big tech's approach of move fast, break things, and gain a sh** load of money and influence has cost the world so much over the past two decades. So much so that the post-WWII rules-based international order is under threat. We're on the verge of sliding back towards a world where might makes right and the powerful get to kill, beat, steal, and sexually abuse whoever they want whenever they want. Worse, with the help of technology, they get to entertain the masses by turning those horrific acts into social media content.

It's largely due to the acts of big tech that we got into this mess. But instead of learning from this biggest mistake of our generation and taking proactive steps to prevent further harm, you propose that we all suck it up and accept whatever our tech billionaire overlords want to further inflict on this world? WTAF.

> proposals like Chat Control

Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

w4rh4wk5•5h ago
First, I am not proposing anything.

I feel like we are already past the point where influential people have to play by the same rules as everyone else. I dislike this as much as 99% of the population, but I don't realistically see a remedy given how our governments (EU) are operating.

> Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

I am not comparing these two things; I mentioned Chat Control because tackling CSAM is one of its main selling points. These forging tools are out in the open, and banning access to and use of them is practically impossible. You could force platforms into setting up filters for public content, but this won't stop the nudes from being shared privately and likely still being accessible on the web _somewhere_. Just look at the bs on tiktok that's been publicly accessible and growing for years now...

IMHO public content filtering won't help much and the following steps will likely involve tapping into people's private content and messages. And this is where I draw the line.

igleria•14h ago
You don't understand what is dangerous about being able to trivially generate believable images of events that actually did not happen?
kyleee•7h ago
The cat is out of the bag
array_key_first•3h ago
Yeah, that doesn't mean you need to give the cat a car bomb and license to kill. Of course X has responsibility here, people deserve to go to jail for this.
duxup•11h ago
I think it becomes an issue of what content and engagement you’re selling when most of the responses to a pretty girl are “grok post a pic of her with her clothes off” type posts.

The level of discourse on twitter is pretty terrible already but at that point what’s the platform even selling… and who is the platform for?

Just from a product standpoint you got problems.

episteme•10h ago
I think I hold a similar view to you and have the same question so maybe what I’ve been thinking about might be useful to you. Everyone is upset about CSAM but when you talk about it, it’s only about deepfakes.

I don’t think we can avoid a world where people can generate CSAM easily, so we have to separate the discussion between being able to do that privately and grok being able to do it.

It makes sense to me that we don’t want widely used websites to contain images of CSAM that you can’t easily avoid, it’s simply repulsive to almost everyone and that’s almost certainly a human instinct, I don’t think it needs to be much more complicated than that.

In terms of generating CSAM privately or even sharing it with other people, I think this is a much more interesting discussion. I think at this point it is an open question on whether it is harmful. Could this replace the abuse that is happening to create some of the real content? Does the escalation argument hold water - will people be more likely to sexually assault children due to access to this material? I don’t think we know enough about pedophilia to answer these questions but given that I don’t think there is a way to stop generating this content in 2026 we really need to answer these questions before we decide to simply incarcerate everyone doing it.

danso•10h ago
You condition your take with “I’m not talking about deepfakes of real people”, but why do you think it is that Grok offers users the ability to generate pornographic imagery of entirely fake digital people — even tailoring them to be as visually flawless as one could desire — and yet so many users end up using it to generate deepfake porn of real people?

So with that in mind, why should we assume that the unrestricted proliferation of fake child porn would not produce significant harm downstream to society? Is the assumption that unlimited fake child porn would satiate the kind of people who now seek out real child porn?

ndsipa_pomu•10h ago
> No living creature was actually hurt when producing these images

I would strongly dispute that as the amount of energy required by and pollution produced by AI systems does cause a lot of harm to living creatures and environments.

Heapifying•9h ago
I do wonder why Grok has this capability in the first place.

Is it because it was pre-trained with real images? (which would be highly illegal and immoral, but I wonder if Twitter has a data-curation team somewhere)

Maybe some kind of distillation technique such as "generate normal porn -> decrease body size -> generate childish face and replace the original image's face with it"? Which would prove there's an intent to explicitly allow generation of this kind

Is it an emergent behaviour?

Why are there no better safeguards?

jiggawatts•3h ago
I’ve heard anecdotes that AI image generators become better at illustrating clothed people if they’re also trained on nudes.

A good analogy is that human artists also often train by painting or drawing a nude model.

muwtyhg•7h ago
> Edit: I'm not talking about deepfakes of real people

Then you are not talking about the article, so what was the point of your comment?

drcongo•14h ago
Willing to bet he got threats from Apple and Google (well, Apple at least) that the CSAM app formerly known as Twitter would be removed from the App Store.
pjc50•13h ago
Everyone else just gets deleted instantly with nowhere to call. Twitter has long had favourable treatment despite the "adult content" rules of the app stores.
duxup•11h ago
All the big companies give each other so much extra room to operate.

Facebook’s practices would have gotten any other dev banned from all stores long ago.

Meanwhile any other devs are under a different microscope / standard.

rchaud•6h ago
The walled garden never claimed to offer equal treatment under its laws.
neko_ranger•9h ago
>Twitter has long had favourable treatment despite the "adult content" rules of the app stores.

Reddit as well

ChoGGi•10h ago
Oh okay, so only a few pedophiles will have access to Elon Musk's pedophile picture generator?

"Random Braveheart quote"

Unless you use the grok app...

"Random Matrix quote"

I'll take those downvotes and see myself out.

fortranfiend•7h ago
Hmm it just let me put Keir Starmer in a bikini.
dragonwriter•7h ago
More accurate: “After free demo proves demand (in the worst possible way), Grok makes image generation and editing a paid-only feature”.

Erdős problem #728 was solved more or less autonomously by AI

https://mathstodon.xyz/@tao/115855840223258103
152•cod1r•2h ago•94 comments

JavaScript Demos in 140 Characters

https://beta.dwitter.net
161•themanmaran•5h ago•35 comments

RTX 5090 and Raspberry Pi: Can It Game?

https://scottjg.com/posts/2026-01-08-crappy-computer-showdown/
117•scottjg•5h ago•59 comments

Caltrain shows why every region should be moving toward regional rail

https://www.hsrail.org/blog/caltrain-shows-why-every-region-should-be-moving-toward-regional-rail/
11•gok•20m ago•3 comments

Flock Hardcoded the Password for America's Surveillance Infrastructure 53 Times

https://nexanet.ai/blog/53-times-flocksafety-hardcoded-the-password-for-americas-surveillance-inf...
149•fuck_flock•7h ago•52 comments

Scientists discover oldest poison, on 60k-year-old arrows

https://www.nytimes.com/2026/01/07/science/poison-arrows-south-africa.html
84•noleary•1d ago•24 comments

How will the miracle happen today?

https://kk.org/thetechnium/how-will-the-miracle-happen-today/
321•zdw•5d ago•178 comments

The (likely?) cheapest home-made Michelson interferometer

https://guille.site/posts/3d-printed-michelson/
76•LolWolf•5d ago•36 comments

QtNat – Open you port with Qt UPnP

http://renaudguezennec.eu/index.php/2026/01/09/qtnat-open-you-port-with-qt/
37•jandeboevrie•4h ago•22 comments

How Markdown took over the world

https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/
115•zdw•6h ago•73 comments

Show HN: EuConform – Offline-first EU AI Act compliance tool (open source)

https://github.com/Hiepler/EuConform
56•hiepler•5h ago•33 comments

Show HN: Scroll Wikipedia like TikTok

https://quack.sdan.io
126•sdan•6h ago•31 comments

Ragdoll Mayhem Maker – a physics-based level editor for my indie game

https://ragdollmayhemmaker.com/
14•anefiox•2d ago•5 comments

Show HN: I made a memory game to teach you to play piano by ear

https://lend-me-your-ears.specr.net
388•vunderba•7h ago•137 comments

See it with your lying ears

https://lcamtuf.substack.com/p/see-it-with-your-lying-ears
7•fratellobigio•22m ago•0 comments

Turn a single image into a navigable 3D Gaussian Splat with depth

https://lab.revelium.studio/ml-sharp
50•ytpete•6h ago•34 comments

Washington National Opera Is Leaving the Kennedy Center

https://www.nytimes.com/2026/01/09/arts/music/washington-national-opera-kennedy-center.html
24•mikhael•49m ago•3 comments

Show HN: Rocket Launch and Orbit Simulator

https://www.donutthejedi.com/
79•donutthejedi•5h ago•26 comments

Replit (YC W18) Is Hiring

https://jobs.ashbyhq.com/replit
1•amasad•6h ago

Amiga Pointer Archive

https://heckmeck.de/pointers/
36•erickhill•9h ago•15 comments

Design duality and the expression problem (2018)

https://www.tedinski.com/2018/02/27/the-expression-problem.html
4•NeutralForest•6d ago•0 comments

Kagi releases alpha version of Orion for Linux

https://help.kagi.com/orion/misc/linux-status.html
328•HelloUsername•11h ago•233 comments

The Vietnam government has banned rooted phones from using any banking app

https://xdaforums.com/t/discussion-the-root-and-mod-hiding-fingerprint-spoofing-keybox-stealing-c...
403•Magnusmaster•7h ago•495 comments

Show HN: Similarity = cosine(your_GitHub_stars, Karpathy) Client-side

https://puzer.github.io/github_recommender/
113•puzer•3d ago•32 comments

Show HN: I built a tool to create AI agents that live in iMessage

https://tryflux.ai/
46•danielsdk•5d ago•23 comments

Show HN: Various shape regularization algorithms

https://github.com/nickponline/shreg
42•nickponline•22h ago•3 comments

Deno has made its PyPI distribution official

https://github.com/denoland/deno/issues/31254
14•zahlman•3h ago•4 comments

TextMaze

https://robobunny.com/projects/textmaze/html/?page=0
8•kqr•6d ago•1 comment

Cloudflare CEO on the Italy fines

https://twitter.com/eastdakota/status/2009654937303896492
384•sidcool•7h ago•559 comments

Exercise can be nearly as effective as therapy for depression

https://www.sciencedaily.com/releases/2026/01/260107225516.htm
276•mustaphah•6h ago•214 comments