frontpage.

Global AI computing capacity is doubling every 7 months

https://epoch.ai/data-insights/ai-chip-production
1•delichon•39s ago•0 comments

Universal AI Agent Subscription

https://twitter.com/firmwareai/status/2009735769867571459
1•cgilly2fast•1m ago•0 comments

Bulk rename files by pasting Excel data (client-side)

1•Salmannaseem•1m ago•0 comments

Ask HN: Who's running local AI workstations in 2026?

1•Blue_Cosma•2m ago•0 comments

Lemon Slice nabs $10.5M from YC and Matrix to build out its digital avatar tech

https://techcrunch.com/2025/12/23/lemon-slice-nabs-10-5m-from-yc-and-matrix-to-build-out-its-digi...
1•PaulHoule•2m ago•0 comments

Betterment Hacked

https://twitter.com/usdshitcoin/status/2009761135457599766
1•chardigio•3m ago•0 comments

Betterment Users Receive Suspicious Crypto Investment Push Notification

https://www.reddit.com/r/betterment/s/yfueybW3ZZ
1•rickcarlino•4m ago•0 comments

Vendor Locked CPUs, Restricting and Securing Hardware

https://cloudninjas.com/blogs/news/vendor-locked-cpus-restricting-and-securing-hardware
1•tanelpoder•5m ago•0 comments

Jujutsu v0.37.0 Released

https://github.com/jj-vcs/jj/releases/tag/v0.37.0
1•birdculture•5m ago•0 comments

Box64 vs. FEX Emulation Performance on ARM Cortex-A53

https://printserver.ink/blog/box64-vs-fex/
1•ValdikSS•5m ago•0 comments

SpeedyEDA – One-Line Data Exploration for Developers and Data Scientists

https://pypi.org/project/speedyeda/
1•dawitworku•6m ago•1 comments

Betterment Hacked by Crypto Scam

https://bsky.app/profile/ericrie.se/post/3mbzlov44sc23
3•EricRiese•10m ago•1 comments

Every Developer Abandoned This Product in 3 Minutes

https://tessakriesel.com/every-developer-abandoned-this-product-in-3-minutes/
2•mooreds•11m ago•0 comments

Apartments to be built above a Costco

https://www.entrepreneur.com/business-news/hundreds-of-apartments-are-being-built-on-top-of-a-cos...
2•nateb2022•11m ago•0 comments

Collection and Use of Biometrics by U.S. Citizenship and Immigration Services

https://www.federalregister.gov/documents/2025/11/03/2025-19747/collection-and-use-of-biometrics-...
1•hentrep•16m ago•0 comments

Tell HN: Increased Number of Incidents on GitHub Between Nov 2025 and Jan 2026

https://www.githubstatus.com/history
1•stefankuehnel•16m ago•0 comments

PoC Wayland compositor rendering graphics into terminal

https://github.com/pshirshov/brain-damage
2•pshirshov•18m ago•1 comments

AI solves Erdős problem #728 (Terence Tao mathstodon post)

https://mathstodon.xyz/@tao/115855840223258103
2•cod1r•27m ago•0 comments

U.S. Hiring Turned Sluggish over First Year of Trump's Second Term

https://www.nytimes.com/2026/01/09/us/politics/us-hiring-economy-trump-second-term.html
6•duxup•27m ago•1 comments

Building a Raytracer from Scratch in Go

https://github.com/ikarishinji9/riot
1•ikarishinji9•28m ago•0 comments

Stored Procedures Considered Harmful

https://pouyamiri.com/blog/stored-procedures-considered-harmful
1•p0u4a•28m ago•0 comments

First 12 Minutes of MTV [video]

https://www.youtube.com/watch?v=oVrEzH9gkZk
1•MilnerRoute•30m ago•0 comments

Tim Cook and Sundar Pichai are cowards

https://www.theverge.com/policy/859902/apple-google-run-by-cowards
14•mdhb•36m ago•2 comments

Pre-Commit Lint Checks: Vibe Coding's Kryptonite

https://www.getseer.dev/blogs/pre-commit-linting-vibe-coding
1•akshay326•37m ago•1 comments

Turso: The Next Evolution of SQLite

https://github.com/tursodatabase/turso
1•nateb2022•37m ago•0 comments

The Future of Stack Overflow

https://waspdev.com/articles/2026-01-09/the-future-of-stack-overflow
1•senfiaj•44m ago•0 comments

America is falling out of love with pizza

https://www.msn.com/en-us/money/companies/america-is-falling-out-of-love-with-pizza/ar-AA1Tziyh
1•jnord•45m ago•1 comments

The Abstraction Trap: Why Layers Are Lobotomizing Your Model

2•blas0•46m ago•1 comments

Tell HN: X changed its Iran flag emoji

3•michaeltimo•46m ago•1 comments

Italy Fines Cloudflare for Refusing to Filter Pirate Sites on Public 1.1.1.1 DNS

https://torrentfreak.com/italy-fines-cloudflare-e14-million-for-refusing-to-filter-pirate-sites-o...
4•jnord•46m ago•0 comments

Grok turns off image generator for most after outcry over sexualised AI imagery

https://www.theguardian.com/technology/2026/jan/09/grok-image-generator-outcry-sexualised-ai-imagery
72•beardyw•14h ago

Comments

westpfelia•14h ago
Now only paying Grok subscribers can make CSAM. Super cool.
nickmyersdt•13h ago
Therefore we know that a proportion of paying Grok subscribers will cause harm to real victims. This isn't an abstract debate about free expression.

Non-consensual intimate imagery harms real people.

CSAM normalizes and facilitates abuse of real children.

Grok, and everyone involved in it or similar endeavours, facilitate abuse.

literalAardvark•13h ago
Paying subscribers are trivial to track down and convict if they're making CSAM.

In a way, leaving it open as a honeypot is the best action.

janice1999•12h ago
Doubtful. The first thing Musk did was fire the safety team at Twitter.
Hamuko•12h ago
Safety people are also quitting Xitter themselves.

https://bsky.app/profile/caseynewton.bsky.social/post/3mbwqh...

Hamuko•12h ago
Makes perfect business sense. Where else would these users go for their CSAM-generation needs? They have no other option but to pay!
Urahandystar•13h ago
Took them long enough. This was predictable and dangerous. It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble, even if the execution is haphazard and horrendous. The combination of X's userbase and that technology made this almost inevitable.
nickmyersdt•13h ago
The goal itself is flawed, not just the execution.

If you build a system explicitly designed to have no content boundaries, and it produces CSAM, that's not a failure of execution - that's the system working as designed. You don't get credit for noble intentions when the outcome was entirely foreseeable.

Deciding to place no limits on what an AI will generate is itself a value judgment. It's choosing to enable every possible use, including the worst ones. That's not principled neutrality; it's moral abdication dressed up as libertarianism.

maplethorpe•13h ago
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

When I was young it was considered improper to scrape too much data from the web. We would even set delays between requests, so as not to be rude internet citizens.

Now, it's considered noble to scrape all the world's data as fast as possible, without permission, and without any thought as to the legality of the material, and then feed that data into your machine (without which the machine could not function) and use it to enrich yourself, while removing our ability to trust that an image was created by a human in some way (an ability that we have all possessed for hundreds of thousands of years -- from cave painting to creative coding -- and which has now been permanently and irrevocably destroyed).

bakies•8h ago
Just like his guise of "Platform of Free Speech" this is an intentional marketing tool and not at all his nobility.
ben_w•7h ago
> It's a real shame because Elon's goals of allowing an unrestricted AI are somewhat noble

Are those goals noble? This is the same guy who also said "with AI we are summoning the demon" and whose self-justification for getting a trillion dollar Tesla bonus deal involved the phrase "robot army"?

Havoc•13h ago
Probably one of the most weak-ass responses to a crisis ever. How was this not done within hours? Or, if they can't manage that, at least within hours of it hitting mainstream news?
pjc50•13h ago
Crisis? It was an intentional product launch. They assumed they'd be able to "get away with it" and that media outrage would not translate into effective legal action.
Havoc•13h ago
That does seem plausible given how blatant it was

CaaS

soco•12h ago
Move fast and break things? Or, innovation at all costs? Or, business value here and now? Or... (add more marketing buzzwords)
praptak•13h ago
They first tried to manage it by putting the blame 100% on their pedophile users and obviously absolving themselves of any responsibility (cue tired analogies with knife makers not responsible for stabbings).

Fortunately this narrative did not gain traction.

close04•13h ago
> cue tired analogies with knife makers not responsible for stabbings

The knife maker will be in hot water if you ask them for a knife and you're very specific about how you'll break the law with it, and they just give it to you and do nothing else about it (equivalent to the prompt telling the LLM exactly the illegal thing you want).

Even more so if the knife they made is illegal, like an automatic knife or a dagger (equivalent to the model containing the necessary information to create CSAM).

rsynnott•7h ago
It's hard to believe that they didn't know that they had this problem before launching; given the volume of material, it's not like it can be difficult to drag out of the offending magic robot.

I'd assume they were just blindsided by the response; they're likely in real danger of getting either DNS-delisted or outright banned in several jurisdictions.

pjc50•13h ago
Presumably in response to https://www.telegraph.co.uk/business/2026/01/08/musks-x-coul... and others. I've seen a claim that Spain is referring X for prosecution over this as well.

It's just been restricted to paying customers, and that decision could be driven as much by cost as by outrage.

Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

richsouth•12h ago
So only PAYING customers can make CSAM and distribute it openly. Nice one.
rsynnott•11h ago
The dreaded bluetick becomes a shade ickier.
xiphias2•7h ago
No one can, but it's much easier to verify / prosecute people using credit cards (especially as credit card companies take it very seriously as well)
rsynnott•11h ago
> Edit: may also be linked to people making deepfakes of Renee Good, the woman murdered by US authorities in Minneapolis.

Bloody hell, what the hell is wrong with people?

pjc50•9h ago
Culture war.
kyleee•7h ago
People have been this way since the dawn of time...
DataDaemon•13h ago
Too late, let's wait for another 120M from EU.
usrnm•13h ago
I understand that it's a very controversial take, but I don't really understand what's so terrible about computer-generated images depicting something like this. I mean, it's clearly wrong when it concerns actual children, but this is just pixels on the screen and bytes on disks. No living creature was actually hurt when producing these images, and none can be hurt by them. We as a society are totally OK with images depicting various horrible and outrageous stuff; why is this example suddenly such a big issue?

Edit: I'm not talking about deepfakes of real people

DANmode•13h ago
1) Gateway-drug theory.

2) Inability to differentiate between real and fake - less theoretical.

usrnm•13h ago
That's the same argument used for banning video games. By killing monsters on the screen you somehow become more violent in real life. Which is complete bullshit.
admash•13h ago
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not.

Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.

scotty79•12h ago
Availability of pornography correlates with people having less sex, not more, on average.
mrbombastic•9h ago
Sure, but there have also been numerous studies showing that it does affect sexual behavior and expectations for those who do have sex.
scotty79•9h ago
Monkey see, monkey do. Nothing surprising here. Once people decide to do something, they model their actions on what they've seen, even for such innate and strong desires. So a completely hands-off approach, leaving it to market forces, might not be the best course of action. But banning doesn't seem like a silver bullet either.
scns•12h ago
Those monsters are virtual, i.e. not real. The people harmed by this are real, breathing human beings.
usrnm•12h ago
I've stated it several times already in this thread: my question is not about deepfakes, I can clearly see how those can be harmful. I'm talking about pure computer-generated content, not based on existing people.
pjc50•11h ago
> Inability to differentiate between real and fake - less theoretical.

This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

palmotea•3h ago
>> Inability to differentiate between real and fake - less theoretical.

> This feels like a deeper, much more important topic that I'm at risk of writing thousands of words on. It feels like the distinction used to be .. more clearly labelled? And now nobody cares, and entire media ecosystems live on selling things that aren't really true as "news", with real and occasionally deadly consequences.

I don't think he's talking about "media ecosystems" but rather enforcement. If fake CSAM proliferates (that can be confused for the real thing), it would create problems for investigating real cases. Investigative resources would be wasted investigating fake CSAM, and people may not report real CSAM because they falsely assume it's fake.

So it probably makes sense to make fake CSAM illegal, not because of the direct harm it does, but for the same reasons it's illegal to lie to a police officer or make a false crime report.

gbil•13h ago
Is this some kind of a joke? People, children, are getting bullied every day by fake nude images of them, some even driven to suicide, and you are asking what is "so terrible"???

EDIT: OP seems to have clarified in another reply that they are talking about fully computer-generated images, but that is beside the point; the outcry here is that Grok generated fake images of actual people to start with!

Handy-Man•13h ago
> No living creature was actually hurt when producing these images, and none can be hurt by them

Huh? People have been editing images of real women to depict them in bikinis (and that's the least offensive).

usrnm•13h ago
I'm not talking about deepfakes of real people, I'm talking about computer-generated CSAM. Should've made myself clearer
Hamuko•12h ago
This is sorta the wrong context to have this debate, since in Grok's case, most of them are edits of real pictures of real people. It's not just Grok generating some lolicon out of thin air.
defrost•12h ago
Is that trained on real CSAM just as computer generated art is trained on real images?
raincole•13h ago
The argument is that virtual porn normalizes actual porn and virtual abuse normalizes actual abuse. You know, like how the Bible normalized burning people alive.
7bit•13h ago
I agree, let's finally ban the fucking Bible.
cosmicgadget•2h ago
The kama sutra?
Hamuko•12h ago
I would've never cast my first stone had it not been for the Bible.
wrecked_em•12h ago
Back that up with verse and context, please.
willmarch•9h ago
There actually are explicit cases: Leviticus 20:14 and Leviticus 21:9 prescribe burning as a punishment in certain scenarios (ancient Israelite legal code).

Leviticus 20:14 (KJV)

“And if a man take a wife and her mother, it is wickedness: they shall be burnt with fire, both he and they; that there be no wickedness among you.”

Leviticus 21:9 (KJV)

“And the daughter of any priest, if she profane herself by playing the whore, she profaneth her father: she shall be burnt with fire.”

admash•13h ago
Because it creates an appetite for that type of content, which is expected to grow to include real images with real harm.
nicbou•13h ago
I agree, but then again didn’t we have the same debate about violent video games? I don’t know why I am okay with simulated violence but repulsed by simulated sexuality.
admash•13h ago
The difference being that sexuality is typically considered an innate desire, whilst the desire to commit violence is not. Plus, we as a species have a biological imperative to protect our offspring, but apparently an immense capacity to ignore violence committed against others.
Jigsy•3h ago
> whilst the desire to commit violence is not.

Considering how many people want to murder people (or justify murdering people) simply for being attracted to children, are you categorically sure about that?

Jigsy•3h ago
People say the same thing about anime artwork and manga, which is an excuse I don't buy into.

The only reason I personally can't condone photorealistic AI images is that they're indistinguishable from photographs.

And in this case, it's required the exploitation of another human being (a photograph of them) in order to undress/manipulate the image.

oktoberpaard•13h ago
I’d say it’s very easy to hurt someone with pixels on the screen by spreading these generated images of actual people online.
nicbou•13h ago
A woman posts on twitter. In the replies, people ask Grok to remove her clothes. Deepfakes are proliferating, sometimes out of personal interest, sometimes as a form of bullying. Fake images can and do hurt real people.

There is also a greater debate about giving people with harmful deviances a “victimless” outlet versus normalising their deviance. Sure there are no children harmed in making the images, but what if they generate images of child actors, or normalise conversation about child sexual abuse?

scotty79•12h ago
That is super interesting culturally. Once video hosting became feasible to offer for free, people came up with the idea that posting their recordings online is a good idea. Which is fine, because to the casual observer their face is as anonymous as their online handle. You can extract value from millions of your viewers with them knowing about you only as much as you've told and shown them.

Publishing is just the first part. The other part is reactions. Most platforms let you disable them so your viewers don't see the disgusting things people say about you right there alongside your content. Yet many people who publish themselves decide to leave them on, because the vile comments actually help them exploit their viewers. There's a value to being told to kill yourself in a comment on your post.

If the capability to let users generate fake porn in the comments were left to creators, many of them would leave that option on, for the same reason they leave the comments on. Ben Shapiro could benefit a lot if some terrible person commented on his video with a deepfake of him sucking someone off: both because of the outrage and because his viewer base is more homophobic, and homophobia correlates with homosexual arousal.

pjc50•12h ago
> There's a value to being told to kill yourself in a comment on your post.

At this stage it might just be easier to seek out Satan directly, you'll probably get a better deal for your soul.

scotty79•9h ago
Welcome to media economy of 2026 where a prolific UK white supremacist is actually an Indian living in Dubai.

Have an interesting stay.

_petronius•13h ago
Not so much controversial as evidence that you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

People are being hurt by this, because "just pixels on a screen and bytes on a disk" can constitute harm due to the social function that information serves.

It's like dismissing hurled insults as "just words" because no physical violence has occurred yet. The words themselves absolutely can be harm because of the effect they have, and they also create an environment that leads to further, physical violence. Anyone who has experienced even mild bullying can attest to that.

Furthermore, women and girls are often subject to online harassment and humiliation. This is of course part of that -- we aren't talking about fictional images here, we are talking about photos of real people, many who are children, being manipulated to shame, humiliate, and harass them sexually, targeted at women and girls overwhelmingly.

Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

drcongo•12h ago
The fact that this has downvotes speaks volumes about HN users.
nomel•11h ago
Or, it's that

> we aren't talking about fictional images here, we are talking about photos of real people, many who are children,

is not compatible with what GP actually said

> it's clearly wrong when it concerns actual children

> No living creature was actually hurt when producing these images, and none can be hurt by them.

making these overly dramatic character attacks seem mostly silly

> you completely lack capacity for empathy and you should do some serious self-reflection. This is just a really vile and amoral view to hold.

> Advocating for the freedom to commit that kind of harm against other people is gross, and you should reconsider your views and how much care you have for other people.

And everyone clapped.

drcongo•6h ago
> is not compatible with what GP actually said

GP edited their post to add that after everyone pointed out that that's what the entire thread is actually about and GP realised how disgusting they looked. Keep up.

scotty79•9h ago
Who knew users of HACKER news wouldn't be in favor of suppressing technology and the exchange of information, whatever it might be, for mainstream morality reasons.
exodust•10h ago
It's just manipulated photos. No need to panic.

Everyone knows photos can be easily faked. The alarmist response serves no purpose. AI itself can be tasked to make and publish fake photos. Will you point pitchforks at the generators of the generators of the generators?

Fake content has a momentary fizz followed by a sharp drop-off and demotion to yesterday's sloppy meme. Fading to nothing more than a cartoon you don't like. Let's not, I hope, go after "cartoons" or their publishers.

w4rh4wk5•13h ago
I think this blew up right now because of how accessible it was.

However, I think it's clear that Pandora's box is now wide open and that you cannot close it. Sure, you can turn off that Grok integration, but the AI image generation capabilities are now widely available for basically anyone to use.

I wonder whether it'd be better to just "accept and live with it". I agree that this can cause a lot of harm, but I don't see a way where this can be outlawed and prosecuted in such a way that there's a net benefit for society. In the EU many have been battling proposals like Chat Control for years, not because they want to protect sex offenders, but because backdooring society's privacy on a grand scale is likely far more detrimental than the impact of sex offenders. (And here we aren't even talking about "real" CSAM content.)

pjc50•12h ago
> I wonder whether it'd be better to just "accept and live with it".

I don't think a world where every female public figure gets nonconsensual porn of themselves shared _publicly_ is better.

Private is a separate matter, but only if it stays truly private.

soraminazuki•7h ago
> I wonder whether it'd be better to just "accept and live with it".

Big tech's approach of move fast, break things, and gain a sh** load of money and influence has cost the world so much over the past two decades. So much so that the post-WWII rules-based international order is under threat. We're on the verge of sliding back towards a world where might makes right and the powerful get to kill, beat, steal, and sexually abuse whoever they want whenever they want. Worse, with the help of technology, they get to entertain the masses by turning those horrific acts into social media content.

It's largely due to the acts of big tech that we got into this mess. But instead of learning from this biggest mistake of our generation and taking proactive steps to prevent further harm, you propose that we all suck it up and accept whatever our tech billionaire overlords want to further inflict on this world? WTAF.

> proposals like Chat Control

Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

w4rh4wk5•3h ago
First, I am not proposing anything.

I feel like we are already past the point where influential people have to play by the same rules as everyone else. I dislike this as much as 99% of the population, but I don't realistically see a remedy given how our governments (EU) are operating.

> Are you seriously comparing banning tools for openly forging nudes and sex pics of people to backdooring people's private communications?

I am not comparing these two things; I mentioned Chat Control because tackling CSAM is one of its main selling points. These forging tools are out in the open, and banning access to and use of them is practically impossible. You could force platforms into setting up filters for public content, but this won't stop the nudes from being shared privately and likely still being accessible on the web _somewhere_. Just look at the bs on tiktok that's been publicly accessible and growing for years now...

IMHO public content filtering won't help much and the following steps will likely involve tapping into people's private content and messages. And this is where I draw the line.

igleria•12h ago
You don't understand what is dangerous about being able to trivially generate believable images of events that actually did not happen?
kyleee•5h ago
The cat is out of the bag
array_key_first•1h ago
Yeah, that doesn't mean you need to give the cat a car bomb and a license to kill. Of course X has responsibility here; people deserve to go to jail for this.
duxup•9h ago
I think it becomes an issue of what content and engagement you're selling when most of the replies to a pretty girl are "grok post a pic of her clothes off" type posts.

The level of discourse on twitter is pretty terrible already but at that point what’s the platform even selling… and who is the platform for?

Just from a product standpoint, you've got problems.

episteme•9h ago
I think I hold a similar view to you and have the same question, so maybe what I've been thinking about might be useful to you. Everyone is upset about CSAM, but when you talk about it, it's only about deepfakes.

I don't think we can avoid a world where people can generate CSAM easily, so we have to separate the discussion of being able to do that privately from Grok being able to do it.

It makes sense to me that we don't want widely used websites to contain images of CSAM that you can't easily avoid; it's simply repulsive to almost everyone, and that's almost certainly a human instinct. I don't think it needs to be much more complicated than that.

In terms of generating CSAM privately, or even sharing it with other people, I think this is a much more interesting discussion. I think at this point it is an open question whether it is harmful. Could this replace the abuse that is happening to create some of the real content? Does the escalation argument hold water - will people be more likely to sexually assault children due to access to this material? I don't think we know enough about pedophilia to answer these questions, but given that I don't think there is a way to stop this content from being generated in 2026, we really need to answer them before we decide to simply incarcerate everyone doing it.

danso•8h ago
You condition your take with “I’m not talking about deepfakes of real people”, but why do you think it is that Grok offers users the ability to generate pornographic imagery of entirely fake digital people — even tailoring them to be as visually flawless as one could desire — and yet so many users end up using it to generate deepfake porn of real people?

So with that in mind, why should we assume that the unrestricted proliferation of fake child porn would not produce significant harm downstream to society? Is the assumption that unlimited fake child porn would satiate the kind of people who now seek out real child porn?

ndsipa_pomu•8h ago
> No living creature was actually hurt when producing these images

I would strongly dispute that as the amount of energy required by and pollution produced by AI systems does cause a lot of harm to living creatures and environments.

Heapifying•7h ago
I do wonder why Grok has this capability in the first place.

Is it because it was pre-trained with real images? (which would be highly illegal and immoral, but I wonder if Twitter has a data-curation team somewhere)

Maybe some kind of distillation technique such as "generate normal porn -> decrease body size -> generate childish face and replace the original image's face with it"? Which would prove there's an intent to explicitly allow this kind of generation

Is it an emergent behaviour?

Why are there no better safeguards?

jiggawatts•2h ago
I’ve heard anecdotes that AI image generators become better at illustrating clothed people if they’re also trained on nudes.

A good analogy is that human artists also often train by painting or drawing a nude model.

muwtyhg•5h ago
> Edit: I'm not talking about deepfakes of real people

Then you are not talking about the article, so what was the point of your comment?

drcongo•12h ago
Willing to bet he got threats from Apple and Google (well, Apple at least) that the CSAM app formerly known as Twitter would be removed from the App Store.
pjc50•12h ago
Everyone else just gets deleted instantly with no one to call. Twitter has long had favourable treatment despite the "adult content" rules of the app stores.
duxup•9h ago
All the big companies give each other so much extra room to operate.

Facebook’s practices would have gotten any other dev banned from all stores long ago.

Meanwhile any other devs are under a different microscope / standard.

rchaud•4h ago
The walled garden never claimed to offer equal treatment under its laws.
neko_ranger•7h ago
>Twitter has long had favourable treatment despite the "adult content" rules of the app stores.

Reddit as well

ChoGGi•9h ago
Oh okay, so only a few pedophiles will have access to Elon Musk's pedophile picture generator?

"Random Braveheart quote"

Unless you use the grok app...

"Random Matrix quote"

I'll take those downvotes and see myself out.

fortranfiend•5h ago
Hmm it just let me put Keir Starmer in a bikini.
dragonwriter•5h ago
More accurate: “After free demo proves demand (in the worst possible way), Grok makes image generation and editing a paid-only feature”.