As long as you don't need actual CSAM material in the training data and the generated images are different enough from a real person (both of which seem to be very possible technology-wise), that seems to be a good thing.
Or is there any indication that the availability of CSAM material actually increases the likelihood that people act on it later?
Given that, I don't see how you can allow AI-generated CSAM without effectively making "real" CSAM images unprosecutable.
upmind•1h ago