Why is this relevant?
These LLMs were trained on the internet's data, which is full of a lot of vile shit. I can't imagine these models having a measurable sense of decency as a result, not without much tweaking from the big model makers.
It's no wonder people become affected by this technology. It's trained on humanity's unfiltered thoughts. So dark that light can't escape.
We are in for a ride.
Unfiltered data might exist in the petabytes of stuff they train on, but "goodthink" datasets are heavily prioritized.