Right before the new year, the AI training community absorbed one of its biggest shockwaves: 50k contributors woke up to sudden removal and a one-line "quality requirements changed" message, with no real path to appeal or recover. For many, it meant lost time, momentum, and income.
This isn't a post against AI training; it's a defense of experts' contributions. RLHF and data annotation help make models reliable, effective, and safe in the real world, and scaling that work will demand deep expertise across industries, languages, and edge cases.
If we’re serious about scaling it, we need to start elevating the expert workforce that shapes AI across domains. We can’t treat them as disposable or erase them overnight.