Don’t get me wrong, I do this work, and Workday’s statement of “we don’t use protected classes” rather than “we test our models to prove they are unbiased when given recognizable indicators of protected classes” is pretty telling. Because this is hard, and if you had solved it you’d be proud to say so. If you don’t control for it, it WILL discriminate. See Amazon’s experiment a decade ago.
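(For context, the kind of test that statement dodges isn’t exotic. Here’s a minimal Python sketch of the simplest such check, the EEOC’s four-fifths rule for adverse impact; the group labels and numbers are entirely made up for illustration.)

    # Four-fifths rule check: compare each group's selection rate to the
    # best-off group's rate. Ratios under 0.8 are conventionally treated
    # as evidence of adverse impact. All figures below are hypothetical.

    outcomes = {  # group -> (passed_screen, applied)
        "under_40": (300, 1000),
        "over_40":  (180, 1000),
    }

    rates = {g: passed / applied for g, (passed, applied) in outcomes.items()}
    reference = max(rates.values())  # most-favored group's selection rate

    for group, rate in rates.items():
        ratio = rate / reference
        flag = "ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} [{flag}]")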
I’m just really curious how all this plays out in front of a judge.
The market is rough. Everyone I know who has been looking has had the same experience - hundreds of applications, immediate rejections, etc. And most aren't black applicants over 40 with anxiety.
Nonetheless it'll be fun to see what discovery finds, if it ever gets that far. But I have a feeling they'll treat it as a nuisance suit and just pay a few bucks to make it go away.
Terr_•8h ago
Hmm, perhaps, but I think we should be clear on the distinctions between:
1. "We didn't try to cause X."
2. "There is no X happening."
3. "We don't look to see if X happens."
4. "If X happens we don't try to stop it."
As someone involved in HR-tech-stuff, my default stance towards complex "AI" systems is that they all harbor biases, and the main difference is which ones have been discovered so far.
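(To illustrate how those biases actually get discovered: the classic probe is to score the same resume under different demographic signals and look for a gap. A minimal Python sketch; score_resume here is a hypothetical stand-in for whatever opaque model is under audit, and the names are illustrative.)

    from typing import Callable

    def name_swap_gap(score_resume: Callable[[str], float],
                      resume_template: str,
                      name_a: str, name_b: str) -> float:
        """Score an identical resume under two names; a nonzero gap
        on otherwise identical text is a red flag for encoded bias."""
        return (score_resume(resume_template.format(name=name_a))
                - score_resume(resume_template.format(name=name_b)))

    # Usage sketch against a hypothetical model object:
    # gap = name_swap_gap(model.score, "{name}\n10 years of Java...", "Emily", "Lakisha")
    # assert abs(gap) < 0.01, f"identical resumes scored differently: {gap:+.3f}"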
mkeedlinger•7h ago
I’m sure there are exceptions, but one could assume that opaque systems get used as tools to encode biases that are advantageous but indefensible.
These biases could just as easily have been written in explicit code, but opaque agents give much better plausible deniability.
(Caveat: in many cases one can just as well assume a lack of malice.)