> The Court has provisionally certified an ADEA collective, which includes: “All individuals aged 40 and over who, from September 24, 2020, through the present, applied for job opportunities using Workday, Inc.’s job application platform and were denied employment recommendations.” In this context, being “denied” an “employment recommendation” means that (i) the individual’s application was scored, sorted, ranked, or screened by Workday’s AI; (ii) the result of the AI scoring, sorting, ranking, or screening was not a recommendation to hire; and (iii) that result was communicated to the prospective employer, or the result was an automatic rejection by Workday.
This is the best light you can shine on the discrimination. Most often it really is managers taking their "seniority" literally. As in, they don't want to risk that their reports are smarter, more experienced, or capable of replacing them, so they discriminate on the basis of age. It's counterintuitive, but this feels truest from my historical observation.
They said ethics demand that any AI that is going to pass judgment on humans must be able to explain its reasoning. An if-then rule that says "you were rejected because X", or even a disclosed statistical correlation between A and B, would be fine. Fundamental fairness requires that if an automated system denies you a loan, a house, or a job, it be able to explain something you can challenge, fix, or at least understand.
LLMs may be able to provide that, but it would have to be carefully built into the system.
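To make the distinction above concrete, here is a minimal sketch of what "built into the system" could look like: a rule-based screener that attaches a human-readable reason to every failed check, so a rejection is something the applicant can challenge or fix. All rule names, fields, and thresholds here are hypothetical illustrations, not anything Workday actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    recommended: bool
    reasons: list = field(default_factory=list)

# Hypothetical screening rules: (name, check, reason given on failure).
# Fields and thresholds are illustrative only.
RULES = [
    ("minimum experience",
     lambda a: a["years_experience"] >= 3,
     "fewer than 3 years of relevant experience"),
    ("required certification",
     lambda a: a["has_certification"],
     "missing the required certification"),
]

def screen(applicant: dict) -> Decision:
    """Apply each rule; collect a reason for every rule that fails.

    Every rejection carries the list of rules that failed, giving the
    applicant something concrete to contest -- unlike an opaque model
    score. Note that age never appears as an input.
    """
    reasons = [reason for _, check, reason in RULES if not check(applicant)]
    return Decision(recommended=not reasons, reasons=reasons)

# Example: an applicant who fails only the experience rule.
d = screen({"years_experience": 2, "has_certification": True})
print(d.recommended, d.reasons)
```

An LLM-based screener has no such built-in audit trail; any explanation it emits after the fact has to be checked against what the model actually did.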
That could get interesting, as most companies will not provide feedback if you are denied employment.
parliament32•1h ago
I'm interested to see Workday's defense in this case. Will it be "we can't be held liable for our AI", and will that work against a law as "strong" as the ADEA?