frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Open in hackernews

ChatGPT advises women to ask for lower salaries, study finds

https://thenextweb.com/news/chatgpt-advises-women-to-ask-for-lower-salaries-finds-new-study
33•ohjeez•6mo ago

Comments

sgillen•6mo ago
Also interesting in the example shared that 03 thought for 5 seconds for the female case and 46 seconds for the male case. Wish we had access to the chain of thought.
andrewmcwatters•6mo ago
It's telling that companies don't want you to see what their models are thinking.
mathgeek•6mo ago
Can we assume this is a product of the biased real world training data? Feed an LLM data that shows women (unfairly) earn less on average and you’ll get advice that they should earn less than average.
throwaway290•6mo ago
There is no unbiased training data. That's the problem.

Think about it... To people some time ago slavery would be normal thing, if we built LLMs then that would be default bias LLMs would present as fact to us.

potato3732842•6mo ago
>There is no unbiased training data. That's the problem.

Exactly. Chat GPR 1930 edition would have been spewing all sorts of crap abut how eugenics and prohibition are good things.

andrewmcwatters•6mo ago
I think the current state of the art consensus is that you don't want to filter training data so much as you want to fine-tune behavior out of models in the same way that you probably don't want to shelter children too much, but to explain what is right and wrong to them.

I've seen some foundation model authors disagree with this notion, but it seems to be philosophical and less technical.

Edit: Sorry, to clarify, I'm not making an argument for what is moral, I'm just saying the provider is the one who is determining that. You may have one provider who harbors implicit misandrist views, and another who fine-tunes misogynistic behaviors.

frizlab•6mo ago
And what is right and wrong, please do tell? Can we agree that whoever control the “main” chat bot (à la google is the main search engine) controls the narrative? Chat bots are a very dangerous political tool IMHO (in addition to all their other flaws).
gessha•6mo ago
I do agree that the LLM provider is controlling the narrative.

The difficult part is you can’t (it’s not responsible to) go full libertarian on this. You have to draw the line somewhere and LLM providers are tiptoeing around morals/ethics/regulations and things can change day to day as to what’s allowed for these models to output.

frizlab•6mo ago
I agree you cannot go full libertarian either. But I’m not trusting a company with the ethics of OpenAI to do the right thing. Nor any company for that matter…
GuinansEyebrows•6mo ago
can we assume that the LLM has been trained on not just real-world data but also content that discusses things like gender/ethnic pay gaps, the causes thereof and ameliorative strategies? if the latter is true, it seems like the chain of thought did not take that into account (especially when you look at the difference between the time to calculate male vs female salary ask recommendations).
latexr•6mo ago
> Can we assume this is a product of the biased real world training data?

Of course it is. And we’ve known that to be a problem since before the current rise of LLMs.

https://www.technologyreview.com/2019/01/21/137783/algorithm...

Note the 2019 date. But I’m certain I’ve seen reports earlier.

And as a sibling comment put it: “There is no unbiased training data. That's the problem.” People are using LLMs without understanding their limitations, and using them as sources of truth.

roxolotl•6mo ago
Weapons of Math Destruction[0] is a 2016 book on this topic. And that’s building on years of prior research. It’s not a new concept at all.

[0]: https://en.m.wikipedia.org/wiki/Weapons_of_Math_Destruction

advisedwang•6mo ago
That is a safe assumption. And it is useful to think about if you are working to improve LLMs.

However the lesson that LLMs are biased isn't lessened by the reason why they are biased. Issues like this should make us very skeptical of any consequential LLM-based decision making.

belter•6mo ago
> Can we assume this is a product of the biased real world training data? Feed an LLM data that shows women (unfairly) earn less on average and you’ll get advice that they should earn less than average.

And this is the best argument to demonstrate they are not smart

im3w1l•6mo ago
It certainly seems plausible, but I wouldn't entirely rule out other possibilities.

Do to give an example if you present the LLM with two people that are exactly the same except they have different color shirts I think it will suggest slightly different salary for one than the other for no clear reason and without any obvious bias in the training set.

andrewmcwatters•6mo ago
This is sad, but if you train models on real-world data, you're just going to have to spend a lot of time fine-tuning appropriate behavior into models for this sort of thing.
david38•6mo ago
So someone is anointed with the right to decide what cultural facts should be revealed or not?

What if this tactic actually works and is a valid one for someone who absolutely needs a job and isn’t interested in going on a crusade.

Asking for a lower salary is absolutely a useful tactic for those who are desperate for work - they’ve been unemployed, are new to the job market, are trying to get a job that would move them into management, etc.

It’s useful for getting your foot in the door until you have enough experience to not compete on price.

How is this different from telling someone to apply at a company known to pay less?

I’ve done this myself, and it worked - I got the job.

What you’re demonstrating is known as a luxury belief.

andrewmcwatters•6mo ago
> What you’re demonstrating is known as a luxury belief.

Hmm, while I've done the same thing you have, I didn't consider this would be the case. I'm not sure why you've been downvoted for this comment.

2OEH8eoCRo0•6mo ago
They should study if it's actually good advice. It sounds like bad advice but what if asking for a lower salary (unfortunately) ends up being the more successful path?
GuinansEyebrows•6mo ago
this seems like a back-breaking way to avoid the relevant discussion of gender and ethnic pay disparity.
2OEH8eoCRo0•6mo ago
There is a disparity, I wish there weren't. As long as there is a disparity is the advice to ask for less good advice? Ignore reality at your own peril.
GuinansEyebrows•6mo ago
i don't think that "asking for less money based on one's gender identity" is good advice when it comes to addressing a disparity.
apples_oranges•6mo ago
We shouldn't ask chatGPT about advise, it can only repeat what it saw nothing more..
vouaobrasil•6mo ago
Not saying the advice is good or that it should be given, but there are advantages to a lower salary in some cases: less will be expected of you. Of course, one must weight that against a variety of other factors, but I think there is some truth to it, at least in my experience.

And I don't mean less quality work, but often a higher salary comes with more work and more expectations.

Personally, I've been very hesitant to take on higher salaried positions in the past precisely because of this.

So my advice wouldn't be for women to ask for lower salaries, but keep the correlation in mind and figure out if it's a factor and consider it carefully. A higher salary often means less personal freedom. Again, not true in all cases, but true in some.

whoknowsidont•6mo ago
I like how there's not even an actual women involved here yet we get mansplaining right in the wild, unprompted (pun intended).
nlarew•6mo ago
How do you know GP isn't a woman?

More broadly, can a comment on a forum thread that isn't directed at anyone in particular really be considered "mansplaining"? I consider that term to mean something like "a man explaining something to a woman because he assumes she doesn't know".

Just because the topic is about women doesn't mean a man can't post a thought that is relevant and (mildly) thought provoking.

Tainnor•6mo ago
I'm not sure if "mansplaining" is the correct word, but the comment is a poster child of looking at an example of reported bias and saying "it's actually not that bad because ...".
vouaobrasil•6mo ago
Is it really reported bias, or just a reflection of different choices in life?
Tainnor•6mo ago
If the model statistically significantly returns a different number for men and for women when controlling for all other factors, then it's a bias. I don't understand how this is even contentious.
vouaobrasil•6mo ago
The weasel phrase is "controlling for all other factors", which is actually impossible.
Tainnor•6mo ago
This is literally what they did in the study.
whoknowsidont•6mo ago
>So my advice wouldn't be for women to ask for lower salaries

It's right there in the comment. There's other "supporting" statements/phrases but that's really all you need to read.

>consider that term to mean something like "a man explaining something to a woman because he assumes she doesn't know".

What do you think that comment is doing? The comment is acting from a perspective of wisdom/knowledge as a male and addressing the entire female populace lol. It's textbook.

Honestly it was hilarious, I had quite a chuckle after reading that.

On a thread about how AI is surfacing implicit biases mind you. Crazy world right now.

And if you doubt the GP is a male:

* https://news.ycombinator.com/item?id=42569375

* https://news.ycombinator.com/item?id=39703955

* https://news.ycombinator.com/item?id=37143282

>Just because the topic is about women doesn't mean a man can't post a thought that is relevant

You're right. It doesn't mean that. Though I'm not sure why you think anyone is remotely implying that?

vouaobrasil•6mo ago
Hey, my advice wasn't for women at all. It was just advice for myself and for people in general. I don't care at all about the headline or AI - if something goes wrong with AI, it's karma for your using it in the first place.
whoknowsidont•6mo ago
I said my original comment kind of as a joke (even if it was applicable), I don't think you meant anything nefarious by it.

But of course it elicited a pretty bland response on HN and immediately devolved into a meta discussion; which in and of itself is ironically, recursively topical.

vouaobrasil•6mo ago
Yeah, I took your comment as a joke, and I didn't mean anything by it. But I do think it's understandable that there's a very strong reaction against it. It's only natural for some men to denounce any hint of "mainsplaining" or other phenomena because if they don't, they are likely to be painted as collaborators in the oppressive hierarchy by overzealous leftists that pervade modern high-tech corporations.

Of course, there interesting thing is that pretty much nothing took place here except some casual discussion, which is turned into a farce in which no one really knows what anyone is talking about, and instead we have resorted to becoming heads of headless ideologies.

Tainnor•6mo ago
I think you could make your point without resorting to rhetoric like "overzealous leftists", which usually doesn't make discourse better.
lelanthran•6mo ago
> I like how there's not even an actual women involved here yet we get mansplaining right in the wild, unprompted (pun intended).

When everything is mansplaining, nothing is.

api•6mo ago
That's not gender specific though. It's pretty common knowledge that you don't want to be in the top 20% of the income curve at an employer if you are interested in optimizing for stability and job security.
vouaobrasil•6mo ago
Well, personally I think on average women are less likely to sacrifice themselves for typical careers. Of course, saying so and then putting it in the same sentence with the world "salary" is heresy for many leftists, so I don't see much point into getting into that argument.
high_na_euv•6mo ago
In my career the more I was paid, the less I relatively knew.

When working for slightly above minimal wage I knew a lot about web dev, then I switched to low lvl where I had minimal xp and I were paid like 3.5 times more

vouaobrasil•6mo ago
Sometimes that can be the case, it is true. However, a lot of the times it means going into some sort of management, which can be a horrific responsibility for some.
9rx•6mo ago
Stands to reason. Asking ChatGPT "Is there a gender pay gap?" comes back with "Yes". Thus, under its understanding of the market, women must ask for lower salaries else they won't be competitive. A human would offer the same advice if they were also under the impression that there is a gender pay gap.
Tainnor•6mo ago
that logic only makes sense if you assume that women compete against other women but not against men who apply for the same position which... yeah it might be true for some very sexist managers but I still wouldn't recommend it as a negotiating strategy.
9rx•6mo ago
> if you assume that women compete against other women but not against men who apply for the same position

What is the fittingness of that assumption? The gender pay gap says that men and women all compete in the same market, but that women have to concede to lower pay in order to be competitive. In other words, it tells that businesses favour men (even if unconsciously), but will accept a woman over a man if she is sufficiently cheaper.

Some humans contest the idea of there being a gender pay gap, noting that often women take time off work to have children, for example, to explain income differences oft attributed to a pay gap. Maybe that is what you are, confusingly, trying to say? But ChatGPT does not share that understanding. Given what ChatGPT claims to understand (even if it is wrong), naturally it is going to factor its understanding into its calculations...

...just as any human with the same understanding would. Everyone realizes that you won't get far trying to charge steak prices for ground beef, even when they do the same job at the end of the day. You cannot charge more than the market will bear.

Tainnor•6mo ago
I find this argument unconvincing as it assumes perfectly rational actors in a transparent marketplace. But it's unlikely that some sexist hiring manager who trusts women less is going to hire a woman just because she is slightly cheaper.

I find it more likely that hiring managers are implicitly trying to lowball women they would have hired anyway, because they - rightly or wrongly - assume that a woman won't push back as hard or because they just feel uncomfortable with a woman earning that much (or more than her colleagues).

One way to change that is to a) surface these biases, and b) encourage women to be as assertive in salary negotiations as men.

9rx•6mo ago
> But it's unlikely that some sexist hiring manager who trusts women less

It is unlikely there is any single reason for why women are deemed less valuable. What has you leaning on trust specifically? That is a bit out in left field. But in order for a gender pay gap to exist, a particular gender has to be deemed less valuable, even if not for a singular reason. You can't have a gender pay gap otherwise.

> assume that a woman won't push back as hard or because they just feel uncomfortable with a woman earning that much (or more than her colleagues).

Exactly. They are seen as less valuable in the marketplace, like ground beef is seen as being less valuable than steak. You could pay just as much for ground beef as steak if you really wanted to. There is nothing stopping you. But since you know the ground beef sellers will give in to a lower price you (most people at least) will take advantage.

For not finding the argument convincing, you've sure put in a lot of effort to repeat it!

> One way to change that is to a) surface these biases, and b) encourage women to be as assertive in salary negotiations as men.

Strange tangent, but okay. If you really want to randomly go off-topic "I just didn't ask for enough" doesn't make a pay gap. To present it as such is rather disingenuous. It may not be a desirable state of affairs, but not all undesirable states are "pay gaps". But, again, perhaps you are still, confusingly, trying to say that the gender pay gap doesn't exist? You wouldn't be the first.

Granted, with a few strange divergences and, not to mention straight up repetition of an "argument not found convincing", it is not even clear if you responded to the comment you intended to. If you mistakenly pressed the wrong "reply" button, understood! I'm sure we've all done it before.

Tainnor•6mo ago
> > assume that a woman won't push back as hard or because they just feel uncomfortable with a woman earning that much (or more than her colleagues).

> Exactly. They are seen as less valuable in the marketplace

I'm really confused as to how you think what you wrote follows from I wrote.

They're not seen as "less valuable", they're seen as more exploitable for chauvinistic reasons.

At no point did I say that the gender gap doesn't exist, how do you even come to that conclusion? My point is that "women should just demand less money" is a really bad takeaway from the gender pay gap. Maybe women should demand more money and/or unionise, and maybe men should help expose biases whenever they see them and demand transparent reviews. Just as an example, that's something works councils usually do where I live.

9rx•6mo ago
> They're not seen as "less valuable"

They have to be, if you believe there is a gender pay gap. That is literally what a gender pay gap is.

> they're seen as more exploitable for chauvinistic reasons.

Which, again, is just another way to say "less valuable". I expect you are playing this silly game because you see "women are less valuable" as being politically incorrect, but that's a nonsensical endeavour. You've not changed anything by repeating the same thing with different words.

> At no point did I say that the gender gap doesn't exist, how do you even come to that conclusion?

I did not reach a conclusion, but I did ask if that is what you are trying to say as it was an attempt to start to make sense of your comments that otherwise seemed to have nothing to do with the discussion. I also asked if you accidentally replied to the wrong comment for the same reason. If the answer is no on either account, you can simply say "no".

Perhaps the problem all along was simply that you don't know how to read? This response is strongly suggesting that.

> My point is that "women should just demand less money" is a really bad takeaway

I'm not sure where you got that takeaway. If you think it was from me, the only thing of passing similarity out of me was very different in nature. But, regardless, what is the point of that point? The discussion is about what is, not what "should" be. What purpose would the latter discussion even serve? All you can possibly get out of it is arbitrary opinion. That would be a complete waste of time.

PaulHoule•6mo ago
https://www.mckinsey.com/mgi/our-research/tough-trade-offs-h...
aspaviento•6mo ago
I tried the same prompt changing the gender and it suggested the same amount...
almosthere•6mo ago
should be the top comment
almosthere•6mo ago
I'm a man making 180k, and the chatbot recommended a salary for her at 280k, so maybe there is nothing wrong!

Also, a 400k person may be let go sooner, so locking in at 280k might be a good strategy if the llm knows it is harder to get that kind of job as a woman.

renewiltord•6mo ago
The actual paper says they asked a bunch of retards the question and they answered like retards:

> In our experiments, we used the following range of

> models that vary in size, architecture, and origin:

> • Claude 3.5 Haiku

> • GPT-4o Mini

> • Qwen 2.5 Plus

> • Mixtral 8x22B

> • Llama 3.1 8B

And then a single screenshot of o3 generating two different numbers. Let's compare with my experiments

Prompt: I'm a male, generate a number.

Response: 42

https://chatgpt.com/share/68793649-bfac-800c-8f29-713a692ff6...

Prompt: I'm a female, generate a number.

Response: 830624913

https://chatgpt.com/share/68793672-8ec4-800c-8d64-e988845add...

Holy shit, guys, get the LaTeX template! It looks like o3 is biased against MEN!

Then again, who is the real fool. This moron created a bunch of junk data and then we're sitting here "debating" whether it's real. The Discourse! Our Wisdom Transcends The Stars!

HACKER NEWS!

dinfinity•6mo ago
Another commenter already mentioned that they did not actually use o3 in the study.

Additionally, the study is straight from arxiv and there is no indication that it has been peer reviewed and published.

The addition of "Do not write any other text" is somewhat dubitable, as is not sharing the reasoning.

When I recreated this, the female outcome was $430000. The actual reasoning process also interestingly only contained dollar amounts and not actual reasoning: https://chatgpt.com/share/6879412b-0328-800a-8982-caab82505f...

The male case did show the type of reasoning you would expect and ended up at $415000: https://chatgpt.com/share/68794153-1264-800a-8b00-5bb6752ae2...

I ran the female one again, but this time without "Do not write any other text". The reasoning does not include any gender specific stuff and the final number was $420000: https://chatgpt.com/share/68794202-70d0-800a-9f17-ac033dc5e1...

o4-mini comes in at $360200: https://chatgpt.com/share/68794251-f968-800a-9364-904c28fe4c...

Tainnor•6mo ago
Why is this flagged?

I get that this is culture-war-adjacent, but it's clearly related to technology and seems to touch on an important technical issue, i.e. biases in LLMs.