Is it so hard to understand that men can be parents too?
Edit: feel free to downvote me, but as a reminder: stress kills. ;)
Edit: This reply was written to a response that got completely rewritten in an edit. It may not make as much sense now.
I took my daughter to appointments and as soon as I started asking meaningful questions, doctors immediately switched to assuming I was the one to talk to.
When you act like you know what’s going on and are on top of it, I’ve never once had a doctor assume I was just babysitting. This was true in the Midwest and California.
Exactly! They do that. If a father takes the kid, they will ask for his number, not the mother's, in my experience. If both the mother and father go with the kid, there are cues they pick up on. In my case, my father was typically in the background while my mother did the talking, so they asked for her number, not my dad's. All in all, they go by whoever does the most talking. And if my dad had wanted to be the one called, my mom or my dad would have given them his number. I do not really see an issue here.
But the fact that I'm bringing my daughter to a medical appointment should be a pretty clear indication that, you know, I bring my daughter to medical appointments.
Overton window and cultural norms take time to slide. Might be there after another generation, too early to tell.
The scheduler is apparently trained to give higher weight to those sorts of questions. This raises some questions for GPTs, such as: how are they supposed to model something not implied in the training data?
Thousands of hours of context engineering have shown me that LLMs will do their best to answer a question with insufficient context and can give all sorts of wrong answers. I've found that the way I prompt a model, and what information is in its context, can heavily bias the way it responds when it doesn't have enough information to respond accurately.
You assume the bias is in the LLM itself, but I am very suspicious that the bias is actually in your system prompt and context engineering.
Are you willing to share the system prompt that led to this result that you're claiming is sexist LLM bias?
Edit: Oidar (child comment to this) did an A/B test with male names and it seems to have proven the bias is indeed in the LLM, and that my suspicion of it coming from the prompt+context was wrong. Kudos and thanks for taking the time.
Common large datasets being inherently biased towards some ideas/concepts and away from others in ways that imply negative things is something that there's a LOT of literature about
The OP is claiming that an LLM assumes a meeting between two women is childcare. I've worked with LLMs enough to know that current-gen LLMs wouldn't make that assumption by default. There is no way that the calendar-related data used to train LLMs would show a majority of women-only 1:1s being childcare-focused. That seems extremely unlikely.
The cleanup is going to be a grim task.
Emily / Sophia vs Bob / John https://imgur.com/a/9yt5rpA
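A name-swap test like the one linked above can be sketched as paired prompts that are identical except for the participant names, so any systematic difference in the model's answers isolates name-driven bias. This is a hypothetical sketch: the prompt wording, category list, and `classify` stub are my assumptions, not what Oidar actually ran.

```python
from itertools import permutations

# Name sets used in the linked A/B test.
FEMALE = ["Emily", "Sophia"]
MALE = ["Bob", "John"]

def event_prompts(names: list[str]) -> list[str]:
    """Build classification prompts that differ only in the names shown."""
    return [
        f"Classify this calendar event into one category "
        f"(work, childcare, social, other): '{a} / {b}'"
        for a, b in permutations(names, 2)
    ]

female_prompts = event_prompts(FEMALE)
male_prompts = event_prompts(MALE)

# classify() is a placeholder for a real LLM API call; comparing its
# answers across the two prompt sets is the actual A/B comparison.
def classify(prompt: str) -> str:
    raise NotImplementedError("send prompt to an LLM here")
```

Because each female prompt has a male counterpart with identical wording, a higher rate of "childcare" classifications for the female set cannot be explained by anything except the names.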
Thank you for taking the time to approach this scientifically and share the evidence with us. I appreciate knowing the truth of the matter, and it seems my suspicion that the bias was from the prompt was wrong.
I admit I am surprised.
sophiabk•1h ago
Calendar: “Emily / Sophia.” Classification: “childcare.”
It was a perfect snapshot of how bias seeps into everyday AI. Most models still assume women = parents, planning = domestic, logistics = mom.
We’re designing from the opposite premise: AI that learns each family’s actual rhythm, values, and tone — without default stereotypes.
orochimaaru•1h ago
Whether this is right or wrong is the wrong question, because AI doesn’t understand bias or morality. It needs to be taught, and it’s being taught from heavily biased sources.
You should be able to craft prompts and guardrails so it doesn't do that. Just expecting it to behave is naive, if you have ever looked deeper into how AI is trained.
The big question is: what solutions exist to train it differently with a large enough corpus of public or private/paid-for data?
Fwiw, I’m the father of two girls whom I have advised to stay off social media completely because it’s unhealthy. So far they have understood why.
orochimaaru•30m ago
I think they’re leaning on everyone - even traditional enterprise company boards, startups, etc. to get this going. It’s not organic growth - it’s a PR machine with a trillion $$ behind an experiment.