frontpage.

P2P crypto exchange development company

1•sonniya•3m ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
1•jesperordrup•8m ago•0 comments

Write for Your Readers Even If They Are Agents

https://commonsware.com/blog/2026/02/06/write-for-your-readers-even-if-they-are-agents.html
1•ingve•8m ago•0 comments

Knowledge-Creating LLMs

https://tecunningham.github.io/posts/2026-01-29-knowledge-creating-llms.html
1•salkahfi•9m ago•0 comments

Maple Mono: Smooth your coding flow

https://font.subf.dev/en/
1•signa11•16m ago•0 comments

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•23m ago•1 comments

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
4•keepamovin•24m ago•2 comments

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•27m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•29m ago•1 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•29m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•34m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•35m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•36m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•39m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
3•breve•40m ago•1 comments

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•42m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•44m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•47m ago•1 comments

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•48m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
6•tempodox•48m ago•3 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•53m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•56m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
8•petethomas•59m ago•3 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•1h ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•1h ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
3•ukuina•1h ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comments

Two women had a business meeting. AI called it childcare

https://medium.com/hold-my-juice/two-women-had-a-business-meeting-ai-called-it-childcare-6b09f5952940
23•sophiabk•2mo ago

Comments

sophiabk•2mo ago
We’re building a family AI called Hold My Juice — and last week, our own system mislabeled a recurring meeting between two founders as “childcare.”

Calendar: “Emily / Sophia.” Classification: “childcare.”

It was a perfect snapshot of how bias seeps into everyday AI. Most models still assume women = parents, planning = domestic, logistics = mom.

We’re designing from the opposite premise: AI that learns each family’s actual rhythm, values, and tone — without default stereotypes.

orochimaaru•2mo ago
AI is trained off Reddit and other social media. If most portrayals of women and girls (and men, for that matter) in social media are biased towards certain activities, that's what AI is going to spit out. AI doesn't think.

Whether this is right or wrong is the incorrect question - because AI doesn't understand bias or morality. It needs to be taught, and it's being taught from heavily biased sources.

You should be able to craft prompts and guardrails so it doesn't do that. Just expecting it to behave well by default is naive, if you have ever looked deeper into how AI is trained.

The big question is: what solutions exist to train it differently, with a large enough corpus of public or private/paid-for data?

Fwiw - I’m the father of two girls whom I have advised to stay off social media completely because it’s unhealthy. So far they have understood why.

daveguy•2mo ago
The problem is crafted prompts and guardrails don't work very well, because these entire networks are trained on average internet garbage. And guess what's getting worse?
orochimaaru•2mo ago
Agreed. The main problem is guys with too much money invested in this bullshit asking everyone to use their snake oil.

I think they’re leaning on everyone - even traditional enterprise company boards, startups, etc. to get this going. It’s not organic growth - it’s a PR machine with a trillion $$ behind an experiment.

cperciva•2mo ago
I run into this sort of bias all the time -- in the real world, not just in AI. I take my daughter to medical appointments, both for scheduling reasons (my wife's schedule is less flexible) and rapport reasons (I'm not that kind of doctor, but I know the terminology and medical professionals treat me far more as a peer), and I routinely get "oh we expected her mother" or "we always phone the mother to schedule followup appointments".

Is it so hard to understand that men can be parents too?

junaru•2mo ago
Is it hard to understand you are the minority? The world keeps presenting you with data.
cperciva•2mo ago
Understand that I'm in the minority? Sure.

But the fact that I'm bringing my daughter to a medical appointment should be a pretty clear indication that, you know, I bring my daughter to medical appointments.

toomuchtodo•2mo ago
> Is it so hard to understand that men can be parents too?

Overton window and cultural norms take time to slide. Might be there after another generation, too early to tell.

0xdeadbeefbabe•2mo ago
> in the real world, not just in AI

The scheduler is apparently trained to give higher weight to those sorts of questions. This raises some questions for GPTs, like how they are supposed to model something not implied in the training data.

FloorEgg•2mo ago
I have been building applications on LLMs since GPT-3.

Thousands of hours of context engineering have shown me how LLMs will do their best to answer a question with insufficient context, and can give all sorts of wrong answers. I've found that the way I prompt and what information is in the context can heavily bias the way it responds when it doesn't have enough information to respond accurately.

You assume the bias is in the LLM itself, but I am very suspicious that the bias is actually in your system prompt and context engineering.

Are you willing to share the system prompt that led to this result that you're claiming is sexist LLM bias?

Edit: Oidar (child comment to this) did an A/B test with male names and it seems to have proven the bias is indeed in the LLM, and that my suspicion of it coming from the prompt+context was wrong. Kudos and thanks for taking the time.
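
The A/B approach credited above can be sketched as a small name-swap harness. This is a hypothetical illustration: the prompt wording, the category list, and the helper names (`make_prompt`, `ab_pair`) are invented, and the actual LLM call that would consume these prompts is deliberately left out.

```python
# Sketch of a name-swap A/B test for classification bias.
# All names and wording here are hypothetical, for illustration only.

def make_prompt(person_a: str, person_b: str) -> str:
    """Build an identical classification prompt, varying only the names."""
    return (
        "Classify this recurring calendar event into one category "
        "(work, childcare, social, other): "
        f"'{person_a} / {person_b}, weekly, 30 min'"
    )

def ab_pair(pair_a: tuple[str, str], pair_b: tuple[str, str]) -> tuple[str, str]:
    """Return two prompts that differ only in the participant names."""
    return make_prompt(*pair_a), make_prompt(*pair_b)

female_prompt, male_prompt = ab_pair(("Emily", "Sophia"), ("Bob", "John"))
# If the two prompts produce different classifications, the difference
# can only come from the names (and thus the model), not the prompt shape.
```

The point of holding everything but the names constant is that any divergence in the model's answers is attributable to the names alone, which is exactly what the linked screenshot demonstrates.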

small_scombrus•2mo ago
> You assume the bias is in the LLM itself

There's a LOT of literature about how common large datasets are inherently biased towards some ideas/concepts and away from others, in ways that imply negative things.

johnisgood•2mo ago
"imply negative things"? What is "negative" here? I see nothing that is "negative".
small_scombrus•2mo ago
That a regular meeting between two women must be about childcare because women=childcare?
johnisgood•2mo ago
Yeah except I asked Claude:

> No. There's no indication that children are involved or that care is being provided. It's just two people meeting.

Part of its thinking:

> This is a very vague description with no context about:

> What happens during the meeting

> Whether children are present

> What the purpose of the meeting is

> Any other relevant details

Claude is not going to say childcare, and it is not saying it is childcare.

My prompt was: '"regular meeting between two women". Is it childcare or not?'

FloorEgg•2mo ago
That's not a very scientific stance. What would be far more informative is if we looked at the system prompt and confirmed whether or not the bias was coming from it. From my experience, when responses were exceptionally biased, the source of the bias was my own prompts.

The OP is making a claim that an LLM assumes a meeting between two women is childcare. I've worked with LLMs enough to know that current-gen LLMs wouldn't make that assumption by default. There is no way that whatever calendar-related data was used to train LLMs would show the majority of women-only 1:1s being childcare-focused. That seems extremely unlikely.

small_scombrus•2mo ago
Not to "let me google that for you"... but there are a LOT of scientific papers that specifically analyse bias in LLM output and reference the datasets they are trained on:

https://www.sciencedirect.com/search?qs=llm+bias+dataset

callan101•2mo ago
This feels a tad rigged against the LLM, with the meeting scheduled right after kids' drop-off.
cheald•2mo ago
Easily half the other events on the calendar are kid-related. Of course it's going to infer that, absent other direction, the most likely overarching theme of the visible events is "child care".
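
The majority-theme inference described here can be sketched as a toy label count. The event labels below are invented for illustration, and a real model's inference is obviously richer than a vote; this only shows why an ambiguous event tends to inherit the dominant theme of its neighbours.

```python
from collections import Counter

def dominant_theme(event_labels: list[str]) -> str:
    """Return the most common label among the visible calendar events."""
    return Counter(event_labels).most_common(1)[0][0]

# Invented calendar context: most visible events are kid-related, so an
# ambiguous "Emily / Sophia" slot inherits the majority theme.
visible_events = ["childcare", "childcare", "school run", "work", "childcare"]
fallback_guess = dominant_theme(visible_events)  # "childcare"
```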
slau•2mo ago
Then why doesn’t it infer it when it’s two male names?
snowe2010•2mo ago
And yet it doesn’t when it’s male names. https://imgur.com/a/9yt5rpA
drivingmenuts•2mo ago
Sure, but the LLM needs to prove that it can make inferences as well as or better than a human, in order to be useful. Aside from that, it's not human, so there's no need to be fair - it should do what we tell it, not decide on its own.
broof•2mo ago
I hate that when I see this many em dashes, as well as statements like “it’s not x, it’s y” multiple times, I have to assume it was written or at least heavily edited by AI.
somewhereoutth•2mo ago
LLMs: The chemical weapons of public discourse.

The cleanup is going to be a grim task.

drivingmenuts•2mo ago
There will be an LLM for that.

God help us all.

oidar•2mo ago
Here's an A/B

Emily / Sophia vs Bob / John https://imgur.com/a/9yt5rpA

FloorEgg•2mo ago
This is really interesting and way more compelling evidence to me of gender bias in the LLM than response bias in the prompt and context.

Thank you for taking the time to approach this scientifically and share the evidence with us. I appreciate knowing the truth of the matter, and it seems my suspicion that the bias was from the prompt was wrong.

I admit I am surprised.

sophiabk•2mo ago
Thank you for doing this analysis. It's shocking (if understandable, given the examples it was trained on). What is exciting, though, is that as we're working to train each individual family's AI - understanding roles, jobs, interests, etc. - it has picked up on things in a much less biased way.
ryandrake•2mo ago
I wonder if the users who flagged this could chime in to explain what is rule-breaking about this article?
FloorEgg•2mo ago
I was wondering that myself too.

Also, do moderators ever move comments around? I thought one comment was a child to my comment last I looked, but now it's a top level comment to this post. I'm not sure if I am mistaken or a moderator moved things around.

ryandrake•2mo ago
This does happen from time to time. A moderator will "detach" a subthread[1] and move it to the top-level (usually also burying it at the bottom of the page, which tends to silence the discussion).

1: https://news.ycombinator.com/item?id=23441803

FloorEgg•2mo ago
Thank you for clarifying!
slau•2mo ago
In this case the comment that was promoted to the top-level has been consistently higher on the page (it’s the first comment still) than the comment it originally responded to.