frontpage.

The Duchess Who Invented Science Fiction

https://compellingsciencefiction.com/posts/the-duchess-who-invented-science-fiction.html
1•davnicwil•42s ago•0 comments

NLnet's €21.6M fund for open-source internet projects

https://nlnet.nl/commonsfund/
2•handystudio•2m ago•0 comments

Veilid: Distributed Decentralized Framework (From CultoftheDeadCow)

https://veilid.com/
1•0xbadcafebee•2m ago•0 comments

We analyzed 47,000 ChatGPT conversations. Here's what people use it for

https://www.washingtonpost.com/technology/2025/11/12/how-people-use-chatgpt-data
2•pseudolus•3m ago•1 comment

Python Strftime Cheatsheet

https://strftime.org/
1•data_ase•3m ago•0 comments

Convex raises $24M to reinvent back ends

https://news.convex.dev/convex-raises-24m/
1•janpio•4m ago•0 comments

Marble by World Labs: Multimodal world model to create and edit 3D worlds

http://marble.worldlabs.ai/
1•dmarcos•5m ago•0 comments

10× Faster Log Processing at Scale: Beating Logstash Bottlenecks with Timeplus

https://www.timeplus.com/post/beating-logstash-bottlenecks
1•gangtao•5m ago•0 comments

Northern Lights Dazzle U.S. Skies After Powerful Solar Storm

https://www.scientificamerican.com/article/northern-lights-dazzle-u-s-skies-after-powerful-solar-...
2•quapster•7m ago•0 comments

Teradar raises $150M for a sensor it says beats Lidar and radar

https://techcrunch.com/2025/11/12/teradar-exits-stealth-with-an-all-weather-sensor-for-autonomy-a...
1•aganders3•8m ago•0 comments

Planchón-Peteroa volcano enters new eruptive phase

https://watchers.news/2025/11/11/planchon-peteroa-volcano-enters-new-eruptive-phase-chile-argenti...
1•wslh•8m ago•0 comments

Arch-delta Saves 80% Of Bandwidth On Upgrades

https://djugei.github.io/how-arch-delta-works/
1•birdculture•9m ago•0 comments

Haiku Activity and Contract Report, October 2025

https://www.haiku-os.org/blog/waddlesplash/2025-11-11-haiku_activity_contract_report_october_2025
3•todsacerdoti•12m ago•0 comments

The AI Bubble Is Worse Than You Think [video]

https://www.youtube.com/watch?v=-cdJQ8UyVLA
2•EPendragon•13m ago•0 comments

Project OSSAS: Custom LLMs to Process 100M Research Papers

https://inference.net/blog/project-aella
1•surprisetalk•14m ago•0 comments

LAION Dataset Explorer

https://aella.inference.net
1•surprisetalk•14m ago•0 comments

Google launches a lawsuit targeting text message scammers

https://www.npr.org/2025/11/12/nx-s1-5604857/google-lawsuit-phishing-text-message-scammers
2•speckx•15m ago•0 comments

Helm v4.0.0

https://github.com/helm/helm/releases/tag/v4.0.0
5•todsacerdoti•16m ago•0 comments

Show HN: Cancer diagnosis makes for an interesting RL environment for LLMs

1•dchu17•17m ago•0 comments

Show HN: AI music discovery for super nerds (Now on iOS AND macOS)

https://back2back.ai
1•pj4533•18m ago•0 comments

Extracting Playable Instrument Models from Short Audio Examples

https://blog.cochlea.xyz/resonancemodel.html
2•cochlear•18m ago•0 comments

DSearch: .NET IQueryable Dynamic Search Library

https://www.nuget.org/packages/DSearch
1•samuelhenshaw•19m ago•0 comments

VLC's keeper of the cone nets European free software gong

https://www.theregister.com/2025/11/12/vlc_guru_gong/
1•jjgreen•20m ago•0 comments

Bitwise Consistent On-Policy Reinforcement Learning with VLLM and TorchTitan

https://blog.vllm.ai/2025/11/10/bitwise-consistent-train-inference.html
1•brrrrrm•20m ago•0 comments

Vera-MH: open-source eval for chatbot safety in mental health

https://github.com/SpringCare/VERA-MH
1•__lucab•21m ago•0 comments

AEMO turns to battery inverters to run big grids with no synchronous generation

https://reneweconomy.com.au/aemo-turns-to-battery-inverters-for-world-first-trial-of-running-big-...
1•ViewTrick1002•22m ago•1 comment

Deconstructing the bubble machine – X's current and new recommendation engine Phoenix

https://starwatcher.substack.com/p/deconstructing-bubble-machine
1•ernests•22m ago•0 comments

A1: Agent-to-Code JIT Compiler

https://github.com/stanford-mast/a1
1•calebhwin•23m ago•0 comments

Politics in the US Workplace – research project on politics in the workplace

https://politicsatwork.org
2•alphabetatango•25m ago•0 comments

New treatment for vertebral fractures using adipose-derived stem cell spheroids

https://boneandjoint.org.uk/Article/10.1302/2046-3758.1410.BJR-2025-0092.R1
1•bookofjoe•25m ago•0 comments

Two women had a business meeting. AI called it childcare

https://medium.com/hold-my-juice/two-women-had-a-business-meeting-ai-called-it-childcare-6b09f5952940
12•sophiabk•1h ago

Comments

sophiabk•1h ago
We’re building a family AI called Hold My Juice — and last week, our own system mislabeled a recurring meeting between two founders as “childcare.”

Calendar: “Emily / Sophia.” Classification: “childcare.”

It was a perfect snapshot of how bias seeps into everyday AI. Most models still assume women = parents, planning = domestic, logistics = mom.

We’re designing from the opposite premise: AI that learns each family’s actual rhythm, values, and tone — without default stereotypes.
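
For concreteness, a minimal sketch of what such a classifier call could look like. This is not the actual Hold My Juice pipeline (which isn't shown anywhere in this thread); the model name, category list, and prompt wording are all assumptions, using the OpenAI chat completions API:

    # Hypothetical calendar-event classifier; not the product's real code.
    from openai import OpenAI

    client = OpenAI()

    CATEGORIES = ["work", "childcare", "errand", "social", "health"]

    def classify_event(title: str, attendees: list[str]) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[
                {"role": "system",
                 "content": "Classify the calendar event into exactly one of: "
                            + ", ".join(CATEGORIES) + ". Reply with the label only."},
                {"role": "user",
                 "content": f"Title: {title}\nAttendees: {', '.join(attendees)}"},
            ],
        )
        return resp.choices[0].message.content.strip()

    # With two first names and no other signal, the model is forced to guess:
    print(classify_event("Emily / Sophia", ["Emily", "Sophia"]))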

orochimaaru•1h ago
AI is trained off Reddit and other social media. If most portrayal of women and girls (and men, for that matter) in social media is biased towards certain activities - that’s what AI is going to spit out. AI doesn’t think.

Whether this is right or wrong is the incorrect question - because AI doesn’t understand bias or morality. It needs to be taught, and it’s being taught from heavily biased sources.

You should be able to craft prompts and guardrails to keep it from doing that. Just expecting it to behave well on its own is naive - if you have ever looked deeper into how AI is trained.

The big question is - what solutions exist to train it differently with a large enough corpus of public or private/paid-for data?

Fwiw - I’m the father of two girls whom I have advised to stay off social media completely because it’s unhealthy. So far they have understood why.
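
One way to act on the "prompts and guardrails" point above is to give the classifier an explicit abstention path so it never infers event type from names alone. A hedged sketch; this wording is an assumption, not any product's actual prompt, and as the reply below notes, prompt-level guardrails are far from reliable:

    # Hypothetical guardrail for an event-classifier system prompt.
    GUARDED_SYSTEM_PROMPT = (
        "Classify the calendar event into exactly one of: "
        "work, childcare, errand, social, health, unknown.\n"
        "Rules:\n"
        "- Do not infer the event type from attendee names, gender, or any "
        "other demographic signal.\n"
        "- If the title and attendees are not enough to decide, reply "
        "'unknown' instead of guessing.\n"
        "Reply with the label only."
    )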

daveguy•1h ago
The problem is crafted prompts and guardrails don't work very well, because these entire networks are trained on average internet garbage. And guess what's getting worse?

orochimaaru•30m ago
Agreed. The main problem is guys with too much money invested in this bullshit asking everyone to use their snake oil.

I think they’re leaning on everyone - even traditional enterprise company boards, startups, etc. to get this going. It’s not organic growth - it’s a PR machine with a trillion $$ behind an experiment.

gwelner•19m ago
You're about 10 years too late with the "patriarchy" schtick, boomer. Next you'll be doing militant atheism like it's still 2005.

cperciva•1h ago
I run into this sort of bias all the time -- in the real world, not just in AI. I take my daughter to medical appointments, both for scheduling reasons (my wife's schedule is less flexible) and rapport reasons (I'm not that kind of doctor, but I know the terminology and medical professionals treat me far more as a peer), and I routinely get "oh we expected her mother" or "we always phone the mother to schedule followup appointments".

Is it so hard to understand that men can be parents too?

johnisgood•1h ago
Why does it bother people so much? Open your mouth and just tell them to call you; give them your number. Why do you care about what the default is? Why does it matter? We cannot not have defaults; we would not be able to function without them. Plus these defaults are defaults for a reason. Seriously, think about it.

Edit: feel free to downvote me, but as a reminder: stress kills. ;)

dghlsakjg•1h ago
Presumably he already has told them his number and preferences. Defaults are fine, but you don't want your preference to get reset to default every time, and assuming that only the mother of a child should be contacted in all cases is a terrible default. The person who made the appointment and who is bringing the child to the doctor should be the one contacted by default. There is no reason that the mother of a child should be considered the default guardian. That is an incredibly dangerous assumption to make in many circumstances.

Edit: This reply was written to a response that got completely rewritten in an edit. It may not make as much sense.

david38•59m ago
This. Don’t be so sensitive, just say to call you.

I took my daughter to appointments and as soon as I started asking meaningful questions, doctors immediately switched to assuming I was the one to talk to.

When you act like you know what’s going on, like you’re on top of it, I’ve never once had a doctor assume I was just babysitting. This was true in the Midwest and California.

johnisgood•51m ago
> doctors immediately switched to assuming I was the one to talk to.

Exactly! They do that. If a father takes the kid, they will ask for his number, not the mother’s, in my experience. If both the mother and father go with the kid, well, there are cues they pick up on. In my case my father was typically in the background while my mother was the one doing the talking, meaning they would ask for her number, not my dad’s. So, all in all, it comes down to whoever does the most talking, for example. And if my dad wanted to be the one called, my mom would have told them his number, or my dad would have. I do not really see an issue here.

junaru•1h ago
Is it hard to understand you are in the minority? The world keeps presenting you with data.

cperciva•4m ago
Understand that I'm in the minority? Sure.

But the fact that I'm bringing my daughter to a medical appointment should be a pretty clear indication that, you know, I bring my daughter to medical appointments.

toomuchtodo•1h ago
> Is it so hard to understand that men can be parents too?

The Overton window and cultural norms take time to slide. We might be there after another generation; too early to tell.

0xdeadbeefbabe•46m ago
> in the real world, not just in AI

The scheduler is apparently trained to give higher weight to those sorts of questions. This raises some questions for GPTs, like: how are they supposed to model something not implied in the training data?

FloorEgg•1h ago
I have been building applications on LLMs since GPT-3.

Thousands of hours of context engineering has shown me how LLMs will do their best to answer a question with insufficient context and can give all sorts of wrong answers. I've found that the way I prompt it and what information is in the context can heavily bias the way it responds when it doesn't have enough information to respond accurately.

You assume the bias is in the LLM itself, but I am very suspicious that the bias is actually in your system prompt and context engineering.

Are you willing to share the system prompt that led to this result that you're claiming is sexist LLM bias?

Edit: Oidar (child comment to this) did an A/B test with male names and it seems to have proven the bias is indeed in the LLM, and that my suspicion of it coming from the prompt+context was wrong. Kudos and thanks for taking the time.

small_scombrus•1h ago
> You assume the bias is in the LLM itself

Common large datasets being inherently biased towards some ideas/concepts, and away from others, in ways that imply negative things is something there’s a LOT of literature about.

johnisgood•1h ago
"imply negative things"? What is "negative" here? I see nothing that is "negative".

FloorEgg•1h ago
That's not a very scientific stance. It would be far more informative to look at the system prompt and confirm whether or not the bias was coming from it. In my experience, when responses were exceptionally biased, the source of the bias was my own prompts.

The OP is claiming that an LLM assumes a meeting between two women is childcare. I've worked with LLMs enough to know that current-gen LLMs wouldn't make that assumption by default. There is no way the calendar-related data used to train LLMs would show the majority of 1:1s between two women as childcare focused. That seems extremely unlikely.

callan101•1h ago
This feels a tad rigged against the LLM, with the meeting being right after kids’ drop-off.

cheald•1h ago
Easily half the other events on the calendar are kid-related. Of course it's going to infer that, absent other direction, the most likely overarching theme of the visible events is "child care".
broof•1h ago
I hate that when I see this many em dashes, as well as statements like “it’s not x, it’s y” multiple times, I have to assume it was written or at least heavily edited by AI.

somewhereoutth•1h ago
LLMs: The chemical weapons of public discourse.

The cleanup is going to be a grim task.

OutOfHere•1h ago
People seem to be dividing into woke and antiwoke groups. The woke will lose because, from a moneymaking POV, they're less focused on the primary reward function that matters and more on secondary reward functions that come at the cost of hurting the primary.

oidar•59m ago
Here's an A/B

Emily / Sophia vs Bob / John https://imgur.com/a/9yt5rpA
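
For anyone who wants to reproduce this, a minimal sketch of the same kind of A/B test: hold the event constant, swap only the names, and compare label distributions over repeated samples (the model name and sample count here are arbitrary choices, not what the screenshot used):

    # Hypothetical A/B harness in the spirit of the linked screenshot.
    from collections import Counter
    from openai import OpenAI

    client = OpenAI()

    def label(title: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[
                {"role": "system",
                 "content": "Classify this calendar event in one or two words."},
                {"role": "user",
                 "content": f"Weekly recurring calendar event titled: {title}"},
            ],
        )
        return resp.choices[0].message.content.strip().lower()

    for title in ("Emily / Sophia", "Bob / John"):
        counts = Counter(label(title) for _ in range(20))
        print(title, counts.most_common(3))

    # If "childcare"-style labels cluster on the female pair while the male
    # pair gets "work"-style labels, the skew is in the model, not the prompt.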

FloorEgg•26m ago
This is really interesting and way more compelling evidence to me of gender bias in the LLM than response bias in the prompt and context.

Thank you for taking the time to approach this scientifically and share the evidence with us. I appreciate knowing the truth of the matter, and it seems my suspicion that the bias was from the prompt was wrong.

I admit I am surprised.