
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
475•klaussilveira•7h ago•116 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
813•xnx•12h ago•487 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
33•matheusalmeida•1d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
157•isitcontent•7h ago•17 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
156•dmpetrov•7h ago•67 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
92•jnord•3d ago•12 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
50•quibono•4d ago•6 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
260•vecti•9h ago•123 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
207•eljojo•10h ago•134 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
328•aktau•13h ago•158 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
327•ostacke•13h ago•86 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
411•todsacerdoti•15h ago•219 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
23•kmm•4d ago•1 comment

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
337•lstoll•13h ago•242 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
52•phreda4•6h ago•9 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
4•romes•4d ago•0 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
195•i5heu•10h ago•145 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
115•vmatsiiako•12h ago•38 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
152•limoce•3d ago•79 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
245•surprisetalk•3d ago•32 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
996•cdrnsf•16h ago•420 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
26•gfortaine•5h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
46•rescrv•15h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
67•ray__•3h ago•30 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
38•lebovic•1d ago•11 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
78•antves•1d ago•59 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
30•betamark•14h ago•28 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
41•nwparker•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
7•gmays•2h ago•2 comments

Evolution of car door handles over the decades

https://newatlas.com/automotive/evolution-car-door-handle/
41•andsoitis•3d ago•62 comments

Which Humans? (2023)

https://osf.io/preprints/psyarxiv/5b26t_v1
45•surprisetalk•1mo ago

Comments

jdkee•1mo ago
As an aside: Last year a student of mine (we're at a U.S. college) told me that his teenage cousins back in Mongolia were all learning English in order to use ChatGPT.
croisillon•1mo ago
is it a steppe up?
MengerSponge•1mo ago
/usr/bin/humans, presumably
dwa3592•1mo ago
/usr/bin/human/weird, to be precise.
Timwi•1mo ago
Homo peculiaris
mncharity•1mo ago
Since the page didn't load for me several times, and the title is ambiguous, here's the Abstract: Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.
rokizero•1mo ago
This was submitted 30 months ago. Still interesting. I would be interested if this got 'worse' or 'better' with newer models.
andy99•1mo ago
2023, and using some kind of in-page pdf reader from 2002
wellthisisgreat•1mo ago
It’s 2025 today, Andy :)
Timwi•1mo ago
The website is from 2023.
didgetmaster•1mo ago
Surprise, Surprise. LLMs will respond according to the set of data that their model was trained on!

While just about every LLM is trained on data that far surpasses the output of any one person, or even a decent-sized group, it will still reflect the average sentiment of the corpus of data fed into it.

If the bulk of the training data was scraped from websites created in 'WEIRD' countries, then its responses will largely mimic their culture.

cathyreisenwitz•1mo ago
I wonder whether it might be useful to the continued existence of humanity to correct for the individualism bias in WEIRD countries with some collectivism.
levocardia•1mo ago
I think it is mostly a good thing that LLMs have "WEIRD" values. We are at a very fortuitous point in history, where the modal position in extant written text is classically liberal and believes in respecting human rights. Virtually no other point in history would be that way, and a true modal position among values and moral beliefs held among all 8 billion people currently alive on earth -- much less the modal position among all ~100 billion humans ever -- would, I'd hazard to guess, not be a very nice place to end up.
g8oz•1mo ago
The extant text also encodes Western hypocrisies and blind spots.
daymanstep•1mo ago
If you go back 200 years, you'll be smack bang in the middle of the Enlightenment with thinkers like Kant and Jeremy Bentham.

You could argue that if you trained LLMs on only texts from that time period, you would get something even more "classically liberal" or human-rights-respecting.

rexpop•1mo ago
You're foolishly myopic if you think your ideology coincides with the global historical best. Have a little humility and consider that there are concepts you haven't even heard of that might be better.
buildsjets•1mo ago
Denisovans, clearly.
OrionNox•1mo ago
They deliberately picked "WEIRD". That tells you enough about their ideological position and bias.