frontpage.

Tiny C Compiler

https://bellard.org/tcc/
102•guerrilla•3h ago•44 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
186•valyala•7h ago•34 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
110•surprisetalk•7h ago•116 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
43•gnufx•6h ago•45 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
130•mellosouls•10h ago•280 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
880•klaussilveira•1d ago•269 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
129•vinhnx•10h ago•15 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
166•AlexeyBrin•12h ago•29 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
97•zdw•3d ago•46 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
60•randycupertino•2h ago•90 comments

First Proof

https://arxiv.org/abs/2602.05192
96•samasblack•9h ago•63 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
265•jesperordrup•17h ago•86 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
167•valyala•7h ago•148 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
4•todsacerdoti•4d ago•1 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
85•thelok•9h ago•18 comments

The silent death of Good Code

https://amit.prasad.me/blog/rip-good-code
48•amitprasad•1h ago•45 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
549•theblazehen•3d ago•203 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
49•momciloo•7h ago•9 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
26•mbitsnbites•3d ago•2 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
24•languid-photic•4d ago•6 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
246•1vuio0pswjnm7•13h ago•388 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
80•josephcsible•5h ago•107 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
108•onurkanbkrc•12h ago•5 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
137•videotopia•4d ago•44 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
57•rbanffy•4d ago•17 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
215•limoce•4d ago•123 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
303•alainrk•12h ago•482 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
48•marklit•5d ago•9 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
121•speckx•4d ago•185 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
294•isitcontent•1d ago•39 comments

Which Humans? (2023)

https://osf.io/preprints/psyarxiv/5b26t_v1
45•surprisetalk•1mo ago

Comments

jdkee•1mo ago
As an aside: Last year a student of mine (we're at a U.S. college) told me that his teenage cousins back in Mongolia were all learning English in order to use ChatGPT.
croisillon•1mo ago
is it a steppe up?
MengerSponge•1mo ago
/usr/bin/humans, presumably
dwa3592•1mo ago
/usr/bin/human/weird, to be precise.
Timwi•1mo ago
Homo peculiaris
mncharity•1mo ago
Since the page didn't load for me several times, and the title is ambiguous, here's the Abstract:

Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.
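To make the reported correlation concrete, here is a minimal sketch, using purely hypothetical numbers rather than the paper's data, of how a figure like r = -.70 could arise: correlate each population's cultural distance from a WEIRD baseline with how closely the LLM's answers match that population's survey responses.

# Illustrative only: all values below are hypothetical, not taken from the paper.
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical cultural-distance scores (0 = the WEIRD baseline population itself).
cultural_distance = [0.00, 0.05, 0.12, 0.25, 0.40, 0.55, 0.70]

# Hypothetical agreement between the LLM's answers and each population's
# responses on the same psychological measures.
llm_similarity = [0.92, 0.90, 0.81, 0.70, 0.58, 0.49, 0.35]

r = correlation(cultural_distance, llm_similarity)
print(f"Pearson r = {r:.2f}")  # strongly negative: similarity falls off with distance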
rokizero•1mo ago
This was submitted 30 months ago. Still interesting. I would be interested if this got 'worse' or 'better' with newer models.
andy99•1mo ago
2023, and using some kind of in-page pdf reader from 2002
wellthisisgreat•1mo ago
It’s 2025 today, Andy :)
Timwi•1mo ago
The website is from 2023.
didgetmaster•1mo ago
Surprise, surprise. LLMs will respond according to the data they were trained on!

While just about every LLM is trained on data that far surpasses the output of any one person, or even a decent-sized group, it will still reflect the average sentiment of the corpus fed into it.

If the bulk of the training data was scraped from websites created in 'WEIRD' countries, then its responses will largely mimic their culture.

cathyreisenwitz•1mo ago
I wonder whether it might be useful to the continued existence of humanity to correct for the individualism bias in WEIRD countries with some collectivism
levocardia•1mo ago
I think it is mostly a good thing that LLMs have "WEIRD" values. We are at a very fortuitous point in history, where the modal position in extant written text is classically liberal and believes in respecting human rights. Virtually no other point in history would be that way, and a true modal position among values and moral beliefs held among all 8 billion people currently alive on earth -- much less the modal position among all ~100 billion humans ever -- would, I'd hazard to guess, not be a very nice place to end up.
g8oz•1mo ago
The extant text also encodes Western hypocrisies and blind spots.
daymanstep•1mo ago
If you go back 200 years, you'll be smack bang in the middle of the Enlightenment with thinkers like Kant and Jeremy Bentham.

You could argue that if you trained LLMs on only texts from that time period, you would get something even more "classically liberal" or human-rights-respecting

rexpop•1mo ago
You're foolishly myopic if you think your ideology coincides with the global historical best. Have a little humility and consider that there are concepts you haven't even heard of that might be better.
buildsjets•1mo ago
Denisovans, clearly.
OrionNox•1mo ago
They deliberately picked "weird". That tells you enough about their ideological position and bias.