frontpage.

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
68•yi_wang•2h ago•23 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
233•valyala•10h ago•45 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
25•RebelPotato•2h ago•4 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
144•surprisetalk•10h ago•146 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
176•mellosouls•13h ago•333 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
62•gnufx•9h ago•55 comments

IBM Beam Spring: The Ultimate Retro Keyboard

https://www.rs-online.com/designspark/ibm-beam-spring-the-ultimate-retro-keyboard
19•rbanffy•4d ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
173•AlexeyBrin•15h ago•32 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
152•vinhnx•13h ago•16 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
41•swah•4d ago•91 comments

First Proof

https://arxiv.org/abs/2602.05192
125•samasblack•12h ago•75 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
298•jesperordrup•20h ago•95 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
69•momciloo•10h ago•13 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
96•randycupertino•5h ago•212 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
98•thelok•12h ago•21 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
35•mbitsnbites•3d ago•3 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
566•theblazehen•3d ago•206 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
7•ujjwalreddyks•5d ago•2 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
35•chwtutha•1h ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
286•1vuio0pswjnm7•16h ago•465 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
127•josephcsible•8h ago•155 comments

The silent death of good code

https://amit.prasad.me/blog/rip-good-code
81•amitprasad•4h ago•76 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
29•languid-photic•4d ago•9 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
180•valyala•10h ago•165 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
899•klaussilveira•1d ago•275 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
225•limoce•4d ago•125 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
115•onurkanbkrc•15h ago•5 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
141•speckx•4d ago•224 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
143•videotopia•4d ago•48 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
299•isitcontent•1d ago•39 comments

Claude Shannon's randomness-guessing machine

https://www.loper-os.org/bad-at-entropy/manmach.html
36•Kotlopou•3w ago

Comments

grayhatter•2w ago
> It is not hard to win this game. If you spent a whole day playing it, shame on you. But what if you did not know that you are playing a game? I dug up this toy when I saw people talking about generating 'random' numbers for cryptography by mashing keys or shouting into microphones. It is meant to educate you regarding the folly of such methods.

I wouldn't trust a human to generate enough entropy for any kind of key material. But I'd happily feed their output, and more importantly the metadata around that output (like the nanosecond delay between key presses), into the seed of a CSPRNG, along with plenty of other sources of entropy.

The primary characteristic of a CSPRNG is that its next output cannot be predicted from previous outputs. Once you have sufficient entropy to seed a CSPRNG, nothing you (correctly) mix into the state can decrease its security.

There is no folly in using human interactions to help seed a random number generator, assuming you don't use the characters they type as the only seed input.
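A minimal Python sketch of that kind of mixing (the function and prompt are hypothetical, not from any particular library): both the typed characters and the inter-keystroke delays get hashed into a pool together with OS-provided entropy, so a weak human contribution can't make the seed worse.

    import hashlib
    import os
    import time

    def gather_human_entropy(n_events: int = 32) -> bytes:
        # Mix keystrokes AND their timing metadata with OS entropy.
        # The typed characters are never the only seed input.
        pool = hashlib.sha256()
        pool.update(os.urandom(32))                       # baseline entropy from the OS
        last = time.monotonic_ns()
        for _ in range(n_events):
            ch = input("mash some keys, then press enter: ")
            now = time.monotonic_ns()
            pool.update(ch.encode())                      # what was typed (low entropy)
            pool.update((now - last).to_bytes(8, "big"))  # nanosecond inter-event delay
            last = now
        return pool.digest()                              # 32-byte seed for a CSPRNG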

kurisufag•2w ago
Mildly related: when I want a single bit of entropy in my day-to-day without fooling myself, I think of a random long-ish word and decide based on the evenness of its number of letters. This probably isn't an unbiased oracle, but it's good enough when I don't have a coin handy and care about avoiding self-delusion more than fair odds.
robertk•2w ago
It’s slightly biased: P(even) = 0.5702, a bias of +0.0702 (about 7 percentage points toward heads). You can use this Claude Code prompt to determine how much:

Use your web search tool call. Fetch a list of English words and find their incident frequency in common text (as a proxy for likelihood of someone knowing or thinking of the word on the fly). Take all words 10 characters or longer. Consider their parity (even number of letters or odd). What is the likelihood a coin comes up heads if and only if a word is even when sampled by incidence rate? You can compute this by grouping even and odd words, and summing up their respective incident rates in numerator and denominator. Report back how biased away this is from 0.5. Then do the same for words at least 9 characters to avoid “even start bias” given slight Zipf distribution statistics by word length. Average the two for a “fair sample” of the bias. Then run a bootstrap estimator with random choice of “at least N chars” (8 <= N <= 15) and random subsets of the dictionary (say 50% of words or whatever makes statistical sense). Report back the estimate of the bias with confidence interval (multiple bootstrap methods). How biased is this method from exactly random bits (0.5 prob heads/tails) at various confidence intervals?
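For the core calculation (without the bootstrap), a minimal Python sketch, assuming a word,count frequency file such as a unigram list; the file name and format here are assumptions:

    import csv
    from collections import defaultdict

    def parity_bias(freq_csv: str, min_len: int = 10) -> float:
        # P(word length is even) when words are sampled by usage frequency.
        mass = defaultdict(float)                  # total frequency per parity
        with open(freq_csv, newline="") as f:
            for word, count in csv.reader(f):
                if len(word) >= min_len:
                    mass[len(word) % 2] += float(count)
        return mass[0] / (mass[0] + mass[1])       # probability of "even" (= heads)

    # parity_bias("unigram_freq.csv") - 0.5 gives the bias away from a fair coin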

rosseitsa•2w ago
Good one, though it has to be a fairly long word. Personally I check the current minute of the hour :P
RandomBK•2w ago
Additionally, so long as we can be sure the human's output is not actively adversarial, we can XOR it into the entropy pool. The pool's entropy can never decrease this way.
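A one-line illustration of the mixing step (a sketch, not any particular kernel's pool design):

    def xor_mix(pool: bytes, sample: bytes) -> bytes:
        # XOR an untrusted sample into an equal-length slice of the pool.
        # If the sample is independent of the pool, the result is at least
        # as unpredictable as the pool was before the mix.
        return bytes(p ^ s for p, s in zip(pool, sample))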
arn3n•2w ago
There’s a basic approach to this using Markov chains that works surprisingly well. Scott Aaronson once challenged some students to beat his algorithm; only one student could, and he claimed he just “used his free will”. Human randomness isn’t so random. There’s a neat little writeup about it here: https://planetbanatt.net/articles/freewill.html
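A sketch of the idea in Python (not the algorithm from the writeup): predict each bit from whichever bit most often followed the same short history, then update the counts. Against truly random input it hovers around 50%; against a human it usually does better.

    from collections import defaultdict

    def predictor_score(bits: str, order: int = 2) -> float:
        # A small Markov-style mind reader: guess each bit from what most
        # often followed the same recent context, then learn from the answer.
        counts = defaultdict(lambda: [0, 0])       # context -> [#zeros, #ones]
        correct, context = 0, ""
        for b in bits:
            bit = int(b)
            zeros, ones = counts[context]
            guess = 1 if ones > zeros else 0       # ties default to guessing 0
            correct += (guess == bit)
            counts[context][bit] += 1
            context = (context + b)[-order:]       # slide the context window
        return correct / len(bits)

    # predictor_score("01101100101101001011")  -> fraction guessed correctly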
kelseyfrog•2w ago
I like to think that this is a measurement of free will in the literal, naïve sense. It makes just as much sense as other definitions (the ability to take action independent of external cause) and it has the bonus of being quantifiable.

The only downside? A LOT of people get very mad at the implications.

Tzt•2w ago
"free will" also known as digits of pi mod 2
continuational•2w ago
Got 50% on the first try; the computer only made two guesses, one right and one wrong, and passed on the rest.
3ple_alpha•2w ago
Did you stop after 14 iterations? Because the game is, in fact, infinite.
tucnak•2w ago
I did a couple of runs without thinking much about it, and the computer never got more than 25%. I guess 0000 and 1111 don't feel random, but they work pretty well. The probability of that happening by random chance alone is only 1/8, or 12.5%; in other words, it will happen all the time.
ChocMontePy•2w ago
I got 58% after 100 attempts.

My method uses the fact that the letters a-k + u make up around 49.9% of the letters in a normal text. So I just go through a text letter by letter in my mind, giving a 0 if the letter is a-k or u, and a 1 if it's l-t or v-z.

For example, the Gettysburg Address:

f - 0, o - 1, u - 0, r - 1, s - 1, c - 0, o - 1, r - 1, e - 0
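In code, the rule is just (a minimal Python sketch; the memorized text is the human part):

    def text_to_bits(text: str) -> str:
        # 0 for a-k and u, 1 for every other letter; non-letters are skipped.
        zeros = set("abcdefghijku")
        return "".join("0" if c in zeros else "1"
                       for c in text.lower() if c.isalpha())

    # text_to_bits("four score")  -> "010110110"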