frontpage.

Epstein files reveal deeper ties to scientists than previously known

https://www.nature.com/articles/d41586-026-00388-0
1•XzetaU8•1m ago•0 comments

Red teamers arrested conducting a penetration test

https://www.infosecinstitute.com/podcast/red-teamers-arrested-conducting-a-penetration-test/
1•begueradj•8m ago•0 comments

Show HN: Open-source AI powered Kubernetes IDE

https://github.com/agentkube/agentkube
1•saiyampathak•12m ago•0 comments

Show HN: Lucid – Use LLM hallucination to generate verified software specs

https://github.com/gtsbahamas/hallucination-reversing-system
1•tywells•14m ago•0 comments

AI Doesn't Write Every Framework Equally Well

https://x.com/SevenviewSteve/article/2019601506429730976
1•Osiris30•17m ago•0 comments

Aisbf – an intelligent routing proxy for OpenAI compatible clients

https://pypi.org/project/aisbf/
1•nextime•18m ago•1 comments

Let's handle 1M requests per second

https://www.youtube.com/watch?v=W4EwfEU8CGA
1•4pkjai•19m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
1•zhizhenchi•19m ago•0 comments

Goal: Ship 1M Lines of Code Daily

2•feastingonslop•30m ago•0 comments

Show HN: Codex-mem, 90% fewer tokens for Codex

https://github.com/StartripAI/codex-mem
1•alfredray•32m ago•0 comments

FastLangML: Context-aware lang detector for short conversational text

https://github.com/pnrajan/fastlangml
1•sachuin23•36m ago•1 comments

LineageOS 23.2

https://lineageos.org/Changelog-31/
1•pentagrama•39m ago•0 comments

Crypto Deposit Frauds

2•wwdesouza•40m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
2•lostlogin•40m ago•0 comments

Framing an LLM as a safety researcher changes its language, not its judgement

https://lab.fukami.eu/LLMAAJ
1•dogacel•43m ago•0 comments

Is anyone interested in a creator economy startup

1•Nejana•44m ago•0 comments

Show HN: Skill Lab – CLI tool for testing and quality scoring agent skills

https://github.com/8ddieHu0314/Skill-Lab
1•qu4rk5314•44m ago•0 comments

2003: What is Google's Ultimate Goal? [video]

https://www.youtube.com/watch?v=xqdi1xjtys4
1•1659447091•44m ago•0 comments

Roger Ebert Reviews "The Shawshank Redemption"

https://www.rogerebert.com/reviews/great-movie-the-shawshank-redemption-1994
1•monero-xmr•46m ago•0 comments

Busy Months in KDE Linux

https://pointieststick.com/2026/02/06/busy-months-in-kde-linux/
1•todsacerdoti•47m ago•0 comments

Zram as Swap

https://wiki.archlinux.org/title/Zram#Usage_as_swap
1•seansh•1h ago•1 comments

Green’s Dictionary of Slang - Five hundred years of the vulgar tongue

https://greensdictofslang.com/
1•mxfh•1h ago•0 comments

Nvidia CEO Says AI Capital Spending Is Appropriate, Sustainable

https://www.bloomberg.com/news/articles/2026-02-06/nvidia-ceo-says-ai-capital-spending-is-appropr...
1•virgildotcodes•1h ago•2 comments

Show HN: StyloShare – privacy-first anonymous file sharing with zero sign-up

https://www.styloshare.com
1•stylofront•1h ago•0 comments

Part 1 the Persistent Vault Issue: Your Encryption Strategy Has a Shelf Life

1•PhantomKey•1h ago•0 comments

Show HN: Teleop_xr – Modular WebXR solution for bimanual robot teleoperation

https://github.com/qrafty-ai/teleop_xr
1•playercc7•1h ago•1 comments

The Highest Exam: How the Gaokao Shapes China

https://www.lrb.co.uk/the-paper/v48/n02/iza-ding/studying-is-harmful
2•mitchbob•1h ago•1 comments

Open-source framework for tracking prediction accuracy

https://github.com/Creneinc/signal-tracker
1•creneinc•1h ago•0 comments

India's Sarvam AI launches Indic-language focused LLMs

https://x.com/SarvamAI
2•Osiris30•1h ago•0 comments

Show HN: CryptoClaw – open-source AI agent with built-in wallet and DeFi skills

https://github.com/TermiX-official/cryptoclaw
1•cryptoclaw•1h ago•0 comments

Claude Shannon's randomness-guessing machine

https://www.loper-os.org/bad-at-entropy/manmach.html
36•Kotlopou•3w ago

Comments

grayhatter•3w ago
> It is not hard to win this game. If you spent a whole day playing it, shame on you. But what if you did not know that you are playing a game? I dug up this toy when I saw people talking about generating 'random' numbers for cryptography by mashing keys or shouting into microphones. It is meant to educate you regarding the folly of such methods.

I wouldn't trust a human to generate enough entropy for any kind of key material. But I'd happily feed their output, and more importantly the metadata around that output (like the nanosecond delays between key presses), into the seed of a CSPRNG, along with plenty of other sources of entropy.

The primary characteristic of a CSPRNG is the inability to predict the next output from the previous output. Once you get sufficient entropy to seed a CSPRNG, nothing you (correctly) mix into the state can decrease its security.

There is no folly in using human interactions to help seed a random number generator, assuming you don't use the characters they type as the only seed input.
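
A rough sketch of that kind of mixing (the function names and the Enter-key capture are illustrative, not any particular library's seeding API): only the timing metadata is kept, and it is hashed together with OS entropy to form seed material.

```python
# Illustrative sketch: mix keystroke timing metadata with OS entropy into a
# seed. Names are hypothetical, not a real library's seeding API.
import os
import time
import hashlib

def collect_keystroke_delays(n=16):
    # Record the nanosecond gaps between successive key presses (here the
    # human presses Enter n times); only the timing is kept, not what was typed.
    delays = []
    last = time.perf_counter_ns()
    for _ in range(n):
        input()
        now = time.perf_counter_ns()
        delays.append(now - last)
        last = now
    return delays

def seed_material(delays):
    # Hash the timing metadata together with the OS entropy pool; correctly
    # mixing in extra sources can only add to the unpredictability of the seed.
    h = hashlib.sha256()
    h.update(os.urandom(32))
    for d in delays:
        h.update(d.to_bytes(8, "little"))
    return h.digest()
```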

kurisufag•2w ago
mildly related: when i want a single bit of entropy in my day-to-day without fooling myself, i think of a random long-ish word and decide based on the evenness of the number of letters. probably this isn't an unbiased oracle, but it's good enough when i don't have a coin handy and care about avoiding self-delusion more than fair odds.
robertk•2w ago
It’s slightly biased: P(even) = 0.5702, a bias of +0.0702 (about 7 percentage points toward heads). You can use this Claude Code prompt to determine how much:

Use your web search tool call. Fetch a list of English words and find their incidence frequency in common text (as a proxy for likelihood of someone knowing or thinking of the word on the fly). Take all words 10 characters or longer. Consider their parity (even number of letters or odd). What is the likelihood a coin comes up heads if and only if a word is even when sampled by incidence rate? You can compute this by grouping even and odd words, and summing up their respective incidence rates in numerator and denominator. Report back how biased away this is from 0.5. Then do the same for words at least 9 characters to avoid “even start bias” given slight Zipf distribution statistics by word length. Average the two for a “fair sample” of the bias. Then run a bootstrap estimator with random choice of “at least N chars” (8 <= N <= 15) and random subsets of the dictionary (say 50% of words or whatever makes statistical sense). Report back the estimate of the bias with confidence interval (multiple bootstrap methods). How biased is this method from exactly random bits (0.5 prob heads/tails) at various confidence intervals?
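
An offline approximation of what that prompt computes, as a minimal sketch; it assumes a whitespace-separated "word count" frequency file, and the path and resulting figures are placeholders that depend entirely on the word list used.

```python
# Minimal sketch of the frequency-weighted parity-bias estimate described in
# the prompt above. Assumes each line of the file is "word count".
def parity_bias(freq_path, min_len=10):
    even = odd = 0.0
    with open(freq_path) as f:
        for line in f:
            word, count = line.split()
            if len(word) >= min_len:
                if len(word) % 2 == 0:
                    even += float(count)
                else:
                    odd += float(count)
    p_even = even / (even + odd)
    return p_even, p_even - 0.5   # P(heads) and its deviation from a fair coin

# Example with a hypothetical file: p, bias = parity_bias("unigram_counts.txt")
```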

rosseitsa•2w ago
good one, though it has to be a fairly long word. Personally I check the current minutes of the hour :P
RandomBK•2w ago
Additionally, so long as we can be sure the human's output is not actively adversarial, we can xor it into the entropy pool. Entropy can only increase this way.
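
A toy version of that property: XOR-ing a contribution byte-by-byte into a pool that is already uniform, and independent of the contribution, leaves the pool uniform, so a weak or junk source cannot make things worse.

```python
# Toy illustration of XOR-mixing an untrusted contribution into an entropy
# pool. If the pool bytes are uniform and independent of the contribution,
# the XOR result stays uniform.
def mix_into_pool(pool: bytes, contribution: bytes) -> bytes:
    mixed = bytearray(pool)
    for i, b in enumerate(contribution):
        mixed[i % len(mixed)] ^= b   # wrap around if the contribution is longer
    return bytes(mixed)
```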
arn3n•2w ago
There’s a basic approach to this using Markov chains that works surprisingly well. Scott Aaronson once challenged some students to beat his algorithm — only one student could, who claimed he just “used his free will”. Human randomness isn’t so random. There’s a neat little writeup about it here: https://planetbanatt.net/articles/freewill.html
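
Roughly the kind of predictor described there, sketched under assumptions (a history window of k=5 bits and a majority-vote guess; not necessarily the write-up's exact setup): remember what followed each recent k-bit history and predict the majority.

```python
from collections import defaultdict

# Toy next-bit predictor in the spirit of the linked write-up: count how often
# 0 or 1 followed each recent k-bit history and guess the majority.
def play(get_human_bit, rounds=200, k=5):
    counts = defaultdict(lambda: [0, 0])   # history string -> [zeros, ones]
    history = ""
    correct = 0
    for _ in range(rounds):
        zeros, ones = counts[history]
        guess = 1 if ones > zeros else 0   # predict the majority continuation
        bit = get_human_bit()              # player supplies 0 or 1
        correct += int(guess == bit)
        counts[history][bit] += 1
        history = (history + str(bit))[-k:]
    return correct / rounds                # fraction the machine guessed right
```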
kelseyfrog•2w ago
I like to think that this is a measurement of free will in the literal, naïve sense. It makes just as much sense as other definitions (the ability to take action independent of external cause) and it has the bonus of being quantifiable.

The only downside? A LOT of people get very mad at the implications.

Tzt•2w ago
"free will" also known as digits of pi mod 2
continuational•2w ago
Got 50% on the first try; the computer only made two guesses, one right and one wrong, and passed on the rest.
3ple_alpha•2w ago
Did you stop after 14 iterations? Because the game is, in fact, infinite.
tucnak•2w ago
I did a couple of runs without thinking much about it, and the computer never got more than 25%. I guess 0000 and 1111 don't feel random, but they work pretty well. The probability of that by random chance is only 1/8, or 12.5%; in other words, it will happen all the time.
ChocMontePy•2w ago
I got 58% after 100 attempts.

My method uses the fact that the letters a-k + u make up around 49.9% of letters in a normal text. So I just go through a text letter by letter in my mind, giving 0 if the letter is a-k or u, and a 1 if it's l-t or v-z.

For example, the Gettysburg Address:

f - 0
o - 1
u - 0
r - 1
s - 1
c - 0
o - 1
r - 1
e - 0
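
A short sketch of that letter-class mapping, using the same a-k plus u split described above; the trailing comment reproduces the "Four score" bits from the example.

```python
# Letters a-k plus u map to 0, the remaining letters (l-t, v-z) map to 1;
# non-letters are skipped.
def text_to_bits(text):
    zero_class = set("abcdefghijku")
    return [0 if ch in zero_class else 1
            for ch in text.lower() if ch.isalpha()]

# text_to_bits("Four score") -> [0, 1, 0, 1, 1, 0, 1, 1, 0]
```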