frontpage.

I failed to recreate the 1996 Space Jam website with Claude

https://j0nah.com/i-failed-to-recreate-the-1996-space-jam-website-with-claude/
276•thecr0w•7h ago•235 comments

Mechanical power generation using Earth's ambient radiation

https://www.science.org/doi/10.1126/sciadv.adw6833
35•defrost•3h ago•13 comments

The C++ standard for the F-35 Fighter Jet [video]

https://www.youtube.com/watch?v=Gv4sDL9Ljww
173•AareyBaba•7h ago•177 comments

Dollar-stores overcharge customers while promising low prices

https://www.theguardian.com/us-news/2025/dec/03/customers-pay-more-rising-dollar-store-costs
226•bookofjoe•10h ago•366 comments

Google Titans architecture, helping AI have long-term memory

https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
375•Alifatisk•12h ago•127 comments

Bag of words, have mercy on us

https://www.experimental-history.com/p/bag-of-words-have-mercy-on-us
21•ntnbr•2h ago•13 comments

A two-person method to simulate die rolls (2023)

https://blog.42yeah.is/algorithm/2023/08/05/two-person-die.html
50•Fraterkes•2d ago•27 comments

An Interactive Guide to the Fourier Transform

https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/
132•pykello•5d ago•16 comments

Scala 3 slowed us down?

https://kmaliszewski9.github.io/scala/2025/12/07/scala3-slowdown.html
173•kmaliszewski•9h ago•102 comments

Build a DIY magnetometer with a couple of seasoning bottles

https://spectrum.ieee.org/listen-to-protons-diy-magnetometer
61•nullbyte808•1w ago•17 comments

The Anatomy of a macOS App

https://eclecticlight.co/2025/12/04/the-anatomy-of-a-macos-app/
183•elashri•12h ago•54 comments

Spinlocks vs. Mutexes: When to Spin and When to Sleep

https://howtech.substack.com/p/spinlocks-vs-mutexes-when-to-spin
9•birdculture•28m ago•0 comments

Estimates are difficult for developers and product owners

https://thorsell.io/2025/12/07/estimates.html
140•todsacerdoti•5h ago•167 comments

Minimum Viable Arduino Project: Aeropress Timer

https://netninja.com/2025/12/01/minimum-viable-arduino-project-aeropress-timer/
15•surprisetalk•5d ago•5 comments

How I block all online ads

https://troubled.engineer/posts/no-ads/
46•StrLght•2h ago•14 comments

The state of Schleswig-Holstein is consistently relying on open source

https://www.heise.de/en/news/Goodbye-Microsoft-Schleswig-Holstein-relies-on-Open-Source-and-saves...
513•doener•11h ago•236 comments

Work disincentives hit the near-poor hardest (2022)

https://www.niskanencenter.org/work-disincentives-hit-the-near-poor-hardest-why-and-what-to-do-ab...
11•folump•5d ago•0 comments

Nested Learning: A new ML paradigm for continual learning

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
72•themgt•10h ago•2 comments

What the heck is going on at Apple?

https://www.cnn.com/2025/12/06/tech/apple-tim-cook-leadership-changes
30•methuselah_in•8h ago•37 comments

Java Hello World, LLVM Edition

https://www.javaadvent.com/2025/12/java-hello-world-llvm-edition.html
163•ingve•13h ago•58 comments

Evidence from the One Laptop per Child program in rural Peru

https://www.nber.org/papers/w34495
88•danso•5h ago•64 comments

Proxmox delivers its software-defined datacenter contender and VMware escape

https://www.theregister.com/2025/12/05/proxmox_datacenter_manager_1_stable/
50•Bender•3h ago•1 comments

Context plumbing

https://interconnected.org/home/2025/11/28/plumbing
11•gmays•5d ago•2 comments

Building a Toast Component

https://emilkowal.ski/ui/building-a-toast-component
88•FragrantRiver•4d ago•32 comments

Show HN: Spotify Wrapped but for LeetCode

https://github.com/collinboler/leetcodewrapped
21•collinboler2•6h ago•9 comments

Millions of Americans mess up their taxes, but a new law will help

https://www.wakeuptopolitics.com/p/millions-of-americans-mess-up-their
24•toomuchtodo•6h ago•9 comments

Over fifty new hallucinations in ICLR 2026 submissions

https://gptzero.me/news/iclr-2026/
450•puttycat•11h ago•353 comments

Vanity activities

https://quarter--mile.com/vanity-activities
56•surprisetalk•6d ago•53 comments

Semantic Compression (2014)

https://caseymuratori.com/blog_0015
48•tosh•8h ago•5 comments

Iced 0.14 has been released (Rust GUI library)

https://github.com/iced-rs/iced/releases/tag/0.14.0
55•airstrike•3h ago•27 comments

Bag of words, have mercy on us

https://www.experimental-history.com/p/bag-of-words-have-mercy-on-us
21•ntnbr•2h ago

Comments

palata•1h ago
Slightly unfortunate that "Bag of words" is already a different concept: https://en.wikipedia.org/wiki/Bag_of_words.
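
For reference, the established bag-of-words model reduces a document to unordered token counts, throwing word order away entirely. A minimal Python sketch (illustrative only; the toy sentence and crude tokenizer are mine, not from the essay or the Wikipedia article):

    from collections import Counter

    def bag_of_words(text: str) -> Counter:
        # Deliberately crude tokenizer: lowercase, split on whitespace.
        return Counter(text.lower().split())

    print(bag_of_words("the cat sat on the mat"))
    # Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
    # "the mat sat on the cat" yields exactly the same bag -- word order
    # is discarded, which is what separates this model from an LLM's
    # order-sensitive next-token prediction.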

My second thought is that it's not the metaphor that is misleading. People have been told thousands of times that LLMs don't "think", don't "know", don't "feel", but are "just a very impressive autocomplete". If they still really want to completely ignore that, why would they suddenly change their mind with a new metaphor?

Humans are lazy. If it looks true enough and it costs less effort, humans will love it. "Are you sure the LLM did your job correctly?" is completely irrelevant: people couldn't care less whether it's correct or not. As long as the employer believes that the employee is "doing their job", that's good enough. So the question is really: "do you think you'll get fired if you use this?". If the answer is "no, actually I may even look more productive to my employer", then why would people not use it?

viccis•1h ago
Every day I see people treat gen AI like a thinking human, and Dijkstra's attitude toward anthropomorphizing computers is vindicated even more.

That said, I think the author's use of "bag of words" here is a mistake. Not only does it already have an established meaning in a field adjacent to LLMs, but I don't think the metaphor explains anything. Gen AI tricks laypeople into treating its token inferences as "thinking" because it is trained to replicate the semiotic appearance of doing so. A "bag of words" doesn't sufficiently explain this behavior.
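
For what it's worth, the "token inferences" being anthropomorphized are, mechanically, repeated next-token prediction. A hand-wavy sketch of greedy decoding (the model argument is a stand-in for any trained network, not a real API):

    def generate(model, prompt_tokens, max_new=20, eos=0):
        # model(tokens) is assumed to return one probability per
        # vocabulary id, conditioned on everything produced so far.
        tokens = list(prompt_tokens)
        for _ in range(max_new):
            probs = model(tokens)
            nxt = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
            if nxt == eos:
                break
            tokens.append(nxt)
        return tokens

Everything that reads as "thinking" in a chat transcript comes out of a loop of this shape, one token at a time (real systems usually sample rather than pick greedily).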

Kim_Bruning•1h ago
This is essentially Lady Lovelace's objection from the 19th century [1]. Turing addressed this directly in "Computing Machinery and Intelligence" (1950) [2], and implicitly via the halting problem in "On Computable Numbers" (1936) [3]. Later work on cellular automata, famously Conway's Game of Life [4], demonstrates more conclusively that this framing fails as a predictive model: simple rules produce structures no one "put in."
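
To make the Game of Life point concrete, here is a minimal step function in Python (a sketch of the standard rules, not code from reference [4]):

    from collections import Counter
    from itertools import product

    def step(live):
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same five-cell shape, shifted one cell diagonally

Two rules, and yet gliders, oscillators, and other structures emerge that nobody explicitly put in.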

A test I did myself was to ask Claude (the LLM from Anthropic) to write working code for entirely novel instruction set architectures (e.g., custom ISAs from the game Turing Complete [5]), which is difficult to reconcile with pure retrieval.

[1] Lovelace, A. (1843). Notes by the Translator, in Scientific Memoirs Vol. 3. ("The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.") Primary source: https://en.wikisource.org/wiki/Scientific_Memoirs/3/Sketch_o.... See also: https://www.historyofdatascience.com/ada-lovelace/ and https://writings.stephenwolfram.com/2015/12/untangling-the-t...

[2] https://academic.oup.com/mind/article/LIX/236/433/986238

[3] https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf

[4] https://web.stanford.edu/class/sts145/Library/life.pdf

[5] https://store.steampowered.com/app/1444480/Turing_Complete/

darepublic•56m ago
Nice essay but when I read this

> But we don’t go to baseball games, spelling bees, and Taylor Swift concerts for the speed of the balls, the accuracy of the spelling, or the pureness of the pitch. We go because we care about humans doing those things.

My first thought was: does anyone want to _watch_ me program?

skybrian•45m ago
No, but open source projects will be somewhat more willing to review a pull request you wrote yourself than one that's computer-generated.

Fwirt•40m ago
No, but watching a novelist at work is boring, and yet people like books that are written by humans because each one speaks to the condition of the human who wrote it.

Let us not forget the old saw from SICP, “Programs must be written for people to read, and only incidentally for machines to execute.” I feel a number of people in the industry today fail to live by that maxim.

hansvm•34m ago
A number of people make money letting people watch them code.

awesome_dude•30m ago
I mean, I like to watch Gordon Ramsay... not cook, but have very strong discussions with those who dare to fail his standards...

1659447091•28m ago
I vaguely remember a site where you could watch random people live-streaming their programming environments, but I think Twitch ate it -- or maybe it was Twitch itself. Not sure, but it was interesting.

Herring•48m ago
Give it time. The first iPhone sucked compared to the Nokia/Blackberry flagships of the day. No 3G support, couldn't copy/paste, no apps, no GPS, crappy camera, quick price drops, negligible sales in the overall market.

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

awesome_dude•32m ago
The first VHS sucked when compared to Beta video

And it never got better: the superior technology lost, and the war was won through content deals.

Lesson: Technology improvements aren't guaranteed.

grogenaut•29m ago
Your analogy makes no sense. VHS spawned the entire home video market, which went through multiple quality upgrades well beyond Beta. It would only make sense if, in 2025, we were still using VHS everywhere and the current state of the art for LLMs were all there ever will be.

Ukv•40m ago
I'm not convinced that "it's just a bag of words" would do much to sway someone who is overestimating an LLM's abilities. It feels so abstract and disconnected from what their experience of using the LLM will be that it'll just sound obviously mistaken.