
Humans Learn to Read/Write from a Few Books but LLMs Require Thousands: why?

5•giardini•7mo ago

Comments

tolerance•7mo ago
This is a great question.

My remedial guess is that the human mind is more efficient at the kind of pattern recognition that LLMs excel at in their own right.

We can do a lot more with less data, exert less effort and come to a reasonably accurate conclusion.

LLMs can artificially reason, but it requires intricate software that took decades to develop to the standard it's reached now, computers that drain the earth's resources at a hair-raising scale, and, as you've mentioned, a lot of data. A lot of data. Apparently the entire internet and then some, on a carousel.

Intelligence is an innate faculty of man and man's measure of intelligence generally doesn't require that much, depending on what's expected of the man throughout the course of his life.

Because AI is a technology, the expectations we place on it are far higher.

A manuscript with a few errors, blotches, misspellings, omissions, or what have you is excused. If your printer does the same thing once every four or five jobs, it's defective.

fasthands9•7mo ago
I think this is mostly right, but also I'm not sure I agree completely with the premise. Humans have years of conversations they've heard before they attempt to read or write. They already have a concept of what a 'dog' is before they see the word, and know what it is likely to do. Not the same with something that only sees text.
tolerance•7mo ago
I agree with you 100% and I'm not sure if it contradicts my point that humans have a natural advantage over LLMs in the way I tried to illustrate.

My initial comment was going to make an abstract reference to how human beings are pretty much wired for reasoning from the time that they're being breastfed, or at least reared in the clutch of their mother. It has something to do with the impression I've picked up of how the inheritance of a language, and subsequently literacy, starts with your mom—in ideal cases.

I don't know if this is a strike against humans in the whole argument for efficiency. But I don't think it is.

Computers don't have Moms. Go Moms.

techpineapple•7mo ago
Yeah, one thing I've wondered (and maybe they do this already) is whether there are ways to cross-encode different kinds of data: words, yes, but auditory and visual data too. The algorithms to do this might be complicated (or incomprehensible), but a lot of creativity surely comes from the interrelationship between senses. Combine that with emotion as well, and I imagine it partially comes down to the fact that our writing ability isn't limited to the collection of what we've read.

Then maybe the other thing is that rules and relationships must be encoded in a special way. In LLMs I assume rules are emergent, but maybe we have a specific rules engine that gets trained based on the emotional salience of what we read/hear.

Maybe another reason is what's encoded in our DNA, which might mean our brain structure is fundamentally designed for some of this stuff.
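For what it's worth, the cross-encoding idea above is roughly what contrastive multimodal models (CLIP-style) do: project features from different senses into one shared space and train matching pairs to be similar. A minimal toy sketch, where the dimensions and projection matrices are made-up placeholders rather than any real model's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors from two different "senses" (hypothetical sizes).
text_feat = rng.normal(size=512)    # e.g. output of a text encoder
image_feat = rng.normal(size=768)   # e.g. output of a vision encoder

# In a real model these projections are learned; here they are random stand-ins.
W_text = rng.normal(size=(256, 512))
W_image = rng.normal(size=(256, 768))

def to_shared(w, feat):
    """Project a modality-specific feature into the shared space and unit-normalize."""
    v = w @ feat
    return v / np.linalg.norm(v)

t = to_shared(W_text, text_feat)
i = to_shared(W_image, image_feat)

# Cosine similarity in the shared space; training would push
# matching text/image pairs toward 1.0 and mismatched pairs apart.
similarity = float(t @ i)
print(similarity)
```

Once everything lives in one space, "words" and "pictures of dogs" become comparable vectors, which is one concrete reading of the interrelationship-between-senses point.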

NoahZuniga•7mo ago
Humans have tons of "pretraining" encoded in their DNA.
JohnFen•7mo ago
My guess is that it's because humans are intelligent. What I mean by that is that humans are actually understanding what they're reading. If you understand what the words you're reading mean, that makes it easier to read the same words in other contexts.