
SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
86•valyala•4h ago•16 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•17 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
37•zdw•3d ago•5 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
89•mellosouls•6h ago•172 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
49•surprisetalk•3h ago•52 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
133•valyala•4h ago•102 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
143•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
96•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
850•klaussilveira•23h ago•257 comments

First Proof

https://arxiv.org/abs/2602.05192
66•samasblack•6h ago•51 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1092•xnx•1d ago•618 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•5h ago•10 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
4•mbitsnbites•3d ago•0 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
234•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
516•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•8h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
30•momciloo•4h ago•5 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
337•ColinWright•3h ago•404 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
254•alainrk•8h ago•415 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
183•1vuio0pswjnm7•10h ago•255 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
613•nar001•8h ago•270 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
35•marklit•5d ago•6 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
47•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
124•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
97•speckx•4d ago•111 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•117 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
287•isitcontent•1d ago•38 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
32•sandGorgon•2d ago•15 comments

Understanding Moravec's Paradox

https://hexhowells.com/posts/moravecs-paradox.html
23•hexhowells•5mo ago

Comments

cwmoore•5mo ago
Put all the bad robots in jail, for UBI.

EDIT: someone has to order the license plates

joe_the_user•5mo ago
Any attention on Moravec's paradox is good imo because it is important.

That said, the article starts with several problems.

1) Claims that it isn't a paradox, which is just silly. A paradox is a counter-intuitive result. The result is generally counter-intuitive whatever explanation you give. Zeno's paradox remains a paradox despite calculus essentially explaining it, etc.

2) Calls the article "Understanding Moravec's Paradox" when it should be called "My Explanation of Moravec's Paradox".

3) The author's final explanation seems kind of simplistic: "Human activities just have a large search space". IDK. Human skill shows up in things other than walking, too. I mean, "not enough data" is an explanation why neural networks can't do a bunch of things. But not all programs are neural networks. One of the things humans are really good at is learning things from a few examples. A serious explanation of Moravec's Paradox would have to explain this as well imo.

cwmoore•5mo ago
Indeed, also ideally, the 2 second rule.

neerajsi•5mo ago
> I mean, "not enough data" is an explanation why neural networks can't do a bunch of things... One of the things humans are really good at is learning things from a few examples

I dispute the search space problem for something like folding clothes. Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together, similar to a sentence or paragraph of text.

We can probably learn from each other from a few examples because we are leaning on a large library of subtasks that we have all learned or which are innate, so the actual novel learning of sequencing and ordering needed to reach the new reward is relatively small.

I expect soon we'll get AIs that have part of their training be unsupervised rl in a physics simulation, if it's not being done already.

hexhowells•5mo ago
> Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together

I disagree; you can model those tasks as hierarchical sequences of smaller tasks, but the terminal goal of folding clothes is to turn a pile of unfolded clothes into a neat pile of folded clothes.

The reason you would break down the task is that getting between those two states takes a lot of steps when the only reward signal is "the clothes are now folded", and, given the possible actions the robot can take, that results in a large search space.
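
The "large search space" point can be made concrete with some back-of-the-envelope arithmetic. Everything below is invented for illustration; the branching factors, horizons, and subtask names are not from the article or the thread:

```python
# Back-of-the-envelope search-space sizes for a flat vs. hierarchical
# formulation of a long-horizon task. All numbers are made up.

def search_space(branching_factor: int, horizon: int) -> int:
    """Number of action sequences a naive search would have to consider."""
    return branching_factor ** horizon

# Flat search: say ~20 primitive motor actions and ~50 steps to fold a shirt.
flat = search_space(20, 50)

# Hierarchical: choose among ~5 already-learned subtasks ("grasp", "flatten",
# "fold in half", ...), roughly 6 of them in sequence.
hierarchical = search_space(5, 6)

print(hierarchical)        # 15625
print(flat > 10 ** 60)     # True: the flat space is astronomically larger
```

Naive search over primitive actions blows up exponentially in the horizon; decomposing the task into a handful of already-solved subtasks collapses both the base and the exponent, which is one way to read the comment above.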

hexhowells•5mo ago
The human ability to learn from few examples can be explained with evolution (and thus search). We evolved to be fast learners as it was key to our survival. If you touched fire and felt pain, you better learn quickly not to keep touching it. This learning from reward signals (neurotransmitters) in our brain generalises to pretty much all learning tasks

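
A minimal toy of that "touch fire once, learn forever" idea, sketched as a single tabular value update with a large learning rate. The actions, reward value, and update rule are all invented for illustration; this is not a model of the brain:

```python
# Toy one-shot "learning from a reward signal": a single strongly negative
# reward is enough to flip the preferred action.

values = {"touch_fire": 0.0, "avoid_fire": 0.0}

def update(action: str, reward: float, lr: float = 1.0) -> None:
    """Move the action's estimated value toward the observed reward."""
    values[action] += lr * (reward - values[action])

def best_action() -> str:
    """Greedy choice: the action with the highest estimated value."""
    return max(values, key=values.get)

update("touch_fire", -100.0)  # one painful experience
print(best_action())          # avoid_fire
```

With the learning rate at 1.0 a single sample overwrites the estimate entirely, which is the fast-learner extreme; a small learning rate would instead need many painful samples.
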
joe_the_user•5mo ago
Everything can "be explained by evolution", but such an explanation doesn't tell you how a particular form serves a particular task.

famouswaffles•5mo ago
The point is that to be good at 'learning from a few examples', the architecture of the human brain had to be constructed from an enormous amount of trial-and-error data. This is not something you can just brush off or ignore. 'Not enough data' is a perfectly valid part of a 'serious' explanation.

qrios•5mo ago
Why is the name "Moravec" spelled correctly twice in this article, but misspelled when it appears as link text?

xg15•5mo ago
> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and visa versa.

He states multiple times that this description is incorrect, but never gives a reason why.

Then he tries to simplify the paradox to a question of degree, e.g. "hard" problems for computers just have a larger search space and require more compute.

But wasn't a big part about the paradox also that we didn't even have insight as how the problems could be solved?

E.g. if you play chess or do math as a human, you're consciously aware of the patterns, strategies and "algorithms" you use - and there is a clear path to formalizing them so a computer could recreate them.

However, with vision, walking, "thinking", etc., the processes are entirely subconscious and we get very little information about the "algorithms" by introspection. Additionally, not only are the environment and the input data chaotic and "messy", but so is the goal of what we want to achieve in the first place. If you've ever hand-labeled a classification corpus, you've experienced this firsthand: if the classification criteria were even moderately abstract, labelers would often disagree on how to label individual examples.

Machine learning didn't really solve this problem, it just sort of routed around it and swept it under the rug: instead of trying to formulate a clear objective, just come up with a million examples and have the algorithm guess the objective from the examples.
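
That "guess the objective from the examples" move can be sketched in a few lines: instead of hand-writing rules for a fuzzy category, fit a trivial classifier from labeled points. The data, labels, and the choice of a nearest-centroid classifier here are all invented for illustration:

```python
# Toy "learn the objective from examples": nearest-centroid classification
# in pure Python. No explicit rules for either category are ever written;
# the objective is implied by the labeled examples alone.

def centroid(points):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def fit(examples):
    """examples: dict mapping label -> list of feature tuples."""
    return {label: centroid(pts) for label, pts in examples.items()}

def predict(model, x):
    """Return the label whose centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

model = fit({
    "cat": [(1.0, 1.0), (1.2, 0.9)],
    "dog": [(4.0, 4.0), (3.8, 4.2)],
})
print(predict(model, (1.1, 1.0)))  # cat
```

The classifier never sees a definition of "cat"; it only interpolates between examples, which is exactly the route-around being described.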

I think this kind of stuff is meant with "the hard problems are easy and the easy problems are hard".

YeGoblynQueenne•5mo ago
>> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and visa versa.

From Wikipedia, quoting Hans Moravec:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

https://en.wikipedia.org/wiki/Moravec's_paradox

Note that Moravec is not saying anything about "much less computation", and he's also not talking about "reasoning", particularly since he was writing in the 1980s, when AI systems excelled at reasoning (because they were still predominantly logic-based, not LLMs; then again, that was just a couple of years before the AI winter of the '90s hit and took all that away).

In my opinion the author should have started by quoting Moravec directly instead of paraphrasing, so that we know he's really discussing Moravec's actual statement and not his own idiosyncratic interpretation of it.

ASalazarMX•5mo ago
From Wikipedia:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility". This counterintuitive pattern happens because skills that appear effortless to humans, such as recognizing faces or walking, required millions of years of evolution to develop, while abstract reasoning abilities like mathematics are evolutionarily recent.

If that's the explanation, it's crazy to think what abstract reasoning evolved through millions of years would be like. Thought with layers upon layers above ours.