Understanding Moravec's Paradox

https://hexhowells.com/posts/moravecs-paradox.html
23•hexhowells•5mo ago

Comments

cwmoore•5mo ago
Put all the bad robots in jail, for UBI.

EDIT: someone has to order the license plates

joe_the_user•5mo ago
Any attention on Moravec's paradox is good imo because it is important.

That said, the article starts with several problems.

1) Claims that it isn't a paradox, which is just silly. A paradox is a counter-intuitive result. The result is generally counter-intuitive whatever explanation you give. Zeno's paradox remains a paradox despite calculus essentially explaining it, etc.

2) Calls the article "Understanding Moravec's Paradox" when it should be called "My Explanation of Moravec's Paradox".

3) The author's final explanation seems kind of simplistic: "Human activities just have a large search space". IDK. Machines still struggle with human activities beyond walking, too. I mean, "not enough data" is an explanation for why neural networks can't do a bunch of things. But not all programs are neural networks. One of the things humans are really good at is learning things from a few examples. A serious explanation of Moravec's Paradox would have to explain this as well imo.

cwmoore•5mo ago
Indeed, also ideally, the 2 second rule.

neerajsi•5mo ago
> I mean, "not enough data" is an explanation why neural networks can't do a bunch of things... One of the things humans are really good at is learning things from a few examples

I dispute the search space problem for something like folding clothes. Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together, similar to a sentence or paragraph of text.

We can probably learn things from each other from few examples because we are leaning on a large library of subtasks that we have all learned or which are innate, and the actual novel learning of sequencing and ordering needed to reach the new reward is relatively small.

I expect soon we'll get AIs that have part of their training be unsupervised RL in a physics simulation, if it's not being done already.
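A rough sketch of that "library of subtasks" idea (all the skill and primitive names here are made up purely for illustration): a new skill only needs a short top-level sequence, because everything below it is already known.

```python
# Hypothetical subtask library: each skill is a sequence of known sub-skills;
# anything not in the library is treated as a primitive motor action.
library = {
    "grasp": ["reach", "close_gripper"],
    "flatten": ["grasp", "pull", "release"],
    "half_fold": ["grasp", "lift", "place", "release"],
}

def expand(skill):
    """Recursively expand a skill into its primitive motor actions."""
    steps = library.get(skill)
    if steps is None:          # a primitive: no further decomposition
        return [skill]
    out = []
    for s in steps:
        out.extend(expand(s))
    return out

# Learning the new skill means learning only this 3-step top-level sequence;
# the 14 primitive actions underneath come for free from the library.
library["fold_shirt"] = ["flatten", "half_fold", "half_fold"]
print(expand("fold_shirt"))
```

Under this framing, the "novel learning" is the length of the new top-level entry, not the length of the full primitive sequence.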

hexhowells•5mo ago
> Like a lot of human actions in space, folding clothes and other motor tasks are hierarchical sequences of smaller tasks that are strung together

I disagree: you can model those tasks as hierarchical sequences of smaller tasks. But the terminal goal of folding clothes is to turn a pile of unfolded clothes into a neat pile of folded clothes.

The reason you would break down the task is because getting between those two states with the only reward signal being "the clothes are now folded" takes a lot of steps, and given the possible actions the robot can take, results in a large search space.
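A back-of-the-envelope sketch of that blow-up (the action count is a made-up assumption, just to show the shape of the growth): with |A| possible actions per step and T steps between the two states, a blind search guided only by the terminal reward faces |A|^T candidate sequences.

```python
# Rough illustration of sparse-reward search-space growth.
# Assumes a hypothetical robot with 10 discrete actions per time step.
actions_per_step = 10

for steps_to_goal in (5, 20, 50):
    sequences = actions_per_step ** steps_to_goal
    print(f"{steps_to_goal} steps -> {sequences:.2e} candidate sequences")
```

Even modest task lengths make the space astronomically large, which is why a single "the clothes are now folded" reward signal is such a weak guide.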

hexhowells•5mo ago
The human ability to learn from few examples can be explained with evolution (and thus search). We evolved to be fast learners because it was key to our survival: if you touched fire and felt pain, you had better learn quickly not to touch it again. This learning from reward signals (neurotransmitters) in our brains generalises to pretty much all learning tasks.
joe_the_user•5mo ago
Everything can "be explained by evolution" but such an explanation doesn't tell you how a particular form serves a particular task.
famouswaffles•5mo ago
The point is that to be good at 'learning from a few examples', the architecture of the human brain had to be constructed from an enormous amount of trial-and-error data. This is not something you can just brush off or ignore. 'Not enough data' is a perfectly valid basis for a 'serious' explanation.
qrios•5mo ago
Why is the name "Moravec" spelled correctly twice in this article, but misspelled whenever it appears as link text?
xg15•5mo ago
> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as: tasks that are easy for humans are difficult for machines, and vice versa.

He just states that this description would be incorrect multiple times but never gives a reason why it would be incorrect.

Then he tries to simplify the paradox to a question of degree, e.g. "hard" problems for computers just have a larger search space and require more compute.

But wasn't a big part about the paradox also that we didn't even have insight as how the problems could be solved?

E.g. if you play chess or do math as a human, you're consciously aware of the patterns, strategies and "algorithms" you use - and there is a clear path to formalize them so a computer could recreate them.

However, with vision, walking, "thinking", etc., the processes are entirely subconscious and we get very little information about the "algorithms" by introspection. Additionally, not just the environment and the input data are chaotic and "messy", but so is the goal of what we want to achieve in the first place. If you've ever hand-labeled a classification corpus, you've experienced this firsthand: if the classification criteria were even moderately abstract, labelers would often disagree on how to label individual examples.
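That labeler disagreement can be quantified. A minimal sketch using Cohen's kappa, a standard chance-corrected agreement measure (the two labelers and their "sarcasm" labels are invented examples):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement between two labelers,
    corrected for the agreement expected by chance."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical labelers tagging the same 8 items with an abstract class:
labeler_1 = ["sarcasm", "neutral", "sarcasm", "neutral",
             "sarcasm", "neutral", "neutral", "sarcasm"]
labeler_2 = ["sarcasm", "neutral", "neutral", "neutral",
             "sarcasm", "sarcasm", "neutral", "sarcasm"]
print(cohens_kappa(labeler_1, labeler_2))  # 0.5: only moderate agreement
```

Raw agreement here is 75%, but kappa drops it to 0.5 once chance agreement on a two-class task is accounted for - exactly the fuzziness in the objective that the comment describes.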

Machine learning didn't really solve this problem; it just sort of routed around it and swept it under the rug: instead of trying to formulate a clear objective, just come up with a million examples and have the algorithm guess the objective from the examples.

I think this kind of stuff is meant with "the hard problems are easy and the easy problems are hard".
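A toy version of "guess the objective from examples" (the task and numbers are invented for illustration): rather than writing down what "big" means, hand the learner labeled examples and let it infer a boundary.

```python
# Sketch: specify a concept by examples instead of writing the rule down.
# Hypothetical task: classify numbers as "big" (label 1) or not (label 0)
# without ever stating the definition of "big".
examples = [(1, 0), (2, 0), (3, 0), (10, 1), (12, 1), (15, 1)]

def fit_threshold(data):
    """Infer a boundary: the midpoint between the largest negative
    example and the smallest positive example."""
    lo = max(x for x, y in data if y == 0)
    hi = min(x for x, y in data if y == 1)
    return (lo + hi) / 2

threshold = fit_threshold(examples)
print(threshold)  # 6.5 -- a learned objective nobody ever wrote down
```

The learned threshold stands in for the objective; it was never formulated explicitly, only implied by the examples - which is the trade the comment is pointing at.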

YeGoblynQueenne•5mo ago
>> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as: tasks that are easy for humans are difficult for machines, and vice versa.

From Wikipedia, quoting Hans Moravec:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".[1]

https://en.wikipedia.org/wiki/Moravec's_paradox

Note that Moravec is not saying anything about "much less computation", and he's also not talking about "reasoning", particularly since he was writing in the 1980s, when AI systems excelled at reasoning (they were still predominantly logic-based, not LLMs; then again, that was just a couple of years before the AI winter of the '90s hit and took all that away).

In my opinion the author should have started by quoting Moravec directly instead of paraphrasing so that we know he's really discussing Moravec's saying and not his own, idiosyncratic, interpretation of it.

ASalazarMX•5mo ago
From Wikipedia:

Moravec's paradox is the observation that, as Hans Moravec wrote in 1988, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility". This counterintuitive pattern happens because skills that appear effortless to humans, such as recognizing faces or walking, required millions of years of evolution to develop, while abstract reasoning abilities like mathematics are evolutionarily recent.

If that's the explanation, it's crazy to think what abstract reasoning that had evolved over millions of years would be like: thought with layers upon layers above ours.