frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
450•klaussilveira•6h ago•109 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
791•xnx•12h ago•481 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
152•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
143•dmpetrov•7h ago•63 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
19•matheusalmeida•1d ago•0 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
46•quibono•4d ago•4 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
84•jnord•3d ago•8 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
257•vecti•8h ago•120 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
191•eljojo•9h ago•127 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
320•aktau•13h ago•155 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
317•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
403•todsacerdoti•14h ago•218 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
328•lstoll•13h ago•236 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
19•kmm•4d ago•1 comment

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
50•phreda4•6h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
110•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
189•i5heu•9h ago•132 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
149•limoce•3d ago•79 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
240•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
985•cdrnsf•16h ago•417 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
21•gfortaine•4h ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
43•rescrv•14h ago•17 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
58•ray__•3h ago•14 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
36•lebovic•1d ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
40•nwparker•1d ago•10 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
20•MarlonPro•3d ago•4 comments

How virtual textures work

https://www.shlom.dev/articles/how-virtual-textures-really-work/
28•betamark•13h ago•23 comments

Predictions from the METR AI scaling graph are based on a flawed premise

https://garymarcus.substack.com/p/the-latest-ai-scaling-graph-and-why
50•nsoonhui•9mo ago

Comments

Nivge•9mo ago
TL;DR: the benchmark depends on its specific dataset and isn't a perfect way to evaluate AI progress. That doesn't mean it doesn't make sense or doesn't have value.
hatefulmoron•9mo ago
I had assumed that the Y axis corresponded to some measurement of the LLM's ability to actually work/mull over a task in a loop while making progress. In other words, I thought it meant something like "you can leave Sonnet 3.7 for a whole hour and it will meaningfully progress on a problem", but the reality is less impressive. Serves me right for not looking at the fine print.
dist-epoch•9mo ago
> Abject failure on a task that many adults could solve in a minute

Maybe the author should check whether the info in the post is already outdated before pressing "Publish".

ChatGPT passed the image generation test mentioned: https://chatgpt.com/share/68171e2a-5334-8006-8d6e-dd693f2cec...

frotaur•9mo ago
Even setting aside that this image is simply an illustration and really not the main point of the article: in the chat you posted, ChatGPT actually failed again, because the r's are not circled.
comex•9mo ago
That's true, but it illustrates a point about 'jagged intelligence'. Just like there's a tendency to cherry-pick the tasks AI is best at and equate it with general intelligence, there's a counter-tendency to cherry-pick the tasks AI is worst at and equate it with a general lack of intelligence.

This case is especially egregious because of how there were probably two different models involved. I assume Marcus' images came from some AI service that followed what until very recently was the standard pattern: you ask an LLM to generate an image; the LLM goes and fluffs out your text, then passes it to a completely separate diffusion-based image generation model, which has only a rudimentary understanding of English grammar. So of course his request for "words and nothing else" was ignored. This is a real limitation of the image generation model, but that has no relevance to the strengths and weaknesses of the LLM itself. And 'AI will replace humans' scenarios typically focus on text-based tasks that use the LLM itself.
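A minimal sketch of that two-model handoff (the function names and stub bodies below are invented purely for illustration, not any actual service's API):

    # Sketch only: stubs stand in for the real LLM and diffusion services.
    def llm_expand_prompt(user_prompt: str) -> str:
        # Stage 1: the LLM "fluffs out" the request into a richer caption.
        # Meta-instructions like "words and nothing else" survive only as
        # caption text, not as constraints the next stage can enforce.
        return f"A detailed, high-quality picture of: {user_prompt}"

    def diffusion_render(caption: str) -> bytes:
        # Stage 2: a separate diffusion model renders the caption; it has
        # only a rudimentary grasp of grammar, so negations and
        # meta-instructions in the caption are frequently ignored.
        return b"<generated image bytes>"  # placeholder for pixels

    def generate_image(user_prompt: str) -> bytes:
        return diffusion_render(llm_expand_prompt(user_prompt))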

Arguably AI services are responsible for encouraging users to think of what are really two separate models (LLM and image generation) as a single 'AI'. But Marcus should know better.

And so it's not surprising that ChatGPT was able to produce dramatically better results now that it has "native" image generation, which supposedly uses the native multimodal capabilities of the LLM (though rumors are that that description is an oversimplification). The results are still not correct. But it's a major advancement that the model now respects grammar; it no longer just spots the word "fruit" and generates a picture of fruit. Illustration or no, Marcus is misrepresenting the state of the art by not including this advancement.

If Marcus had used a recent ChatGPT output instead, the comparison would be more fair, but still somewhat misleading. Even with native capabilities, LLMs are simply worse at both understanding and generating images than they are at understanding and generating text. But again, text capability matters much more. And you can't just assume that a model's poor performance on images will correlate with poor performance on text.

The thing is, I tend to agree with the substance of Marcus's post, including the part where portrayals of current AI capabilities are suspect because they don't pass the 'sniff test', or in other words, because they don't take into account how LLMs continue to fall down on some very basic tasks. I just think the proper tasks for this evaluation should be text-based. I'd say the original "count the number of 'r's in strawberry" task is a decent example, even if it's been patched, because it really showcases the 'confidently wrong' issue that continues to plague LLMs.

croes•9mo ago
So OpenAI fixed that, but the next simple task on which AI fails is just around the corner.

The problem is AI doesn’t think, and if a task is totally new it doesn’t produce the correct answer.

https://news.ycombinator.com/item?id=43800686

yorwba•9mo ago
> you could probably put together one reasonable collection of word counting and question answering tasks with average human time of 30 seconds and another collection with an average human time of 20 minutes where GPT-4 would hit 50% accuracy on each.

So do this and pick the one where humans do best. I doubt that doing so would show all progress to be illusory.

But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.

xg15•9mo ago
> But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.

Still "Count the R's" apparently.

K0balt•9mo ago
The problem, really, is human cognitive dissonance. We draw false conclusions that competence at some tasks implies competence at others. It’s not a universal human problem: we intuit that a front-end loader, just because it can dig really well, is not therefore good at all other tasks. But when it comes down to cognition, our models break down quickly.

I suspect this is because our proxies are predicated on a task set that inherently includes the physical world, which at some level connects all tasks and creates links between capabilities that generally pervade our environment. LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

This will probably gradually change with robotics, as the competencies required to exist and function in the physical world will (I postulate) generalize to other tasks in such a way that it more closely matches the pattern that our assumptions are based on.

Of course, if we segregate intelligence into isolated modules for motility and cognition, this will not be the case, as we will not be taking advantage of that generalization. I think that would be a big mistake, especially in light of the hypothesis that the massive leap in capabilities of LLMs came more from training on things we weren’t specifically trying to achieve: the bulk of seemingly irrelevant data that unlocked simple language processing into reasoning and world modeling.

the8472•9mo ago
> LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

Perhaps not the mainstream models, but deepmind has been working on robotics models with simulated and physical RL for years https://deepmind.google/discover/blog/rt-2-new-model-transla...

mentalgear•9mo ago
What you are describing are world models and physical AI, which have recently become much more mainstream after the recent Nvidia GTC.
AIPedant•9mo ago
Dogs can pass a dog-appropriate variant of this test: https://xcancel.com/SpencerKSchiff/status/191010636820533676... (the dog test uses a treat on one string and junk on the other; the dog has to pull the correct string to get the treat)

This was before o3, but another tweet I saw (don't have the link) suggests it's also completely incapable of getting it.

Sharlin•9mo ago
> Unfortunately, literally none of the tweets we saw even considered the possibility that a problematic graph specific to software tasks might not generalize to literally all other aspects of cognition.

How am I not surprised?

aoeusnth1•9mo ago
This post is a very weak and incoherent criticism of a well-formulated benchmark: the task-length bucket for which a model succeeds 50% of the time.

- Gary says: This is just the task length that the models were able to solve in THIS dataset. What about other tasks?

Yeah, obviously. The point is that models are improving on these tasks in a predictable fashion. If you care about software, you should care how good AI is at software.

- Gary says: Task length is a bad metric. What about a bunch of other factors of difficulty which might not factor into task length?

Task length is a pretty good proxy for difficulty; that's why people grade a bug in days. Of course many factors contribute to this estimate, but averaged over many tasks, time is a great metric for difficulty.

Finally, Gary just ignores that, despite his perspective that the metric makes no sense and is meaningless, it has extremely strong predictive value. This should give you pause: how can an arbitrary metric with no connection to the true difficulty of a task, and no real way of comparing its validity as a measure of difficulty across tasks or across task-takers, result in such a retrospectively smooth curve, and so closely predict the recent data points from Sonnet and o3? Something IS going on there, which cannot fit into Gary's ~spin~ narrative that nothing ever happens.
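For concreteness, here is a rough sketch of how such a 50%-success time horizon could be computed: fit a logistic curve of success probability against log task length and solve for the crossover point. The records below are fabricated purely for illustration, and METR's actual methodology differs in its details.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Fabricated (human_minutes, model_succeeded) records, illustration only.
    records = [(1, 1), (2, 1), (4, 1), (8, 1), (15, 1), (15, 0),
               (30, 1), (30, 0), (60, 0), (120, 0), (240, 0)]

    X = np.log([[minutes] for minutes, _ in records])  # log task length
    y = [succeeded for _, succeeded in records]

    clf = LogisticRegression().fit(X, y)
    w, b = clf.coef_[0][0], clf.intercept_[0]

    # P(success) = 0.5 where w * log(minutes) + b = 0.
    horizon = np.exp(-b / w)
    print(f"50%-success time horizon: ~{horizon:.0f} human-minutes")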

sandspar•9mo ago
Gary Marcus could save himself lots of time. He just has to write a post called "Here's today's opinion." Because he's so predictable, he could just leave the body text blank. Everyone knows his conclusions anyways. This way he could save himself and his readers lots of time.