frontpage.

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
1•DEntisT_•1m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
1•tosh•2m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•2m ago•1 comment

The Path to Mojo 1.0

https://www.modular.com/blog/the-path-to-mojo-1-0
1•tosh•5m ago•0 comments

Show HN: I'm 75, building an OSS Virtual Protest Protocol for digital activism

https://github.com/voice-of-japan/Virtual-Protest-Protocol/blob/main/README.md
4•sakanakana00•8m ago•0 comments

Show HN: I built Divvy to split restaurant bills from a photo

https://divvyai.app/
3•pieterdy•10m ago•0 comments

Hot Reloading in Rust? Subsecond and Dioxus to the Rescue

https://codethoughts.io/posts/2026-02-07-rust-hot-reloading/
3•Tehnix•11m ago•1 comment

Skim – vibe review your PRs

https://github.com/Haizzz/skim
2•haizzz•13m ago•1 comment

Show HN: Open-source AI assistant for interview reasoning

https://github.com/evinjohnn/natively-cluely-ai-assistant
3•Nive11•13m ago•4 comments

Tech Edge: A Living Playbook for America's Technology Long Game

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2026-01/260120_EST_Tech_Edge_0.pdf?Version...
2•hunglee2•16m ago•0 comments

Golden Cross vs. Death Cross: Crypto Trading Guide

https://chartscout.io/golden-cross-vs-death-cross-crypto-trading-guide
2•chartscout•19m ago•0 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
3•AlexeyBrin•22m ago•0 comments

What the longevity experts don't tell you

https://machielreyneke.com/blog/longevity-lessons/
2•machielrey•23m ago•1 comment

Monzo wrongly denied refunds to fraud and scam victims

https://www.theguardian.com/money/2026/feb/07/monzo-natwest-hsbc-refunds-fraud-scam-fos-ombudsman
3•tablets•28m ago•1 comment

They were drawn to Korea with dreams of K-pop stardom – but then let down

https://www.bbc.com/news/articles/cvgnq9rwyqno
2•breve•30m ago•0 comments

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•33m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
2•pastage•33m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
2•billiob•33m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
2•birdculture•39m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•45m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•46m ago•1 comment

Slop News - The Front Page right now but it's only Slop

https://slop-news.pages.dev/slop-news
1•keepamovin•50m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•53m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
4•tosh•58m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
4•oxxoxoxooo•1h ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•1h ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
4•goranmoomin•1h ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

4•throwaw12•1h ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
3•senekor•1h ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
2•myk-e•1h ago•0 comments

Predictions from the METR AI scaling graph are based on a flawed premise

https://garymarcus.substack.com/p/the-latest-ai-scaling-graph-and-why
50•nsoonhui•9mo ago

Comments

Nivge•9mo ago
TL;DR: the benchmark depends on its specific dataset, and it isn't a perfect way to evaluate AI progress. That doesn't mean it's meaningless or without value.
hatefulmoron•9mo ago
I had assumed that the Y axis corresponded to some measure of the LLM's ability to actually work on a task in a loop while making progress. In other words, I thought it meant something like "you can leave Sonnet 3.7 alone for a whole hour and it will make meaningful progress on a problem", but the reality is less impressive. Serves me right for not reading the fine print.
dist-epoch•9mo ago
> Abject failure on a task that many adults could solve in a minute

Maybe the author should check, before pressing "Publish", whether the info in the post is already outdated.

ChatGPT passed the image generation test mentioned: https://chatgpt.com/share/68171e2a-5334-8006-8d6e-dd693f2cec...

frotaur•9mo ago
Even setting aside that this image is just an illustration and not the main point of the article: in the chat you posted, ChatGPT actually failed again, because the r's are not circled.
comex•9mo ago
That's true, but it illustrates a point about 'jagged intelligence'. Just as there's a tendency to cherry-pick the tasks AI is best at and equate success on them with general intelligence, there's a counter-tendency to cherry-pick the tasks AI is worst at and equate failure on them with a general lack of intelligence.

This case is especially egregious because there were probably two different models involved. I assume Marcus's images came from some AI service that followed what until very recently was the standard pattern: you ask an LLM to generate an image; the LLM fluffs out your text, then passes it to a completely separate diffusion-based image-generation model, which has only a rudimentary understanding of English grammar. So of course his request for "words and nothing else" was ignored. This is a real limitation of the image-generation model, but it has no relevance to the strengths and weaknesses of the LLM itself. And 'AI will replace humans' scenarios typically focus on text-based tasks that use the LLM itself.
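
To make the hand-off concrete, here is a minimal, entirely hypothetical sketch of that two-stage pattern. None of these function names correspond to any real vendor API; the stubs only show where the meta-instruction gets dropped:

    def llm_expand(prompt: str) -> str:
        # Stage 1: the LLM "fluffs out" the user's text into a scene description.
        return f"A vibrant, detailed illustration of {prompt}"

    def diffusion_render(description: str) -> bytes:
        # Stage 2: a separate diffusion model renders the description. It keys
        # on content words ("fruit"), so meta-instructions like "words and
        # nothing else" survive stage 1 but are effectively ignored here.
        return f"<image: {description}>".encode()

    image = diffusion_render(llm_expand("fruit names, words and nothing else"))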

Arguably AI services are responsible for encouraging users to think of what are really two separate models (LLM and image generation) as a single 'AI'. But Marcus should know better.

And so it's not surprising that ChatGPT was able to produce dramatically better results now that it has "native" image generation, which supposedly uses the native multimodal capabilities of the LLM (though rumors are that that description is an oversimplification). The results are still not correct. But it's a major advancement that the model now respects grammar; it no longer just spots the word "fruit" and generates a picture of fruit. Illustration or no, Marcus is misrepresenting the state of the art by not including this advancement.

If Marcus had used a recent ChatGPT output instead, the comparison would be more fair, but still somewhat misleading. Even with native capabilities, LLMs are simply worse at both understanding and generating images than they are at understanding and generating text. But again, text capability matters much more. And you can't just assume that a model's poor performance on images will correlate with poor performance on text.

The thing is, I tend to agree with the substance of Marcus's post, including the part where portrayals of current AI capabilities are suspect because they don't pass the 'sniff test', or in other words, because they don't take into account how LLMs continue to fall down on some very basic tasks. I just think the proper tasks for this evaluation should be text-based. I'd say the original "count the number of 'r's in strawberry" task is a decent example, even if it's been patched, because it really showcases the 'confidently wrong' issue that continues to plague LLMs.

croes•9mo ago
So OpenAI fixed that, but the next simple task on which AI fails is just around the corner.

The problem is that AI doesn't think, and if a task is totally new it doesn't produce the correct answer.

https://news.ycombinator.com/item?id=43800686

yorwba•9mo ago
> you could probably put together one reasonable collection of word counting and question answering tasks with average human time of 30 seconds and another collection with an average human time of 20 minutes where GPT-4 would hit 50% accuracy on each.

So do this and pick the one where humans do best. I doubt that doing so would show all progress to be illusory.

But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.
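
For concreteness, here's a minimal sketch of what a 50%-horizon figure like METR's boils down to, assuming the usual approach of fitting a logistic curve of success probability against log task length and solving for where it crosses 0.5. The data and code are illustrative, not METR's actual pipeline:

    import math

    # Hypothetical (task_length_minutes, succeeded) results for one model.
    results = [(0.5, 1), (1, 1), (2, 1), (4, 1), (8, 1), (8, 0),
               (15, 0), (30, 1), (30, 0), (60, 0), (120, 0)]

    def horizon_50(results, steps=20000, lr=0.5):
        """Fit p(success) = sigmoid(a + b*log t) by gradient ascent on the
        log-likelihood, then solve p = 0.5, i.e. t = exp(-a/b)."""
        a = b = 0.0
        for _ in range(steps):
            ga = gb = 0.0
            for t, y in results:
                x = math.log(t)
                p = 1 / (1 + math.exp(-(a + b * x)))
                ga += y - p        # gradient w.r.t. intercept a
                gb += (y - p) * x  # gradient w.r.t. slope b
            a += lr * ga / len(results)
            b += lr * gb / len(results)
        return math.exp(-a / b)

    print(f"50%-success horizon: {horizon_50(results):.1f} minutes")

The point of contention is visible right in the fit: swap in a different task collection and exp(-a/b) moves, which is exactly why the choice of dataset matters.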

xg15•9mo ago
> But it would certainly be interesting to know what the easiest thing is that a human can do but current AIs struggle with.

Still "Count the R's" apparently.
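
For the record, the ground truth is a one-liner; the interest of the task is precisely that counting characters is trivial in code but hard for a model that sees tokens rather than letters:

    >>> "strawberry".count("r")
    3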

K0balt•9mo ago
The problem, really, is human cognitive dissonance. We draw the false conclusion that competence at some tasks implies competence at others. It's not a universal human problem: we intuit that a front-end loader, just because it can dig really well, is not therefore good at all other tasks. But when it comes to cognition, our models break down quickly.

I suspect this is because our proxies are predicated on a task set that inherently includes the physical world, which at some level connects all tasks and creates links between capabilities that generally pervade our environment. LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

This will probably gradually change with robotics, as the competencies required to exist and function in the physical world will (I postulate) generalize to other tasks in such a way that it more closely matches the pattern that our assumptions are based on.

Of course, if we segregate intelligence into isolated modules for motility and cognition, this will not be the case, as we will not be taking advantage of that generalization. I think that would be a big mistake, especially in light of the hypothesis that the massive leap in LLM capabilities came more from training on things we weren't specifically trying to achieve: the bulk of seemingly irrelevant data that turned simple language processing into reasoning and world modeling.

the8472•9mo ago
> LLMs do not exist in this physical world, and are therefore not within the set of things that can be reasoned about with those proxies.

Perhaps not the mainstream models, but DeepMind has been working on robotics models with simulated and physical RL for years: https://deepmind.google/discover/blog/rt-2-new-model-transla...

mentalgear•9mo ago
What you are describing is world models and physical AI, which have recently become much more mainstream after the recent Nvidia GTC.
AIPedant•9mo ago
Dogs can pass a dog-appropriate variant of this test: https://xcancel.com/SpencerKSchiff/status/191010636820533676... (the dog version uses a treat on one string and junk on the other; the dog has to pull the correct string to get the treat)

This was before o3, but another tweet I saw (I don't have the link) suggests o3 is also completely incapable of getting it.

Sharlin•9mo ago
> Unfortunately, literally none of the tweets we saw even considered the possibility that a problematic graph specific to software tasks might not generalize to literally all other aspects of cognition.

Why am I not surprised?

aoeusnth1•9mo ago
This post is a very weak and incoherent criticism of a well-formulated benchmark: the task-length bucket at which a model succeeds 50% of the time.

Gary says: this is just the task length that the models were able to solve in THIS dataset. What about other tasks?

Yeah, obviously. The point is that models are improving on these tasks in a predictable fashion. If you care about software, you should care how good AI is at software.

Gary says: task length is a bad metric. What about a bunch of other factors of difficulty which might not factor into task length?

Task length is a pretty good proxy for difficulty; that's why people estimate bugs in days. Of course many factors contribute to such an estimate, but averaged over many tasks, time is a great metric for difficulty.

Finally, Gary just ignores that, despite his view that the metric makes no sense and is meaningless, it has extremely strong predictive value. This should give you pause: how can an arbitrary metric with no connection to the true difficulty of a task, and no real way to compare its validity across tasks or across task-takers, produce such a retrospectively smooth curve and so closely predict the recent data points from Sonnet and o3? Something IS going on there, which cannot fit into Gary's ~spin~ narrative that nothing ever happens.
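
To make "extremely strong predictive value" concrete: a toy version of the extrapolation fits log2(horizon) against model release date and reads off a doubling time. The points below are invented stand-ins, not METR's published per-model horizons:

    import math

    # Hypothetical (years since 2019, 50%-horizon in minutes) per model.
    points = [(0.0, 0.1), (1.5, 0.5), (3.0, 2.0), (4.5, 8.0), (6.0, 30.0)]

    # Least-squares line through log2(horizon) vs. time.
    n = len(points)
    mx = sum(t for t, _ in points) / n
    my = sum(math.log2(h) for _, h in points) / n
    slope = (sum((t - mx) * (math.log2(h) - my) for t, h in points)
             / sum((t - mx) ** 2 for t, _ in points))  # doublings per year
    intercept = my - slope * mx

    print(f"doubling time: {12 / slope:.1f} months")
    t = 8.0  # two years past the last data point
    print(f"projected 50% horizon at t={t:.0f}: {2 ** (intercept + slope * t):.0f} minutes")

Whether that smooth line means anything outside this task distribution is exactly the disagreement.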

sandspar•9mo ago
Gary Marcus could save everyone lots of time: he could just write a post called "Here's today's opinion" and leave the body text blank. He's so predictable that everyone already knows his conclusions anyway.