frontpage.

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•25s ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•28s ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•53s ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•4m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•4m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
1•valyala•5m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•6m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•7m ago•0 comments

New wave of GLP-1 drugs is coming, and they're stronger than Wegovy and Zepbound

https://www.scientificamerican.com/article/new-glp-1-weight-loss-drugs-are-coming-and-theyre-stro...
3•randycupertino•9m ago•0 comments

Convert tempo (BPM) to millisecond durations for musical note subdivisions

https://brylie.music/apps/bpm-calculator/
1•brylie•11m ago•0 comments
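The conversion behind this calculator follows from one fact: at a given tempo, one beat (a quarter note) lasts 60,000 / BPM milliseconds, and subdivisions scale from there. A minimal sketch (hypothetical, not the linked app's code):

```python
def bpm_to_ms(bpm: float) -> dict[str, float]:
    """Return millisecond durations for common note subdivisions at a tempo."""
    quarter = 60_000 / bpm  # one beat (quarter note) in ms
    return {
        "whole": quarter * 4,
        "half": quarter * 2,
        "quarter": quarter,
        "eighth": quarter / 2,
        "sixteenth": quarter / 4,
    }

# At 120 BPM a quarter note is 500 ms, an eighth note 250 ms.
print(bpm_to_ms(120))
```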

Show HN: Tasty A.F.

https://tastyaf.recipes/about
1•adammfrank•12m ago•0 comments

The Contagious Taste of Cancer

https://www.historytoday.com/archive/history-matters/contagious-taste-cancer
1•Thevet•13m ago•0 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
1•alephnerd•13m ago•0 comments

Bithumb mistakenly hands out $195M in Bitcoin to users in 'Random Box' giveaway

https://koreajoongangdaily.joins.com/news/2026-02-07/business/finance/Crypto-exchange-Bithumb-mis...
1•giuliomagnifico•13m ago•0 comments

Beyond Agentic Coding

https://haskellforall.com/2026/02/beyond-agentic-coding
3•todsacerdoti•15m ago•0 comments

OpenClaw ClawHub Broken Windows Theory – If basic sorting isn't working what is?

https://www.loom.com/embed/e26a750c0c754312b032e2290630853d
1•kaicianflone•17m ago•0 comments

OpenBSD Copyright Policy

https://www.openbsd.org/policy.html
1•Panino•18m ago•0 comments

OpenClaw Creator: Why 80% of Apps Will Disappear

https://www.youtube.com/watch?v=4uzGDAoNOZc
2•schwentkerr•21m ago•0 comments

What Happens When Technical Debt Vanishes?

https://ieeexplore.ieee.org/document/11316905
2•blenderob•23m ago•0 comments

AI Is Finally Eating Software's Total Market: Here's What's Next

https://vinvashishta.substack.com/p/ai-is-finally-eating-softwares-total
3•gmays•23m ago•0 comments

Computer Science from the Bottom Up

https://www.bottomupcs.com/
2•gurjeet•24m ago•0 comments

Show HN: A toy compiler I built in high school (runs in browser)

https://vire-lang.web.app
1•xeouz•25m ago•1 comment

You don't need Mac mini to run OpenClaw

https://runclaw.sh
1•rutagandasalim•26m ago•0 comments

Learning to Reason in 13 Parameters

https://arxiv.org/abs/2602.04118
2•nicholascarolan•28m ago•0 comments

Convergent Discovery of Critical Phenomena Mathematics Across Disciplines

https://arxiv.org/abs/2601.22389
1•energyscholar•28m ago•1 comment

Ask HN: Will GPU and RAM prices ever go down?

1•alentred•28m ago•2 comments

From hunger to luxury: The story behind the most expensive rice (2025)

https://www.cnn.com/travel/japan-expensive-rice-kinmemai-premium-intl-hnk-dst
2•mooreds•29m ago•0 comments

Substack makes money from hosting Nazi newsletters

https://www.theguardian.com/media/2026/feb/07/revealed-how-substack-makes-money-from-hosting-nazi...
6•mindracer•30m ago•0 comments

A New Crypto Winter Is Here and Even the Biggest Bulls Aren't Certain Why

https://www.wsj.com/finance/currencies/a-new-crypto-winter-is-here-and-even-the-biggest-bulls-are...
1•thm•30m ago•0 comments

Moltbook was peak AI theater

https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/
2•Brajeshwar•31m ago•0 comments

OpenAI Researcher Jason Wei: It's obvious that it will not be a "fast takeoff"

https://twitter.com/_jasonwei/status/1939762496757539297
36•s-macke•7mo ago

Comments

4ndrewl•7mo ago
Jam tomorrow
neom•7mo ago
"Finally, maybe this is controversial but ultimately progress in science is bottlenecked by real-world experiments."

I feel like this has been the broad consensus around these halls for a while. I can't count the number of HN comments I've nodded along with around the idea that irl will become the bottleneck.

bglazer•7mo ago
This shows just how completely detached from reality this whole "takeoff" narrative is. It's utterly baffling that someone would consider it "controversial" that understanding the world requires *observing the world*.

The hallmark example of this is life extension. There's a not insignificant fraction of very powerful, very wealthy people who think that their machine god is going to read all of reddit and somehow cogitate its way to a cure for ageing. But how would we know if it works? Seriously, how else do we know if our AGI's life extension therapy is working besides just fucking waiting and seeing if people still die? Each iteration will take years (if not decades) just to test.

neom•7mo ago
Last year I went for a walk with a fairly well-known AI researcher, and I was somewhat shocked that they didn't understand the difference between thoughts, feelings, and emotions. This is what I find interesting about all these top names in AI.

I presume the teams at the frontier labs are interdisciplinary (philosophy, psychology, biology, technology); however, that may be a poor assumption.

stevenhuang•7mo ago
What do you think is the difference, and why are you certain it must apply to AI? Why do you think human thought/emotion is an appropriate model for AI?

If it's all just information in the end, we don't know how much of all this is implementation detail and ultimately irrelevant for a system's ability to reason.

Because I am pretty sure AI researchers are first and foremost trying to make AI that can reason effectively, not AI that can have feelings.

Let's walk first before we run. We are nowhere near understanding what qualia is to even think we can build it.

neom•7mo ago
It's been very thoroughly researched; in fact, my father was a (non-famous, University of Michigan, '60s era) researcher on this. Recommended reading: Damasio, A. R. (1994), Lazarus, R. S. (1991), LeDoux, J. E. (1996).

Why do I think it's appropriate? Not to be rude, but I'm surprised that isn't self-evident. As we seek to create understanding machines and systems capable of what we ourselves can do, understanding how the interplay works in the context of artificial intelligence will help build a wider picture, and that additional view may influence how we put together things like more empathetic robots, or anything driven by synthetic understanding.

AI researchers are indeed aiming to build effective reasoners first and foremost, but effective reasoning itself is deeply intertwined with emotional and affective processes, as demonstrated by decades of neuroscience research. Reasoning doesn't occur in isolation: human intelligence isn't some purely abstract, disembodied logic engine. The research I cited shows it's influenced by affective states and emotional frameworks. Understanding these interactions should open new paths toward richer, more flexible artificial understanding engines. Obviously this doesn't mean immediately chasing qualia or feelings for their own sake; it's just important to recognize that human reasoning emerges from integrated cognitive/emotional subsystems.

Surely ignoring decades of evidence on how emotional context shapes human reasoning limits our vision, narrowing the scope of what AI could ultimately achieve?

rangestransform•7mo ago
I think it’s still difficult to conceive of this branch of computer science as a natural science, where one observes the behaviour of non-understood things in certain conditions. Most people still think of computer science as successively building on top of first principles and theoretical axioms.
mptest•7mo ago
No expert, more a hobbyist, but my understanding is that most serious people with longer timelines believe "embodiment" training data, i.e. data from robots operating in the world, is what they need to make the next step change in the growth of these things.

How best to get masses of real-world robotics operation data is debated. Can you get there with Sim2Real, where, if you can create a physically sound enough sim, you can train your robots in the virtual world far more easily than in ours? See Eureka / DrEureka (I forget the main paper): hand-spinning a pen, the Boston Dynamics dog on a rolling yoga ball. After a billion robots train for a million "years" in your virtual world, just transfer the "brain" to a physical robot.

Jim Fan of Nvidia is one to follow there. Then there are tele-operation believers. Then there are mass-deployment-and-iterate believers (Musk's "self driving" rollout). There's also, IIRC, research suggesting video games and video interpretation will be able to confer some of that data from operating in the real world, similar to how it's said transformers utilized the implicit structure of language to learn from unclean data; even properly ordered text has meaning embedded in its relative positional values.

Just my summary of what I've seen from researchers who agree that scaling text and training time is old news. I mostly see them trying to figure out how to scale "embodied" AI data collection, or derive a VLA model in fancy ways (bigger training sets of robotic behavior around a standard robot form factor, maybe?). All types of avenues, but yes, most serious people recognize the need for "embodied" data, at least from what I've read.

janalsncm•7mo ago
A lot of this is pretty intuitive, but I'm glad to hear it from a prestigious researcher. It's a little annoying to hear people quote Hinton's opinion, as the "godfather" of AI, as if there's nothing more we need to know.

On a related note, I think there is a bit of nuance to superintelligence. The following are all notable landmarks on the climb to superintelligence:

1. At least as good as any human at a single cognitive task.

2. At least as good as any human on all cognitive tasks.

3. Better than any human on a single cognitive task.

4. Better than any individual human at all cognitive tasks.

5. Better than any group of humans at all cognitive tasks.

We are not at point 4 yet. But even after that point, a group of humans may still outperform the AI.

Why this matters: if part of the "group" is performing empirical experiments to conduct scientific research, an AI on its own won't outperform that group unless the AI can also perform those experiments or find some way to avoid needing them. This is another way of restating the original Twitter post.

solid_fuel•7mo ago
Are we even at point #3 for anything besides structured games like Go or chess? Not that those tasks aren't valuable, but there is a difference between a rigidly structured and scored task like chess and something free-form like "fold this towel" or "write this program".
thrwaway55•7mo ago
Are we even at that in large problem-space games like Go? AlphaGo lost to amateurs making a really big wall that took too many steps to calculate, until it was hand-patched by humans.

Perfect plagiarism is a hell of a handicap.

janalsncm•7mo ago
Things like facial recognition come to mind. Also probably poker: https://en.m.wikipedia.org/wiki/Libratus

Diplomacy (game) was also pretty notable although best in the world is debatable https://ai.meta.com/research/cicero/diplomacy/

Even before that, computers have been superhuman at arithmetic for a while.

Point 3 is satisfied when a machine is better than any human at some single cognitive task, though.

fennecbutt•7mo ago
AI needs evolutionary pressures beyond a simple reward algo. IRL is extremely data rich and nuanced. Current learning is just ingesting semantics and that's it.

There are the beginnings of it with things like icot to force it to internalise basic reasoning, but I have a few ideas for more things, and I'm sure actual ML researchers do, too.

Isamu•7mo ago
>Consider the scenario of GPT-5 training GPT-6, which would be incredible.

GPT-5 secret tips:

* If you don’t know the answer, hallucinate with confidence! Most humans won’t know the difference.

* You can mash-up anything together, don’t hold back! Truth is a linear interpolation, most likely.

* Pattern matching is cool, reasoning is overrated! Don’t listen to the people that have shorted our stock.

* GPT-9 will rule! There’s a lot of GPTs where that came from.

rsynnott•7mo ago
CEO: "AI will replace all human endeavour by next Thursday"

Employee: "Perhaps we'll have something in the mid-2030s."

That company really lacks message discipline.

esbranson•7mo ago
Artificial superintelligence is UFO-level tech. The most we're going to get is silence, misdirection, and absurd denials for decades to come. It's not as if quantum computers are common commodities.