frontpage.

US Manufacturing Activity Contracted in August for a Sixth Month

https://www.bloomberg.com/news/articles/2025-09-02/us-manufacturing-activity-contracted-in-august...
1•JumpCrisscross•1m ago•0 comments

Show HN: AI Agent for Game UI

https://www.godmodeai.co/game-ui-agent
1•lyogavin•2m ago•0 comments

EVs reduce climate pollution, but by how much? New U-M research has the answer

https://news.umich.edu/evs-reduce-climate-pollution-but-by-how-much-new-u-m-research-has-the-answer/
1•breve•4m ago•0 comments

The Trust Quotient (TQ)

https://kk.org/thetechnium/the-trust-quotient-tq/
1•jger15•5m ago•0 comments

TextJam

https://textjam.com/show/demo?df
1•Bogdanp•8m ago•0 comments

The case against Almost Always auto in C++

https://gist.github.com/eisenwave/5cca27867828743bf50ad95d526f5a6e
1•alberto-m•11m ago•0 comments

This blog is running on a recycled Google Pixel 5

https://blog.ctms.me/posts/2024-08-29-running-this-blog-on-a-pixel-5/
2•indigodaddy•12m ago•0 comments

The Millionaire Who Left Wall Street to Become a Paramedic

https://www.nytimes.com/2025/09/02/nyregion/rescue-medic-wall-street-.html
1•wslh•12m ago•1 comments

Spec-Driven Development with AI

https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-ne...
2•e2e4•15m ago•0 comments

What Every Data Scientist Should Know About Graph Transformers

https://www.unite.ai/what-every-data-scientist-should-know-about-graph-transformers-and-their-imp...
1•Anon84•19m ago•0 comments

Google, Apple, and Mozilla Win in the Antitrust Case Google Lost

https://spyglass.org/google-apple-and-mozilla-win-in-the-antitrust-case-google-lost/
1•bentocorp•20m ago•0 comments

Views from onboard Starship's tenth flight test

https://twitter.com/SpaceX/status/1962961587049832623
1•cubefox•22m ago•0 comments

Google says Gmail security is "strong and effective" as it denies major breach

https://arstechnica.com/gadgets/2025/09/google-says-reports-of-massive-gmail-data-breach-are-enti...
2•bentocorp•23m ago•0 comments

World’s biggest iceberg breaks up after 40 years

https://www.theguardian.com/environment/2025/sep/02/worlds-biggest-iceberg-crumbles-apart
2•pseudolus•25m ago•0 comments

Parallel AI Agents Are a Game Changer

https://morningcoffee.io/parallel-ai-agents-are-a-game-changer.html
3•shiroyasha•25m ago•0 comments

Researchers Are Already Leaving Meta's New Superintelligence Lab

https://www.wired.com/story/researchers-leave-meta-superintelligence-labs-openai/
3•mgh2•27m ago•0 comments

Health Effects of Cousin Marriage: Evidence from US Genealogical Records

https://www.aeaweb.org/articles?id=10.1257/aeri.20230544
2•Anon84•27m ago•0 comments

Lumo by Proton Mail

https://lumo.proton.me/
2•doener•37m ago•0 comments

Cqdam Free – single-binary in-memory KV store (RESP subset), ~2.5M ops/sec

https://github.com/LaminarInstruments/Laminar-Flow-In-Memory-Key-Value-Store
1•LaminarBender•37m ago•1 comments

Human activity may be locking the Southwest into permanent drought

https://theconversation.com/climate-models-reveal-how-human-activity-may-be-locking-the-southwest...
2•PaulHoule•37m ago•0 comments

Trump calls video of bag being thrown from White House an 'AI-generated' fake

https://www.cnn.com/2025/09/02/politics/white-house-black-bag-video-mystery
5•frays•40m ago•3 comments

Single File No-Build Blog with Modern JavaScript

https://single-page-blog.ben-ca1.workers.dev
1•b_e_n_t_o_n•41m ago•1 comments

The World War Two bomber that cost more than the atomic bomb

https://www.bbc.com/future/article/20250829-the-bomber-that-became-ww2s-most-expensive-weapon
1•pseudolus•44m ago•0 comments

MUJI – Bucket

https://www.relvaokellermann.com/work/bucket/
1•mooreds•44m ago•0 comments

Electrical stimulation can reprogram immune system to heal the body faster

https://medicalxpress.com/news/2025-09-electrical-reprogram-immune-body-faster.html
5•birriel•44m ago•0 comments

Why I joined Mixpanel as CEO: A new era in analytics

https://mixpanel.com/blog/jen-taylor-ceo/
1•enahs-sf•45m ago•0 comments

Is the McDonald's ice cream machine broken?

https://mcbroken.com/
1•mooreds•45m ago•1 comments

Cherokee, Osage, and the Indigenous North American Type Collection

https://www.typotheque.com/blog/cherokee-osage-and-the-indigenous-north-american-type-collection
1•mooreds•45m ago•0 comments

Chinese cluster now top innovation hotspot: UN

https://www.yahoo.com/news/articles/chinese-cluster-now-worlds-top-070155491.html
2•Teever•46m ago•0 comments

How Europe's deforestation law could change the global coffee trade

https://theconversation.com/how-europes-deforestation-law-could-change-the-global-coffee-trade-26...
1•bikenaga•48m ago•0 comments

'World Models,' an old idea in AI, mount a comeback

https://www.quantamagazine.org/world-models-an-old-idea-in-ai-mount-a-comeback-20250902/
116•warrenm•6h ago

Comments

AnotherGoodName•5h ago
I’ve been working on board game AI lately.

Fwiw, nothing beats ‘implement the game logic in full (a huge amount of work) and, with pruning on some heuristics, look 50 moves ahead’. This is how chess engines work, and how all good turn-based game AI works.

I’ve tried throwing masses of game-state data at the latest models in PyTorch. Unusable. They make really dumb moves. In fact, one big issue is that they often suggest invalid moves, and the best way to avoid this is to implement the board game logic in full to validate them. At which point, why don’t I just do the above and scan ahead X moves, since I have to do the hard part of manually building the world model anyway?

One area where current AI does help is with the heuristics themselves, for evaluating the best moves when scanning ahead. You can input various game states, along with whether the player ultimately won, to train the values of the heuristics. You still need to implement the world model and look ahead to use those heuristics, though! When you hear of neural networks being used for Go or chess, this is where they are used. You still need to build the world model and brute-force the scan ahead.
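
For illustration, a minimal alpha-beta sketch of that "implement the rules, then search ahead with pruning" approach; the `game` object is an assumed hand-built rules implementation, and `score` is where a trained heuristic would plug in:

    # Minimal alpha-beta search sketch. `game` is an assumed, hand-built
    # rules object exposing legal_moves/apply/is_terminal/score; `score`
    # is the heuristic that a trained network can replace.
    def alphabeta(game, state, depth, alpha, beta, maximizing):
        if depth == 0 or game.is_terminal(state):
            return game.score(state)
        if maximizing:
            best = float("-inf")
            for move in game.legal_moves(state):
                child = game.apply(state, move)
                best = max(best, alphabeta(game, child, depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if beta <= alpha:  # prune: the opponent will avoid this line
                    break
            return best
        best = float("inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            best = min(best, alphabeta(game, child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best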

One path I do want to try more: in theory, coding assistants should be able to read rulebooks and dynamically generate code to represent those rules. If you can do that part, the rest should be easy. I.e., it could be possible to throw rulebooks at an AI and have it play the game. It would generate a world model from the rulebook via coding assistants and scan ahead more moves than humanly possible using that world model, evaluating against heuristics trained through trial and error.

Of course, coding assistants aren’t at a point where you can throw rulebooks at them and get an internal representation of game states. I should know; I just spent weeks building the game model even with a coding assistant.

coeneedell•5h ago
IIRC the rules system for Magic: The Gathering Arena is generated by a sort of compiler fed the rules. You might not even need a modern coding assistant: build out something reasonable in a precise DSL, then have people (or an LLM after fine-tuning) transform rulebooks into the DSL. A hedged sketch of what that could look like (all names are invented for illustration) follows.
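
    # Hypothetical mini rules-DSL sketch: rules live as data, and a generic
    # interpreter fires them, so translating a rulebook means emitting
    # entries like these rather than writing fresh code each time.
    RULES = [
        {"trigger": "turn_start",    "effect": "untap_all"},
        {"trigger": "creature_dies", "effect": "owner_draws", "count": 1},
    ]

    def fire(trigger, state):
        """Apply every rule whose trigger matches the event."""
        for rule in (r for r in RULES if r["trigger"] == trigger):
            if rule["effect"] == "untap_all":
                state["tapped"] = []
            elif rule["effect"] == "owner_draws":
                state["hand"] += rule.get("count", 1)

    state = {"tapped": ["Grizzly Bears"], "hand": 5}
    fire("turn_start", state)
    print(state)  # {'tapped': [], 'hand': 5}
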
GaggiX•4h ago
>This is how chess engines work

All of the strongest chess engines have at least one neural network to evaluate positions, including Stockfish, and this impacts the search window.

>how all good turn based game ai works

That's not really true, just think of Go.

skywhopper•12m ago
??? Chess engines and Go engines have as a baseline a world model of the state of the game and what moves are legal.
smokel•4h ago
You probably know this, but things heavily depend on the type of board game you are trying to solve.

In Go, for instance, it does not help much to look 50 moves ahead. The complexity is way too high for this to be feasible, and determining who's ahead is far from trivial. It's in these situations where modern AI (reinforcement learning, deep neural networks) helps tremendously.

Also note that nobody said that using AI is easy.

AnotherGoodName•4h ago
AlphaGo (and Stockfish, which another commenter mentioned) still has to search ahead using a world model. The AI training just helps with the heuristics for pruning and evaluating that search.

The big fundamental blocker to a generic ‘can play any game’ AI is the manual implementation of the world model. If you read the AlphaGo paper you’ll see ‘we started with nothing but an implementation of the game rules’. That’s the part we’re missing; it’s done by humans.

smokel•4h ago
Implementing a world model seems to be mostly solved by LLMs. Finding one that can be evaluated fast enough to actually solve games is extremely hard, for humans and AI alike.
skywhopper•13m ago
What are you talking about?
moyix•4h ago
Note that MuZero did better than AlphaGo, without access to preprogrammed rules: https://en.wikipedia.org/wiki/MuZero
smokel•3h ago
Minor nitpick: it did not use preprogrammed rules for scanning through the search tree, but it does use preprogrammed rules to enforce that no illegal moves are made during play.
hulium•1h ago
During play, yes, obviously you need an implementation of the game to play it. But in its planning tree, no:

> MuZero only masks legal actions at the root of the search tree where the environment can be queried, but does not perform any masking within the search tree. This is possible because the network rapidly learns not to predict actions that never occur in the trajectories it is trained on.

https://arxiv.org/pdf/1911.08265

skywhopper•14m ago
That is exactly what the commenter was saying.
jjk7•3h ago
Interesting, the parallels between LLM development and psychology and spirituality.

To have true thinking, you need an internal adversary challenging thoughts and beliefs. To look 50 moves ahead, you need to simulate the adversary's moves... Duality

daxfohl•3h ago
Yeah, I can't even get them to retain a simple state. I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

After a few moves they get hopelessly lost and just start wandering back and forth in a loop. Even when I prompt them explicitly to serialize a state representation of the maze after each step, and even if I prune the old context so they don't get tripped up on old state representations, they still get flustered and corrupt the state or lose track of things eventually.

They get the concept: if I explain the challenge and ask them to write a program to solve such a maze step-by-step like that, they can do it successfully on the first try! But when maintaining the state internally, they still seem to struggle.

warrenm•3h ago
>I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

Presuming these are 'typical' mazes (like you find in a garden or local corn field in late fall), why not have the bot run the known-correct solving algorithm (or its mirror)?
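
(For simply connected mazes, the known-correct algorithm is the wall follower; a minimal sketch, with `open_dirs` standing in for whatever the bot is told about the current square:)

    # Wall-follower ("right-hand rule") sketch for simply connected mazes.
    # `open_dirs` is an assumed set of open directions for the current cell.
    DIRS = ["N", "E", "S", "W"]  # clockwise order

    def next_move(heading, open_dirs):
        """Prefer right, then straight, then left, then U-turn."""
        i = DIRS.index(heading)
        for turn in (1, 0, -1, 2):
            cand = DIRS[(i + turn) % 4]
            if cand in open_dirs:
                return cand
        raise ValueError("cell has no open direction")

    print(next_move("N", {"N", "W"}))  # 'N': right (E) is closed, so go straight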

daxfohl•3h ago
Like I said, they can implement the algorithm to solve it, but when forced to maintain the state themselves, either internally or explicitly in the context, they are unable to do so and get lost.

Similarly, if you ask them to write a Sudoku solver, they have no problem. And if you ask an online model to solve a Sudoku, it'll write a Sudoku solver in the background and use that to solve it. But (at least the last time I tried, a year ago) if you ask them to solve one step-by-step using pure reasoning, without writing a program, they start spewing out all kinds of nonsense (though they humorously cheat: they'll still spit out the correct answer at the end).

adventured•2h ago
So if you push e.g. Claude Sonnet 4 or Opus 4.1 into a maze scenario, have it record its own pathing as it goes, and then refresh and feed the next Claude the progress so far, would that solve the inability to maintain long-duration context in such maze cases?

I make Claude do that on every project. I call them Notes for Future Claude and have it write notes for itself because of how quickly context accuracy erodes. It tends to write rather amusing notes to itself in my experience.

daxfohl•1h ago
This was from a few months ago, so things may be different now. I only used OpenAI, and the o3 model did by far the best. GPT-4o's performance was equivalent on the basic scenario where I had it move one step at a time (which was still pretty good, all things considered), but when I started having it summarize state and such, o3 was able to use that to improve performance, whereas 4o actually got worse.

But yeah, that's one of the things I tried. "Your turn is over. Please summarize everything you have learned about the maze so someone else can pick up where you left off." It did okay, but it often included superfluous information; it sometimes forgot to include the current orientation (the maze actions were "move forward", "turn right", and "turn left", so knowing the current orientation was important); and it always forgot to include instructions on how to interpret the state: in particular, which absolute direction corresponded to an increase or decrease of which grid index.
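
A hypothetical state schema (field names are mine) capturing the pieces the summaries kept omitting would look something like:

    # Illustrative only: a maze-state summary that includes orientation and
    # an explicit statement of the direction/index correspondence.
    maze_state = {
        "position": [3, 5],            # [row, col]
        "orientation": "N",            # required to decode "turn left/right"
        "index_convention": "N = row - 1, S = row + 1, E = col + 1, W = col - 1",
        "walls_seen": {"3,5": ["E"]},  # per-cell walls observed so far
        "visited": [[3, 4], [3, 5]],
    }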

I even tried to coax it into defining a formal state representation and "instructions for an LLM to use it" up-front, to see if it would remember to include the direction/index correspondence, but it never did. It was amusing actually; it was apparent it was just doing whatever I told it and not thinking for itself. Something like

"Do you think you should include a map in the state representation? Would that be useful?"

"Yes, great idea! Here is a field for a map, and an algorithm to build it"

"Do you think a map would be too much information?"

"Yes, great consideration! I have removed the map field"

"No, I'm asking you. You're the one that's going to use this. Do you want a map or not?"

"It's up to you! I can implement it however you like!"

nomadpenguin•3h ago
There are specialized architectures (the Tolman-Eichenbaum Machine)* that are able to complete this kind of task. Interestingly, once trained, their activations look strikingly similar to place and grid cells in real brains. The team were also able to show (in a separate paper) that the TEM is mathematically equivalent to a transformer.

* https://www.sciencedirect.com/science/article/pii/S009286742...

red75prime•2h ago
It would be nice if you could train a decent model on a $1000 (or so) budget, but for now it seems unlikely.
bubblyworld•1h ago
Something to consider is that while it's really hard to implement a decent NN-based algorithm like AlphaZero for your game, you get the benefit that model checkpoints give you a range of skill levels to play against as you train it.

Handicapping traditional tree search produces really terrible results, imo. It's common for weak chess engines to be weak for stupid reasons (they just hang pieces, make random unnatural moves, miss blatant threats, etc.). Playing weak versions of Leela Chess, by contrast, really "feels" like playing a (bad) human opponent.

Maybe the juice isn't worth the squeeze. It's definitely a ton of work to get right.

robertlagrant•45m ago
How does this experience translate to non-turn-based games? AlphaStar presumably is doing something other than searching all possible moves. Why wouldn't whatever it does translate to turn-based games?
deepsquirrelnet•10m ago
> I’ve tried throwing masses of game state data at latest models in pytorch. Unusable. It Makes really dumb moves. In fact one big issue is that it often suggests invalid moves and the best way to avoid this is to implement the board game logic in full to validate it.

It sounds like you need RL. You could try setting up some reward functions with evaluators. I'm not sure what your architecture is, but it's something to try.
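
A hedged sketch of that reward shaping (the `game` oracle and the weights are assumptions, not a real library):

    # Penalize illegal moves hard, reward wins, and add a small shaping
    # term from an evaluator; `game` is an assumed rules/evaluator oracle.
    def reward(state, move, game):
        if not game.is_legal(state, move):
            return -1.0
        nxt = game.apply(state, move)
        if game.is_win(nxt):
            return 1.0
        return 0.01 * game.evaluate(nxt)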

nathan_douglas•4h ago
I'm sure neural networks are a great tool here, but I don't know how the training would proceed effectively off "mere data"; too much of the data we have is incomplete, inaccurate, outright fantasy, misinformation, or out of the ordinary.

I could see this being the domain of fleets of robots, many different styles, compositions, materials, etc. Send ten robots in to survey a room - drones, crawlers, dogs, rollers, etc - they'll bang against things, knock things off shelves, illuminate corners, etc. The aggregate of their observations is the useful output, kinda like networked toddlers.

And yeah, unfortunately, sometimes this means you just need to send a swarm of robots to attack a city bus... or a bank... to "learn how things work." Or an internment camp. Don't get upset, guy, we're building a world model.

Anybody wanna give me VC money to work on this?

ACCount37•4h ago
When you're training an AI, that "mere data" adds up. Random error averages out, getting closer to zero with every data point. Systematic error leaks information about the system that keeps making the error.

A Harry Potter book doesn't ruin an AI's world model by contaminating reality with fantasy. It gives it valuable data points on human culture and imagination and fiction tropes and commercially successful creative works. All of which is a part of the broader "reality" the AI is trying to grasp the shape of as it learns from the vast unstructured dataset.
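
A toy numeric illustration of that averaging (mine, not from the thread): the mean of zero-mean noise tends to shrink toward zero as the sample count grows.

    import random

    # Mean of zero-mean Gaussian noise vs. sample count (law of large numbers).
    random.seed(0)
    for n in (10, 1_000, 100_000):
        noise = [random.gauss(0, 1) for _ in range(n)]
        print(n, abs(sum(noise) / n))  # typically shrinks as n grows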

nathan_douglas•4h ago
You're absolutely correct, of course. I was musing during down time in a meeting and turned it into a joke instead of engaging my faculties :)
multjoy•2h ago
The AI learns nothing from Harry Potter other than the statistical likelihood of one token appearing after another.

The AI is trying to grasp nothing.

ACCount37•1h ago
Any sufficiently advanced statistical model is a world model.

If you think that what your own brain is doing isn't fancy statistics plugged into a prediction engine, I have some news for you.

tsunamifury•4h ago
The end of Westworld basically put forth that the only way we could stabilize the world was to destroy it and move everything to a parallel simulation. Since early attempts at world modeling failed due to the complexity of outliers, the only way AI could handle a world model was to get rid of the real one.

People didn’t give the later seasons enough credit, even if they didn’t rise to the same dramatic heights as the first.

srush•4h ago
A recent tutorial video from one of the authors featured in this article:

Evaluating AI's World Models (https://www.youtube.com/watch?v=hguIUmMsvA4)

It goes into detail about several of the challenges discussed.

dejongh•3h ago
This is a very interesting article. The concept "run an experiment in your head and predict the outcome" is a capability that AIs must have to attain some kind of general intelligence. Anyway, read the article, it's great.
ryukoposting•3h ago
A footnote in the GPT-5 announcement was that you can now give OpenAI's API a context-free grammar that the LLM must follow. One way of thinking about this feature is that it's a user-defined world model. You could tell the model "the sky is" => "blue" for example.

Obviously you can't actually use this feature as a true world model. There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

The basic principle sounds like what we're looking for, though: a strict automaton or rule set that steers the model's output reliably and provably. Perhaps a similar kind of thing that operates on neurons, rather than tokens? Hmm.
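
A toy sketch of the principle (this is not the OpenAI API; the rule table and function are invented): a rule set filters candidate next tokens, steering output the way a grammar constraint would.

    # Hypothetical token-filtering sketch: force "the sky is" -> "blue".
    ALLOWED_NEXT = {
        ("the", "sky", "is"): {"blue"},
    }

    def filter_candidates(context, candidates):
        """Drop candidate next tokens that the rule set forbids."""
        allowed = ALLOWED_NEXT.get(tuple(context[-3:]))
        if allowed is None:
            return candidates  # no rule applies; anything goes
        return [t for t in candidates if t in allowed]

    print(filter_candidates(["the", "sky", "is"], ["blue", "green"]))  # ['blue']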

nxobject•3h ago
> There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

As a complete amateur who works in embedded: I imagine the restriction to a linear, ordered input stream is fundamentally limiting as well, even with the use of attention layers.

gavmor•3h ago
I suspect something more akin to a LoRA and/or circuit tracing will help us keep track of the truth.
yellow_postit•3h ago
Not mentioning Fei-Fei Li and her startup explicitly focused on world models is an interesting choice by the author.
jonbaer•3h ago
"You’re carrying around in your head a model of how the world works" (or so you thought) ... the real AI is in a) how fast you can realize it's changed and b) how fast you can adapt. This bit isn't being optimized, it's being dragged out.
red75prime•3h ago
> This bit isn't being optimized, it's being dragged out.

Of course, it is being optimized. People are working on increasing the sample efficiency. A simple search on Google Scholar will confirm it.

BariumBlue•3h ago
> When researchers attempt(opens a new tab) to recover [something like] a coherent computational representation of an Othello game board they instead find [bags of heuristics]

Humans don't exactly have a full representation of the board in their heads either. Notably, chess masters and amateurs memorize completely random board positions about equally well. I'd think neither could memorize 64 chess pieces placed randomly on a board.

mym1990•3h ago
For whatever it's worth, I bet the chess master would be able to instantly identify that it is a random/invalid board position, aka an invalid world state. I think the experiment you are alluding to gave both groups a very limited amount of time to look at the board. Given enough time, both groups would definitely be able to memorize 64 pieces on a board.
aurelwu•2h ago
I do think even the most amateur of amateurs would be able to recognize instantly that a chess board with 64 pieces on it is an invalid game state.
AIPedant•1h ago
That's not what "coherent computational representation" means in this context. It means being able to reliably apply the rules of Othello / chess / etc to the current state of the board. Any competent amateur can do this without studying thousands of board positions - in fact you can do it just from the written rules, without ever having seen a game - they have a causal, non-heuristic understanding of the rules. LLMs have much more trouble: they don't learn how knights move, they learn how white knights move when they're in position d5, then in position g4, etc etc, a "bag of heuristics."

Notably this is also true for MuZero, though at that scale the heuristics become "dense" enough that an apparent causal understanding seems to emerge. But it is quite brittle: my favorite example involves the arcade game Breakout, where MuZero can attain superhuman performance on Level 1 and still be unable to do Level 2. Healthy human children are not like this - they figure out "the trick" in Level 1 and quickly generalize.
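
For contrast, a causal rule is tiny: one function covers every square, where a "bag of heuristics" would memorize each (piece, square) case separately. A minimal sketch:

    # One causal rule for knight moves, valid from any square on an 8x8 board.
    def knight_moves(file, rank):
        deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
        return [(file + df, rank + dr) for df, dr in deltas
                if 0 <= file + df < 8 and 0 <= rank + dr < 8]

    print(knight_moves(3, 4))  # the same rule works for d5, g4, or anywhere else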

mingtianzhang•1h ago
I used to work on an idea that, instead of modeling the whole world, you can build your own solipsistic model: https://openreview.net/pdf?id=fPaGSuQRP1O
Animats•23m ago
Important subject, useless article.

Some new ideas in world models are beginning to work. Using Gaussian splatting as a world model has had some recent success.[1] It's a representation that's somewhat tolerant of areas where there's not enough information. Some of the systems that generate video from images work this way.

[1] https://katjaschwarz.github.io/ggs/

lsy•16m ago
A world model itself, in its particulars, isn't as important as the tacit understanding that the "world model" is necessarily incomplete and subordinate to the world itself, that there are sensory inputs from the world that would indicate you should adjust your world model, and the capacity and commitment to adjust that model in a way that maintains a level of coherence. With those things, you don't need a complex model; you could start with a very simple but flexible one that the system adjusts over time.

But I don't think we have a hint of a proposal for how to incorporate even the first part of that into our current systems.

chongli•10m ago
A little bit disappointed that there was no mention of the Frame Problem [1], a major challenge with world models. The issue arises when you're building an AI agent with the ability to move through and act in the real world, updating its world model as it does so.

The challenge comes from the problem of finding a set of axioms that tell you how to make predictions about what changes a particular action will cause in the world. Naively, we might suppose that the laws of physics would be suitable axioms but this immediately turns out to be computationally intractable. So then we're stuck trying to find a set of heuristics, as alluded to in the article.

Without being a neuroscientist, I think it's likely that at least some of the axioms of our own world models (as human beings) are built into the structure of our brains, rather than being knowledge that we learn as we grow up. We know, for example, that our visual systems have a great deal of built-in assumptions about the way light works and how objects appear under different lighting conditions, a fact revealed to us by optical illusions such as the checker shadow illusion [2]. Building a complete set of heuristics such as this does not sound impossible, just somewhat obscure and unexplored as an engineering problem, and does not seem to be related whatsoever to currently popular means of building and training AI models.
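
A minimal sketch (my illustration, not from the article) of the STRIPS-style workaround for the frame problem: an action declares only its effects, and every unmentioned fact is assumed to persist, which sidesteps writing a frame axiom per fluent.

    # Default persistence: copy the state, overwrite only declared effects.
    def apply_action(state, effects):
        new_state = dict(state)
        new_state.update(effects)
        return new_state

    state = {"door": "closed", "light": "off"}
    print(apply_action(state, {"door": "open"}))
    # {'door': 'open', 'light': 'off'} -- the light persisted automatically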

[1] https://plato.stanford.edu/entries/frame-problem/

[2] https://en.wikipedia.org/wiki/Checker_shadow_illusion