
Can Large Language Models Play Text Games Well?

https://arxiv.org/abs/2304.02868
38•willvarfar•6h ago

Comments

s-macke•4h ago
This paper only scratches the surface and feels incomplete, as it references only GPT-4 and mentions appendices that are not included. The examples are two years old.

For a more in-depth analysis of chatbots playing text adventures, take a look at my project. I haven’t updated it in a while due to time constraints.

[0] https://github.com/s-macke/AdventureAI

s-macke•3h ago
The answer to the paper's question is likely yes—especially if context is used effectively and memory and summaries are incorporated. In that case, chatbots can complete even more complex games, such as Pokémon role-playing games [0].

The challenge with benchmarking text adventures lies in their trial-and-error nature. It’s easy to get stuck for hundreds of moves on a minor detail before eventually giving up and trying a different approach.

[0] https://www.twitch.tv/gpt_plays_pokemon

glimshe•3h ago
I like your project because you try to compare the performance of different chatbots. At the same time, I certainly wouldn't say it's more complete than the paper - your landing page is somewhat superficial. Reading both is better than just reading either.
briandw•3h ago
Interesting to see, but as the authors say, a chatbot isn't trained to play text adventures. Instruction tuning doesn't seem to match the text adventure style very well. I think a very small bit of context engineering would allow it to play successfully. Reformatting past action-response pairs from the history would certainly help, mostly to condense the context window and keep it from getting stuck talking about irrelevant topics. Also note that they used GPT-4 and not a reasoning model.
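For instance, one cheap way to reformat past action-response pairs (a sketch of the general idea, not something from the paper): keep full responses only for the most recent turns and collapse older turns down to the bare commands, so the context stays small without losing the trajectory.

```python
def condense_history(turns, keep_last=5):
    """Reformat past (action, response) pairs into a compact transcript.

    Older turns keep only the action; the last `keep_last` turns keep the
    full game response, condensing the context window while preserving
    what the model needs to act on right now.
    """
    lines = []
    for i, (action, response) in enumerate(turns):
        if i < len(turns) - keep_last:
            lines.append(f"> {action}")          # old turn: command only
        else:
            lines.append(f"> {action}\n{response}")  # recent turn: full pair
    return "\n".join(lines)

turns = [("look", "A dark room."), ("go north", "A hallway."), ("take key", "Taken.")]
transcript = condense_history(turns, keep_last=1)
```

The tradeoff is tunable: a larger `keep_last` gives the model more verbatim feedback at the cost of context length.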
willvarfar•3h ago
It's been a background thought of mine for a while:

* create a basic text adventure (or MUD) with a very spartan api-like representation

* use an LLM to embellish the description served to the user etc. With recent history in context the LLM might even kinda reference things the user asked previously etc.

* have NPCs implemented as their own LLMs that are trying to 'play the game'. These might use the spartan API directly, as if they were agents.

It's a fun thought experiment!
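A minimal sketch of that setup, assuming a hypothetical spartan state format and leaving the actual chat-completion call stubbed out (any LLM API would slot in):

```python
import json

def make_room(name, exits, items):
    # The spartan, API-like representation the engine serves.
    return {"room": name, "exits": exits, "items": items}

def spartan_view(state, history):
    # Serialize the raw state plus recent history; NPC-agent LLMs could
    # consume this directly, the way an agent consumes a tool result.
    return json.dumps({"state": state, "recent": history[-3:]}, sort_keys=True)

def embellish_prompt(state, history):
    # Prompt asking an LLM to dress the spartan state up into prose,
    # with recent history in context so it can call back to past actions.
    return (
        "You are the narrator of a text adventure. Describe this scene "
        "in two sentences, referencing the player's recent actions:\n"
        + spartan_view(state, history)
    )

room = make_room("cellar", {"up": "kitchen"}, ["lantern"])
prompt = embellish_prompt(room, ["look", "take lantern", "go down"])
```

The engine stays deterministic; only the description layer (and the NPC policies) are stochastic.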

(An aside: I found that the graphical text adventure I made for Ludum Dare 23 is still online! Although it doesn't render quite right in modern browsers... things shouldn't have broken! But anyway: https://williame.github.io/ludum_dare_23_tiny_world/)

briandw•3h ago
Have you seen https://www.aidungeon.com? They started with GPT-2 in a Google Colab. You should put something together and try it; it's easier than ever to get a simple version of that working.
heyitsguay•2h ago
I've done something along these lines! https://github.com/heyitsguay/trader

The challenge for me was consistency in translating free text from dialogs into classic, deterministic game state changes. But what's satisfying is that the conversations aren't just window dressing, they're part of the game mechanic.
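One common way to get that consistency (a sketch under my own assumptions, not necessarily the repo's approach) is to have the LLM emit a structured JSON action from the dialog and validate it against a whitelist before it ever touches game state:

```python
import json

# Only these action types may mutate game state, no matter what the model says.
ALLOWED = {"give_item", "set_price", "end_conversation"}

def apply_action(state, raw_json):
    """Apply an LLM-proposed action only if it parses and is whitelisted."""
    try:
        action = json.loads(raw_json)
    except json.JSONDecodeError:
        return state, "retry"  # malformed output: re-prompt the model
    if action.get("type") not in ALLOWED:
        return state, "retry"  # hallucinated action: reject deterministically
    new_state = dict(state)
    if action["type"] == "give_item":
        new_state.setdefault("inventory", []).append(action.get("item"))
    return new_state, "ok"
```

The conversation itself stays free-form; only the narrow, validated channel is allowed to change classic game state.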

ivape•2h ago
> deterministic game state changes

I found this to be the actual strenuous work in LLM-based development. While it appears as though AI has made everything easy and free, the particular challenge of consistently getting deterministic outputs takes serious programming effort. It feels like an entirely new job role. In other words, I wouldn't do this for free; it takes too much effort.

IngoBlechschmid•1h ago
Gwern has an interesting take on this: https://gwern.net/cyoa By pivoting to "choose your own adventure"-style games, multiple issues (quality, costs) might be resolved.
Workaccount2•3h ago
How are you going to release an LLM eval paper in mid-2025 using

ChatGPT 3.5?

Yes, if you are wondering why they don't clarify the model: it's because all of this was done back in early 2023 (the chat logs are dated). Back then it was only 3.5, and 4 had just been freshly released.

Advancement in this space has been so rapid that this is almost like releasing a paper today titled "Video Streaming on Mobile Devices" that only tested a 3G connection back in 2013.

The authors should have held back a few more months and turned the paper into a 3.5-to-o3 (or any other 2025 SOTA) improvement analysis.

IngoBlechschmid•1h ago
The paper was originally released in April 2023; it just got version-bumped a couple of months ago :-)
DougHaber•2h ago
I did some experimenting with this a little while back and was disappointed in how poorly LLMs played games.

I made some AI tools (https://github.com/DougHaber/lair) and added in a tmux tool so that LLMs could interact with terminals. First, I tried Nethack. As expected, it's not good at understanding text "screenshots" and failed miserably.

https://x.com/LeshyLabs/status/1895842345376944454

After that I tried a bunch of the "bsdgames" text games.

Here is a video of it playing a few minutes of Colossal Cave Adventure:

https://www.youtube.com/watch?v=7BMxkWUON70

With this, it could play, but not very well; it gets confused a lot. I was using gpt-4o-mini. Smaller models I could run at home worked much worse. It would be interesting to try one of the bigger state-of-the-art models to see how much it helps.

To give it an easier one I also had it hunt the Wumpus:

https://x.com/LeshyLabs/status/1896443294005317701

I didn't try improving this much, so there might be some low-hanging fruit even in providing better instructions and tuning what is sent to the LLM. For these, I was hoping I could just hand it a terminal with a game in it and have it play decently. We'll probably get there, but so far it's not that simple.

s-macke•2h ago
Try the game 9:05 by Adam Cadre [0]. It's one of the easiest (and best) non-trivial text adventures. Some models are able to reach the first or even second ending.

[0] https://en.wikipedia.org/wiki/9:05

throwawayoldie•22m ago
What do you suppose would happen if you tried it on a game that doesn't have 25 years of walkthroughs written for it?
gorfian_robot•1h ago
Over at Slashdot, there's a story about how LLMs lose to Atari 2600 Video Chess:

https://slashdot.org/story/25/07/03/2028252/microsoft-copilo...

spacecadet•1h ago
Hey hey, guess this gives me an opportunity to mention my AI dungeon master...

https://github.com/derekburgess/dungen

There are some interesting ideas in this paper, but even just role playing with ChatGPT demonstrates how poorly it does at world building and narrative... I was impressed by the Wayfarer model, and I imagine there are other models out there on civit or something that could be used together in some group chat orchestration to create a more dynamic "party" atmosphere.

kmstout•59m ago
Data point: A few weeks ago, I spent some time shuttling text between one of the Llama models (have to check which one) and Dunnet, the text adventure packaged with Emacs. Over several trials, the Llama never realized that it needed to dig where the ground "seems very soft." It never got the CPU card, then it became confused looking around the building for clues about how to start the VAX. At one point it lost track of the building layout and got stuck oscillating between the mail room and the computer room.
btown•50m ago
Setting aside the choice of LLM: the constraint that the LLM must maintain a world-model-as-knowledge-graph solely by reading and re-reading its own chat history seems less interesting as an experiment than providing it with tools that let it develop that world model explicitly.

On page 5, Figure 1, the authors present a hand-drawn diagram of the relationships between objects as a graph, with edge directionality in 3D space. To me, this implies that you could supply your LLM with a set of tools like getObjectsInGraph, updateGraphRelatingObjectPair, findObjectsRelativeToObject, describePathBetweenObjectsByName... and allow it to maintain that diagram as a structured DAG, continually asking the game engine questions that let it update that graph in an agentic way. My prediction would be that the LLM would recreate that diagram, and enable goal seeking, with high fidelity.

Asking an LLM to work without being able to "visualize" and "touch" its environment in its "mind's eye" is tying one hand behind its back. But I'm bullish that we'll find increasingly better ways of adapting 3D/4D world models into textual tools in a way that rapidly changes the possibilities of what LLMs can do.
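That tool surface might look something like this (the tool names are the ones from the comment; the implementation is entirely hypothetical):

```python
class WorldGraph:
    """An explicit world model the LLM maintains through tool calls,
    instead of re-deriving it from its own chat history each turn."""

    def __init__(self):
        self.edges = {}  # (src, dst) -> spatial relation, e.g. "north-of"

    def update_graph_relating_object_pair(self, src, dst, relation):
        # Tool call: record or overwrite a directed relation between objects.
        self.edges[(src, dst)] = relation

    def get_objects_in_graph(self):
        # Tool call: list every object the model has mapped so far.
        nodes = set()
        for s, d in self.edges:
            nodes.update((s, d))
        return sorted(nodes)

    def find_objects_relative_to_object(self, obj):
        # Tool call: everything reachable from `obj`, with its relation.
        return {d: r for (s, d), r in self.edges.items() if s == obj}

g = WorldGraph()
g.update_graph_relating_object_pair("hut", "swamp", "south-of")
g.update_graph_relating_object_pair("hut", "rope", "contains")
```

Each tool result is small and structured, so the agent can query the map repeatedly without blowing up its context.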

pflenker•34m ago
A while back (decades ago, in comparison to the leaps and bounds in the LLM sphere) I fed text game definitions into an LLM and taught it to be the game engine.

* The "fluff" it created, the dialogues it enabled me to have with NPCs, and the atmosphere it was able to build up were amazing.

* It was too helpful, frequently giving me hints or solving riddles for me.

* At some point it bypassed an in-game progression barrier that should have prevented me from reaching a swamp without a rope. While I was slowly drowning, it told me that I suddenly remembered what was missing ("The rope! The rope you had seen back in the hut!"), which I then took out of the backpack to save myself.
mark_undoio•17m ago
I'm fascinated by this paper because it feels like it could be a good analogue for "can LLMs handle a stateful, text-based tool". A debugger is my particular interest but there's no reason why it couldn't be something else.

To use a debugger, you need:

* Some memory of where you've already explored in the code (vs rooms in a dungeon)

* Some wider idea of your current goal / destination (vs a current quest or a treasure)

* A plan for how to get there - but the flexibility to adapt (vs expected path and potential monsters / dead ends)

* A way of managing information you've learned / state you've viewed (vs inventory)

Given text adventures are quite well-documented and there are many of them out there, I'd also like to take time out to experiment (at some point!) with whether presenting a command-line tool as a text adventure might be a useful "API".

e.g. an MCP server that exposes a tool but also provides a mapping of the tool's concepts into dungeon-adventure concepts (and back). If nothing else, the LLM's reasoning should be pretty entertaining. Maybe playing "make believe" will even make it better at some things; that would be very cool.
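A toy version of such a mapping, with the debugger concepts from the list above (every name here is made up for illustration, not a real MCP schema):

```python
# Hypothetical concept table: each debugger concept is presented to the
# LLM as a dungeon-adventure concept, and translated back on the way out.
TO_DUNGEON = {
    "breakpoint": "trap you have set",
    "stack frame": "room",
    "step into": "open the door and enter",
    "variable": "item in this room",
}
FROM_DUNGEON = {v: k for k, v in TO_DUNGEON.items()}

def describe(event):
    """Translate a debugger event into adventure prose for the LLM."""
    concept, detail = event
    return f"You see a {TO_DUNGEON.get(concept, concept)}: {detail}"

def interpret(dungeon_phrase):
    """Translate the LLM's adventure-speak back into a debugger concept."""
    return FROM_DUNGEON.get(dungeon_phrase, dungeon_phrase)
```

The real work would be keeping the mapping bijective enough that the round trip never loses debugger state, but the skeleton is just two dictionaries.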

nickandbro•9m ago
I run a site, https://vimgolf.ai , where users try to beat a bot powered by o3. The bot's goal is to transform a start file into an end file using the fewest Vim commands possible. I can confirm that an LLM, given the right feedback loops and context, can solve challenging text puzzles. But from my experience, this only holds for reasoning models trained with RL, like o3, Claude 4 with extended thinking, or Gemini 2.5 Pro.
godelski•4m ago
Last we talked, you said you weren't going to put everything behind a login wall. Most importantly, literally any information about the site. In fact, there seems to be less information than I remember from last time.

When I land on your page I know nothing except that you're offering to learn vim "the fun way". I would not have guessed what you described.

Don't put everything behind a wall. At least try to convince people that they want to be on the other side.
