Some other cool ones I've seen:
https://store.steampowered.com/app/2542850/1001_Nights/
https://www.playsuckup.com/
https://www.dexerto.com/gaming/where-winds-meet-players-are-...
https://www.rockpapershotgun.com/where-winds-meet-player-con...
(Off-topic AMA question: Did you see my voxel grid visibility post?)
We use a ton of smaller models (embeddings, vibe checks, TTS, ASR, etc) and if we had enough scale we'd try to run those locally for users with big enough GPUs.
(You mean the voxel grid visibility from 2014?! I'm sure I did at the time... but I left MC in 2020 so don't even remember my own algorithm right now)
ONNX and DirectML seem sort of promising right now, but it's all super raw. Even if that worked, local models are bottlenecked by VRAM, and that's never been more expensive. And we need to fit 6 GB of game in there as well. Even if _that_ worked, we'd need to timeslice the compute inside the frame so that the game doesn't hang for a second. And then we'd get to fight every driver in existence :) Basically it's just not possible unless you have a full-time expert dedicated to this IMO. Maybe it'll change!
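To make the timeslicing point concrete, here's a toy sketch (pure Python, not real inference code) of spreading a chunked workload across frames under a per-frame compute budget. The 4 ms budget and 1 ms "layers" are made-up numbers, and real inference kernels can't always be chopped this finely:

```python
import time

def run_sliced(task_steps, frame_budget_s=0.004):
    """Run a list of small work steps, yielding back to the 'frame' whenever
    the per-frame compute budget is spent. Returns how many frames the task
    was spread across. Illustrative only."""
    frames = 1
    deadline = time.perf_counter() + frame_budget_s
    for step in task_steps:
        step()  # one small chunk of work, e.g. one layer of the model
        if time.perf_counter() > deadline:
            frames += 1  # pretend we handed control back and got the next frame
            deadline = time.perf_counter() + frame_budget_s
    return frames

def busy(ms):
    """Burn roughly `ms` milliseconds of CPU to simulate a compute chunk."""
    end = time.perf_counter() + ms / 1000.0
    while time.perf_counter() < end:
        pass

# 50 fake "layers" of ~1 ms each: spread over many frames
# instead of one 50 ms hitch.
frames_used = run_sliced([lambda: busy(1.0)] * 50)
print(frames_used)
```

The point being: even this trivial version needs a cooperative scheduler inside the frame loop, and real GPU inference doesn't slice anywhere near this cleanly.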
About the voxel visibility: yeah that was awesome, I remember :) Long story short, MC is CPU-bound and the frustum clipping's CPU cost didn't get paid off by the reduced overdraw, so it wasn't worth it. Then a guy called Jonathan Hoof rewrote the entire thing to be split into a 360° scan done on another thread when you changed chunks, plus an in-frustum walk that worked completely differently, and I don't remember the details but it did fix the ravine issue entirely!
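For flavor, here's a toy 2D version of a flood-fill chunk visibility walk. This illustrates the general technique (walk outward from the camera's chunk, flowing only through chunks sight can pass through), not Minecraft's actual implementation:

```python
from collections import deque

def visible_chunks(open_grid, start):
    """Breadth-first walk over a chunk grid from the camera's chunk.
    `open_grid` maps (x, y) -> True if sight can pass through that chunk.
    Solid chunks are marked visible (their faces render) but stop the walk.
    Toy 2D sketch, not Minecraft's real algorithm."""
    seen = {start}
    if start not in open_grid or not open_grid[start]:
        return seen
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in open_grid and n not in seen:
                seen.add(n)
                if open_grid[n]:  # sight continues through open chunks
                    queue.append(n)
    return seen

# 1 = open (sight passes), 0 = solid. The solid column at x=2
# occludes the open column at x=3 entirely.
grid = {(x, y): bool(v) for y, row in enumerate([
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [1, 1, 0, 1],
]) for x, v in enumerate(row)}
print(len(visible_chunks(grid, (0, 0))))
```

The win over pure frustum tests is that occluded regions (like the far side of a wall, or the surface while you're in a cave) never even enter the walk.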
PS: I think MCP/Tool Calls are a boondoggle and LLMs yearn to just run code. It's crazy how much better this works than JSON schema etc.
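A toy illustration of the difference (the tools, NPC names, and "sandbox" here are all made up, and a real system would need proper isolation, not a bare `exec`): a JSON tool call does exactly one pre-anticipated thing, while model-emitted code can chain tools with its own logic:

```python
import json

# Two hypothetical game "tools" exposed to the model.
def get_health(npc):
    return {"steve": 17, "alex": 20}[npc]

def heal(npc, amount):
    return min(20, get_health(npc) + amount)

TOOLS = {"get_health": get_health, "heal": heal}

# JSON-schema style: one call, one tool, no composition.
call = json.loads('{"tool": "get_health", "args": {"npc": "steve"}}')
json_result = TOOLS[call["tool"]](**call["args"])

# Code style: the model writes a snippet that composes tools with logic.
snippet = (
    "out = heal('steve', 5) if get_health('steve') < 20 "
    "else get_health('steve')"
)
scope = dict(TOOLS)
exec(snippet, {"__builtins__": {}}, scope)  # NOT a real sandbox!

print(json_result, scope["out"])
```

To express the second behavior as tool calls you'd need a round trip per step, or a schema that anticipated the conditional.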
Do you have any idea what the actual probability is? Because if millions of people start using the system, 'very unlikely' can turn into 'virtual certainty' pretty quickly.
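The compounding can be made concrete: if each session independently fails with probability p, the chance of at least one failure across n sessions is 1 - (1 - p)^n. The rates below are made-up numbers for illustration:

```python
def p_at_least_one_failure(p: float, n: int) -> float:
    """Probability of at least one failure in n independent trials,
    each failing with probability p."""
    return 1.0 - (1.0 - p) ** n

# A "very unlikely" one-in-a-million per-session failure, over a
# million sessions, is roughly a coin flip and then some:
print(round(p_at_least_one_failure(1e-6, 1_000_000), 4))
```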
It's definitely a research project, this has never been done before.
It's almost always better to pay more for the smarter model than to risk giving a worse player experience.
If they had 1M+ players there would certainly be room to optimize, but starting out you'd spend more trying to engineer the model switcher than you'd save in token costs.
Not only that, but I think our selling point is rewarding creativity with emergent behavior. I think baked dialogue would turn it into a traditional game with worse writing pretty quickly, and then you've got a problem. For example, this AI game here does multiple choice with a local model and people seem a bit lukewarm about it.
We could use it to cache popular QA, but in my experience humans are insane and nobody ever says even remotely similar things to robots :)
[1] https://store.steampowered.com/app/2828650/The_Oversight_Bur...
When applied smartly and with human supervision, I think that AI could easily help humans build game worlds and stories that were previously impossible to achieve.