The best approach would be using something like if(game_is_paused) return; in the game loops.
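A minimal sketch of that early return, with illustrative names (`game_is_paused`, `update_world` are assumptions, not from any particular engine):

```cpp
#include <cassert>

static bool game_is_paused = false;
static int world_ticks = 0;   // advances only while unpaused

// Called once per frame; bails out before any simulation runs.
// Rendering and UI would be driven elsewhere and keep running.
void update_world() {
    if (game_is_paused) return;
    ++world_ticks;            // ...physics, AI, timers, etc. go here
}
```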
Do people actually do that? What's the plan for when the user sleeps their machine? All the events just inexplicably happen all at once when they wake it?
Inside the game loop we kept a global tick counter that incremented on every tick, and timeouts were based on that rather than on UTC.
The tick counter was updated only while the game logic was actually running. Our approach to pausing was to skip the functions that handled frame updates or physics updates and to run only the rendering functions.
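Sketching that idea with made-up names: a timeout stores an absolute tick rather than a wall-clock time, so it pauses for free whenever the counter stops advancing (and is immune to the machine sleeping).

```cpp
#include <cassert>

struct Timeout {
    long long expires_at_tick;  // absolute tick, not a UTC timestamp
};

struct Game {
    long long tick = 0;
    bool paused = false;

    // One loop iteration: logic (and the counter) only advance when unpaused.
    void step() {
        if (!paused) ++tick;    // render() would still run either way
    }

    bool expired(const Timeout& t) const { return tick >= t.expires_at_tick; }
};
```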
Generally we never cared about actual world time, other than for a few timeouts like networking (since time passes for everyone) or for easter eggs like swapping in Christmas tree models.
I don't think anyone serious would implement event timers based on real time.
Slowing down time applies it universally. Otherwise you're going to need to add that condition to every single object in the game.
I haven't tried this yet, but for a custom engine I would introduce a second delta time that is set to 0 while paused. Multiplying by the paused dt "bakes in" the pause without having to sprinkle ifs everywhere. Multiplying by the conventional dt makes the thing happen even when paused (debug camera, UI animations).
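As an untested sketch of that two-clock idea (all names here are illustrative):

```cpp
#include <cassert>

// Two delta times: `dt` always advances (debug camera, UI animations),
// `paused_dt` is forced to 0 while paused (gameplay).
struct Clocks {
    float dt = 0.0f;
    float paused_dt = 0.0f;
    void begin_frame(float real_dt, bool paused) {
        dt = real_dt;
        paused_dt = paused ? 0.0f : real_dt;
    }
};

// A gameplay object multiplies by paused_dt, so the pause is "baked in":
void move_player(float& x, float speed, const Clocks& c) { x += speed * c.paused_dt; }
// A UI animation multiplies by dt and keeps running while paused:
void move_ui(float& x, float speed, const Clocks& c)     { x += speed * c.dt; }
```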
It always surprised me how few games had that feature - though a few important ones, like StarCraft, did - and it only became rarer over the years.
Thank you for still prioritizing it.
There's no scenario in which that's desirable.
And yet even Rockstar gets it wrong. (GTA V has several framerate-dependent bugs.)
The main downside, which probably caused the disappearance, is that any patch to the game can make old replay files unusable. Also, at the time (not sure about Quake) games often ran at a fixed framerate; today the combination of delta-time-based frame calculation AND multithreading/multi-platform targets probably makes it harder to stay deterministic (especially for games where you want to optimize input latency).
I think if I remember right there were also funny moments where things didn't look right after patches?
Networked games have a "tickrate", just for the networking/state aspect. For example, Counter-Strike 2 has a 64Hz tickrate by default. They also typically have a fixed time interval for physics engines. Both of these should be completely independent of framerate, because that's jittery and unpredictable.
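A common way to get that independence (a sketch of the general accumulator pattern, not a claim about how Counter-Strike 2 specifically implements it) is to accumulate jittery frame times and drain them in exact fixed-size steps:

```cpp
#include <cassert>

// Fixed-timestep accumulator: the simulation advances in exact 1/64 s
// steps no matter how irregular the rendering frame times are.
const double TICK_DT = 1.0 / 64.0;  // exactly representable in binary

struct Sim {
    double accumulator = 0.0;
    long long ticks = 0;
    void frame(double frame_dt) {
        accumulator += frame_dt;
        while (accumulator >= TICK_DT) {
            accumulator -= TICK_DT;
            ++ticks;            // one deterministic simulation step
        }
    }
};
```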
The bigger problem is that floating point math isn't deterministic across compilers and platforms. So replays need to save keyframes to avoid drift.
Quake used fixed point math.
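For reference, fixed point can be sketched as integers with an implicit scale; since only integer operations are involved, results are bit-identical everywhere. (A 16.16 layout is assumed here for illustration, not a claim about Quake's exact format.)

```cpp
#include <cassert>
#include <cstdint>

// 16.16 fixed point: 16 integer bits, 16 fractional bits.
using fixed = int32_t;
const fixed FIX_ONE = 1 << 16;

fixed fix_from_int(int v)       { return (fixed)(v * FIX_ONE); }
// Widen to 64 bits before multiplying/dividing to keep the full precision.
fixed fix_mul(fixed a, fixed b) { return (fixed)(((int64_t)a * b) >> 16); }
fixed fix_div(fixed a, fixed b) { return (fixed)(((int64_t)a << 16) / b); }
```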
As a kid, I couldn't wait to see what came next. Sadly, Q1 was rather one of a kind, and it was many years until anything else like it showed up.
What's totally insane is that the modern engine rewrite Aleph One can also play back such old recordings, for M2 Durandal (1995) and Infinity (1996) at least.
This used to be a promoted feature in CS, with "HLTV/GOTV", but sadly disappeared when they moved to CS2.
Spectating in-client is such a powerful way to learn what people are doing, things you can't always see even from a recording of their perspective.
Halo 3's in-engine replay system was the high water mark of gaming for me.
Like torch flames and trees swaying in the wind.
It suggests a level of control way below what I would ordinarily consider required for game development.
I have made maybe around 50 games, and I think the required level of control over time has only ever gone up: from "move one step when I say", to "move a non-integer amount when I say", to (when network stuff comes into play) "return to time X and then move forward y amount".
A system is only correct relative to the transition system you wrote down. If the real system admits extra transitions that you care about (pause, crash, re-entry, partial commits), and you didn't model them, then you proved correctness of the wrong system.