This feels like a less-than-ideal architectural choice, if that's the case!
Sounds like each game server is independent. I wonder if anyone has done more shared-state multi-hosting? Warm up a service process, then fork it as needed, so there's some shared i-cache? Keep things like levels and hitboxes in an immutable memfd, shared with each service instance, so the d-cache can maybe be shared across instances too?
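Rough sketch of what I mean (assuming Linux; names made up, error handling elided): seal the asset pages before forking, so every instance maps the same physical, read-only pages.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Stand-in for real level geometry / hitbox data. */
        const char level_data[] = "immutable level + hitbox data";

        /* Put the assets in a memfd and seal it so nothing can ever write to it. */
        int fd = memfd_create("level-assets", MFD_ALLOW_SEALING);
        write(fd, level_data, sizeof level_data);
        fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_SEAL);

        /* Warm up once, then fork per match: children share the parent's code
           pages (i-cache) and the sealed mapping's data pages (d-cache). */
        for (int i = 0; i < 4; i++) {
            if (fork() == 0) {
                const char *assets =
                    mmap(NULL, sizeof level_data, PROT_READ, MAP_SHARED, fd, 0);
                printf("instance %d sees: %s\n", i, assets); /* server loop here */
                _exit(0);
            }
        }
        while (wait(NULL) > 0) {}
        return 0;
    }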
With Spectre/Meltdown et al, a context switch probably has to totally flush the caches nowadays? So maybe this wouldn't be enough to keep data hot, and you might need a multi-threaded rather than multi-process architecture to see shared caching wins. Obviously I dunno, but it feels like caches are shorter-lived than they used to be!
I remember being super hopeful that something like Google Stadia could open up some interesting game-architecture wins, by trying to render multiple different clients cooperatively rather than as individual client processes. Afaik nothing like that ever emerged, but it feels like there are some cool architectural wins out there, still possible.
This is one of those things that might take weeks just to _test_. Personally I suspect the speedup by merging them would be pretty minor, so I think they've made the right choice just keeping them separate.
I've found context switching to be surprisingly cheap when you only have a few hundred threads. But ultimately, there's no way to know for sure without testing it. A lot of optimization is just vibes and hypothesizing.
A "tick", or an update, is a single step forward in the game's state. UPS (as I'll call it from here) or tick rate is the frequency of those. So, 128 ticks/s == 128 updates per sec.
That's a high number. For comparison, Factorio is 60 UPS, and Minecraft is 20 UPS.
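At its core a tick rate is just a fixed-timestep loop. A minimal sketch (illustrative only, not from any real server) of a 128 UPS loop:

    #include <time.h>

    #define TICK_RATE 128
    #define NS_PER_TICK (1000000000L / TICK_RATE) /* ~7.8 ms per tick */

    static void simulate_one_tick(void) { /* advance game state one step */ }

    int main(void) {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            simulate_one_tick();
            /* Schedule each tick at a fixed interval from the previous one, so
               the simulation stays at exactly 128 updates per wall-clock second. */
            next.tv_nsec += NS_PER_TICK;
            if (next.tv_nsec >= 1000000000L) { next.tv_sec++; next.tv_nsec -= 1000000000L; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }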
At first I imagined an FPS's state would be considerably smaller, which should support a higher tick rate. But I also forgot about fog of war & visibility (Factorio, for example, just trusts the clients), and the need to animate for hitbox detection. (Though I was curious whether they're always animating players? I'd assume there's a single big rectangular bounding box or sphere per player, and animations only run once a projectile is in that range. I assume they've thought of this & it just isn't in there. But then there was the note about not animating the "buy" portion, too…)
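Something like this broad-phase gate is what I'm picturing (purely my own sketch, not anything Riot has described):

    #include <stdbool.h>

    typedef struct { float x, y, z; } vec3;

    static float dist2(vec3 a, vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    /* Coarse sphere test first; only on a hit do we pay for full pose
       evaluation and fine-grained hitbox checks. */
    bool maybe_evaluate_pose(vec3 projectile, vec3 player_center, float radius) {
        if (dist2(projectile, player_center) > radius * radius)
            return false; /* far away: skip animating this player entirely */
        /* evaluate_full_skeleton(...); fine hitbox test goes here */
        return true;
    }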
CSGO was at 64 for the standard servers and 128 for Faceit (IIRC CS2 is doing some dynamic-tick shenanigans, unless they changed back on that).
Overwatch is at 60, I think.
In practice it seems to have been an implementation nightmare because they've regularly shipped both bugs and fixes for the "sub-tick" system.
The netcode in CS2 is generally much worse than in CSGO or other Source games. The game transmits way more data each tick, and they disabled snapshot buffering by default, meaning far more players experience jank when their network inevitably drops packets.
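For anyone unfamiliar, snapshot buffering is roughly this idea (a generic sketch of the technique, not CS2's actual implementation):

    #include <stdbool.h>

    /* One snapshot of world state as received from the server. */
    typedef struct { double server_time; /* entity states ... */ } snapshot_t;

    #define BUFFER_DEPTH 2 /* hold ~2 snapshots == ~2 ticks of added latency */

    typedef struct {
        snapshot_t ring[BUFFER_DEPTH + 1];
        int count;
    } snap_buffer_t;

    /* Interpolation only starts once a couple of snapshots are queued, so a
       single dropped packet just drains the buffer by one instead of stalling
       interpolation and causing visible jank. Disabling the buffer (depth 0)
       trades that safety margin for lower latency. */
    bool can_interpolate(const snap_buffer_t *b) {
        return b->count >= BUFFER_DEPTH;
    }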
I also remember reading a few posts[0] about their new subtick system but never put two and two together. Hopefully they keep refining it.
With that being said: totally agree on the netcode.
[0]: https://old.reddit.com/r/GlobalOffensive/comments/1fwgd59/an...
I don't think its ticks per second are great, because the game is known for significant lag when more than a dozen players are in the same place shooting at things.
This was the same as BF3, but there were also some issues with server load making things worse and high-ping compensation not working great.
After much pushback from players, including some great analysis by Battle(non)sense[2] that really got traction, the devs got the green light on improving the network code and worked a long time on that. In the end they got high-tickrate servers[3][4], up to 144Hz, though I mostly played on 120Hz servers, along with a lot of other improvements.
The difference between a 120Hz server and a 30Hz was night and day for anyone who could tell the difference between the mouse and the keyboard. Problem was that by then the game was half-dead... but it was great for the 15 of us or so still playing it at that time.
[1]: https://www.reddit.com/r/battlefield_4/comments/1xtq4a/battl...
[2]: https://www.youtube.com/@BattleNonSense
[3]: https://www.reddit.com/r/battlefield_4/comments/35ci2r/120hz...
[4]: https://www.reddit.com/r/battlefield_4/comments/3my0re/high_...
Having played both of these games for years (literally, years of logged-in in-game time), most FPS games with faster tick systems generally feel pretty fluid to me, to the point where I don't think I've ever noticed the tick system acting strange in an FPS beyond extreme network issues. The technical challenges that go into making this so are incredible, as outlined in TFA.
Also, it's not just for performance reasons (I wouldn't call the BEAM VM hard real-time anyway), but also for code: your game server would usually be the client but headless (without rendering). Helps with reuse and architecture.
Erlang actually has good enough performance for many types of multiplayer games. Though you're correct that it may not cut it for fast-paced twitch shooters. Well... I'm not exactly sure about that. You can offload lots of expensive physics computation to NIFs. In my game the most expensive computation is AI pathfinding, though that never occurs on the main simulation tick; other processes run it on their own time.
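For reference, a NIF is just a C function exposed to the BEAM. A minimal sketch (module and function names made up):

    #include <erl_nif.h>

    /* The heavy numeric work runs natively; the scheduler only pays for a
       cheap call per tick. (Long-running work should use dirty schedulers.) */
    static ERL_NIF_TERM physics_step(ErlNifEnv *env, int argc,
                                     const ERL_NIF_TERM argv[]) {
        double dt;
        if (argc != 1 || !enif_get_double(env, argv[0], &dt))
            return enif_make_badarg(env);
        /* ...expensive native simulation work here... */
        return enif_make_atom(env, "ok");
    }

    static ErlNifFunc nif_funcs[] = {
        {"physics_step", 1, physics_step},
    };

    ERL_NIF_INIT(game_physics, nif_funcs, NULL, NULL, NULL, NULL)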
I was imagining some blindingly fast C or Rust on bare metal.
That UE4 code snippet is brutal on the eyes.
CoD Black Ops used/uses Erlang for most of its backend afaik. https://www.erlang-factory.com/upload/presentations/395/Erla...
I don't know why it's not more popular. Before I started the project, some people said that the BEAM VM would not cut it for performance, but that was not true. For many types of games, we aren't doing expensive computation on each tick; rather, it's just checking rules for interactions between clients plus some quick AABB + visibility checks.
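For the curious, those AABB checks are about this cheap (generic sketch):

    #include <stdbool.h>

    typedef struct { float min[3], max[3]; } aabb_t;

    /* Two axis-aligned boxes overlap iff they overlap on every axis. */
    bool aabb_overlap(const aabb_t *a, const aabb_t *b) {
        for (int i = 0; i < 3; i++)
            if (a->max[i] < b->min[i] || a->min[i] > b->max[i])
                return false;
        return true;
    }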
The other reason is that the client and the server have to be written in the same language.
This isn't true at all.
Sure, it can help to have both client and server built using the same engine or framework, but it's not a hard requirement.
Heck, the fact that you can have browser-based games when the server is written in Python is proof enough that they don't need to be the same language.
At any given time, ~50 of those games are going to be in the buy phase. Players will be purchasing equipment safely behind their spawn barriers and no shots can hurt them. We realized we don’t even need to do any server-side animation during the buy phase, we could just turn it off.
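A hedged sketch of what that could look like in the tick loop (all names hypothetical; the article doesn't show code for this):

    typedef enum { PHASE_BUY, PHASE_LIVE, PHASE_POST } match_phase_t;

    typedef struct { match_phase_t phase; /* players, world state ... */ } match_t;

    void server_tick(match_t *m) {
        /* During the buy phase nothing can be shot, so skip the per-player
           animation pass (pose evaluation for hit registration) entirely. */
        if (m->phase != PHASE_BUY) {
            /* animate_players(m); */
        }
        /* process_movement(m); process_purchases(m); ... */
    }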
That explains the current trend in "online" video games that is so annoying: for 10 minutes of play, you have to wait through 10 minutes of lobby time and forced animations, like end-of-game animations. On BO6 it kills me. You just want to play, and sometimes you don't have more than 30 minutes for a quick session, but with current games you always have to wait a very, very long time. Painfully annoying.
In Valorant (similar to Counter-Strike), you get 60 seconds at the start of each round to buy your weapons and abilities. A Valorant/CS match is typically first to 13 round wins, and that 60-second "buy" period precedes every round.
It's a deceptive way to sell people less game.
That's a dumb take. The buying phase is an integral part of the game mode. And the game is free.
If you just make a list of “performance tweaks” you might learn about in, say, a game dev blog post on the internet, and execute them without considering your application’s specific needs and considerations, you might hurt performance more than you help it.
nice.
The modern matchmaking approach groups people by skill, not latency, so you get a pretty wild mix of latencies.
It feels nothing like the old regional servers. Sure, the skill mix was varied, but at least you got your ass handed to you in crisp <10ms by actual skill. Now it's all getting knife-noscoped around a corner by a guy who rubberbanded 200ms into the next sector of the map already, while insulting your mom and wearing a unicorn skin.
Hopefully competition from Valorant and others puts more pressure on Valve to make things happen.
https://help.steampowered.com/en/faqs/view/4D81-BB44-4F5C-9B...
(Veering offtopic here) Remember that Valve helped pioneer the free-to-play business model when they made TF2 free. As Gabe Newell said in some interview long ago, they made more money from TF2 after it went F2P ("sell more hats!")
Point being, whether a game is paid or free-to-play is largely irrelevant to its profitability & engineering budget.
That said, I'm not sure why you say CS is a paid game. It is also free-to-play. Is some playable content locked behind a paywall?