For some DBs (SQL Server definitely), RAND() and similar functions are treated as if they were deterministic and so are evaluated once per use. For instance:
SELECT TOP 10 RAND() FROM sys.objects
SELECT TOP 10 RAND() FROM sys.objects
just returned ten lots of 0.680862566387624 and ten lots of 0.157039657790194.

SELECT TOP 10 RAND(), RAND(), RAND()-RAND() FROM sys.objects

returns a different value for each column (0.451758385842036, 0.0652620609942665, -0.536618123021777), so the optimisation is per use, not per statement or even per column (if it were per column, that last value would be 0, or as close to 0 as floating-point arithmetic oddities allow). This surprises a lot of people when they try “… ORDER BY RAND()” and get the same order on each run.
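For illustration only (plain Python, nothing to do with SQL Server internals), here's the difference between "evaluate once and reuse per row" and genuine per-row evaluation:

```python
import random

rng = random.Random(42)

# "Folded" evaluation: the function is called once and its result reused
# for every row, which is what SQL Server does with RAND() in a SELECT list.
folded = rng.random()
column = [folded for _ in range(10)]

# Genuine per-row evaluation: a fresh value for each of the ten rows.
per_row = [rng.random() for _ in range(10)]

print(len(set(column)), len(set(per_row)))  # 1 10
```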
One workaround for this is to use a non-deterministic function like NEWID(), though you need some extra jiggery-pokery to get a 0 ≤ v < 1 value to mimic RAND():
SELECT TOP 10 CAST(CAST(CAST(NEWID() AS VARBINARY(4)) AS BIGINT) AS FLOAT)/(4.0*1024*1024*1024) FROM sys.objects
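What that cast chain is doing, sketched in Python (uuid4 standing in for NEWID(); an analogue of the idea, not byte-for-byte identical to SQL Server's cast semantics):

```python
import uuid

def uuid_rand() -> float:
    # First 4 bytes of a random UUID as an unsigned 32-bit integer,
    # divided by 2**32 (the 4.0*1024*1024*1024 above) to land in [0, 1).
    raw = int.from_bytes(uuid.uuid4().bytes[:4], "big")
    return raw / 2**32

values = [uuid_rand() for _ in range(10)]
print(values)  # ten different values, each in [0, 1)
```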
For the sorting example, the outer cast is not needed. You might think just using “ORDER BY NEWID()” would be sufficient, but that is undefined behaviour, so you shouldn't rely upon it. It might work now (a quick test has just worked as expected here), but at any point the optimiser could decide it is more efficient to consider all UUIDs as having the same weight for sorting purposes.

You know it gets wild when you read "... Here's the core of the raycasting algorithm in SQL"!
You can play it here: https://patricktrainer.github.io/duckdb-doom/
Pressing “L” enables (very) verbose logging in the dev console and prints much of the SQL being executed.
It really is magic!
You can check it out here.
This is a tricky one when writing games using async APIs. The game I've been working on is written in C# but I occasionally hit the same issue when game code ends up needing async, where I have to carefully ensure that I don't kick off two asynchronous operations at once if they're going to interact with the same game state. In the old days all the APIs you're using would have been synchronous, but these days lots of libraries use async/await and/or promises and it kind of infects all the code around it.
It does depend on the sort of game you're building, though. Some games naturally have a single 'main loop' you spend most of your time in, e.g. Doom, where you spend all your time either navigating the game world or looking at a pause/end-of-stage menu. In that case you can basically just have an is_menu_open bool in your update and draw routines, and if you load all your assets during your loading screen(s), nothing ever needs to be async.
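A minimal sketch of that single-main-loop shape (hypothetical class and method names, not from the commenter's actual game):

```python
class Game:
    def __init__(self) -> None:
        self.is_menu_open = False
        self.log: list[str] = []

    def update(self) -> None:
        # The one decision per frame: tick the menu or tick the world.
        if self.is_menu_open:
            self.log.append("menu")
        else:
            self.log.append("world")

    def run(self, frames: int) -> None:
        for _ in range(frames):
            self.update()
            # draw() would go here; with all assets loaded during the
            # loading screen, nothing inside this loop ever awaits.

game = Game()
game.run(2)
game.is_menu_open = True   # e.g. the player hit Escape
game.run(2)
print(game.log)  # ['world', 'world', 'menu', 'menu']
```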
Other games are more modal and might have a dozen different menus/scenes (if not hundreds), e.g. something like Skyrim. And sometimes you have modals that can appear in multiple scenarios, like a settings menu, so you need to be able to start a modal loop in different contexts. You might have the player in a conversation with an NPC; during the conversation you show a popup menu asking them to choose what to say, and while that menu is open they decide they want to consult the conversation log, so you're opening a modal on top of a modal, and any modal might need to load some assets asynchronously before it appears...
In the old days you could solve a lot of this by starting a new main loop inside of the current one that would exit when the modal went away. Win32 modal dialogs work this way, for example (which can cause unpleasant re-entrant execution surprises if you trigger a modal in the wrong place). I'm still uncertain whether async/await is a good modern replacement for it.
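A toy sketch of that nested-loop pattern (hypothetical names; real Win32 modal dialogs pump window messages, which this elides). Each modal re-enters the loop, and control only returns to the outer loop when the inner one exits, which is also exactly where the re-entrancy surprises come from:

```python
def run_loop(scene: str, events) -> list[str]:
    # Runs until this scene is closed. Opening a modal starts another
    # run_loop re-entrantly; we only resume here once it returns.
    trace = []
    while True:
        event = next(events)
        if event == "close":
            trace.append(f"{scene}:close")
            return trace
        if event == "open-modal":
            trace.extend(run_loop(f"{scene}/modal", events))
        else:
            trace.append(f"{scene}:{event}")

events = iter(["tick", "open-modal", "tick", "close", "close"])
print(run_loop("conversation", events))
# ['conversation:tick', 'conversation/modal:tick',
#  'conversation/modal:close', 'conversation:close']
```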
mritchie712•8h ago
Queries that normally take 1s to 2s can run in 25ms, so you get under the "100ms rule", which is very uncommon in analytics applications.
We run DuckDB server-side and have experimental support for DuckDB-WASM on the client side at https://www.definite.app/. Sometimes I don't trust that a query actually ran because of how fast it can be (we need some UX work there).
esafak•7h ago
randomtoast•7h ago
jasonjmcghee•6h ago
GP comment is (seemingly) describing keeping an entirely client side instance (data stored locally / in memory) snapshot of the back-end database.
Parent comment is asking how the two are kept in sync.
It's hard to believe it would be the method you're describing and take 25ms.
If you're doing HTTP range requests, that suggests you're reading from a file, which means object storage or disk.
I have to assume something gets triggered when the back end updates to tell the client to refresh its instance (which could just be telling it to execute some SQL to fetch the new / updated information it needs).
Or the data is entirely in memory on the back end, in an in-memory DuckDB instance with the latest data, and just needs to be retrieved / returned from memory.
immibis•4h ago
mritchie712•4h ago
1. user writes a `select` statement that returns 20k records. We cache the 20k.
2. user can now query the results of #1
We're also working on more complex cases (e.g. caching frequently used tables).
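A toy version of that two-step flow, using the stdlib's sqlite3 as a stand-in for DuckDB (hypothetical class and table names, not Definite's actual code):

```python
import sqlite3

class ResultCache:
    """Step 1: cache the rows a select returned. Step 2: serve
    follow-up queries from the cached copy, no backend round trip."""

    def __init__(self) -> None:
        self.local = sqlite3.connect(":memory:")

    def cache_select(self, name: str, columns: list[str], rows: list[tuple]) -> None:
        # Materialise the select's result set as a local table.
        placeholders = ", ".join("?" for _ in columns)
        self.local.execute(f"CREATE TABLE {name} ({', '.join(columns)})")
        self.local.executemany(f"INSERT INTO {name} VALUES ({placeholders})", rows)

    def query(self, sql: str) -> list[tuple]:
        # Follow-up queries run against the local copy, which is where
        # the sub-100ms feel comes from.
        return self.local.execute(sql).fetchall()

cache = ResultCache()
cache.cache_select("orders", ["id", "amount"], [(1, 120), (2, 80), (3, 200)])
print(cache.query("SELECT COUNT(*), SUM(amount) FROM orders"))  # [(3, 400)]
```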