This is a great breakdown — especially the clean separation between what the engine does and what the host/runtime is responsible for.
One thing this made me think about is how deterministic everything here is at the runtime level, but once you start layering AI into systems on top of it, that property kind of disappears.
Same input, same system → different models can give subtly different outputs, sometimes meaningfully so, depending on phrasing or interpretation.
Feels like there’s a missing layer somewhere between execution and application logic that handles “what do we do when outputs don’t actually agree?”
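To make the idea concrete, here's a minimal sketch of what that layer might look like, assuming each "model" is just a callable from prompt to string (the stub models, `normalize` heuristic, and quorum threshold are all my own toy choices, not anything from a real system):

```python
from collections import Counter
from typing import Callable

# Hypothetical: each model is a callable prompt -> str.
# In practice these would wrap real model API calls.
Model = Callable[[str], str]

def normalize(output: str) -> str:
    """Canonicalize outputs so trivial phrasing differences don't count as disagreement."""
    return " ".join(output.lower().split())

def reconcile(models: list[Model], prompt: str, quorum: float = 0.5) -> tuple[str, bool]:
    """Run all models and majority-vote over normalized outputs.

    Returns (answer, agreed); agreed is False when no output reaches the
    quorum fraction -- the case that today gets handled ad hoc upstream.
    """
    outputs = [normalize(m(prompt)) for m in models]
    winner, count = Counter(outputs).most_common(1)[0]
    return winner, count / len(outputs) >= quorum

# Toy demo with stub models standing in for real ones.
models = [lambda p: "Paris", lambda p: "paris", lambda p: "Lyon"]
answer, agreed = reconcile(models, "Capital of France?")
# answer == "paris", agreed is True (2 of 3 outputs match)
```

Obviously real disagreement is semantic, not string-level, so `normalize` would need to be something much smarter — but the shape of the layer (fan out, canonicalize, vote, surface non-consensus explicitly) is the part that feels like it should live below application logic.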
Curious if anyone building lower-level runtimes or infra has run into that, or if it’s still mostly being handled ad hoc at the application layer.
benh2477•1h ago