Love the 100% client-side approach. I took a similar path with a project of mine - keeping everything in the browser means zero privacy concerns and instant load times. How are you handling the threshold calculation logic? Curious if you considered using Web Workers for heavier simulations.
vigneshj•1h ago
Thanks! Client-side is the only way that made sense for this - I wouldn't upload production metrics to a random SaaS either.
Great question on Web Workers - yes, I'm using them! The simulation engine has three modes (rough sketch of how one gets picked just below the list):
1. *Web Worker* (default for large files): Runs threshold evaluation off-thread. UI stays completely responsive. This is what kicks in for your scenario with millions of rows.
2. *Chunked processing* (fallback): Processes data in batches with setTimeout between chunks if Workers aren't available. Slower but still keeps UI alive.
3. *Synchronous* (small files only): Direct processing for datasets where the overhead of spawning a worker isn't worth it.
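In rough TypeScript terms, the mode selection plus the chunked fallback look something like this - a simplified sketch, so the names, cutoffs, and types are illustrative rather than lifted from the actual code:

```typescript
// Illustrative sketch only: Row/AlertEvent shapes, cutoffs, and function names are made up.
type Row = { ts: number; value: number };
type AlertEvent = { ts: number; severity: 'warn' | 'crit' | 'emrg' };
type Mode = 'worker' | 'chunked' | 'sync';

const SYNC_LIMIT = 10_000; // assumed cutoff: below this, spawning a worker costs more than it saves
const CHUNK_SIZE = 5_000;  // assumed batch size for the setTimeout fallback

// Placeholder per-row check; the operator-driven version is sketched further down.
function evaluateRow(row: Row): AlertEvent | null {
  return row.value > 90 ? { ts: row.ts, severity: 'crit' } : null;
}

function pickMode(rowCount: number): Mode {
  if (rowCount <= SYNC_LIMIT) return 'sync';                   // mode 3: small files
  return typeof Worker !== 'undefined' ? 'worker' : 'chunked'; // mode 1 or mode 2
}

// Mode 2: process CHUNK_SIZE rows, then yield with setTimeout(0) so the UI can repaint.
function runChunked(
  rows: Row[],
  onProgress: (done: number, total: number) => void,
): Promise<AlertEvent[]> {
  return new Promise((resolve) => {
    const events: AlertEvent[] = [];
    let i = 0;
    const step = () => {
      const end = Math.min(i + CHUNK_SIZE, rows.length);
      for (; i < end; i++) {
        const hit = evaluateRow(rows[i]);
        if (hit) events.push(hit);
      }
      onProgress(i, rows.length);
      if (i < rows.length) setTimeout(step, 0);
      else resolve(events);
    };
    step();
  });
}
```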
The threshold logic itself is pretty straightforward - for each data point, check the value against the warn/crit/emrg thresholds based on the operator (>, <, >=, etc.). The Worker handles iterating through potentially millions of rows and building the alert timeline.
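The per-point check is basically this shape (again simplified - the `Thresholds` fields and the operator set here are illustrative):

```typescript
// Sketch of the operator-driven threshold check; field names are illustrative.
type Operator = '>' | '>=' | '<' | '<=' | '==' | '!=';
type Severity = 'warn' | 'crit' | 'emrg';

interface Thresholds {
  operator: Operator;
  warn: number;
  crit: number;
  emrg: number;
}

function compare(value: number, op: Operator, limit: number): boolean {
  switch (op) {
    case '>':  return value > limit;
    case '>=': return value >= limit;
    case '<':  return value < limit;
    case '<=': return value <= limit;
    case '==': return value === limit;
    case '!=': return value !== limit;
    default:
      // Unreachable for the Operator union above; keeps the compiler happy.
      throw new Error(`unknown operator: ${op}`);
  }
}

// Check the most severe level first so a point that breaches emrg isn't reported as warn.
function evaluatePoint(value: number, t: Thresholds): Severity | null {
  if (compare(value, t.operator, t.emrg)) return 'emrg';
  if (compare(value, t.operator, t.crit)) return 'crit';
  if (compare(value, t.operator, t.warn)) return 'warn';
  return null;
}
```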
Progress updates come back from the Worker every ~1000 rows so users see the progress bar move in real-time.
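The worker side is just a single pass with periodic postMessage calls, roughly like this (message shapes simplified for illustration; assumes a dedicated worker context):

```typescript
// simulate.worker.ts (sketch): one pass over the rows, posting a progress message
// roughly every 1000 rows and the finished alert timeline at the end.
// Message shapes and field names are illustrative, not the app's real protocol.
type Row = { ts: number; value: number };
type Severity = 'warn' | 'crit' | 'emrg';
type AlertEvent = { ts: number; severity: Severity };

interface SimulateRequest {
  rows: Row[];
  warn: number;
  crit: number;
  emrg: number;
}

const PROGRESS_EVERY = 1000; // rows between progress messages

self.onmessage = (e: MessageEvent<SimulateRequest>) => {
  const { rows, warn, crit, emrg } = e.data;
  const timeline: AlertEvent[] = [];

  for (let i = 0; i < rows.length; i++) {
    const v = rows[i].value;
    // Simplified '>' check here; the real check is operator-driven (see previous sketch).
    const severity: Severity | null = v > emrg ? 'emrg' : v > crit ? 'crit' : v > warn ? 'warn' : null;
    if (severity) timeline.push({ ts: rows[i].ts, severity });

    if (i % PROGRESS_EVERY === 0) {
      self.postMessage({ type: 'progress', done: i, total: rows.length });
    }
  }

  self.postMessage({ type: 'done', timeline });
};
```

The main thread just listens for the 'progress' messages to drive the progress bar and renders the alert timeline once the 'done' message arrives.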