Minimal reproducibility layer for ML - config + code + runtime fingerprints.
Comments
drbt•1h ago
Hi HN. I built Ato because I wanted a layer that works wherever I attach it, and tells me why things changed when they do.
I'm bad at rituals (perfect logging, tidy checkpoints). Ato still gives me answers by fingerprinting three pieces (sketched below):
- Config structure: architecture-level changes, not just values
- Code logic: function-bytecode fingerprints, not commit IDs
- Runtime outputs: detects silent drift even when code "didn't change"
No dashboards, no servers; just SQLite and a CLI. It sits beside Hydra/argparse/MLflow/W&B rather than replacing them.
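To make that concrete, here's a rough plain-Python sketch of each fingerprint. This is simplified for illustration, not the code Ato actually runs:

    # Simplified sketches of the three fingerprints (illustrative only).
    import dis
    import hashlib
    import json

    def config_structure_fingerprint(cfg):
        # Hash the nested key/type skeleton, ignoring values: changing
        # lr=0.1 -> 0.3 keeps the hash, while adding a key changes it.
        def skeleton(node):
            if isinstance(node, dict):
                return {k: skeleton(v) for k, v in node.items()}
            return type(node).__name__
        blob = json.dumps(skeleton(cfg), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def code_logic_fingerprint(fn):
        # Hash the disassembled bytecode: comments, formatting, and a new
        # commit ID don't move it, but an actual logic edit does.
        ops = "\n".join(f"{i.opname} {i.argrepr}"
                        for i in dis.get_instructions(fn))
        return hashlib.sha256(ops.encode()).hexdigest()[:12]

    def runtime_output_fingerprint(values):
        # Hash rounded numeric outputs so silent drift shows up even when
        # the config and code fingerprints are identical.
        blob = json.dumps([round(float(v), 6) for v in values]).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

The real versions handle more cases, but the payoff is the same: asking why two runs diverged reduces to diffing three short hashes.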
If your runs sometimes diverge or fail and you just want "why" without adopting a platform, I'd love your feedback.
Thanks for checking this out!
drbt•1h ago
I've been using Ato personally for a while (it's at 2.x; ~100 tests must pass before release).
If anything is unclear, I'm happy to clarify or add examples. Curious what edge cases you'd try.
(English isn't my first language and this is my first Show HN - if something reads oddly, please correct me. Thanks!)