- Works within your OSS stack: PyTorch, Hugging Face TRL/PEFT, and MLflow.
- Hyperparallel search: launch as many configs as you want at once, even on a single GPU (a minimal sketch follows this list).
- Dynamic real-time control: stop laggards, resume them later to revisit, and branch promising configs in flight.
- Deterministic eval + run tracking: metric curves are plotted automatically and are directly comparable across runs.
- Apache License v2.0: no vendor lock-in. Develop in your IDE, launch from the CLI.
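
To make the hyperparallel-search idea concrete, here is a minimal sketch of defining a small grid of TRL/PEFT configurations to sweep over together. The `peft.LoraConfig` and `trl.SFTConfig` calls are real Hugging Face APIs; the final hand-off to RapidFire AI (the commented `run_experiment` call) is an assumption for illustration only, so please consult the docs linked below for the actual interface.

```python
# Minimal sketch: building a small grid of TRL/PEFT configs to sweep in parallel.
# LoraConfig and SFTConfig are real Hugging Face PEFT/TRL classes; the hand-off
# to RapidFire AI at the end is hypothetical pseudocode, not the library's API.
from itertools import product

from peft import LoraConfig
from trl import SFTConfig

# Cross-product of LoRA ranks and learning rates -> 4 candidate configs.
lora_ranks = [8, 16]
learning_rates = [1e-4, 2e-4]

configs = []
for rank, lr in product(lora_ranks, learning_rates):
    configs.append(
        {
            "peft_config": LoraConfig(r=rank, lora_alpha=2 * rank, task_type="CAUSAL_LM"),
            "train_config": SFTConfig(
                output_dir=f"runs/lora{rank}-lr{lr}",
                learning_rate=lr,
                per_device_train_batch_size=4,
                max_steps=200,
            ),
        }
    )

# Hypothetical hand-off: RapidFire AI would run these configs together, even on a
# single GPU, and let you stop, resume, or branch them in flight.
# run_experiment(configs)  # placeholder -- see https://oss-docs.rapidfire.ai/ for the real API
```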
Repo: https://github.com/RapidFireAI/rapidfireai/
PyPI: https://pypi.org/project/rapidfireai/
Docs: https://oss-docs.rapidfire.ai/
We hope you enjoy the power of rapid experimentation with RapidFire AI for your LLM customization projects! We’d love to hear your feedback, both positive and negative, on the UX and UI, the API, any rough edges, and what integrations and extensions you’d be excited to see.