Given that GPT-5 reportedly cost $100 million to train, being able to create one (even a terrible one) for $100 shows how the field keeps marching on.
Once you have your fine-tuned model, you're no longer paying OpenAI to use it, but it still needs to run somewhere, and those somewheres range in quality and price. Models come in various shapes and sizes, and the bigger the model, the beefier (and more expensive to rent) the machine you need to run this SaaS business.
If you want to teach an LLM to answer questions about private documents, look into RAG (retrieval-augmented generation) or agentic search - techniques where the LLM takes a user's question and looks up additional information by searching your documents before answering.
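To make that concrete, here's a rough sketch of the RAG idea: embed the documents, pick the chunks most similar to the question, and put them in the prompt. Everything in it (the embedding model, the toy documents, the prompt format) is an illustrative assumption, not code from any particular project:

    # Minimal RAG sketch: embed document chunks, find the ones most similar
    # to the question, and stuff them into the prompt before asking the LLM.
    # The embedding model and prompt format are illustrative choices only.
    import numpy as np
    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    docs = [
        "Invoices are stored in the /finance/2024 share.",
        "The VPN config lives in the IT wiki under 'remote access'.",
        "Holiday requests go through the HR portal, not email.",
    ]
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    question = "Where do I find last year's invoices?"
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]

    # Cosine similarity; vectors are normalized, so a dot product suffices.
    scores = doc_vecs @ q_vec
    top = [docs[i] for i in np.argsort(scores)[::-1][:2]]

    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n".join(top) + "\n\n"
        f"Question: {question}\nAnswer:"
    )
    print(prompt)  # feed this to whatever local or hosted LLM you're using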
The good news is that these tricks work reasonably well with small models that you can run on your own hardware - even a 4B or 8B model (a few GB to download) can often handle these cases.
But... even then, it's still usually cheaper to pay for the APIs from OpenAI and the like. Their per-token prices are so low that it's hard to save money by running your own model somewhere: you pay to keep it in RAM around the clock, while OpenAI spreads that cost across thousands of users.
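A quick back-of-the-envelope calculation shows why. All the numbers below are made-up round figures, so plug in your actual GPU rental rate, API price, and traffic:

    # Back-of-the-envelope cost comparison (all numbers are illustrative
    # assumptions, not real quotes - substitute your own rates).
    gpu_rent_per_hour = 1.50          # renting a GPU that keeps the model in VRAM 24/7
    hours_per_month = 24 * 30
    self_hosted_monthly = gpu_rent_per_hour * hours_per_month   # ~$1080/month, even when idle

    api_price_per_million_tokens = 1.00   # hypothetical blended input+output price
    tokens_per_request = 2_000
    requests_per_month = 100_000
    api_monthly = api_price_per_million_tokens * tokens_per_request * requests_per_month / 1_000_000

    print(f"self-hosted: ~${self_hosted_monthly:.0f}/month")
    print(f"API:         ~${api_monthly:.0f}/month")
    # With these made-up numbers the API is roughly 5x cheaper unless the box
    # is busy most of the time - which is the point about paying for idle RAM.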
Tepix•3mo ago
From the readme:
All code will run just fine on even a single GPU by omitting torchrun, and will produce ~identical results (code will automatically switch to gradient accumulation), but you'll have to wait 8 times longer. If your GPU(s) have less than 80GB, you'll have to tune some of the hyperparameters or you will OOM / run out of VRAM. Look for --device_batch_size in the scripts and reduce it until things fit. E.g. from 32 (default) to 16, 8, 4, 2, or even 1. Less than that you'll have to know a bit more what you're doing and get more creative.
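For anyone unfamiliar with the trick the readme is describing: gradient accumulation keeps the effective batch size the same while shrinking the per-device batch, by summing gradients over several small forward/backward passes before each optimizer step. A stripped-down PyTorch sketch of the idea - the model, data, and batch sizes are placeholders, not nanochat's actual training loop:

    # Gradient accumulation sketch: an effective batch of 32 built from
    # micro-batches of 8, so peak VRAM only has to hold 8 samples at a time.
    # Everything here is a toy stand-in, not nanochat's real training code.
    import torch

    target_batch_size = 32     # what --device_batch_size=32 would process in one go
    device_batch_size = 8      # what actually fits in VRAM
    accum_steps = target_batch_size // device_batch_size

    model = torch.nn.Linear(512, 512)            # placeholder model
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(10):                       # placeholder outer loop
        opt.zero_grad()
        for _ in range(accum_steps):
            x = torch.randn(device_batch_size, 512)   # placeholder micro-batch
            loss = model(x).pow(2).mean()
            # Scale so the accumulated gradient matches one big batch of 32.
            (loss / accum_steps).backward()
        opt.step()

That's why the results are ~identical to the multi-GPU run: the optimizer sees the same gradient per step, it just takes longer to assemble it.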