One friction point I keep running into is how to handle logging and evaluation of my models. Right now I'm using a Jupyter notebook: I train the model, then produce a few graphs of different metrics on the test set.
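To make that concrete, here's roughly what a run looks like, sketched with scikit-learn as a stand-in for whatever model is actually being trained (the dataset, model, and `runs/` layout are all hypothetical), plus the one tweak I've been experimenting with: writing metrics to a per-run file instead of only plotting them, so runs stay comparable after the notebook kernel dies.

```python
import json
import time
from pathlib import Path

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Stand-in for the real training step: small dataset, simple model.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Instead of only producing graphs, also dump test-set metrics to a
# timestamped per-run directory so later runs can be compared.
run_dir = Path("runs") / time.strftime("%Y%m%d-%H%M%S")
run_dir.mkdir(parents=True, exist_ok=True)

preds = model.predict(X_test)
metrics = {
    "accuracy": accuracy_score(y_test, preds),
    "f1": f1_score(y_test, preds),
}
(run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
print(metrics)
```

The graphs can still come from the notebook; the point is that the numbers behind them land somewhere durable.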
This whole workflow seems to be the standard among the folks in my program, but I can't shake the feeling that it's vibes-based and suboptimal.
I've got a few projects coming up and I want to use them as a chance to improve my approach to training models. What method works for you? Are there any articles or libraries you'd recommend? What do you wish junior engineers knew about this?
Thanks!
calepayson•1h ago
I'm hoping the text editor + project directory approach helps force ML projects away from a single file and towards some sort of codified project structure. Sometimes it just feels like there's too much information in one file, and it becomes hard to mentally assign each piece to a location (a bit like reading a tough book as a physical copy vs. a Kindle copy). Any advice or thoughts on this would be appreciated!