The only thing missing is for the agents to publish and peer-review their research.
ting0•10m ago
That's a great idea.
AlexCoventry•1h ago
Wow, Gemini suggested a very similar experiment to me yesterday. Guess I know where it got the idea from, now. :-)
lostmsu•39m ago
Non-zero based chart makes it look like it was very successful.
kubb•7m ago
He's burning Claude tokens to slightly improve his tiny and not very capable LLM? It's fun, I bet, but wake me up when it leads to a research breakthrough.
abeppu•6m ago
But the experiments it did that "improved" validation BPB in the GH screenshot were all basically hyperparameter changes, right? So is this better or worse, either per experiment or per unit time, than hyperparameter tuning techniques that don't involve an LLM? It's not clear from this whether the LLM is more or less making random changes that sometimes work, or whether the LLM's thinking actually finds "good" changes because of what it has internalized.
E.g. how does this compare to a hyperparameter tuning pass with e.g. BayesOpt that does the same number of 5-min training experiments?
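To make the comparison concrete, here's a minimal sketch of what a same-budget baseline would look like. Everything here is made up for illustration: `run_experiment` is a hypothetical stand-in for one 5-min training run (returning a fake validation BPB), and plain random search stands in for BayesOpt as the simplest no-LLM tuner. The point is only the protocol: same number of experiments, compare the best score found.

```python
import random

# Hypothetical stand-in for one 5-minute training run: maps a
# hyperparameter setting to a validation BPB score. The quadratic
# form below is invented purely for illustration.
def run_experiment(lr, batch_size):
    return (lr - 3e-4) ** 2 * 1e6 + abs(batch_size - 64) * 0.001 + 1.2

def random_search(n_trials, seed=0):
    """No-LLM baseline: spend the same budget of n_trials
    experiments on randomly sampled hyperparameters and report
    the best validation BPB found."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -2)           # log-uniform learning rate
        bs = rng.choice([16, 32, 64, 128, 256])  # batch-size grid
        best = min(best, run_experiment(lr, bs))
    return best

# Same budget the agent used: N experiments, best score wins.
print(random_search(n_trials=20))
```

Swapping `random_search` for a real Bayesian optimizer (or for the LLM agent's proposals) under the same `n_trials` budget is the apples-to-apples comparison being asked for.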