What if new proofs are included in LLM training so the LLM "rediscovers" them?
2•folderquestion•1h ago
If I wanted to sell LLMs as powerful research agents, and I had enough money, I could consider planting little "gems" into the training set so that my model would later appear to discover new theorems and proofs. There is a lot of money on the table, and I am sure there are plenty of brilliant people with little pay. Is this kind of thinking wrong? Would only bad actors think this way? And how could one detect such a trick without knowing the training set?
Comments
gostsamo•36m ago
Test only on data generated after the training has ended.
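A minimal sketch of that evaluation rule, assuming the model's training cutoff date is known and each benchmark problem carries a creation date (the item names and dates here are hypothetical):

```python
from datetime import date

# Assumed: the model card states when its training data ends.
TRAINING_CUTOFF = date(2024, 6, 1)

# Hypothetical benchmark items: (problem_id, date_created).
problems = [
    ("old_conjecture_proof", date(2023, 11, 2)),   # before cutoff: could be a planted "gem"
    ("competition_2025_q3", date(2025, 3, 14)),    # after cutoff: cannot be memorized
]

def contamination_safe(items, cutoff):
    """Keep only problems created after the training cutoff, so a
    correct answer cannot come from memorized training data."""
    return [pid for pid, created in items if created > cutoff]

print(contamination_safe(problems, TRAINING_CUTOFF))
# → ['competition_2025_q3']
```

The design choice is that provenance (a trustworthy creation date) replaces any need to inspect the training set itself.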