tsurg_dot_com•1h ago
This recent paper from Fudan University is a highly relevant read given the current industry focus on RL for LLMs (e.g., GRPO). The authors investigate a very practical question: do the improvements from reinforcement fine-tuning (RFT) actually generalize beyond the training distribution when applied to multi-turn agents?