We’re a small team of CS researchers and engineers launching an early version of CSPaper, a tool that provides AI-generated conference-style feedback for computer science papers.
You upload your arXiv or PDF draft, choose a target conference (like ICML, NeurIPS, ICLR, KDD, etc.), and within about a minute receive structured feedback written in the style of a real conference review. The goal is to help researchers identify weaknesses and iterate faster before submitting to actual venues.
We’re particularly interested in feedback from two groups:
Authors who want to test how useful this is before submission
Reviewers who can help us assess whether this could become a useful aid (or even a problem!) in the peer review process
This is an early version, so it's far from perfect. We expect rough edges and appreciate honest, critical feedback. We'd love to hear whether this kind of tool is actually helpful, how it could be misused, and where it needs to improve. You can try it at https://review.cspaper.org (more info at https://cspaper.org/topic/89/introducing-cspaper-reviews-get... ).
We’re not charging anything during this early stage. If you do try it, please let us know what you think—we’re here to learn and iterate.
Thanks!
CSPaper Team