The dataset: 16K human posts from Reddit, Hacker News, and Yelp, each paired with AI generations from 6 models across two providers (Anthropic and OpenAI) at three capability tiers. Same prompt, length-matched, no adversarial coaching — just the model’s natural voice with platform context. Every vote is logged with model, tier, source, response time, and position.
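Concretely, a single logged vote looks roughly like the record below. This is a minimal sketch in Python; the field names and value formats are my illustrative shorthand, not the dataset's actual schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical shape of one logged vote. Field names are illustrative
# placeholders, not the real column names in the released dataset.
@dataclass
class Vote:
    pair_id: str           # which human/AI pair was shown
    source: str            # "reddit" | "hackernews" | "yelp"
    model: str             # which of the 6 models generated the AI side
    tier: str              # capability tier, e.g. "small" | "mid" | "large"
    correct: bool          # whether the player's guess was right
    response_time_ms: int  # how long the player deliberated
    position: str          # "left" | "right", to control for position bias

vote = Vote("pair-001", "hackernews", "model-a", "large", True, 4200, "left")
print(asdict(vote)["correct"])  # → True
```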
Early findings from testing: Reddit posts are easy to spot (humans there are too casual for AI to mimic convincingly), while HN is significantly harder.
I'll release the full dataset on HuggingFace, and if this crowdsourced study collects enough votes, I'll publish a paper.
If you play the HN-only mode, you’re helping calibrate how detectable AI is on here specifically.
Would love feedback on the pairs — are any trivially obvious? Are some genuinely hard?