Each day, you answer 1–3 short civic questions. It takes under a minute. Responses are anonymous, and results are shown only in aggregate. Over time, those aggregates accumulate and remain open for anyone to explore or analyze.
This is not scientific polling. Samples are self-selected, nuance gets flattened, and the results shouldn’t be treated as population-level truth. That’s an intentional trade-off. The goal isn’t methodological purity, but making participation easy and safe enough that more people are willing to show up at all.
What I’m trying to test: 1) Does anonymity actually lower friction for people who otherwise wouldn’t participate? 2) Does reducing effort change who participates, not just how often? 3) Can imperfect but longitudinal data still function as signals when viewed over time or alongside other datasets?
If this works, the aspiration is twofold. Primary: Reduce the psychological and practical cost of civic participation to near zero. Secondary: Build an open, cumulative dataset that researchers, journalists, educators, or other builders can apply data science to, with clear caveats about what it is and isn’t.
There is intentionally no debate, no feed, and no identity performance. The product is opinion capture and aggregation, not discourse (the internet has plenty of that already).
This is early, and I’m actively looking for beta users willing to try it out and help poke holes in it, especially around anonymity, abuse vectors, data quality, and interpretation.
Blunt feedback welcome. Gentle feedback also accepted.