There would be no user voting at all. Instead, one AI handles every upvote and downvote according to guidance written by the subreddit moderator(s).
For example, it might assign 20 votes to one post and -5 votes to another. (Of course, this would require Reddit to add support for such voting AIs.)
The key part is that the voting guidance is public. Anyone can read the rules that explain how the AI is supposed to vote. For example, the AI might be instructed to reward originality, clarity, kindness, strong evidence, or creative thinking, and to downvote low effort posts, repetition, hostility, or bad faith arguments.
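Mechanically, this could be quite simple. Here is a minimal sketch of the idea: a public rubric maps each criterion to a vote weight, and the AI's per-criterion judgments are summed into a single score. The criterion names, weights, and the keyword-based `rate_criteria` stub are all illustrative assumptions; a real system would replace the stub with actual model calls.

```python
# Public voting guidance: each criterion carries a vote weight.
# These names and weights are illustrative, not a real subreddit's rules.
GUIDANCE = {
    "originality": 5,
    "clarity": 5,
    "kindness": 3,
    "strong_evidence": 7,
    "low_effort": -5,
    "hostility": -8,
}

def rate_criteria(post_text: str) -> dict[str, bool]:
    """Stand-in for the AI's judgment.

    In a real system this would be one or more model calls returning,
    per criterion, whether the post exhibits it. Trivial keyword checks
    are used here purely so the sketch runs end to end.
    """
    lowered = post_text.lower()
    return {
        "originality": "my take" in lowered or "i think" in lowered,
        "clarity": len(post_text.split()) >= 20,
        "kindness": "thanks" in lowered,
        "strong_evidence": "source:" in lowered or "http" in lowered,
        "low_effort": len(post_text.split()) < 5,
        "hostility": "idiot" in lowered,
    }

def ai_vote(post_text: str) -> int:
    """Sum the weights of every criterion the post triggers."""
    verdicts = rate_criteria(post_text)
    return sum(GUIDANCE[name] for name, hit in verdicts.items() if hit)
```

Because the rubric is a plain, published mapping, anyone can recompute a post's score and argue about the weights rather than about individual votes.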
Why this could be interesting:
* It removes mob dynamics, karma farming, and timing effects. Visibility depends on meeting the stated values, not popularity.
* The subreddit develops a very coherent culture. People learn how to write for the AI rather than reminding other humans to “read the rules.”
* Posting becomes a kind of skill. You are not chasing vibes; you are demonstrating that you understood and followed the principles.
* The advice itself becomes part of the experiment. Users can debate whether the AI’s guidance is good, flawed, biased, or incomplete.
Moderators could update the guidance over time and keep a changelog explaining why priorities shifted. There could even be meta threads where users suggest amendments, even if mods keep final control.
What do you think of this idea?
andsoitis•2h ago
Voting is the process of making collective decisions by means of submitting and then adding up individual choices.
andsoitis•1h ago
But if we ignore semantics for a moment, yours is a testable hypothesis.
> reward originality, clarity, kindness, strong evidence, or creative thinking, and to downvote low effort posts, repetition, hostility, or bad faith arguments.
However, I think there are better ways to improve contributions than removing other humans' ability to pass explicit judgment on a post without having to write something themselves.
For instance, perhaps the UI where you compose your post could offer real-time evaluation and suggestions for improvement (e.g. pointing out snark, personal attacks, etc.). That gives the poster the opportunity to make a different decision about what to write.
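That pre-submit check could be a small pure function the composer UI calls on every draft. This is a hypothetical sketch: a production version would call a moderation or LLM endpoint, and the keyword rules below are placeholders so the example is self-contained.

```python
def presubmit_feedback(draft: str) -> list[str]:
    """Hypothetical real-time draft checker for a post composer.

    Returns human-readable suggestions instead of a score, so the
    poster can revise before submitting. Keyword heuristics stand in
    for what would really be a moderation/LLM call.
    """
    suggestions = []
    lowered = draft.lower()
    if any(word in lowered for word in ("idiot", "stupid", "clueless")):
        suggestions.append("Possible personal attack: consider rephrasing.")
    if len(draft.split()) < 5:
        suggestions.append("Very short post: adding context may help readers.")
    if "obviously" in lowered or "everyone knows" in lowered:
        suggestions.append("Possible snark or dismissiveness: soften the claim?")
    return suggestions
```

The key design difference from the AI-voting model: feedback arrives before publication and is advisory, so the human keeps the final decision.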
One trap worth considering with your model: if the AI gets things wrong (e.g. scores you negatively because it judges you unkind, or thinks your rebuttal is insufficiently substantiated), participants will find it very frustrating, and they will blame the board itself rather than other users (who are free to disagree).