I built a web-based quiz that uses humor and psychological profiling to introduce a serious idea:
Should everyone have equal access to powerful AI models?
The app assigns users to one of several AI Trust Tiers based on how they respond to quirky but ethically themed scenarios. While the tone is lighthearted, the underlying goal is to encourage reflection on digital responsibility, misuse potential, and access control — especially as we move toward increasingly capable models.
Think of it as part personality test, part educational experience about AI privilege, governance, and risk stratification.
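For concreteness, one way a tier assignment like this can work under the hood is a simple threshold scheme over per-question scores. The sketch below is purely illustrative: the tier names, thresholds, and scoring are invented for this example, not taken from the actual app.

```python
# Hypothetical tier assignment: each quiz answer contributes a score,
# and the total maps to the highest tier whose threshold it meets.
# Tier names and thresholds are invented for illustration.
TIERS = [
    (0, "Sandbox Only"),
    (5, "Supervised Access"),
    (10, "Trusted Operator"),
]

def assign_tier(answer_scores):
    """Sum the per-question scores and return the name of the
    highest tier whose threshold the total reaches."""
    total = sum(answer_scores)
    tier = TIERS[0][1]
    for threshold, name in TIERS:
        if total >= threshold:
            tier = name
    return tier

print(assign_tier([3, 4, 2]))  # total 9 -> "Supervised Access"
```

A real profiling quiz would likely weight questions differently or score along multiple axes (e.g. caution vs. curiosity), but the core idea of bucketing a score into named tiers stays the same.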
I'm looking for thoughtful feedback on the concept, the UX, and whether this kind of playful-yet-serious tool could help spark broader discussions around AI alignment and safety.
[https://mindbomber.github.io/ai-trust-tier-access-control/ai...]
Thanks for checking it out.