TrustVector is an open-source evaluation framework + public directory where each system gets a multi-dimensional trust score across:

- Security (prompt injection/jailbreak resistance, data leakage)
- Privacy & compliance
- Trust & transparency (hallucination/bias, documentation quality)
- Performance & reliability
- Operational excellence
Key idea: every score is evidence-based (sources + confidence), and you can re-weight dimensions CVSS-style depending on your use case.
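To make the re-weighting concrete, here's a minimal sketch of what a CVSS-style custom score could look like. The dimension names come from the list above; the per-dimension scores, the specific weights, and the 0-10 scale are illustrative assumptions, not the project's actual defaults.

```python
# Hypothetical CVSS-style re-weighting sketch. Dimension names are from the
# project description; scores, weights, and the 0-10 scale are assumptions.
scores = {
    "security": 7.5,
    "privacy_compliance": 8.0,
    "trust_transparency": 6.0,
    "performance_reliability": 9.0,
    "operational_excellence": 7.0,
}

# A security-sensitive deployment might re-weight like this (weights sum to 1).
weights = {
    "security": 0.40,
    "privacy_compliance": 0.25,
    "trust_transparency": 0.15,
    "performance_reliability": 0.10,
    "operational_excellence": 0.10,
}

# Weighted sum over the five dimensions.
overall = sum(scores[d] * weights[d] for d in scores)
print(f"custom-weighted score: {overall:.2f}")  # 7.50 with these numbers
```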
Current coverage: 100+ evaluations across models, agents, and MCP servers.
GitHub + methodology are linked from the site. I'd love feedback on: 1) whether the dimensions/weighting are sane, 2) what evidence sources we're missing, and 3) what contribution workflow would make this actually community-maintained.
(Also: this project is not affiliated with trustvector.ai.)
- Data format: each evaluation is structured JSON in /data, and the site renders from that (rough shape sketched below).
- Scoring: the overall score is computed from the 5 dimensions; you can compute custom scores by applying your own weights (CVSS-style).
- Evidence: each dimension includes citations + a confidence level, so disagreements can be about the evidence, not vibes.
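For orientation, here's a guess at the shape of one record in /data, inferred from the description above. The field names (`system`, `dimensions`, `evidence`, etc.) and the example values are hypothetical; check the repo for the actual schema.

```python
import json

# Hypothetical shape of one evaluation record in /data; the real schema
# and field names may differ.
record = {
    "system": "example-model",
    "dimensions": {
        "security": {
            "score": 7.5,
            "confidence": "medium",
            "evidence": [
                {
                    "claim": "Resists common prompt-injection payloads",
                    "source": "https://example.com/injection-benchmark",
                }
            ],
        },
        # ... the other four dimensions follow the same shape
    },
}
print(json.dumps(record, indent=2))
```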
If you want to critique one thing: pick a single evaluation you know well and tell me what’s wrong/missing in the evidence, then we’ll fix the methodology or the rubric.