A few notes about the methodology:

- We focus on organization accounts (excluding personal accounts and forks).
- Organizations are split into two divisions by starting star count, so smaller and larger ecosystems are evaluated among peers.
- Growth is measured across three signals: GitHub stars, contributors, and package downloads from npm, PyPI, and Cargo.
- Each signal is normalized within its division using a log–minmax transform.
- The normalized scores are combined into a single composite via the L² norm, and organizations are ranked by this composite within their division.

Full methodology is on the site.
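To make the scoring concrete, here's a minimal sketch of how log–minmax normalization and L² aggregation could work. The org names and signal values are made up, and details like using `log1p` and how ties/degenerate divisions are handled are assumptions, not the actual pipeline:

```python
import math

def log_minmax(values):
    # Log-transform to compress heavy-tailed counts (stars, downloads),
    # then min-max scale to [0, 1] within a division.
    logs = [math.log1p(v) for v in values]
    lo, hi = min(logs), max(logs)
    if hi == lo:
        return [0.0 for _ in logs]  # degenerate case: all orgs equal
    return [(x - lo) / (hi - lo) for x in logs]

def composite(scores):
    # Combine per-signal scores into a single number via the L2 norm.
    return math.sqrt(sum(s * s for s in scores))

# Hypothetical orgs: (star growth, contributor growth, download growth)
orgs = {
    "org-a": (1200, 40, 500_000),
    "org-b": (300, 10, 2_000_000),
    "org-c": (50, 5, 10_000),
}

signals = list(zip(*orgs.values()))           # transpose: one list per signal
normalized = [log_minmax(s) for s in signals]  # normalize each signal
ranking = sorted(
    ((composite(scores), name) for name, scores in zip(orgs, zip(*normalized))),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: {score:.3f}")
```

One property worth noting: because the L² norm rewards concentration, an org that is strong on a single signal can outrank one that is moderate on all three, which is part of what question 3 below is asking about.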
This is v1 of the ranking, so there's a lot of room for improvement. In future versions we want to add more package managers and new signals, and refine the methodology. Things we'd love opinions on:
1. What do you think about the signals we're using? Which important ones are we missing?
2. Is log–minmax a robust scaler in this setting, or is there something better?
3. Is the L² norm the right aggregation strategy for sparse, heterogeneous signals?
Everything's open: the website, the data, and the scoring pipeline. We'd love your feedback on the methodology, the rankings, signals we're missing, or anything that looks wrong. Issues and PRs very welcome.