I've often wished something like this existed for physical products, since right now I'm usually stuck doing days of manual research online myself.
Over the past 4 weeks I’ve been building an experiment that applies the same aggregation idea to tech products. It collects professional reviews, extracts and normalizes scores, and produces a single “critic score” per product.
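To make the normalization part concrete, here's a minimal sketch (in Python) of the kind of thing I mean: mapping reviews on different rating scales onto a common 0–100 scale and averaging them per product. The scale names and the unweighted mean are just illustrative assumptions, not the exact pipeline.

```python
# Illustrative sketch of score normalization + aggregation.
# Scale labels and unweighted averaging are assumptions for the example.

def normalize(score: float, scale: str) -> float:
    """Map a raw review score onto a common 0-100 scale."""
    if scale == "out_of_5":      # e.g. 4.5/5 stars
        return score / 5 * 100
    if scale == "out_of_10":     # e.g. 8.7/10
        return score / 10 * 100
    if scale == "percent":       # already 0-100
        return score
    raise ValueError(f"unknown scale: {scale}")

def critic_score(reviews: list[tuple[float, str]]) -> float:
    """Unweighted mean of normalized scores for one product."""
    normalized = [normalize(score, scale) for score, scale in reviews]
    return sum(normalized) / len(normalized)

# Example: three reviews of the same product on different scales.
print(critic_score([(4.5, "out_of_5"), (8.7, "out_of_10"), (82, "percent")]))  # ~86.3
```

Even this simple version runs into the questions below: publications grade on very different curves, so a plain rescale-and-average probably isn't enough on its own.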
So far the dataset includes ~9,630 reviews across 1,339 products. As a small sanity check, I compared the results against two recent purchases of mine, and the "best for most" recommendation matched what I eventually chose after many hours of manual research.
I'm curious what you think about this approach, especially around score normalization, bias between publications, and whether a single aggregated score is actually useful when evaluating products.