"A+" "B" "C-" "F", etc. feel a lot more intuitive than how stars are used.
I used to give three stars for "performs as expected" until I realized that punishes good products. Switching to A-F would result in the same behavior, except it'd be Uber drivers trying to make a living instead of obnoxious parents declaring that their kid deserves an A.
In US education you are taught that you need to get an A. Anything below a C gets you on the equivalent of a "Performance Improvement Plan" in the corporate world. And B is… well… B.
So with that rating ingrained, people would probably feel bad about rating their ride-share driver a C when they did what was expected. And it wouldn't stop companies from pushing for A ratings.
Even elsewhere, like the food industry where letter ratings do exist, A is the norm and anything lower is an outlier.
Perhaps for this to work, it would need a complete systemic shift where C truly is the average and A and F are the outliers. In school C would need to be “did the student do the assignment.” And A would need to be “the student did the assignment, and then some.”
Consider, for example, the "S" rank above "A", which originated in Japan but is widely used in gaming.
I wonder if companies are afraid of being accused of "cooking the books", especially in contexts where the individual ratings are visible.
If I saw a product with 3x 5-star reviews and 1x 3-star review, I'd be suspicious if the overall rating was still a perfect 5 stars.
You would start by estimating each driver's score as the average of their ratings, then estimate the bias of each rider by comparing the average rating they give to the estimated scores of their drivers. You repeat the process iteratively until both quantities (driver score and rider bias) converge, essentially expectation–maximization [0].
[0] https://en.wikipedia.org/wiki/Expectation%E2%80%93maximizati...
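A minimal sketch of that iteration, assuming ratings are stored as (rider, driver, stars) triples; the data, names, and convergence threshold below are made up for illustration:

    # Alternately estimate driver scores and rider bias, in the spirit of
    # expectation-maximization [0]. Data layout and names are assumptions.
    ratings = [
        ("alice", "driver1", 5), ("alice", "driver2", 5),
        ("bob",   "driver1", 3), ("bob",   "driver2", 4),
    ]

    drivers = {d for _, d, _ in ratings}
    riders = {r for r, _, _ in ratings}
    score = {d: 0.0 for d in drivers}   # estimated "true" driver score
    bias = {r: 0.0 for r in riders}     # how much harsher/softer each rider is

    for _ in range(100):
        # Driver score = average rating received, after removing each rider's bias
        new_score = {
            d: sum(s - bias[r] for r, d2, s in ratings if d2 == d)
               / sum(1 for _, d2, _ in ratings if d2 == d)
            for d in drivers
        }
        # Rider bias = how far their ratings sit above/below the drivers' estimated scores
        new_bias = {
            r: sum(s - new_score[d] for r2, d, s in ratings if r2 == r)
               / sum(1 for r2, _, _ in ratings if r2 == r)
            for r in riders
        }
        converged = all(abs(new_score[d] - score[d]) < 1e-6 for d in drivers)
        score, bias = new_score, new_bias
        if converged:
            break

    print(score)  # bias-corrected driver scores
    print(bias)   # positive = generous rider, negative = harsh rider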
Alternatively, there might be some hidden reason why a broken rating system is better than a good one, but if so I don't know it.
Anything really bad can be dealt with via a complaint system.
Anything exceptional could be captured by a free text field when giving a tip.
Who is going to read all those text fields and classify them? AI!
The big rating problem I have is with sites like boardgamegeek, where ratings are treated by different people as either an objective rating of how good the game is within its category, or a subjective measure of how much they like (or approve of) the game. They're two very different things, and it makes the ratings much less useful than they could be.
They also suffer a similar problem in that most games score 7 out of 10. 8 is exceptional, 6 is bad, and 5 is disastrous.
2 and 4 are irrelevant, a wild guess, or mean something different to each user.
Most of the time our rating systems devolve into roughly this state anyways.
E.g. 5 is excellent, 4.x is fine, and anything below 4 is problematic.
And then there's a subdomain of the range between 4 and 5, where 4.1 is questionable, 4.5 is fine, and 4.7+ is excellent.
In the end, it's just 3 parts nested within 3 parts nested within 3 parts nested within....
Let's just do 3 stars (no decimal) and call it a day
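As a toy illustration of that nesting, here is how a displayed star average tends to get read in practice; the cut-offs are the ones named in the comment above, not anything official:

    # Map a displayed star average onto the levels people actually read it as,
    # then split the 4-5 band the same way. Thresholds are illustrative only.
    def read_rating(avg: float) -> str:
        if avg < 4.0:
            return "problematic"
        if avg >= 4.7:
            return "excellent"
        if avg >= 4.5:
            return "fine"
        return "questionable"

    for avg in (3.9, 4.1, 4.5, 4.8, 5.0):
        print(avg, "->", read_rating(avg))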
The trick is collecting enough ratings to average out the underlying issues, and keeping context. I.e., you want rankings relative to the area, but also on some kind of absolute scale, and also relative to the price point, etc.
A reviewer might round a 7/10 up to a 3 (out of 3) since it's better than average, while someone else might round an 8/10 down because it's not at that top tier. Both systems are arguably equally useful with 1 or 10,000 reviews, but I'm not convinced they are equivalent with, say, 10 reviews.
Also, most restaurants that stick around are pretty good, but you get some amazingly bad restaurants that soon fail. It's worth separating "overpriced" from "stay the fuck away."
However, the rounding issue is a big deal, both in how people rate stuff and in how they interpret the scores, to the point where small numbers of responses become very arbitrary.
It doesn't mitigate the effect; the combination of the effect on rating and on interpretation is the source of the issue, which exists whenever the review reader isn't at the cultural midpoint of the raters.
Obviously, yet the scale of the mismatch when looking at a composite score isn't total, so the effect is being mitigated.
Further, even without that, the more consistent the cultural mix, the more consistent the ratings. Anyone can understand a consistent system.
Has anyone seen a live system (Uber, Goodreads, etc.) implement per-user z-score normalization?
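For the curious, a minimal sketch of what per-user z-score normalization could look like; nothing here reflects any real platform's implementation, and the data and names are made up:

    # Re-express each rating as "how far above/below this rider's own average",
    # in units of that rider's standard deviation, then average per driver.
    from collections import defaultdict
    from statistics import mean, pstdev

    ratings = [  # (rider, driver, stars) - invented data
        ("alice", "d1", 5), ("alice", "d2", 5), ("alice", "d3", 4),
        ("bob",   "d1", 4), ("bob",   "d2", 3), ("bob",   "d3", 2),
    ]

    by_rider = defaultdict(list)
    for rider, _, stars in ratings:
        by_rider[rider].append(stars)

    def zscore(rider, stars):
        mu = mean(by_rider[rider])
        sigma = pstdev(by_rider[rider]) or 1.0  # avoid divide-by-zero for uniform raters
        return (stars - mu) / sigma

    by_driver = defaultdict(list)
    for rider, driver, stars in ratings:
        by_driver[driver].append(zscore(rider, stars))

    for driver, zs in by_driver.items():
        print(driver, round(mean(zs), 2))  # > 0 means above that rider's own norm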
"Here's your last 5 drivers, please rank them"
Weighting by amount spent could be interesting.
Big vendors/companies should probably be required to have per-product ratings, rather than it being optional. Rating Adobe or Alibaba in general is probably not all that useful.
The EU almost requires it, but Google (for example) still hasn't found a nice technical solution.
nlh•2mo ago
Take any given Yelp / Google / Amazon page and you'll see some distribution like this:
User 1: "5 stars. Everything was great!"
User 2: "5 stars. I'd go here again!"
User 3: "1 star. The food was delicious but the waiter was so rude!!!one11!! They forgot it was my cousin's sister's mother's birthday and they didn't kiss my hand when I sat down!! I love the food here but they need to fire that one waiter!!"
Yelp: 3.6 stars average rating.
One thing I always liked about FourSquare was that they did NOT use this lazy method. Their score was actually intelligent - it checked things like how often someone would return, how much time they spent there, etc. and weighted a review accordingly.
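I don't know FourSquare's actual formula, but the general idea of weighting behavior signals like repeat visits and time spent, rather than averaging raw stars, could look roughly like this (the weights and field names are invented for illustration):

    # Hypothetical behavior-weighted venue score: repeat visits and dwell time
    # count for more than the star someone tapped once. Weights are made up.
    def venue_score(visits):
        # visits: list of dicts with keys "stars", "return_count", "minutes"
        total, weight_sum = 0.0, 0.0
        for v in visits:
            weight = 1.0 + 0.5 * v["return_count"] + min(v["minutes"], 120) / 120
            total += weight * v["stars"]
            weight_sum += weight
        return total / weight_sum if weight_sum else 0.0

    visits = [
        {"stars": 5, "return_count": 6, "minutes": 90},  # regular who keeps coming back
        {"stars": 5, "return_count": 2, "minutes": 60},
        {"stars": 1, "return_count": 0, "minutes": 20},  # one angry drive-by review
    ]
    print(round(venue_score(visits), 2))  # the drive-by 1-star barely moves the score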
theendisney•2mo ago
If one normalized the ratings, they could change without you doing anything. A former customer may start giving good ratings elsewhere, making yours worse, or give poor ones, improving yours.
Maybe the relevance of old ratings should decline.
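One simple way to make old ratings matter less is an exponentially decayed weighted average; the 180-day half-life here is an arbitrary illustrative choice, not a recommendation:

    # Decay each rating's weight by its age before averaging.
    def decayed_average(ratings, half_life_days=180):
        # ratings: list of (stars, age_in_days)
        total, weight_sum = 0.0, 0.0
        for stars, age in ratings:
            w = 0.5 ** (age / half_life_days)  # weight halves every half-life
            total += w * stars
            weight_sum += w
        return total / weight_sum if weight_sum else 0.0

    print(decayed_average([(2, 700), (5, 30), (5, 10)]))  # the old 2-star barely counts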
theendisney•2mo ago
Alternatively you could apply the same rating to the customer and display it next to their user name along with their own review counter.
What also seems like a great option is to simply add up all the stars :) Then the grumpy people won't have to do anything.
ajmurmann•2mo ago
This actually touches on another pet peeve of mine with rating systems: I'd like to see ratings for how much I will like it. An extreme but simple example: the ratings a vegan customer gives a steak house might be very relevant to other vegans but largely irrelevant to non-vegans. More subtle versions are simply about shared preferences. I'd love to see ratings normalized and correlated across users to create a personalized rating. I think Netflix used to do stuff like this back in the day (you could even request your personal predicted score via API), but now that's all hidden and I'm instead shown different covers of the same shows over and over.
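I don't know what Netflix actually did, but the general collaborative-filtering idea described here, weighting other users' ratings by how well they correlate with yours, can be sketched like this (all data and names are invented, and this is not any real service's algorithm):

    # Predict how much *you* will like an item by weighting other users'
    # ratings by their Pearson correlation with your past ratings.
    from statistics import mean

    ratings = {  # user -> {item: stars}
        "me":        {"steakhouse": 1, "ramen": 5, "taqueria": 4},
        "vegan":     {"steakhouse": 1, "ramen": 4, "taqueria": 5, "salad_bar": 5},
        "carnivore": {"steakhouse": 5, "ramen": 3, "taqueria": 2, "salad_bar": 2},
    }

    def pearson(a, b):
        common = set(ratings[a]) & set(ratings[b])
        if len(common) < 2:
            return 0.0
        xa = [ratings[a][i] for i in common]
        xb = [ratings[b][i] for i in common]
        ma, mb = mean(xa), mean(xb)
        num = sum((x - ma) * (y - mb) for x, y in zip(xa, xb))
        den = (sum((x - ma) ** 2 for x in xa) * sum((y - mb) ** 2 for y in xb)) ** 0.5
        return num / den if den else 0.0

    def predicted(me, item):
        pairs = [(pearson(me, u), r[item]) for u, r in ratings.items()
                 if u != me and item in r]
        positive = [(w, s) for w, s in pairs if w > 0]  # ignore users unlike me
        if not positive:
            return None
        return sum(w * s for w, s in positive) / sum(w for w, _ in positive)

    print(predicted("me", "salad_bar"))  # leans on the user who rates like me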
theendisney•1mo ago
Also, yes, it matters that one in a hundred ratings leaves such a large mark on your business. I know one business where they go out of their way to deliver quality. They get maybe two ratings per week. A competitor left just four fake ratings. It would take about 200 weeks, or 4 years, to get back to 5 stars.
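A rough back-of-the-envelope check, under assumptions the comment doesn't state (the fakes are 1-star, every genuine rating is 5-star, and the displayed score rounds to one decimal, so "back to 5 stars" means a mean of at least 4.95):

    # How many genuine 5-star ratings it takes to pull four 1-star fakes
    # back up to a displayed 5.0 (i.e. a mean of at least 4.95).
    n = 0
    while (5 * n + 4 * 1) / (n + 4) < 4.95:
        n += 1
    print(n)        # 316 five-star ratings needed
    print(n / 2)    # 158 weeks at two ratings per week, roughly three years

That lands in the same ballpark as the comment's figure; the exact number depends entirely on those assumptions.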
Hizonner•2mo ago
My favorites: A power supply got one star for not simultaneously delivering the selected limit voltage and the selected limit current into the person's random load. In other words, literally for not violating the laws of physics. An eccentric-cone flare tool got one star for the cone being off center. "Eccentric" is in the name, chum....
esperent•2mo ago
I would personally frame that as a review for poor documentation. A device shouldn't expect users to know the laws of physics to understand its limitations.
Hizonner•2mo ago
We're talking about a general-purpose device meant to drive a circuit you create yourself. I'm not sure what a good analogy would be. Expecting the documentation for a saw to tell you you have to cut all four table legs the same length?
esperent•2mo ago
The saw analogy isn't a good one - saws work within the range of physics that humans have instinctual understanding of. We instinctively know what causes a table to wobble. We do not instinctively know the physical behaviors of electricity.
You might counter that people should know this before messing with electricity, and I'll agree. But what people should know and what they actually know are often very different.
A warning in the manual might prevent some overeager teenager who got their hands on this device from learning this particular law of physics the hard way.
Hizonner•2mo ago
Actually, a toaster is more dangerous than a low voltage bench supply if you use it in the maximally moronic way. If you ask me to set a fire on purpose, and give me a choice of which to use, I'll pick the toaster. And, by the way, I don't buy the idea that people "instinctively" understand fire.
But the other, more relevant part is that a toaster or a washing machine is used for a single purpose in a stereotyped way. There are a bounded number of fairly well understood mistakes that people are likely to make frequently. You can list them.
A bench supply, like many other tools, can be used in an almost infinite number of ways, which you are meant to be designing for yourself. If you buy a washing machine, you're "saying" that you want to wash clothes. If you buy a variable power supply, you're "saying" that you want to design or repair electronics that might do anything, or do electroplating, or who-knows-what-else. There is no complete list.
You cannot do such things safely without actually understanding how they work, in more depth than an instruction manual is going to be able to give you, even if the manual somehow knew what you were planning to do, which of course it doesn't. You can't design an electronic circuit, or a plating protocol, and you definitely can't troubleshoot either one, without having a clue about Ohm's law.
People sometimes have their own ideas, based on actual understanding of something significant, and they need general purpose tools to support those ideas.
> The saw analogy isn't a good one - saws work within the range of physics that humans have instinctual understanding of. We instinctively know what causes a table to wobble.
... and yet people will try to build a table by attaching each leg with a single nail in the end, because they don't instinctively understand wood grain, or that there are lateral forces and leverage involved. The saw manual doesn't get into those things either. That's woodworking knowledge you're meant to have before you buy the saw. Even though you could easily put something heavy on the table and get hurt when it collapses.
Saw instructions are restricted to the actual process of sawing. They don't get into table design, not even the "inobvious" parts. Actually, many power saw instructions don't even say much about how to saw. They tell you how to mount the blade, and what this or that switch does. After the endless warnings.
> But what people should know and what they actually know are often very different.
I could also say that people don't read manuals. Especially not if the manuals are gigantic tomes, which is where they end up when they go in the direction you're talking about. The manual for your average power tool has pages of fine-print warnings, basically trying to explicitly forbid every stupid way somebody has misused that kind of tool in the past. Essentially no users read them.
I have an air nailer whose instructions helpfully inform me that I shouldn't use acetylene instead of air. It specifically says that. That's like toaster instructions warning you not to make your toast out of asbestos panels dusted with cyanide.
Hooking your nailer up to acetylene is not an obvious mistake to make. It's not an idea that comes to mind. It probably wouldn't even work well (before the fireworks started). It's not easy, either. You'd have to kludge up some weird adapter system to make the deliberately incompatible fittings work. Anybody who works around acetylene knows really well why it's fucking suicidally stupid. And, yes, if you don't know that, you shouldn't be messing with acetylene. Or nailers.
But apparently some cretin did it one time, and now it's in the instructions. Even so, the instructions for that nailer don't say that it's not for styrofoam. Which is more the sort of thing that was going on with the power supply. I guess nobody's lost an eye to a ballistic nail that came through a piece of styrofoam yet. Or at least nobody's had the gall to stand in a courtroom and say it was the nailer company's fault.
The cretin with the acetylene wouldn't have read the warnings, and now that they're longer, the next cretin is even less likely to read them. In fact, nobody expects most users to read the warnings. The warnings are not there to improve safety. They're purely for defense against lawsuits. They might even work for that, but they're not what you'd write if you actually set out to improve safety.
The real safety impact, if any, is almost certainly negative. Warnings about obviously off-the-wall idiotic behavior overwhelm any actually useful warnings and prevent them from being seen. At the other end of the spectrum, overcautious warnings, if they are read, breed contempt for warnings about really important, possibly inobvious risks. More is not better.
... and bringing it back to reviews and that power supply, such supplies typically come with a specification sheet and either no instructions, or one page with a few bullet points that would make no sense if you didn't understand how voltage and current relate. So even if the documentation on that particular supply were in some sense inadequate, it would still be up to the standards of any other supply you might buy. And in any case, the reviewer's claim was that there was something wrong with the device itself because it didn't do something the reviewer should have known was impossible. That's not a useful review.
anon7000•2mo ago
Why can’t I downvote or comment on it? As a user, I just want more context.
But obviously, it’s not in Amazon’s interest to make me not want to buy something.