This sounds like something Louis Rossmann should cover: a counterexample of a manufacturer trying to do the right thing while fickle, corporate reviewers behave in a petty, unfair manner.
I actually tried to reach out to Louis Rossmann a few times but haven’t got a response (yet).
I think what’s most interesting is that we figured out a sustainable business model based on open source hardware — a win-win for the manufacturer and the customer.
Repairability was actually a feature we designed into the product from the start.
AirGradient (and several commenters here) feels like they're trying to spin their own QC problems as an indictment of modern journalism.
That they "immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators".
That's good, I genuinely respect that. But are there going to be improvements in QC protocol? Consideration of a bigger display? Apparently not, or at least there's no mention of this.
Instead they launch into a distracting and unproductive discussion about reviews in general, missing the entire point of the review's critiques and an opportunity to make a better product, or at least to clarify why they don't see a need for better QC, or don't think a bigger display would be a good idea.
That's my take as well. I have seen several reviews where there were issues with the unit received, where the unit was replaced and an update was made to the review. It's not like this is the only situation to have ever happened. A manufacturer complaining about someone else complaining about a valid problem with what they received is just petty. Man up, admit there were problems, accept the loss in revenue by sending out replacement units. You screwed up the product during manufacturing/design/wherever; own it. Once you do that, stop whining about people correctly calling out defects in the units, even if you did fix it. Until you recall/replace every defective unit (not just the ones with owners making noise about it), you have no standing to be upset about someone making valid points about the defects.
This is what you could call a learning opportunity. Instead, they come across as petulant and whiny. Just take your medicine, grow, and learn not to make the same mistake on future products.
Wired vs. tired is literally about what’s “cool.” That’s it. It has never been rigorous about anything.
It's become something else: Wired has a brand name and a reputation, so when they pooh-pooh something, that has more weight than if you or I do.
I don't even have a preference. They both seem nearly ideal, depending on context.
It's linked in the article but here it is. https://www.wired.com/gallery/best-indoor-air-quality-monito...
> - Our monitor: Downgraded due to a faulty display (a warranty-covered hardware issue).
> - Another Monitor: Recommended, despite having no display at all.
> - Another Monitor: Also recommended, despite lacking a CO2 sensor—one of the most critical metrics for assessing indoor air quality and ventilation.
It's not a failure that the one without a display doesn't have a display. It's a design choice. The AirGradient unit has a display, but it's tiny and hard to read. Scrolling through the article, all the other units with displays have much larger and more readable text. You can read the biggest data points from across a room. The AirGradient has a display, but it fails to be a good display, hence the reviewer's perspective - it's not living up to its goals.
There are three outputs. LEDs that go from green to yellow or red. The small display. A webpage dashboard. Or you can plug the data into HA for whatever you want.
The only issue I have with the display is that it’s monochrome, which prevents making trends easy to read by showing positive changes in green or negative ones in red.
If the display is too small the LEDs are easily visible for quick information and then the dashboard is for more data.
Reviewers often have trouble really understanding how people use products, because rapidly cycling through things to review doesn’t leave them the time to truly use and understand a product.
You can't just do that and get in quality testing time with more than one or two products.
Reviewing things fairly and helpfully is hard and takes time, and especially as AI slop takes over writing (thankfully it looks like this article at least has a byline), I think it's going to be harder and harder to find actual useful human reviews to guide decisions.
This is quite different from being tasked with comparing bicycles which would require a lot of effort to give equal time to each one. Unless the journo was a world class rider, I'd be shocked if they rode any one of them for more than 5 minutes.
These devices usually have between 3 and 8 sensors inside (with wildly varying quality and quality control), run firmware that _usually_ has access to your WiFi or requires an app on your phone (security implications), and are meant to exist in your home for years at a time.
Good reviews which consider all those aspects take time and effort, even for simpler devices.
The reviewer states:
I’ve been using AirVisual Pros for the past five years.
so it's not like they're new to the field. They know what they want out of the product they're reviewing. That may not be what someone reading the review is after, but it doesn't invalidate the review. To draw a parallel, I think an iPhone user may have a harder time using Android than someone who has never used either phone.
Admittedly, I'm another happy AirGradient user.
I get a new laptop and phone and generally dislike it, because it's not what I am familiar with and it's not yet setup just the way I like it.
And then there is the fact that the reviewer’s favoured product displays the reviewer’s publication’s logo on its product page.
There is certainly potential for financial interests to impact reviews.
• Product A has limited features but does them well. If the customer is okay with the features the product has, the reviewer can recommend it for this customer.
• Product B has more features but is impacted by QA issues as well as product design decisions that make those features harder to use. This impacts the ability of the customer to use features they might've paid for to use the product, and it may even impact their ability to use features core to other products. This potentially makes Product B less desirable for comparable use cases.
With this in mind, I'm inclined to agree with Wired's decision.
It also raises my eyebrows that they see “repairability [as] one of our core differentiators.” It’s cool to make that possible quietly for people who are into it, but would you want a “repairable” smoke detector? Or one that just works? If it broke, would you want them to send you one that’s not broken, or parts and a booklet of repair instructions?
If I pay for X, I will be mad if I can only use X-1.
> I understand why I need to check a dashboard for an outdoor air quality monitor, but having to check a dashboard for an indoor monitor seemed like an extra unnecessary step.
This is after already mentioning that the unit also has LED light bars to display quality without reading the numbers.
The reviewer seems to be saying that just lights and a web dashboard aren't enough for an indoor monitor.
Yet earlier in the article the author picked the "Touch Indoor" as the unit with the best display, even though that is an indoor unit with no screen and only LED lights.
Given that, you'd think the AirGradient unit's lights would be compared to say why they are worse, but that doesn't happen.
Having read the Wired reviews, they set off my internal alarms for a "low quality reviewer" who doesn't display a deep understanding of the products being reviewed or the market segment. There's a lot of fluff and stuff about screen size and very little digging into actual accuracy and functionality.
That said, I haven't seen any good reviews from Wired in a long time.
The transparency of this company is nice but you can’t control what other people think about your products based on their experiences. It isn’t really “unfair” at all, at least not 100% unfair. They are essentially upset that the press is allowed to have an independent opinion.
If I buy a laptop and the screen is broken (warranty issue!) that’s still a lot worse of an experience than a desktop PC that has everything working. The excuse that a desktop PC doesn’t include a screen isn’t relevant, the idea is that the competition shipped a fully functioning product.
> After using the monitor for a few months, the display began to fall apart into unreadable characters.
The AirGradient screen isn't even that small, but the UI can be much more user-friendly IMO. There's a reason all the other meters with screens do HUGE NUMBER+tiny label.
I'm sure many people will prefer the AirGradient, but I don't think the reviewer is wrong for having different preferences.
That's the idea. The caveats they don't want you to remember are left unbolded.
They don't seem to be as interested in the fact their outdoor monitor was the recommended outdoor solution either.
My airgradient monitor has been online for years and sending data to Prometheus reliably. I’ve been able to plot the air quality across a few climate events and the introduction of a Samsung air filter in my bedroom. It’s a good little product.
The oled display is nice, but I rarely care in realtime what the exact metrics are. I have that stored as time series stats so I can see trends over time. Exactly like I do for metrics of production systems in SRE life.
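For anyone curious what "sending data to Prometheus" looks like in practice, here is a minimal sketch of rendering a dict of readings in Prometheus text exposition format. The metric names and sample values here are illustrative assumptions, not AirGradient's actual API fields — check your firmware's local endpoint for the real names:

```python
def to_prometheus(readings: dict, prefix: str = "airgradient") -> str:
    """Render a dict of sensor readings as Prometheus text exposition format."""
    lines = []
    for name, value in sorted(readings.items()):
        metric = f"{prefix}_{name}"
        lines.append(f"# TYPE {metric} gauge")  # every reading exposed as a gauge
        lines.append(f"{metric} {value}")
    return "\n".join(lines) + "\n"

# Illustrative readings; real field names depend on your firmware's local API.
sample = {"co2_ppm": 612, "pm2_5": 4.1, "temp_c": 21.7}
print(to_prometheus(sample))
```

A scrape job pointed at an HTTP handler serving this text is all Prometheus needs to start building the time series.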
The unit also has a series of LEDs across the top and I can read the actual status from 20’ away (which is as far as I can get without going out a window or around a corner).
One green led? Good. Two green leds? Meh. Three LEDs? They’re red now and that’s not great.
Single red led in the top left in addition to any on the right? Spectrum is having yet another outage (no internet)
No LEDs? Not powered on
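The states above can be sketched as a small lookup — purely my own reading of my unit's front panel, not documented firmware behaviour:

```python
def led_status(leds_on: int, all_red: bool, link_led_red: bool) -> str:
    """Decode the front-panel LEDs as described above (my interpretation only)."""
    if leds_on == 0:
        return "not powered on"
    if link_led_red:          # the separate top-left LED
        return "no internet"
    if all_red:               # three or more LEDs, now red
        return "not great"
    return "good" if leds_on == 1 else "meh"

print(led_status(1, False, False))  # good
```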
Reviewer was overly severe and did his readers a disservice.
It’s better imho than my Airthings wave pro, and it lets me get awesome, actionable time series data. It’s sensitive enough to show air quality taking a dive overnight with two people and multiple pets breathing in the same room (one green during the day, three or four red at night), and also to show that adding a half dozen spider plants keeps co2 well in check (consistent one green led).
And I can read the air quality from across the room without getting out of bed.
The fact that I can keep using this even if the vendor goes out of business was a major selling point, but also home assistant integration.
I highly recommend these (I have indoor and outdoor units).
A more professional response would just stick to the facts rather than trash the reviewer and pontificate on what is wrong with journalism today.
Is there a "meta review" site like metacritic, but for products?
If, however, I concede the author's idea that reviews must have objective criteria, methodology and standards in order to be taken seriously then I'd like to propose the first objective criterion: broken out-of-the-box === not recommended.
edit: evidently the device failed after a few months. This doesn't change my final opinion, which is in total agreement with the review, but it deserves to be mentioned because I was incorrect in my facts. For my fellow JS devs, I'm standing by broken out-of-the-box === not recommended, and adding broken within a few months of installing == not recommended.
then let me ask you: as an ethical person trying to write a review, how do you handle that situation? It seems like that's an angle to this that we're not exploring, and that's whether the review is epistemologically justified rather than whether it's objectively correct. The way I see it, as a reviewer I get a product that fails well sooner than I expected it to and I have three choices:
1) Don't report the failure
2) Report the failure
3) Report the failure but try to contextualize it (basically, trying to solve for sure whether they got unlucky or not)
1 is obviously unethical, and 3 seems like it's well outside the scope of a reviewer's duty (and could be seen as carrying water for a particular brand. after all, do you think the person who wrote the OP would be okay with it if his product's failure was considered typical but another product's failure was determined to be atypical, regardless of the truth?). The only ethical approach is to report what happened, and not speculate as to cause.
Easy
Why not take responsibility for that instead of complaining about an honest review?
Eh, what? I've received products that needed warranty repair/replacement from Apple, Toyota, Philips, Nintendo, Ikea, Breville, etc. (All of those examples which provided good service in repairing/replacing the product in question.)
A single data point of a broken product doesn't tell you anything of value.
Every volume product has failures and a single datapoint is not enough to say anything about a product's quality (good or bad). At the same time, a failure during a review is absolutely something that should be mentioned, as well as given an opportunity to test the company's RMA process. At the very least a failure like that should cause someone to look into how many other's online have similar issues.
Given the AirGradient is $100 cheaper than the winning product, I think the review might have been a little harsh.
There is a vanishingly small collection of youtubers that I might still trust when it comes to product reviews, and that list is shrinking.
Even ignoring the broken display, which I think is a red herring here (it would be relevant if this unit had a pattern of quality issues or failures indicating systematic production issues), I think that's the story here.
I appreciate the response from airgradient, assuming it's all true.
One reason is, in fact, the screen. AirVisual Pro’s display is bright, but it is bright all of the time; it cannot be made dim enough to be suitable for use in the bedroom, for example: the blueish white LCD is basically a small light fixture. Furthermore, the contents of the screen are readable only from a narrow angle (think looking straight at it; putting it on top of a tall fridge or down on your windowsill makes it illegible). I would much prefer an e-ink display.
Second, on their website IQAir states that their air monitors are made in Europe[0]. This is a false claim. In fact, AirVisual Pro is made in the PRC, as is declared on the box. I would not be against a good product made in the PRC, and AirVisual Pro is in fact known for good characteristics regarding accuracy, but it seems like a dark pattern at best, and they clearly want to mislead customers.
Third, the enclosure featured a charging USB port (which is an obsolete micro USB variety incredibly hard to find cables for) that was very finicky and gave up the ghost 3 months in. The device just wouldn’t charge its battery or see any power at all thereafter, so it basically became a brick of cheap plastic for all intents and purposes. I can’t be bothered to disassemble the enclosure and try to repair it since I can’t stand the bright screen anyway and I already got the hang of air quality patterns where I live.
It did the job, sure, but if AirGradient’s PM2.5 and carbon dioxide detectors do nearly as good of a job[1] it makes it a much more compelling option for me.
Unfortunately, as of the time I last checked, AirGradient shipped to a small set of countries which did not include my area; by comparison, IQAir has a much wider coverage.
[0] You can still see the proud large “Swiss made” in the relevant section of their website (https://www.iqair.com/products/air-quality-monitors). Furthermore, if you Google the question, the LLM-generated answer suggests that
> The IQAir AirVisual Pro air quality monitor is Swiss-designed and manufactured. While IQAir is headquartered in Switzerland, their manufacturing facilities for air purifiers, including the AirVisual Pro, are located in both Switzerland and Southern Germany
which is not true.
[1] Perhaps someone can comment on that; I don’t see SenseAir sensors listed on https://www.aqmd.gov/aq-spec/evaluations/summary-table.
Also it’s a pain in the a to zero out the co2 sensor the first time.
I would probably not recommend it to someone who does not like to dabble a lot with the tech. It’s not really a it just works and it’s easy for everyone product.
ahaucnx•16h ago
I spent quite a long time writing this post and it actually helped me to see the bigger picture. How much can we actually trust tech reviews?
I am already getting very interesting results from the survey I posted and am planning to write a follow-up post.
jzellis•16h ago
I'm a world class writer but I stopped doing it for a living a long time ago. Why? Because as media moved from print to online, the work was devalued. I've worked for 25 cents a word sometimes, which was pretty decent when one 1200 word piece could pay rent back then. Nowadays, writers are offered $25 per article flat with no compensation for rewrites. Staff positions pay badly for too much work but are as coveted as C suite gigs are in the tech world. Maybe more so.
So if the reviewer is staff, they might be assigned three or four reviews in a given week on top of other work. If they're freelance, they might have to take on more just to make their rent. This is because your average magazine staffer who's not management pulls about as much as a Starbucks manager, and was ever thus, unless you got in at Vanity Fair or The Atlantic back in the Before Times.
It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
Your device arrived damaged, which is absolutely no one's fault, but your reviewer doesn't have time or honestly impetus to give it a second chance. Not for whatever they're getting paid for that review, which is not much at all.
It's just bad luck, is all. And yes, it's not fair and, yes, you're right to complain, but it's not as simple as "tech writer lazy".
(And if anyone's response is "They accepted the job, they should do their best at it no matter how little it pays", I'm guessing you've never had to duck your landlord to try not to get evicted before the freelance check you've been hunting up for three weeks arrives. There's a reason I'd rather make a living as a mediocre coder than a very good writer these days - at its worst, the tech industry is more remunerative and stable than the publishing industry is.)
tobr•15h ago
This is awkward, but I think you mean ”I'm a world-class writer”.
redbluered•11h ago
I rather enjoy snark, whether by me, at me, or just reading.
shkkmo•12h ago
You wrote it, you don't get to dodge the responsibility like that. Professional integrity still matters.
> It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
That seems unethical.
> There's a reason I'd rather make a living as a mediocre coder than a very good writer these days
As coders, we also have an ethical responsibility to pushback against code that will harm people.
We need more writers to say no to writing stories with insufficient time/resources to do it ethically same as we need more developers who push back against building unethical products.
mind-blight•16h ago
The only thing you're missing for me is radon detection. I just bought a house and tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?
edent•16h ago
That points to a lack of QA on your part and, I think, it is fair for a reviewer to point out.
Even if you have an exemplary warranty process and easy instructions, that's still a hassle. Not everyone has the confidence or the time to repair simple things.
As for the objective/subjective nature of reviews. Are your customers buying air monitors for their 100% precision or for "entertainment" purposes / lifestyle factors?
I have a cheap Awair air monitor. I have no idea if it is accurate - but it charges by USB-C and has an inconspicuous display. That's what I wanted it for.
It is perfectly fair for a reviewer to point out their personal preferences on something like this. They aren't a government testing lab.
16bytes•13h ago
It seems unfair to move to "not recommended" due to a single instance of a hardware failure, especially if the manufacturer made it right. And repair-ability is one of their core values!
At most this should've triggered a "this happened to me, keep an eye out if this seems to be a thing." note in the review instead of moving to not recommended.
edent•12h ago
How about if they gave you a voucher for a free drink to say sorry?
Reviewing products is like interviewing people. You have to go by what you see on the day. You can't review (or interview) based on what could have happened; only on what did.
toomuchtodo•8h ago
Hardware device arrives damaged or non functional? I’m just going to call and ask for another one. If it’s a critical need (I cannot wait for a return and delivery cycle), I’m buying more than one upfront. Spares go in inventory.
shkkmo•12h ago
They shipped a working product that failed after several months, was covered under warranty, and was also repairable at home (which the review doesn't even mention).
> It is perfectly fair for a reviewer to point out their personal preferences on something like this.
But that isn't what happened. The product was included as "not recommended" not just "this wasn't my favorite due to X but would be good for this type of person".
They make up a category to list every other unit as "best for" but decided that nobody should want to buy this one because the author got annoyed.
The review is poorly written and doesn't do a good job of actually comparing units. It is the kind of review article that mostly fluff with very few details about actual differences revealed during testing that I have learned to ignore when looking for information about what to buy.
philipwhiuk•15h ago
Isn't this an outdoor one? Outdoor ones aren't expected to have a display because you want to check them without going outdoors.
This seems reasonable.
pwg•15h ago
"is that this review is ... pretty much purely based on the personal preferences of the author."
You've found the core takeaway about nearly all "product reviews" in nearly all publications. They are almost all simply "the personal preferences of the author".
These authors have neither the time, nor the science skills, for anything even beginning to look like a rigorous scientific review, and so the "best" vs. "ok" vs. "not recommended" tags applied result because the author liked the particular shade of pink used on a trim piece on one, or liked that another one looks like the Apple computer they are using, and so forth.
But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
Yet, as you say, they have "great power" to influence buying decisions on the part of folks who read their reviews.
bryant•15h ago
This is also why review aggregators exist: if I'm just getting into a thing, such as watching movies or buying appliances, I probably need a general sense of how people collectively feel about a thing. But if I'm keenly aware of my preferences, it helps me to find reviewers who align with how I think. People routinely seek out specific reviewers or specific publications for this reason.
For instance, someone reading this review might conclude "I really appreciate that ease of use is a topic that's front of mind with this reviewer." Another reviewer's focus might be customizability, and they might recommend AirGradient. And that reviewer's audience follows that person likely because those concerns are front of mind.
...to be honest, if AirGradient had responded more along those lines ("we prioritized X and Y. We understand if this isn't the best fit for customers looking for Z, but we're working on it"), it would've felt more empathetic and less combative to me.
axus•14h ago
Which sites and publications would you recommend, and which are biased for financial reasons?
ahaucnx•13h ago
On subjective reviews: I think there's absolutely nothing wrong with reviews based primarily on an author's subjective opinion. However, such reviews should be appropriately labeled. For example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The title sets reader expectations for objective evaluation with consistent methodology.
On the defective display: Important clarification: we did not ship a broken device. The display issue developed during the review period—this wasn't a QC failure on our part. Hardware can fail during use (as it can with any electronic device), which is exactly why we immediately offered replacement parts, a new unit, and detailed repair instructions when we learned about it.
On the tiny display and lessons learned: We're well aware that opinions on our display vary significantly, as evidenced by this discussion. Some users love it, others find it too small. We actually have differing opinions within the AirGradient team as well. We're planning a refresh of our indoor monitor next year and are currently testing around 10 different display types—including e-ink, colour OLED, touchscreens, and others. So far, we haven't found the ideal replacement, but we're planning to involve our community later this year to gather feedback on the various options.
pwg•13h ago
Unfortunately, that ship has sailed. There have now been so many review articles for so very long titled "Best X" when the nature of the review is "... in the subjective opinion of the review author" that it is unlikely anyone views a "best X" article as having any objective evaluation or rigor behind it at all.
Your suggestion would be nice to enforce, but there's no way to get that ship back to port to change its course now.
redbluered•11h ago
I hate blinking lights in a bedroom.
jjulius•13h ago
You write:
>How can a product be penalized for a failing display when another recommended product has no display?
This is an incredibly perplexing take. A display is subjective - whether or not the customer wants one is up to the customer. What the customer does want is a functional product, so regardless of what another product's features are, if that product functions as intended and yours does not, of course it's going to be recommended over yours.
>How can an indoor monitor without CO2 sensing - essential for understanding indoor air quality - be recommended over one that includes this crucial measurement?
Again - the products without CO2 sensors functioned as intended. It is indeed accurate that CO2 is one of the most critical metrics for assessing indoor air quality, but it goes back to my previous comment - perhaps the customer is more interested in PM2.5 indoors than CO2 for a specific reason. We don't know. Ultimately, the CO2-less sensors functioned as intended, whereas yours did not.
You go on to say:
>And specifically for situations like this: How would you want us to handle it? Should companies stay quiet when review methodology breaks down? Should we be more aggressive in calling this out? Or is transparency and open discussion the right approach?
Maybe focus less on one review and more on improving the product? As another comment states, you shipped a broken product and it suggests that there's a problem with your QA process. Further, you state early on:
>Let me be clear: this was a legitimate hardware failure, and we take full responsibility for it. As soon as we learned about the issue, we immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators.
Let's maybe hear more about that. How/why did the hardware fail? Did you examine your QA process and make any improvements to it? Highlight these steps, as well as the "core differentiator" that is your repairability, rather than asking perplexing questions about why one reviewer didn't like your product.
As an "average Joe" customer in this area, the whole response feels excessive and... whiny (driven by the fact that you don't highlight that you did, in fact, have a product on the list that was well recommended). I don't say that to be terribly mean, it's just a bit off-putting. You're not necessarily wrong about product reviews like this in general, but like... who cares? Put the effort into making a solid product, not taking umbrage with one person's opinion.
There will be more reviews, and some of them will be negative. You're not going to be able to control perception and opinion, and nobody will ever get perfect marks from everyone. Learn to be OK with that.
Edit: I just saw your response about this not being a hardware failure when shipped. Still, the general concept of my point remains - detail what you're doing to determine how this happened and prevent it in the future, rather than griping about the review process. If "transparency is how [you] operate", lemme hear the deets about this issue!
ahaucnx•13h ago
As I mentioned above, we are working on a refresh of the indoor monitor. The display is also under discussion, but so far, with tens of thousands of the indoor monitors sold, I am only aware of a single-digit number of cases of a failed display.
redbluered•11h ago
It's not a universal product. Competitors have upsides and downsides, and different people want different things.
I think this is the time to move on, and to focus on the people who like and appreciate you, not dwell on those who don't. Success brings more of both, and if you can't handle a few haters, you probably don't want to be too successful.
And reviews are imperfect, but a lot better than no reviews. Accept life isn't perfect.
Thanks for a great product and for running a company with integrity.
liminal•9h ago