https://www.ssph-journal.org/journals/public-health-reviews/...
> Prevalence estimated (...) 2%–3.5% in primarily non-hospitalized children.
So a fake test that always says "No" would be more accurate, at 96.5%–98% accuracy.
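To make that baseline concrete, here is the arithmetic as a quick sketch (the 2%–3.5% prevalence figures come from the quote above; the function name is just for illustration):

```python
# A degenerate classifier that always answers "No" is wrong only on the
# true positives, so its accuracy is simply 1 - prevalence.
def always_no_accuracy(prevalence):
    return 1.0 - prevalence

# At the quoted prevalence range of 2%-3.5%, the "always No" test
# scores between 96.5% and 98% accuracy while detecting nobody.
low = always_no_accuracy(0.035)   # ~96.5%
high = always_no_accuracy(0.02)   # ~98%
```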
Junk science?
The primary conclusion of this research was basically just "this looks like it would be worth doing more research on." Which is a fair conclusion for a study this small.
The HN title implies more than that; it seems inaccurate, and it's not the article's original title.
> We evaluated the diagnostic power of the device in a cohort of 45 LC patients and 14 healthy pediatric donors. We estimated a 94% accuracy for the microclot count using the devices, significantly higher than the traditional counting of microclots on slides (66% accuracy).
They are comparing predictive power using accuracy (instead of sensitivity, specificity, F1, etc.). For their method "using the devices", the 94% is the accuracy of the diagnostic prediction, not of the count itself. For the previous method they report 66% accuracy.
Basic questions: Is accuracy even a good metric for this? Is 94% a good value or just the difference between bad and very bad?
It might very well be that their improvement is from bad to really good, but the point is that a raw stat of "94% accuracy" is useless without context and so is the headline.
The sample size is pretty small here and the control group even smaller. The paper concludes that a larger study is necessary to confirm the result.
Tests have a sensitivity (1 − the false-negative rate) and a specificity (1 − the false-positive rate).
In loose usage, "accuracy" often gets conflated with sensitivity. If specificity is near 100% and the test is cheap/fast, even low sensitivity can be useful.
On the other hand, you could have 100% sensitivity and the test could still be useless if specificity is low and the condition is rare.
https://pmc.ncbi.nlm.nih.gov/articles/PMC4614595/#:~:text=Ac...
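A toy confusion matrix (hypothetical numbers, not from the paper) shows how a rare condition can make accuracy look respectable even when most of the test's positive calls are wrong:

```python
# Hypothetical cohort: 1,000 people, 3% prevalence (30 true positives),
# and a test with 100% sensitivity but only 80% specificity.
tp, fn = 30, 0     # all 30 positives caught
tn, fp = 776, 194  # 194 of the 970 negatives misflagged

accuracy    = (tp + tn) / (tp + tn + fp + fn)  # 0.806
sensitivity = tp / (tp + fn)                   # 1.0
specificity = tn / (tn + fp)                   # 0.8
precision   = tp / (tp + fp)                   # ~0.13: most positive calls are false alarms
```

So an "80.6% accurate" test here catches every case yet a positive result is wrong about 87% of the time, which is why a raw accuracy figure says little on its own.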
That is exactly why I gave the trivial example of an "always No" test: it has perfect specificity (zero false positives) and an accuracy equal to one minus the prevalence. Its sensitivity, however, is zero, which is the point.
At least, that's my layman's understanding when I was following it some years ago. I'm not sure if there's been more recent studies that have found more concrete links since then, but I suspect GP is in the same boat, which is why they asked.
[1]https://www.sciencedaily.com/releases/2012/05/120529211645.h...
"the patients who gave blood had a significant reduction in systolic blood pressure (from 148 mmHg to 130 mmHg) as well as reduction in blood glucose levels and heart rate, and an improvement in cholesterol levels (LDL/HDL ratio)."
If so, the answer is that the body replenishes plasma in a day and red cells in six weeks (redcrossblood.org FAQ). The relative amount does change quickly.
OP's linked paper has "the iron-reduction patients had 300ml of blood removed at the start of the trial and between 250 and 500ml removed four weeks later."
A blood donation removes 500 ml, so about a year of menstruation all at once. And you can donate every two months.
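Rough back-of-envelope behind that comparison (the ~35 ml per cycle figure is a commonly cited average, an assumption not stated in the thread):

```python
ml_per_cycle = 35             # assumed average menstrual blood loss per cycle
cycles_per_year = 12
yearly_loss = ml_per_cycle * cycles_per_year  # 420 ml over a year
donation_ml = 500             # one whole-blood donation
# 420 ml vs 500 ml: one donation is in the same ballpark as a year of cycles,
# but delivered all at once rather than spread out.
```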
So, yes, if there is an effect then we might expect the magnitude of the effect to differ. Or else we'd expect a paper cut to also have the same effect.
Sex biological difference could matter as well.
Even if they did, the hormonal effects would likely swamp anything else. Which is a huge problem: women are routinely excluded from studies to avoid that, meaning we have no idea what the effects are on women.
https://pmc.ncbi.nlm.nih.gov/articles/PMC8994130/
The study specifically does not look at the effect on recipients, though donation centers do not disallow such donations. My presumption is that donation is a net positive all around; if a study comes to show the contrary, I'll certainly revise my approach.
Wikipedia lists much lower numbers on https://en.wikipedia.org/wiki/Long_COVID (6–7% in adults, ~1% in children, less after vaccination) and seems to use a more liberal definition than this paper, as it mentions "Most people with symptoms at 4 weeks recover by 12 weeks" (while the paper only considers it "long COVID" if symptoms last past 3 months).
I've found studies (peer reviewed, as far as I can tell) claiming anything from well under 10% to well over 30%.
What's going on here?
Maybe not all infections are considered "acute".
I don't see how you'd know the exact number without a solid diagnostic check.
Additionally, a lot of those numbers are based on earlier strains of COVID, which were much more severe.
I suspect the 1/5 figure is largely true for "has some degree of cardiovascular damage and worsened general health after COVID", but the number of people actually disabled by the condition is much lower.
That said, any loss of ability is a sad thing, and I am incredibly disappointed that we did not introduce any shared indoor space air quality legislation post-pandemic.
For a variety of reasons, hyping the threat of infection has been a pretty widespread practice among the medical and scientific community since COVID began. There's no way on earth 1 out of 5 kids are still experiencing symptoms 3 months out.
Never enough to warrant going to a doctor unless I was being super paranoid (and willing to spend a long time convincing them I wasn't), but just enough to always wonder if there was something more to the story.
> We estimated a 94% accuracy for the microclot count using the devices, significantly higher than the traditional counting of microclots on slides (66% accuracy)
> We evaluated the diagnostic power (...). We estimated a 94% accuracy for (our method), significantly higher than the (traditional method) (66% accuracy).
Both methods have counting in their name, but they are comparing the diagnostic power.