However, the sensitivity of return false is 0%, which renders it useless (and is why both sensitivity and specificity are used in this context).
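For anyone who wants the arithmetic spelled out, here's a minimal sketch (the 10-in-1000 prevalence is made up for illustration):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Of the people actually having a heart attack, what fraction is caught?"""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Of the healthy people, what fraction is correctly cleared?"""
    return tn / (tn + fp)

# Suppose 10 of 1000 users are actually having a heart attack.
# A detector that always returns false flags no one: tp=0, fn=10, tn=990, fp=0.
print(sensitivity(tp=0, fn=10))   # 0.0 -> misses every real event
print(specificity(tn=990, fp=0))  # 1.0 -> looks perfect in isolation
```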
The FDA classifies these devices as high-risk because they might give a false result but completely ignores the guaranteed harm of not having them at all. It’s a system that punishes action and rewards delay.
Well, medical devices aside, the legal framework around everything, including business, manufacturing, etc., is ever more impeding while failing to address things like environmental destruction/pollution, which causes real harm. (Notice that I did not say climate change; that's a separate subject.)
It all makes sense when one sees it through the lens of either corruption or, more likely, human stupidity, where a bunch of rules gives people the comfort of feeling protected.
Both false positives and false negatives are harmful. False positives will send people to the hospital for no reason and divert resources from people with real emergencies - not to mention leaving them with a large ER bill to pay. False negatives will result in people with actual heart attacks dismissing their symptoms and dying.
I think the FDA safety bar for things like this should be more like: "this has no obvious harm in use, it has a plausible mechanism of action to help, it isn't fraudulently measuring what it claims to measure, and it's science-backed."
Something like this hits all the targets already.
Also, you can already buy home ECG devices for a couple hundred bucks. Not sure if there's some history of these being banned or whatever, but otherwise I'd guess the main problem is just a lack of much interest in the market.
They allow a summary report. They don't require a clinical human trial. They barely care if you follow the FCC and safety requirements for electronics in general. This does not look burdensome.
The symptoms aren’t.
Men and women have slightly different ECGs, and a doctor can usually tell your gender from an ECG. But the appearance of a heart attack is more similar across the sexes than a normal heartbeat is. Gender differences have a much smaller impact on an ECG than things like body mass and blood pressure. Overworked hearts look like overworked hearts.
ashwinsundar•4d ago
Continuous monitoring is still extremely challenging, because ECG data needs to be sampled at a relatively high frequency (~200 Hz) to accurately identify the QRS complex in the waveform. That uses a lot of power, and the batteries we have still aren't good enough to support those kinds of demands: 200 * 60 * 60 = 720,000 samples per hour to collect and process.
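To make that concrete, here's a toy sketch of a Pan-Tompkins-style QRS detector (simplified; the 5-15 Hz band and 150 ms window are textbook values, and the threshold is naive rather than adaptive):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_qrs(ecg: np.ndarray, fs: float = 200.0) -> np.ndarray:
    """Return approximate sample indices of QRS complexes."""
    # 1. Bandpass 5-15 Hz: keeps QRS energy, rejects baseline wander and noise.
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate: emphasizes the steep QRS slopes.
    diff = np.diff(filtered)
    # 3. Square: all values positive, large slopes accentuated.
    squared = diff ** 2
    # 4. Moving-window integration over ~150 ms (typical QRS duration).
    win = int(0.15 * fs)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    # 5. Naive fixed threshold; the real algorithm adapts it continuously.
    above = integrated > 0.5 * integrated.max()
    # Rising edges of above-threshold regions approximate QRS onsets.
    return np.flatnonzero(np.diff(above.astype(int)) == 1)
```

Sampling much below ~200 Hz smears the steep slopes that steps 2-3 depend on, which is why the rate is hard to cut directly.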
It's possible that algorithmic approaches may be able to reduce the sampling frequency required. Power-constraints were the main issue when I studied this topic 10 years ago during my master's degree. I had looked into non-frequency domain techniques (such as empirical mode analysis/Hilbert-Huang transform) as a possible way to reduce sampling frequency and thus power consumption.
https://github.com/AshwinSundar/Empirical-Mode-Decomposition...
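The heart of EMD is a "sifting" loop; here's a toy sketch (fixed iteration count and naive boundary handling, so treat it as illustration rather than a real implementation like the repo above):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x: np.ndarray, n_iters: int = 10) -> np.ndarray:
    """Extract one intrinsic mode function (IMF) by sifting."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(n_iters):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 3 or len(minima) < 3:
            break  # too few extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(t)  # envelope through maxima
        lower = CubicSpline(minima, h[minima])(t)  # envelope through minima
        h = h - (upper + lower) / 2  # subtract the local mean envelope
    return h

# Sifting x - sift_imf(x) again yields the next, slower mode, and so on.
```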
sneak•17h ago
rather than reducing the frequency of the sampling, dynamically adjust the duty cycle of when sampling is happening?
this is probably a dumb suggestion, it seems pretty obvious. for example the apple watch doesn’t do o2 monitoring continuously, just for some fraction of the time.
do you need to sample every second to detect heart attacks? don’t they continue to show up on an ekg for more than 30 seconds?
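rough numbers, assuming (hypothetically) a 30-second burst every 5 minutes:

```python
fs = 200                     # Hz, from upthread
burst_s, period_s = 30, 300  # 30 s on out of every 5 min (made-up schedule)
duty = burst_s / period_s    # 0.1 -> 10% duty cycle
print(fs * 3600 * duty)      # 72,000 samples/hour vs 720,000 continuous
```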
elric•17h ago
Making it effectively useless? Unless the fraction of the time is multiple times per minute? E.g. in sleep apnea it's not uncommon for some desaturation to occur, triggering an arousal and deeper breaths, restoring saturation, only for the cycle to repeat 2 minutes later.
My Garmin has a similarly useless feature. I have no idea what the supposed benefit is. Maybe they hope that if they sample multiple nights they can detect some desaturation anyway and can get the user in for polysomnography? Might be worth it.
firesteelrain•5h ago
I went through this with a relative just two weeks ago and learned it from the cardiologists.
hwillis•7h ago
Integrated analog front ends have had big impacts on efficiency, and battery improvements have delivered basically the same increase in specific energy. I would be shocked if power were a limiting factor.
https://www.ti.com/lit/ds/symlink/ads1291.pdf?ts=17469713549...
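Back-of-the-envelope, using the ~335 µW/channel figure from that datasheet and a hypothetical CR2032-class cell:

```python
afe_power_w = 335e-6                        # ADS1291 per-channel power (datasheet)
cell_mah, cell_v = 200, 3.0                 # assumed coin-cell capacity and voltage
energy_j = cell_mah / 1000 * cell_v * 3600  # ~2160 J
print(energy_j / afe_power_w / 86400)       # ~75 days on the AFE alone
# In practice the MCU, radio, and signal processing dominate, not the AFE.
```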
avs733•10h ago
Detection of ECG anomalies (especially episodic ones with intermittent recording) was the subject of the PhysioNet/Computing in Cardiology challenge almost 10 years ago [0].
It's amazing how far machine learning has come. I now teach a version of this challenge as a one-day in-class activity in my department's physiology class. The students actually get to train multiple models on a GPU cluster (and compare that to trying to train models on their laptops).
One thing we reinforce in the lesson is human vs. computer "interpretation". Students and clinicians can look at ECGs and make some sense of them, while an LSTM on the raw waveform is worse than random chance or a medical student. Moving to the frequency domain, however, makes the LSTM more accurate than cardiologists, yet neither students nor clinicians can "see" afib in a spectrogram. It's a great way to talk about algorithmic versus human reasoning and illustrate the difference to students.
That then gets reinforced with other case studies of the yin and yang of human and machine decision making throughout our curriculum, like AlphaFold working great until you ask it about a structure in the absence of oxygen, because that's not in its training data.
[0] https://physionet.org/content/challenge-2017/1.0.0/
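If anyone wants to try the frequency-domain step themselves, it's essentially a spectrogram (sketch below; 300 Hz matches the 2017 challenge recordings, but the signal here is just random stand-in data):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 300                        # Hz, the challenge's single-lead sample rate
ecg = np.random.randn(30 * fs)  # stand-in for a real 30 s recording

# Short-time Fourier transform: frequency bins vs. time frames.
f, t, Sxx = spectrogram(ecg, fs=fs, nperseg=256, noverlap=128)
features = np.log1p(Sxx)        # log-compress, a common normalization
print(features.shape)           # this 2-D "image" is what the model consumes
```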
closewith•10h ago
But to be clear, a single-lead ECG requires a minimum of two electrodes, and commonly a third as ground. So a single-lead ECG will have at least two cables attached to electrodes on the patient. The placement depends on which lead (e.g. lead I, lead II, etc.), but there are always at least two.
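For concreteness, the textbook Einthoven limb leads are just differences of electrode potentials (a sketch, not device code):

```python
def limb_leads(v_ra: float, v_la: float, v_ll: float):
    """Einthoven's leads from right-arm, left-arm, left-leg electrode voltages."""
    lead_i = v_la - v_ra    # lead I: left arm minus right arm
    lead_ii = v_ll - v_ra   # lead II: left leg minus right arm
    lead_iii = v_ll - v_la  # lead III: left leg minus left arm
    return lead_i, lead_ii, lead_iii
```

Each lead is one potential difference, hence the minimum of two attached electrodes (plus ground).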