> By collecting this data, images of people can be generated from multiple perspectives, allowing individuals to be identified. Once the machine learning model has been trained, the identification process takes only a few seconds.
> In a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.
So what's the resolution of these images, and what's visible/invisible to them? Does it pick up your clothes? Your flesh? Or mostly your bones?
Gait analysis is complete fiction. Especially with a non-visual approach like this.
> The results for CSI can also be found in Figure 3. We find that we can identify individuals based on their normal walking style using CSI with high accuracy, here 82.4% ± 0.62.
If you're a person of interest you could be monitored, your walking pattern internalized in the model, then followed through buildings. That's my intuition about the practical applications and the level of detail.
If you want to do advanced sensing, like trying to identify a person, I would postulate that you need to saturate a space with high-frequency WiFi traffic from ideally placed mesh points, and let the algorithm train on identifying people by a certain signature (a combination of size/weight, movement/gait, and breath/chest movements).
Source: I worked on such technologies while at Signify (variants of this power Philips/Wiz "SpaceSense" feature).
More here: https://www.theverge.com/2022/9/16/23355255/signify-wiz-spac...
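For a sense of what that kind of pipeline might look like, here's a minimal nearest-centroid sketch over synthetic CSI windows. Every feature, shape, and name here is my own illustration, not anything from Signify or the paper:

```python
import numpy as np

def signature_features(csi_window: np.ndarray) -> np.ndarray:
    """Collapse a (time, subcarrier) CSI amplitude window into a crude
    3-number signature: bulk (size/weight proxy), dominant motion
    periodicity (gait proxy), and low-frequency energy (breath proxy).
    All three are illustrative stand-ins, not a real product's features."""
    amp = np.abs(csi_window)
    bulk = amp.mean()
    motion = amp.mean(axis=1) - amp.mean()      # per-timestep energy
    spectrum = np.abs(np.fft.rfft(motion))
    gait = float(spectrum[1:].argmax() + 1)     # strongest periodicity bin
    breath = float(spectrum[1:4].sum())         # slow chest-movement band
    return np.array([bulk, gait, breath])

def train_centroids(windows, labels):
    """Nearest-centroid 'model': one mean signature per person."""
    feats = {p: [] for p in set(labels)}
    for w, p in zip(windows, labels):
        feats[p].append(signature_features(w))
    return {p: np.mean(v, axis=0) for p, v in feats.items()}

def identify(window, centroids):
    f = signature_features(window)
    return min(centroids, key=lambda p: np.linalg.norm(f - centroids[p]))

# Synthetic stand-in for real CSI: each "person" gets their own mean
# attenuation (bulk) and walking periodicity (freq_bin).
T = np.arange(256)
def make_window(freq_bin, bulk, rng):
    gait_wave = np.sin(2 * np.pi * freq_bin * T / 256)
    return bulk + gait_wave[:, None] + 0.1 * rng.normal(size=(256, 30))

rng = np.random.default_rng(0)
people = {"A": (3, 2.0), "B": (6, 2.5), "C": (10, 3.0)}
windows, labels = [], []
for name, (f, b) in people.items():
    for _ in range(10):
        windows.append(make_window(f, b, rng))
        labels.append(name)
model = train_centroids(windows, labels)
print(identify(make_window(6, 2.5, rng), model))  # "B"
```

The real problem is everything this sketch assumes away: clean per-person labels, a fixed sensor layout, and signatures that stay stable over time.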
The researchers never claimed to generate "images," that's editorializing by this publication. The pipeline just generates a confidence value for correlating one capture from the same sensor setup with another.
[Sidenote: did ACM really go "Open Access" but gate PDF download behind the paid tier? Or is the download link just very well hidden in their crappy PDF viewer?]
I mean you could even jam a microwave oven door open, turn it on, and then measure how much energy loss there was through certain paths. That's essentially all beamforming in WiFi requires -- a really sophisticated way of measuring paths that cause energy loss, and a really sophisticated antenna design that allows you to direct the signal through paths that don't cause energy loss.

The first problem is what's facilitating surveillance, because humans cause signal loss: our bodies are mostly water, and 2.4 GHz radio waves happen to get absorbed really well by water. This causes measurable signal loss on those paths, and the beamforming antennae use that information to route around your body. But they could also just log that information and know where you are relative to the WAP.
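The "log that information" step really is that simple in principle. A toy sketch, with made-up per-path RSSI readings and a made-up threshold:

```python
# Flag paths whose signal dropped well below their quiet-room baseline:
# water (i.e., a body) on the path absorbs 2.4 GHz energy.

def obstructed_paths(baseline_dbm: dict, current_dbm: dict,
                     threshold_db: float = 6.0) -> list:
    """Return path IDs whose loss vs. baseline exceeds threshold_db.
    A beamformer would route around these; a logger just records them."""
    return [path for path, rssi in current_dbm.items()
            if baseline_dbm[path] - rssi > threshold_db]

# Hypothetical readings: paths "A" and "C" are clear, "B" crosses the room.
baseline = {"A": -40.0, "B": -42.0, "C": -45.0}
now      = {"A": -41.0, "B": -55.0, "C": -46.0}
print(obstructed_paths(baseline, now))  # ['B'] -> someone is on path B
```

Real CSI gives you per-subcarrier amplitude and phase rather than one RSSI number per path, which is exactly why it supports much finer inferences than this.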
It already is, and it's widely used for exactly what the article worries about:
Philips WiZ bulbs: https://www.wizconnected.com/en-us/explore-wiz/spacesense
Alarm.com also supports such sensors: https://poweredbyalarm.com/eventresources/wp-content/uploads...
The paper says, in a somewhat contrived scenario, with dozens of labelled walkthroughs per person, they can identify that person from their gait based on CSI and other WiFi information.
This is a long way from identifying one person in thousands or tens of thousands, being able to transfer identifying patterns among stations (the inference model is not usable with any other setup), etc.
All the talk of "images" and "perspectives" is journalistic fluffery. 2.4 GHz and 5 GHz wavelengths (12 cm and 6 cm) are far too long to produce anything a layperson would call an "image" of a person.
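Those wavelength numbers are just c/f, and resolution is diffraction-limited to roughly that scale:

```python
# Wavelength of common WiFi bands: lambda = c / f
C = 299_792_458  # speed of light, m/s

def wavelength_cm(freq_ghz: float) -> float:
    """Return wavelength in centimeters for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 100

print(f"2.4 GHz: {wavelength_cm(2.4):.1f} cm")  # ~12.5 cm
print(f"5 GHz:   {wavelength_cm(5.0):.1f} cm")  # ~6.0 cm
```

Anything much smaller than ~6 cm is essentially invisible at these frequencies, which is why the output is a coarse attenuation/motion pattern, not a picture.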
What creepy thing could you actually do with this? Well, your neighbor could probably record this information and tell how many and which people are in your home, assuming that there is enough walking to do a gait analysis. They might be able to say with some certainty if someone new comes home.
That same neighbor could hide a camera and photograph your door, or sniff your WiFi and see what devices are active, or run an IMSI catcher and surveil the entire neighborhood, or join a corporate surveillance outfit like Ring. Using the CSI on your WiFi and a trained ML model is mostly cryptonerd imagination.
It's known as your "hand"
> a study with 197 participants, the team could infer the identity of persons with almost 100% accuracy – independently of the perspective or their gait.
The paper seems to make it clear that the technique still depends on gait analysis, but claims it's more robust against gait variations.
That's super impressive! I wonder how it would do at scale, with a few million people. I don't think it would remain as accurate.
> To allow for an unobstructed gait recording, participants were instructed not to wear any baggy clothes, skirts, dresses or heeled shoes.
> Due to technical unreliabiltities, not all recordings resulted in usable data. For our experiments, we use 170 and 161 participants for CSI and BFI, respectively. [out of 197]
I wish they had explained what the technical unreliabilities were.
Heck, even Ecobee remote temperature sensors can do this.
Reminds me of the story about how the Google Nest smoke detector had a microphone in it. [1]
[0] https://www.amazon.com/b?node=23435461011&tag=googhydr-20&hv...
[1] https://www.reddit.com/r/privacy/comments/asmusq/google_says...
Not even the biggest privacy issue of using Alexa devices. I think being listened to 24/7 is a bigger potential issue.
Not sure if Alexa has this, but cheap mm-wave wideband multi-GHz sensors (or, more accurately, radars) now enable more finely grained human presence detection, and also human fall detection[1] with the right algos, so you can for example detect if grandma in the nursing home fell down and didn't get back up, but in a privacy-focused way that doesn't resort to microphones or cameras. Neat.
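The "fell down and didn't get back up" logic on top of such a sensor is mostly a small state machine. The height readings, field names, and thresholds below are invented for illustration, not the actual API of any radar module:

```python
FALL_HEIGHT_M = 0.4   # below this, assume person is on the floor (made-up)
ALERT_AFTER_S = 30    # how long on the floor before alerting (made-up)

def fall_alert(height_samples) -> bool:
    """height_samples: iterable of (timestamp_s, estimated_height_m),
    as a presence radar might report. Returns True if the person stays
    below FALL_HEIGHT_M for ALERT_AFTER_S continuously."""
    down_since = None
    for t, h in height_samples:
        if h < FALL_HEIGHT_M:
            down_since = t if down_since is None else down_since
            if t - down_since >= ALERT_AFTER_S:
                return True
        else:
            down_since = None   # they got back up, reset the timer
    return False

# Simulated stream: standing, falls at t=10, still down at t=45 -> alert.
stream = [(0, 1.6), (5, 1.6), (10, 0.2), (20, 0.2), (45, 0.2)]
print(fall_alert(stream))  # True
```

The nice privacy property is that the sensor only ever emits a few numbers per second, so there's nothing image- or audio-like to leak.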
>Reminds me of the story about how the Google Nest smoke detector had a microphone in it.
Vapes have microphone arrays in them to detect when you're sucking and light up the heating element. Cheap electronics have enabled a new world of crazy.
[1] https://www.seeedstudio.com/MR60FDA2-60GHz-mmWave-Sensor-Fal...
It was listed in the features for the 2nd gen units. https://support.google.com/googlenest/answer/9229922#zippy=%...
The devices that reported BFI information were also stationary, and there were no extra devices transmitting conflicting information.
A single camera would be much more effective.
Given a tightly controlled environment and enough training data, you can use a lot of things as sensors.
These techniques are not useful for general purpose sensing, though. The WiFi router in your home isn't useful for this.
Even Xfinity has motion detection in homes using this technique now.
Seeing this option in settings was definitely a wake up call for me.
We've seen it before with things like taking photos around corners.
And no, it isn't like the Wright flyer and a bit crap now but in 40 years we have jet planes. This will never get significantly better.
Nothing says "out of touch with reality" like 'murcan media.
Any sub-meter precision or presence detection doesn't really matter if these companies already have all your other questions, queries, messages, calendars, browsing history, app usage, and streaming behaviour as well.
Second, it is a logical leap to assume Google knows everything already. They could, for example, build this nearby-Wi-Fi-based location querying API with privacy in mind: purposefully making it anonymous without associating it with your account, going through relays (such as Oblivious HTTP), or using various private set intersection techniques instead. It is tired and lazy to argue that just because some Big Tech company has the capability of doing something bad, it therefore must already be doing it.
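For anyone unfamiliar, the core of private set intersection is surprisingly small. Here's a toy Diffie-Hellman-style PSI sketch: both parties learn only the intersection, never each other's full sets. This is deliberately not production crypto (real deployments use elliptic curves and proper hash-to-group), just the shape of the idea:

```python
import hashlib
import secrets

# Toy DH-based PSI. ILLUSTRATIVE ONLY: small group, no malicious-security.
P = 2**127 - 1  # Mersenne prime modulus (stand-in for a real EC group)
G = 3

def h2g(item: str) -> int:
    """'Hash to group': map an item to a group element as g^H(item) mod p."""
    e = int.from_bytes(hashlib.sha256(item.encode()).digest(), "big")
    return pow(G, e, P)

def psi(set_a, set_b):
    """Both parties blind their hashed items with a secret exponent; items
    match iff H(x)^(a*b) collides, revealing only the intersection."""
    a = secrets.randbelow(P - 2) + 1   # party A's secret
    b = secrets.randbelow(P - 2) + 1   # party B's secret
    # A sends H(x)^a; B re-blinds it to H(x)^(a*b) and returns it.
    a_masked = {pow(h2g(x), a, P): x for x in set_a}
    ab = {pow(v, b, P): orig for v, orig in a_masked.items()}
    # B sends its own H(y)^b; A raises those to a, getting H(y)^(a*b).
    ba = {pow(pow(h2g(y), b, P), a, P) for y in set_b}
    return {orig for v, orig in ab.items() if v in ba}

print(psi({"bssid-A", "bssid-B"}, {"bssid-B", "bssid-C"}))  # {'bssid-B'}
```

So a location API could, in principle, check "which of these nearby BSSIDs do you know?" without the server ever seeing the client's scan list in the clear.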
Run grapheneos!