Once you figure out the “through other objects” part, I guess it just becomes an energy control problem, i.e. how to get object A to location B accurately and decelerate it before the effect wears off. Which is maybe not so hard when you have a teleport sender and receiver that can do the acceleration and deceleration.
Hypothetically the sender would estimate the trajectory required to hit the receiver, then sync/teleport an inert beam of atoms (photons or something) along it. Then, once sync has been established, you would know the trajectory settings to use. Perhaps it would be a giga-energy problem to, say, phase the object, accelerate it to light speed, then receive it at the destination and un-phase it. This would allow you to teleport living things without the moral dilemma of losing their original consciousness.
The practical distance would be based on the achievable speed, i.e. how far can we shoot something before it phases back. You can cover a pretty big distance in 1 µs at the speed of light: around 300 m. If you can keep something phased for 10 ms, you could go 3000 km, at which point you just form a network of receivers.
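Just to sanity-check the distances above, here is a trivial sketch assuming the phased object travels at exactly the vacuum speed of light (the function name is made up for illustration):

```python
# Distance covered at light speed while the object stays "phased".
C = 299_792_458  # speed of light in vacuum, m/s

def phased_distance(duration_s: float) -> float:
    """Distance in metres covered at c during duration_s seconds."""
    return C * duration_s

print(f"1 us  -> {phased_distance(1e-6):,.0f} m")          # ~300 m
print(f"10 ms -> {phased_distance(10e-3) / 1000:,.0f} km")  # ~3,000 km
```

So the 1 µs and 10 ms figures quoted above check out.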
(Just an exercise, don’t take it seriously!)
Where did you read that in the article? I couldn't find it.
> We speculate that the participant’s fair skin and lack of hair were significant factors that reduced the attenuation of light to feasibly detect a signal. In addition to the participant wherein a signal was observed, the experiment also included trials on seven other subjects. The details of the subject pool are as follows: two females and six males; 25 to 35 years old; 14.5 to 15.5 cm head diameter; Fitzpatrick skin types: 3 type I, 4 type II, and 1 type V; hair types: 1 bald, 4 short and light-colored, and three dense and dark-colored. We did not observe any significant time-correlated signals above background noise for the seven other subjects.
Photons measured in this regime explore regions of the brain currently inaccessible with noninvasive optical brain imaging.
I believe reflected photons are much more useful: by measuring the time between signal and response you get the flight time, which tells you depth. Of course I have no idea whether infrared light reflects off anything in the brain.
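The time-of-flight idea above can be sketched in a few lines. This is only an idealized straight-line estimate; the refractive index of brain tissue here is an assumed ballpark value, and real photons in tissue scatter heavily, so the true path is much longer than the straight-line depth:

```python
# Rough depth estimate from round-trip time of a reflected photon,
# assuming straight-line travel through tissue (a big simplification).
C = 299_792_458   # speed of light in vacuum, m/s
N_TISSUE = 1.4    # assumed refractive index of brain tissue (ballpark)

def depth_from_tof(round_trip_s: float) -> float:
    """Depth in metres implied by a round-trip time, halved for the return leg."""
    v = C / N_TISSUE
    return v * round_trip_s / 2

# A 100 ps round trip would correspond to roughly 1 cm of depth,
# so the detector timing would need picosecond-scale resolution.
print(depth_from_tof(100e-12))
```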
In standard fNIRS, a light source and a detector forming a channel have to be ~2-3 cm apart. The light leaves the source, follows a banana-shaped path through the scalp due to scattering, and reaches the detector. The idea is that, due to differential absorption of different wavelengths by oxygenated and deoxygenated haemoglobin, you can send 2 wavelengths, and solving a 2x2 system gives you the fluctuations in oxygenated and deoxygenated haemoglobin in the tissue the light traversed. This is a proxy of brain activation in that area: if the neurons fire a lot, they consume more oxygen and the brain then sends more oxygen there. This is called the blood-oxygenation-level-dependent (BOLD) response.

If the path length is too short, the light cannot get deep enough to reach the cortex, so you do not measure brain. If it is longer, too much light is absorbed on the way and less signal reaches the detector. The researchers here try to detect light with source and detector diametrically opposite on the scalp, and they show they can. However, it is not clear what kind of application this can have. It was done under very restrictive conditions (very light-skinned subjects, no hair, 30-minute recording). Moreover, an advantage of standard fNIRS is its high spatial specificity, and it is not clear how to actually translate the light intensity data in their case to brain activation (it is probably going to be very noisy), as the light traverses the whole head.
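The 2x2 inversion mentioned above (the modified Beer-Lambert law) can be sketched as follows. The extinction coefficients and path length here are placeholder numbers for illustration only; in practice they come from published tables for HbO2/HbR at the chosen wavelengths:

```python
import numpy as np

# Modified Beer-Lambert sketch: two wavelengths, two chromophores.
# Rows = wavelengths, columns = [eps_HbO2, eps_HbR]. PLACEHOLDER values.
E = np.array([
    [1.0, 2.5],   # wavelength 1
    [2.2, 0.8],   # wavelength 2
])
PATH_LENGTH = 1.0  # effective optical path length (placeholder units)

def hb_changes(delta_od: np.ndarray) -> np.ndarray:
    """Solve (E * L) @ [dHbO2, dHbR] = delta_OD for the haemoglobin changes."""
    return np.linalg.solve(E * PATH_LENGTH, delta_od)
```

For example, optical density changes of [0.6, 0.38] at the two wavelengths would invert to concentration changes of [0.1, 0.2] with these placeholder coefficients.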
In any case, they are experimenting with a novel technique. It is more like a PoC showing they can at least detect photons, but nothing more than that, and we are probably far away from any potential applications, if any even come out of this. But it could also lead to applications we cannot imagine right now. As for applying this to measure brain activity the way current fNIRS and fMRI do, I am skeptical.
The X-rays in CT scans also traverse the whole head. Would it be possible to use the same algorithms as CT to reconstruct a 3D image with this tech?
No short-term brain-computer interface with optical techniques just yet.
I don't think this can give a structural image, but I am not sure what this could be used for at all. It is probably more comparable to fMRI, because the technique, applied over short source-detector paths, usually shows fluctuations in oxygenation levels in the cortex as a proxy of brain activity; but in contrast to fMRI, it could not go deeper into the subcortical structures of the brain.