To prevent light from reaching the detector from sources other than light
transmitted through the head, the experiment was performed in a light-tight
enclosure that surrounded the head. The enclosure was built using black
foamboard and covered with two layers of black cloth and a laser safety
curtain.
https://doi.org/10.1117/1.NPh.12.2.025014
It's very common to have a CMS feeding images to an LLM that extracts the contents and gives image files a meaningful file name and alt tag.
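As a rough illustration of that kind of pipeline (describe_image below stands in for whatever vision-capable LLM the CMS calls; it is not a real API):

    from pathlib import Path
    import re

    def describe_image(image_bytes: bytes) -> str:
        """Placeholder for the LLM call: a real CMS hook would send the
        bytes to a vision-capable model and return its one-line description."""
        return "a 3d illustration showing near-infrared light passing through a human head"

    def ingest(image_path: Path) -> dict:
        description = describe_image(image_path.read_bytes())
        # Turn the description into a slug-style file name.
        slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")[:60]
        return {
            "file": f"{slug}{image_path.suffix}",  # new file name for the CMS to apply
            "alt": description,                    # alt text for the <img> tag
        }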
Non-invasively. No "below threshold of detection". Beyond anything our scientists say is possible.
We're just not advanced enough as a species to do it yet.
We need to keep pushing these boundaries.
I suppose you could flood the brain with nano-machines which would latch onto all the bits and pieces and collect the data? But where would they store it? How would we get them all back out again?
I don't think it's possible to do this with our current understanding of physics. This is not a question of needing better technology, but needing a whole new universe with different physics altogether.
I'm not even sure which is more far-fetched, this or superluminal travel. I'm actually leaning towards the former :D
Sunlight contains copious amounts of 800-nm light, so this is probably completely non-hazardous.
1.2 watts over your entire head is fine.
1.2 watts in an 800 nm-diameter cylindrical path is "for some reason we decided to make the outer few millimetres of your skin explode, but we had to be in contact with your skin to manage it, because a laser at that power density would have ionised the air before it reached you".
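To put rough numbers on that (the illuminated scalp area below is my own guess, not a figure from the paper):

    import math

    power_w = 1.2

    # Case 1: 1.2 W spread over the top of the head.
    # Illuminated area is a rough assumption (~0.05 m^2 of scalp).
    head_area_m2 = 0.05
    print(f"spread over the head: {power_w / head_area_m2:.0f} W/m^2 "
          f"(direct sunlight is roughly 1000 W/m^2)")

    # Case 2: the same 1.2 W confined to a cylinder only 800 nm across.
    beam_radius_m = 400e-9
    beam_area_m2 = math.pi * beam_radius_m ** 2
    print(f"800 nm-diameter beam: {power_w / beam_area_m2:.1e} W/m^2")
    # ~2.4e12 W/m^2, about a billion times the intensity of sunlight.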
fNIRS[1] is one of the four main brain imaging technologies (that I know of?): EEG, fMRI, fNIRS, and ultrasound. Like fMRI (& ultrasound?), fNIRS measures the oxygenation levels of different parts of the brain, which has been shown to be a close analogue for brain activity (more activity => more respiration, just like muscles). In this context, it's not enough to simply receive the signal you sent through -- you want to infer which emitter the signal came from so that you can infer the oxygenation levels of the regions it passed through/reflected-off-of.
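To make the "oxygenation from light" step a bit more concrete, here's a minimal sketch of the modified Beer-Lambert law that continuous-wave fNIRS processing is usually described in terms of; the extinction coefficients, separation, and pathlength factor below are illustrative placeholders, not calibrated values:

    import numpy as np

    # Modified Beer-Lambert law: the change in optical density at wavelength L is
    #   dOD(L) = (eps_HbO(L)*dC_HbO + eps_HbR(L)*dC_HbR) * d * DPF
    # Measuring at two wavelengths gives a 2x2 linear system for the two
    # concentration changes.

    # Illustrative (not calibrated) extinction coefficients.
    # Rows: wavelengths (e.g. ~760 nm, ~850 nm); columns: [HbO, HbR].
    eps = np.array([[0.6, 1.5],
                    [1.1, 0.8]])

    d = 3.0    # source-detector separation in cm (typical optode spacing)
    dpf = 6.0  # differential pathlength factor, assumed constant here

    def oxygenation_change(I_baseline, I_now):
        """Convert detected intensities at two wavelengths into changes in
        oxy-/deoxy-hemoglobin concentration."""
        dOD = -np.log10(np.asarray(I_now) / np.asarray(I_baseline))
        dC = np.linalg.solve(eps * d * dpf, dOD)  # [dHbO, dHbR]
        return {"dHbO": dC[0], "dHbR": dC[1]}

    # Example: 850 nm dims slightly while 760 nm brightens a touch,
    # consistent with a local increase in oxygenated blood.
    print(oxygenation_change(I_baseline=[1.00, 1.00], I_now=[1.01, 0.98]))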
All of that is a very amateur, high-level overview, but hopefully it clearly supports my underlying point/question: how could you possibly make this work with a cross-head emitter-detector setup?? It seems impossible to disentangle more than one emitter's signals, and I'm not sure how you'd map oxygenation levels without more than one.
Then again, fNIRS and EEG both rely on some serious statistical wizardry to turn 16-128 1D time series into a 3D model of activity, so perhaps I'm underestimating our tools! For example, the addition of frequency modulation to the fNIRS setup is an ongoing area of frontier research.
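One way I could imagine the frequency modulation helping with the disentangling question above (this is my own toy model, not something from the article): drive each emitter at its own modulation frequency, then demultiplex the single detector trace with a software lock-in.

    import numpy as np

    fs = 10_000                    # detector sample rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)  # one second of samples

    # Two sources share one detector, each amplitude-modulated at its own
    # frequency; the gain of each is the physiological signal we care about.
    mod_freqs = {"source_A": 1_000.0, "source_B": 1_300.0}
    true_gain = {"source_A": 0.70, "source_B": 0.20}

    detector = sum(true_gain[s] * np.sin(2 * np.pi * f * t)
                   for s, f in mod_freqs.items())
    detector += 0.05 * np.random.randn(len(t))  # detector noise

    def lock_in(signal, f_ref):
        """Recover the amplitude of the component at f_ref: mix with
        quadrature references and low-pass by averaging."""
        i = np.mean(signal * np.sin(2 * np.pi * f_ref * t))
        q = np.mean(signal * np.cos(2 * np.pi * f_ref * t))
        return 2 * np.hypot(i, q)

    for s, f in mod_freqs.items():
        print(s, round(lock_in(detector, f), 3))  # ~0.70 and ~0.20

In principle each source-detector pair then gets its own channel out of a single photodiode trace, which is what the reconstruction step would consume.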
P.S. In case any of the hackers here haven't heard yet: BCI (brain-computer interface) is blowing up right now thanks to the unreasonable efficacy of LLMs for decoding brain activity[2][3][4], and it's a very hackable field! There's a healthy open-source community for both fNIRS[5] and EEG[6], and I can personally highly recommend the ~$1000 Unicorn EEG system[7] for hackers.
[1] https://en.wikipedia.org/wiki/Functional_near-infrared_spect...
[2] https://www.nature.com/articles/s42003-025-07731-7
[3] https://arxiv.org/abs/2309.14030v2
[4] https://arxiv.org/pdf/2401.03851
[5] https://openfnirs.org/2024/01/01/continuous-wave-spectroscop...
[7] https://www.gtec.at/product-configurator/unicorn-brain-inter...