> The display delivers high-quality audio
Are multiple pixels somehow combined to reproduce low frequencies?
It would be wild to integrate this into haptics
There's this excellent (German?) website that lets you play around with and understand these via demos. I'll see if I can find it.
Edit: found it, it’s https://www.audiocheck.net/audiotests_stereophonicsound.php
For headphone-based spatialization (binaural synthesis), usually virtual Ambisonics fed into HRTF convolution is used, which is not amplitude based; height in particular is encoded using spectral filtering.

So loudspeakers -> mostly amplitude based, headphones -> not amplitude based.
HRTF as used in binaural synthesis is for headphones only, not relevant here.
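To illustrate the HRTF convolution step mentioned above: a minimal sketch that renders a mono source to two ears by convolving it with a left/right HRIR pair. The HRIRs here are toy placeholders (a real system would use measured responses for the desired direction), and the names and values are made up for illustration.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source binaurally by convolving with an HRIR pair.

    hrir_left / hrir_right stand in for measured head-related impulse
    responses for one source direction; toy arrays are used below.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Pad to a common length so the channels can be stacked.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])

# Toy HRIRs: the right ear receives the signal later and quieter,
# mimicking a source off to the listener's left.
hrir_l = np.array([1.0, 0.3])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])
mono = np.random.default_rng(0).standard_normal(1000)
out = binauralize(mono, hrir_l, hrir_r)
```

In practice the spectral shape of the HRIRs (not just delay and gain) is what encodes elevation, which is why this approach is not purely amplitude based.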
Concerning absolute localization, in frontal position, peak accuracy is observed at 1–2 degrees for localization in the horizontal plane and 3–4 degrees for localization in the vertical plane (Makous and Middlebrooks, 1990; Grothe et al., 2010; Tabry et al., 2013).
from https://www.frontiersin.org/journals/psychology/articles/10....
Humans are quite good at estimating distance too, inside rooms.
Humans use three cues for localization: time differences, amplitude differences, and spectral cues from the outer ears, head, torso, etc. They also use slight head movements to disambiguate sources where the signal differences would otherwise be the same (front and back, for instance).
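The first two cues can be sketched numerically: estimate the interaural time difference (ITD) from the cross-correlation peak and the interaural level difference (ILD) from the RMS ratio. The stereo test signal, delay, and gain below are made-up values for illustration.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate interaural time and level differences of a stereo pair."""
    # ITD: lag of the cross-correlation peak, in seconds.
    # With this lag convention, a negative ITD means the left channel leads.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / fs
    # ILD: RMS level ratio in dB.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20 * np.log10(rms(left) / rms(right))
    return itd, ild

fs = 48000
t = np.arange(fs // 10) / fs
src = np.sin(2 * np.pi * 440 * t)
delay = 24                              # 0.5 ms: source toward the left ear
left = np.pad(src, (0, delay))
right = 0.5 * np.pad(src, (delay, 0))   # arrives later and quieter on the right
itd, ild = itd_ild(left, right, fs)
```

The third cue (spectral filtering by the pinna and torso) cannot be captured this simply; it is exactly what HRTFs encode.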
I do agree that humans would not perceive the location difference between two pixels next to each other.
Edit: Found it: https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.20...
Go to supporting information on that page and open up the mp4 files
Edit - I'm not sure that's the same thing? The release talks about pixel based sound, the linked paper is about sticking an array of piezoelectric speakers to the back of a display.
Edit 2 - You're right; the press release is pretty poor at explaining this, though. It is not the pixels emitting the sound. It's an array of something like 25 speakers arranged like pixels.
This here seems to be about adding separate piezoelectric actuators to the display though, it doesn’t seem to use the panel itself.
> by embedding ultra-thin piezoelectric exciters within the OLED display frame. These piezo exciters, arranged similarly to pixels, convert electrical signals into sound vibrations without occupying external space.
It is getting very interesting: sound, possibly haptics. We already had touch of course, including fingerprint (and visuals, of course). We are more and more able to produce rich sensory experiences from panes of glass.

Also, different parts of the screen could generate different sound sources to create a soundscape tailored to the person in front of the screen (e.g. a laptop user).
Interesting tech to watch!
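The per-region idea above could be sketched as amplitude weighting across the exciter grid: give each exciter a Gaussian gain centered on the target screen position. The 5x5 grid size, spread value, and function are all made-up assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical 5x5 exciter grid; the actual array layout in the paper
# (reportedly ~25 piezo exciters arranged like pixels) may differ.
ROWS, COLS = 5, 5

def exciter_gains(target_xy, spread=1.0):
    """Gaussian amplitude weights that localize a source near target_xy
    (x, y in grid coordinates), normalized to constant total power."""
    ys, xs = np.mgrid[0:ROWS, 0:COLS]
    d2 = (xs - target_xy[0]) ** 2 + (ys - target_xy[1]) ** 2
    gains = np.exp(-d2 / (2 * spread ** 2))
    return gains / np.linalg.norm(gains)

# A source placed near the screen's top-left corner.
g = exciter_gains((0.0, 0.0))
```

Each exciter would then play the source signal scaled by its gain; moving `target_xy` pans the perceived position across the panel, analogous to loudspeaker amplitude panning.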