I don't know if 20% is correct, but I feel it's very close to it. I also think a lot of internet arguments happen as a direct result of miscommunication. Emojis are great, but they get abused to the point that HN filters them out. Perhaps allow readers to toggle whether they see emojis?
How much of the book will you understand if you only read page 1?
Somebody cursing in French can still be interpreted as anger even if you don't understand French, and written profanity can still be interpreted as anger even if you didn't hear it spoken.
Tone and language do complement each other, but neither is a prerequisite for the other, as your book analogy would suggest.
Parsed, perhaps, but it's so context-sensitive that it's not useful, save for extremes. The same tone of voice can carry many different meanings depending on what's actually being said, and yet another if you add context.
If communication is 20% verbal and 80% nonverbal, and if communication is very nonlinear in understanding (as with your book example), how do we know what 1% of communication is? What does it mean, and how can we tell that the figure is correct, when our main or only way of detecting whether communication succeeded is through understanding or lack thereof?
That's not even a good test, due to miscommunication. Both parties might think it succeeded, but then much later on you find out the truth (maybe).
I would load audio files into Audacity and look at them to see how the audio "looked": how intense each frequency is over time. You can even set a track to spectrogram view while recording, which lets you see the sound in real time.
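For anyone curious what a spectrogram actually computes, here's a minimal sketch in Python using SciPy's short-time Fourier transform (the synthetic 440 Hz tone is just a stand-in for a loaded audio file):

```python
import numpy as np
from scipy import signal

# Stand-in audio: one second of a 440 Hz sine tone.
# In practice you'd load a file, e.g. with scipy.io.wavfile.read.
rate = 8000                      # samples per second
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: for each small window of time,
# measure how intense each frequency is. That grid of intensities
# is exactly what Audacity draws as a spectrogram.
freqs, times, sxx = signal.spectrogram(audio, fs=rate, nperseg=256)

# The loudest frequency bin should sit near 440 Hz.
peak_hz = freqs[sxx.mean(axis=1).argmax()]
```

Plotting `sxx` with time on the x-axis and frequency on the y-axis (log-scaled intensity as color) reproduces the familiar view; `nperseg` trades time resolution against frequency resolution.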
Music also tends to be very beautiful in the spectrogram! And birdsong also. Sometimes I would see a bird first, and only afterwards notice it in my field of hearing.
I noticed while analyzing a podcast that I began to recognize common words like "you." I also noticed that I was able to easily distinguish between different people's voices.
I had to wonder: if I were deaf, or if I became deaf, would I suddenly have a strong motivation to learn how to read these things? To develop some kind of device that would show them to me 24 hours a day?
I have not done this, but the project has remained in the back of my mind for over a decade.
Does anyone else know more about this? Does such a device exist?
I think only some linguists learn how to read spectrograms. But it seems like something that might be extremely useful to any hearing-impaired person?
Relating to the article, I think one could quickly learn to read them fluently (e.g. as subtitles, perhaps overlaid on real life), and of course you get the tonal information built in for free—that's what a spectrogram is!
https://news.wisc.edu/a-taste-of-vision-device-translates-fr...
It was a Telarc (I think?) recording of the 1812 Overture.
The grooves were wide where the cannons went off, so that the needle could deflect enough to capture the dynamic range. You could see the waveform.
I think of "Surely You're Joking, Mr. Feynman!", where Feynman learned to sniff like a bloodhound: he would have people handle books, and he could tell which ones had been handled.
I think there are things where just trying would succeed more often than you'd expect.
Spoken English is the same.
Just watch a typical George Carlin video on how he stretches out a single word.
realty_geek•2mo ago
In Akan languages it is not difficult to conceive of how the same word can be written in different ways to convey another dimension.
Anyone who speaks an Akan language will understand that each of the words below means "good," but with a slightly different emphasis.
papa papaaapa papapapapapa
What is the linguistic term for this concept?
realty_geek•2mo ago
ChatGPT also explained the concept of ideophones, which was helpful:
https://chatgpt.com/share/69187b3e-7948-8001-9fea-2b4412d5a7...