Turning a knob with a mouse is the worst interface I can think of. I don't know why audio apps/DAWs lean so hard on skeuomorphism here when the metaphor just doesn't make sense in this context.
Knobs are confusing when converted to a mouse paradigm because there are a few competing strategies to control them (click+drag up/down, click+drag left/right, weird rotational gestures, etc.), and you have to guess, since each FX vendor and piece of software may implement it just a little differently.
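To make the ambiguity concrete, here's a minimal sketch of the most common strategy (vertical drag) in TypeScript; the knob object, the normalized 0..1 range, and the pixel constants are all made up for illustration:

```ts
// Sketch of the "vertical drag" knob strategy: 200 px of travel sweeps the
// full range, Shift gives fine control. Other DAWs use horizontal or
// circular drags instead, which is exactly the guessing game.
function attachVerticalDrag(el: HTMLElement, knob: { value: number }) {
  el.addEventListener("pointerdown", (down: PointerEvent) => {
    const startY = down.clientY;
    const startValue = knob.value;
    el.setPointerCapture(down.pointerId); // keep events even outside el

    const onMove = (move: PointerEvent) => {
      const scale = move.shiftKey ? 2000 : 200; // px per full sweep
      const delta = (startY - move.clientY) / scale; // drag up = increase
      knob.value = Math.min(1, Math.max(0, startValue + delta));
    };
    el.addEventListener("pointermove", onMove);
    el.addEventListener(
      "pointerup",
      () => el.removeEventListener("pointermove", onMove),
      { once: true }
    );
  });
}
```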
But the layout of these knobs and buttons, while certainly not standard, is generally familiar across various filters, etc. So if you are dealing with a complex interface, the skeuomorphism absolutely helps to make the input more familiar and easily accessible.

This is what skeuomorphism is for, and this is a great place to use it.

Imagine if the symbols for "play", "pause", and "stop" were changed simply because it no longer made sense to follow the conventions of a VCR, then multiply that by an order of magnitude.
I never use 'hardware' and I'm totally happy doing what I do (that's music, I think: enjoying your craft). Most people I know using similar tools do have MIDI controllers to get more of an instrumental interface. There are tons of options. No need to discourage anyone...
Double-clicking usually lets one type the value... really good interfaces let one scroll seamlessly, independent of screen borders; the perfect pairing with a trackball or a long surface/desk for sliding the mouse.
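On the web, that borderless behavior is what the Pointer Lock API gives you: once locked, the cursor disappears and you get raw movement deltas with no screen edge to hit. A rough sketch (the `applyDelta` callback is hypothetical):

```ts
// Borderless knob/slider dragging via the browser Pointer Lock API.
function borderlessDrag(el: HTMLElement, applyDelta: (dy: number) => void) {
  el.addEventListener("mousedown", () => el.requestPointerLock());

  document.addEventListener("mousemove", (e: MouseEvent) => {
    if (document.pointerLockElement === el) {
      applyDelta(-e.movementY); // raw delta, even past the screen edge
    }
  });

  el.addEventListener("mouseup", () => document.exitPointerLock());
}
```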
I'm racking my brain thinking of what a better interface would be for selecting a number between a range of values, where the number is a point on a continuum and not any specific value, and can't think of one. The equivalent "traditional" UX for webapps would be a slider control, but that's functionally the same and you'd be going against many years of domain-specific common understanding for not much benefit.
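One wrinkle worth noting: whatever the control looks like, the position is usually a normalized 0..1 value mapped onto the real range, often logarithmically, because parameters like cutoff frequency are perceived on a log scale. A small sketch, with the 20 Hz to 20 kHz range picked purely as an example:

```ts
// Map a normalized knob/slider position onto a log-scaled frequency range.
const FMIN = 20;
const FMAX = 20000;

// position (0..1) -> frequency in Hz
function positionToHz(p: number): number {
  return FMIN * Math.pow(FMAX / FMIN, p);
}

// frequency in Hz -> position, e.g. to restore a saved preset
function hzToPosition(hz: number): number {
  return Math.log(hz / FMIN) / Math.log(FMAX / FMIN);
}

console.log(positionToHz(0.5).toFixed(0)); // ~632 Hz, the geometric midpoint
```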
Yet somehow the two industries have pretty much entirely different tech stacks and don't seem to talk to one another.
Telephony is significantly less latency-sensitive than real-time audio processing; it’s also significantly less taxing, since you’re dealing with a single channel.
The level of compression and audio resolution required are significantly different too. You can tune codecs for voice specifically, but you don’t want compression when recording audio and can’t bias towards specific inputs.
They’re only similar in that they handle audio. But that’s like saying the needs of a unicycle and the needs of an F1 car are inherently the same because they have wheels.
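Some rough numbers behind that (typical figures, not anything rigorous): telephony codecs like Opus and G.711 are usually packetized in 20 ms frames and ITU-T G.114 budgets roughly 150 ms mouth-to-ear, while a musician monitoring their own playing wants the whole round trip under about 10 ms:

```ts
// Back-of-the-envelope buffer latency at a given sample rate.
const bufferMs = (frames: number, sampleRateHz: number) =>
  (1000 * frames) / sampleRateHz;

console.log(bufferMs(64, 48000).toFixed(2));  // 1.33 ms  - tight DAW buffer
console.log(bufferMs(256, 48000).toFixed(2)); // 5.33 ms  - comfortable DAW buffer
console.log(bufferMs(960, 48000).toFixed(2)); // 20.00 ms - one telephony frame
```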
I feel like most people doing audio in music are not working at the low level. Even if they are creating their own plugins, they are probably not integrating with the audio interface. The point of JACK or Pipewire is to basically abstract all of that away so people can focus on the instrument.
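To illustrate what "abstract all of that away" means in practice: JACK's and PipeWire's real APIs are C, but the shape of the abstraction looks roughly like this hypothetical TypeScript sketch, where the app only supplies a process callback and the server owns everything else:

```ts
// Not JACK's or PipeWire's actual API; a made-up stand-in showing the shape:
// the server owns device setup, buffer sizes, and routing, and calls us.
type ProcessCallback = (nFrames: number, out: Float32Array) => void;

class FakeAudioServer {
  run(clientName: string, process: ProcessCallback): void {
    const out = new Float32Array(128); // the server picks the buffer size
    for (let cycle = 0; cycle < 3; cycle++) {
      process(out.length, out); // server calls *us*; no driver code in the app
    }
    console.log(`${clientName}: last sample ${out[out.length - 1]}`);
  }
}

// The "instrument" is just a function filling buffers.
let phase = 0;
new FakeAudioServer().run("my-synth", (n, out) => {
  for (let i = 0; i < n; i++) {
    out[i] = Math.sin(phase) * 0.2; // quiet 440 Hz sine
    phase += (2 * Math.PI * 440) / 48000; // assume 48 kHz
  }
});
```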
The latency in music is a much, much bigger issue than in voice, so any latency spike would render network audio completely unusable. I know Zoom has a "real time audio for musicians" feature, but outside of a few Zoom demos during lockdown, I'm not sure anybody uses this.
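The physics backs this up. A common rule of thumb (mine, not something from this thread) is that ensemble playing starts to fall apart somewhere past roughly 25-30 ms one way, which is about the acoustic delay of standing 10 m apart:

```ts
// Why "jamming over the network" breaks down, in rough numbers.
const SOUND_M_PER_S = 343;    // speed of sound in air
const FIBER_KM_PER_MS = 200;  // light in fiber: ~2/3 of c

console.log(((10 / SOUND_M_PER_S) * 1000).toFixed(1)); // 29.2 ms across a stage
console.log((1000 / FIBER_KM_PER_MS).toFixed(1));      // 5.0 ms per 1000 km,
// before adding buffering, jitter margin, and codec delay on top
```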
Pipewire supports audio channels over the network, but again I'm not entirely sure what this is for. Certainly it's useful for streaming music from device A to device B, but I'm not sure anybody uses it in a production setting.
I could see something like a "live coding symphony", where people have their own live coding setups and the audio is generated on a central server. This is not too different from what, say, Animal Collective did. But while live coding is a beautiful medium in its own right, it does lack the muscle memory and tactile feedback you get from playing an instrument.
I would love to see these fields collaborate, as you said, but to me these are the immediate blockers that make it less practical.