Turning a knob with a mouse is the worst interface I can think of. I don't know why audio apps/DAWs fall so hard for skeuomorphism here when the metaphor just doesn't make sense in this context.
Knobs are confusing when converted to a mouse paradigm because there are a few possible strategies for controlling them (click+drag up/down, click+drag left/right, weird rotational gestures, etc.), and you have to guess, since each FX studio and piece of software may implement it just a little differently.
But the layout of these buttons, while certainly not standard, is generally familiar across various filters, etc. So if you are dealing with a complex interface, the skeuomorphism absolutely helps to make the input more familiar and easily accessible.
This is what skeuomorphism is for, and this is a great place to use it.
Imagine if the symbols for "play", "pause", and "stop" were changed simply because it no longer made sense to follow the conventions of a VCR, then multiply that by an order of magnitude.
I never use 'hardware', and I'm totally happy doing what I do (that's music, I think: enjoying your craft). Most people I know using similar tools do have MIDI controllers for more of an instrumental interface. There are tons of options; no need to discourage anyone...
Double-clicking usually lets one type the value... really good interfaces let one scroll seamlessly, independent of screen borders; the perfect pairing with a trackball, or a long surface/desk for sliding the mouse.
I'm racking my brain thinking of what a better interface would be for selecting a number in a range of values, where the number is a point on a continuum rather than any specific value, and I can't think of one. The equivalent "traditional" UX for webapps would be a slider control, but that's functionally the same, and you'd be going against many years of domain-specific common understanding for not much benefit.
Probably not; a lot of musicians produce on the go (planes, etc.), so they're dealing with built-in trackpads pretty often. You can still scroll, but it's not as ergonomic.
Ultimately I see two problems though:
1. Sometimes the number doesn't matter or make sense at all. A good example is a macro knob. The value is somewhere between 0 and 1, and synths do let you set it manually (since this is how recorded automation works), but a macro slider doesn't make too much sense IMO.
2. Lots of controls deal with logarithmic values. Anything that corresponds to a frequency is going to need finer control when you're tweaking values below 500Hz than when changing a value between 10000Hz and 10500Hz. Knobs mask this pretty well. I'm sure you could build a slider that dealt with this, but a number box would be very weird, since you'd want the scroll step to be much smaller at lower values (see the sketch below).
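The usual trick is to move the control in a normalized 0..1 position space and map that exponentially onto the frequency range, so equal amounts of mouse movement give equal pitch ratios. A minimal sketch (the range constants and function names are made up for illustration):

    import math

    F_MIN, F_MAX = 20.0, 20000.0  # assumed range; tweak per parameter

    def pos_to_freq(pos: float) -> float:
        """Map a normalized knob/slider position (0..1) to Hz on a log scale."""
        return F_MIN * (F_MAX / F_MIN) ** pos

    def freq_to_pos(freq: float) -> float:
        """Inverse mapping, e.g. for typed-in values."""
        return math.log(freq / F_MIN) / math.log(F_MAX / F_MIN)

    # Equal steps in `pos` give equal *ratios* in Hz:
    # pos_to_freq(0.0) -> 20 Hz, pos_to_freq(0.5) -> ~632 Hz, pos_to_freq(1.0) -> 20000 Hz

A number box could do the same by scrolling in position space rather than in Hz, which gives you the smaller steps at low values for free.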
And 20 pixels wide on a modern screen is so tiny you would have trouble seeing it, so the whole premise of a "20 px knob" is blown. A slider with 100 pixels width would also be pretty small. The smallest I'd make it to work on a modern screen is at least 250 pixels wide. And that's plenty of resolution for most things. If a slider is more important, make it bigger, and you get more resolution.
Arguing about a 20 pixel knob or slider is kind of stupid considering how small that actually is in screen real estate. If the knob or slider is 20 pixels in any direction then you have other UX problems.
Doing things like making 6 pixels of movement on the screen equal 1 pixel on the slider can cause problems with display scaling, and it will mean that if you have sliders near the edge of the screen you won't be able to use jump-on-click; even then it can be a headache.
It is an interesting topic and far from solved.
Reaper has a standard UI for controlling plugins you can use instead of the VST UIs, other DAWs probably do too. It's an awful, lifeless sea of sliders and check boxes that hurts to look at, and instantly drains one of all creativity.
Some people like Reason for instance, but I find that its UI innovations just get in my way.
I would say the opposite, it's basically the perfect interface for a very specific scenario with requirements that don't really occur in much other computer software.
In fact, if it were all MIDI controlled, it's just a matter of mapping the mouse scroll wheel to a MIDI control change.
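A rough sketch of that idea in Python, assuming the mido library (with a backend like python-rtmidi) for MIDI output and pynput for the scroll events; the port name is hypothetical and CC 1 is an arbitrary choice:

    import mido
    from pynput import mouse

    # mido.get_output_names() lists the real ports on your system
    port = mido.open_output("Virtual MIDI Port")  # hypothetical port name
    value = 64  # running CC value, 0..127

    def on_scroll(x, y, dx, dy):
        global value
        value = max(0, min(127, value + dy))  # one wheel detent = one CC step
        port.send(mido.Message("control_change", channel=0, control=1, value=value))

    with mouse.Listener(on_scroll=on_scroll) as listener:
        listener.join()

From there the DAW's own MIDI-learn takes over, same as with a hardware controller.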
I don't play flight sims but I imagine most flight surfaces require small adjustments and the effect of those adjustments on the aircraft is naturally smoothed out by the dynamics of the plane (you're adjusting an acceleration).
I imagine the scroll wheel is not suitable for dogfighting.
I would also assume there are detent-free mousewheels with a far greater number of steps; there used to be. The scroll ring on my trackball is detent-free and quite fine, but it is also ~2" in diameter, considerably larger than the wheel on any mouse.
A slick-looking GUI is a kind of ad for the app. As the author of an accessible, terminal-based DAW app, I contrast remembering an incantation like 'add-track' or 'list-buses' with hunting around. These incantations can have shorter abbreviations, such as 'lb' for list-buses, with 'help bus' or 'h bus' to be sufficiently discoverable; easier for both implementer and user. And then there are hotkeys to bump plugin parameters +/- 1/10/100, etc. Probably I'm pissing into the wind to think the majority of users will ever choose this -- and GUIs do provide amazing facilities for many purposes -- but we do have a huge array of choices on Linux, including this plethora of music creation and production apps. That is a big success, IMO.
Edit: was comparing nama to ecasound there, not the more common graphical DAWs.
1. Drag up/down to change the value.
2. A modifier key to slow the drag for finer-resolution changes while dragging.
3. The ability to double-click the knob and type in a precise value when I know exactly what I want.
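Those three behaviors compose neatly in one widget. A toolkit-agnostic sketch (the class and the pixel constant are illustrative, not any real widget API):

    class KnobDrag:
        """Vertical-drag knob: FULL_RANGE_PX of drag sweeps the whole range."""
        FULL_RANGE_PX = 200.0

        def __init__(self, value=0.5):
            self.value = value  # normalized 0..1
            self.last_y = 0.0

        def mouse_down(self, y):
            self.last_y = y

        def mouse_drag(self, y, shift_held):
            dy = self.last_y - y  # dragging up increases the value
            self.last_y = y
            scale = 0.1 if shift_held else 1.0  # modifier key = 10x finer
            self.value = min(1.0, max(0.0, self.value + scale * dy / self.FULL_RANGE_PX))

        def double_click(self, typed):
            try:  # double-click opens a text box; parse whatever was typed
                self.value = min(1.0, max(0.0, float(typed)))
            except ValueError:
                pass  # keep the old value on bad input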
The problem with knobs on a GUI is when designers stick with them even when there is a faster option, like an opportunity to combine three knobs into one control.
For example, the EQ on any SSL channel strip is a nightmare, because they slavishly stick with the skeuomorphic design of the original hardware. The hardware required mixers to use two hands to adjust gain and frequency at the same time, and then dial in Q on a third knob. Very tedious when you have a mouse.
When this is done right, you get something like FabFilter's Pro-Q parametric EQ. The gain and frequency controls are instead an X/Y handle that you can easily drag across a representation of the frequency spectrum. In addition, you can use a modifier key to narrow/widen your Q. All with a single click and drag of your band.
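Roughly, the mapping such an X/Y display implies; a sketch under assumed ranges, not FabFilter's actual code:

    W, H = 800.0, 400.0            # assumed display size in px
    F_MIN, F_MAX = 20.0, 20000.0   # x axis: log-scaled frequency
    G_MIN, G_MAX = -30.0, 30.0     # y axis: linear gain in dB

    def mouse_to_band(x, y):
        """One drag sets two parameters at once: x -> frequency, y -> gain."""
        freq = F_MIN * (F_MAX / F_MIN) ** (x / W)
        gain_db = G_MAX - (y / H) * (G_MAX - G_MIN)  # top of display = max boost
        return freq, gain_db

    def adjust_q(q, dy):
        """With the modifier held, vertical motion narrows/widens Q instead."""
        return q * 2.0 ** (-dy / 100.0)  # assumed: 100 px of drag doubles/halves Q

Two knobs' worth of tedium collapses into one gesture, and the third parameter rides on a modifier.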
True, though I would put this very much in the "feature, not a bug" bucket. These tools are for people who have worked with the original hardware and want a very faithful emulation, including the look and feel. In the digital world with a modern PC there's not much purpose for a channel strip plugin in the first place, so the only people using one are doing so with intention.
It's a bit like saying that manual transmission cars could be controlled more easily if they were automatic transmission; it's completely true, but if you're buying a manual you want that experience.
Pro-Q is a great example of a digital-first tool (the automatic transmission equivalent), with lots of great visual feedback and a lot of thought put into a mouse+kb workflow. All of Fabfilter's stuff is like this actually, though sometimes to its detriment; the Fabfilter automation and LFO system feels very different from basically every other plugin. It's actually a more efficient workflow when you get used to it, but due to how different it is from everything else most people I talk to dislike it unless they've really bought into the Fabfilter suite.
Which kind of goes back to the original point: VSTs use knobs because it's what people are used to, and using something different might be a negative even if it's better!
Sure it mismatches the GUI, but it gives users the option when they don't want to do a click/drag for freq, then gain, then freq, then gain, then Q. You know?
That tediousness is what keeps me from using the SSL channel strip altogether.
Re: channel strip plugins: the advantage to using them in DAWs is speed and economy. Having everything in one window (à la the Scheps Omni Channel) saves me a lot of clicks vs. when I have multiple plugins in different slots.
I do absolutely everything in the box with a laptop keyboard and track pad. My primary motive is being quick and precise, and the less plugin window management I have to do the better. The channel strip keeps the tools compact and my movements minimal.
Many if not most professional producers use MIDI controllers with knobs/sliders/buttons MIDI mapped to DAW controls. As such the skeuomorphism actually plays a valuable role in ensuring that the physical instrument experience maps to their workflows. Secondarily, during production/mastering, producers are generally using automation lanes and envelopes to program parameters into the timeline, and the piano roll to polish the actual notes.
When I've historically done working sessions, the composition phase of what I'm doing tends to involve very little interaction with the keyboard, and is almost entirely driven by my interaction with the MIDI controller.
Conversely, when I'm at the production phase, I am generally not futzing around with either knobs or the controller, and I am entirely interacting with the DAW through automation lanes or drawing in notes through the piano roll. So I don't really ever use a knob through a mouse, and I've never really encountered any professional or even hobbyist musicians who do, except for throwaway experimentation purposes.
Yet somehow the two industries have pretty much entirely different tech stacks and don't seem to talk to one another.
Telephony is significantly less latency sensitive than real time audio processing, it’s also significantly less taxing since you’re dealing with a single channel.
The level of compression and audio resolution required are significantly different too. You can tune codecs for voice specifically, but you don’t want compression when recording audio and can’t bias towards specific inputs.
They’re only similar in that they handle audio. But that’s like saying the needs of a unicycle and the needs of an F1 car are inherently the same because they have wheels.
I feel like most people doing audio in music are not working at the low level. Even if they are creating their own plugins, they are probably not integrating with the audio interface. The point of JACK or Pipewire is to basically abstract all of that away so people can focus on the instrument.
The latency in music is a much, much bigger issue than in voice, so any latency spike would render network audio completely unusable. I know Zoom has a "real time audio for musicians" feature, but outside of a few Zoom demos during lockdown, I'm not sure anybody uses this.
Pipewire supports audio channels over network, but again I'm not entirely sure what this is for. Certainly it's useful for streaming music from device A to device B, but I'm not sure anybody uses it in a production setting.
I could see something like a "live coding symphony", where people have their own livecoding setups and the audio is generated on a central server. This is not too different from what, say, Animal Collective did. But while live coding is a beautiful medium on its own, it does lack the muscle memory and tactile feedback you get from playing an instrument.
I would love to see, as you said, these fields collaborate, but these, to me, are the immediate blockers which make it less practical.
There are projects that aim to provide synced multiplayer jamming, but last I checked they are all based around looping. The human ear, shockingly, does not lend itself to being fooled and will notice surprisingly small sync issues.
I always compare it with photo editing, where you can cheat and smudge some background details with no one the wiser, whereas any regular non-audiophile will notice similar smudging or sync issues in audio.
It's still limited to whatever latency the network has, but it can be useful for some things. If that means it's mostly useful for loops, then that's up to the musicians. :)
(I myself have used it for remote livestream participants, but only for voice. I was able to get distinct inputs into my console just like folks in the studio had, and I gave them a mix-minus bus that included everyone's voice but their own, for their headphones.
It worked slick. Interaction was quick and quality was excellent. And unlike what popularly passes for teleconferencing these days: It all flowed smoothly and sounded like they were in the room with us, even though they were a thousand miles away.)
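For anyone curious, a mix-minus bus is just one subtraction per participant. A small numpy sketch with made-up buffer shapes:

    import numpy as np

    # one block of samples per participant: shape (n_participants, n_frames)
    inputs = np.random.randn(4, 512).astype(np.float32)  # stand-in audio

    full_mix = inputs.sum(axis=0)

    # each participant hears everyone except themselves, so their own
    # (network-delayed) voice is never echoed back into their headphones
    mix_minus = full_mix - inputs  # broadcasting: one bus per participant

Most consoles and DAWs set this up with aux sends instead, but the arithmetic is the same.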
The audio interface is abstracted away in exchange for some metadata about the buffer's properties and the buffer itself, and that is true for basically everything related to audio: the buffer is the lowest level the OS offers you, and you are free to implement lower-level stuff in your DSP/instrument, like using assembly, or SSE-, AVX-, or NEON-based acceleration.
You get chunks of samples in a buffer, you read them, do something with them and write the result out into another buffer.
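That whole model fits in a few lines. A minimal sketch of a block-processing callback, with numpy standing in for the host's buffers (the shapes and the gain parameter are assumptions):

    import numpy as np

    def process(in_buf, out_buf, gain):
        """Called once per block by the host: read samples, transform, write."""
        # in_buf/out_buf: float32 arrays of shape (n_channels, n_frames)
        np.multiply(in_buf, gain, out=out_buf)  # no allocation in the audio path

    block_in = np.random.randn(2, 256).astype(np.float32)  # stand-in input block
    block_out = np.empty_like(block_in)
    process(block_in, block_out, gain=0.5)

Real plugin APIs (VST3, LV2, JACK callbacks) differ in framing and threading rules, but the read-transform-write loop has the same shape.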
"Pipewire supports audio channels over network" thanks for reminding me: I'm planning to stream the audio out of my Windows machine to a raspi zero to which I will then connect my bluetooth headphones. First tests worked, but the latency is really bad with shairport-sync [0] at around 400 ms. This is what I would use Pipewire for, if my workstation were Linux and not Windows.
Maybe Snapcast [1] could be interesting for you: "Snapcast is a multiroom client-server audio player, where all clients are time synchronized with the server to play perfectly synced audio. It's not a standalone player, but an extension that turns your existing audio player into a Sonos-like multiroom solution."
"I could see something like a "live coding symphony", where people have their own livecoding setups and the audio is generated on a central server." Tidal Cycles [2] might interest you, or the JavaScript port named Strudel [3]. Tidal can synchronize multiple instances via Link Synchronization. Then there's Troop [4], which "is a real-time collaborative tool that enables group live coding within the same document across multiple computers. Hypothetically Troop can talk to any interpreter that can take input as a string from the command line but it is already configured to work with live coding languages FoxDot, TidalCycles, and SuperCollider."
[0] https://github.com/mikebrady/shairport-sync
[1] https://github.com/badaix/snapcast
[2] https://tidalcycles.org/
[3] https://strudel.cc/
[4] https://github.com/Qirky/Troop
Additionally, from what little I'm aware of, telephony is heavily optimized for particular frequencies of the human voice, and then heavily compressed within that. As well, any single telephony stream is basically a single channel. A song may have dozens of channels, at high resolution, full spectrum, with all sorts of computationally demanding effects and processing, and still need latency and sync measured in milliseconds.
So... kind of the opposite of each other, while both being about processing sound :-).
Also I imagine TDM was first used in telephony.
I've got no issues with it.
deadmau5 is famously a PC guy as well, he seems to have no issues with Windows (that I know of or that are not extremely specific to a setup that involves millions of dollars' worth of hardware and multiple computers). His setup is like an amusement park for nerds.
I didn't realize Dave Phillips had passed away. I remember he had an incredible page of audio software links, but it was all stuff I almost never got to make any sound with. Sometimes I would even blow up my whole system trying to get something to work and have to reinstall the whole operating system.
Seeing how far we have come with this site is just incredible.
Between having to make a living somehow and not reaping a whole lot of other personal benefits from open source audio development, it takes a very special kind of person to publish these contributions in the first place. And once they're published, generally with their UIs defined in code by a developer, they're not necessarily easy for a designer to edit.
Nor is there much of a steady community around most of the plugins. So many are "publish, feature-complete enough, move on" kind of projects.
As always, be the change you want to see in the world.
Vital is a wave table synth; Helm is a subtractive synth.
Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.
It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wave table synths.
That being said, if you just want a very advanced synth with a lot of great presets, Vital is far more advanced.
Really. It amazes me that I still find out about new Linux plugins after years of producing music on the platform. It could not have been easy to compile this list; the information is scattered all over the place online.
The ability to filter (!) for compression, saturation, etc. is so great.
If your question was about using Ardour, I used it a bit and I managed to make a tune. I recommend this tutorial: https://www.youtube.com/watch?v=ACJ1suTVouw
- portable, reproducible environments: I suppose you could achieve this with a Docker setup. If I jump to a different workstation, I want to be able to load my current project without playing setup wrangler.
- license management like a dotfile database: all my licenses are scattered to the wind across two or three email addresses, and every time my PC crashes (twice in the past 5y) or something breaks, I have to go recollect them. It's a quest. Quests suck.
- remote or cloud processing: connect to your workstation if you're on the road, or to a cloud cluster running DAWs. Sure, this introduces some lag, which can be a problem when you're dragging around in piano roll contexts, but other times you're just messing with mixes. And gaming giants like Sony figured it out for PS4/5 titles.
- shareable projects with some innovative business solution to the license barrier. I want to be able to access somebody else’s project and load it in its entirety—whether I have Neural DSP’s latest Archetype or not. Whether I have Serum2 or not. Apple managed to get bands onto iTunes. We seriously can’t get Waves or Neural DSP to go the same route? Some royalty-based approach?
I know these are outlandish requirements in some scenarios but I feel like this would fix the misery of music making in the era of Windows and macOS being goblinware operating systems.
But I would love everything that you list. I think things like PipeWire, for better or worse, are pushing things towards sanity, or at least towards better ideas for managing the mess in the open source world, which is decades in the making.
Get in touch if you'd like to chat more about this stuff (my email is in my profile).
Cloud work so far just hasn't had much appeal. People are walking around with Apple M{1..5} laptops with enough compute power to do things you couldn't do on a studio system 10 years ago. Sure, if you're doing sample-based playback with really high-end sample libraries, or physically modelled synthesis, you can always max out any system with a large orchestral piece, but the stuff you can do on a laptop on a bus or train or in the back of a car really does encompass most of what most people want to be able to do.
iTunes was useless for people on Linux, just as any system that convinced Waves to participate in some sort of "implicit licensing" scheme would be (even though they run Linux inside the hardware units they sell). Again, this link was to a set specifically concerned with the situation for audio work on Linux; until plugin developers en masse recognize it as a valid platform that they should support (improvements every day, but very slowly), this will be as useless for Linux as iTunes was (and remains).
I'm very interested in what you mean by the term "goblinware"; that's a new one to me.
I have a Pittsburgh Modular TAIGA that cost $800. How much am I willing to pay for a software synth doing analog modeling? Basically nothing.
To give everyone a piece of the pie and make it worth their while, the pie would cost the end user so much that no one would bother.
My experience with music collaboration is that the technical challenges have been minimal for a long time now. The challenges of collaboration are social and being on the same page musically. Even in the 90s, it wasn't hard to find people to make rock music with locally. The problem was always what was meant by "rock music" to begin with.
It is exactly the opposite problem of video games. It would be like the economics of video games if the goal was to team up to play video games no one else was playing.
I think this would only work if every musician's goal was to be Taylor Swift.
Put aside the bit rot that occurs constantly with digital project files in a way it never did with tape; just getting my DAW, plugins, samples, and projects working has been a giant pain. I am a big fan of deterministic/reproducible computing environments in general. My primary machine is set up declaratively with Nix. It would be great if something like this existed for my music setup without having to compromise on creative choices.
Most of these plugins are of dubious software quality, but that is not mutually exclusive with their ability to accomplish great-sounding results. One of the reasons I bought a dedicated machine for music is that I inherently don't really trust them to be running on the same device as my personal computing. Some of them (Universal Audio Apollo) even require kernel extensions on macOS.
If anyone is attempting to solve this, I'd like to hear about it.
I came back pleasantly surprised with the current state of things, minus the underlying Linux sound system, which is still a mess of things that barely work together. (I have a lot of expensive/pro plugins and all the DAWs on the Mac, so this was mostly a filtering exercise: what can I use on Linux that can still mix/master a whole project?)
- I'm not a FOSS purist in audio, so that wasn't a requirement. But I am 'linux purist' so no VST wrappers of windows DLLs etc.
- Watershed moment for me: Toneboosters and Kazrog coming to Linux. Along with u-he, these make for a very, very high quality offering. You can easily mix a commercial release just with these. Kazrog isn't even 'Linux beta' like the rest, proper full release on Linux. I was briefly involved in beta testing for Linux, Shane & co are incredible people.
- I have most/all DAWs for the Mac. Reaper and Bitwig on Linux are enough for me and feel like good citizens in Linux. (ProTools is never coming, neither is Logic. But addition of Studio One makes for a really good trio).
- Any USB class-compliant audio interface will work (modulo control applications which generally aren't available on Linux, so ymmv).
- iLok is missing, which removes a whole host of possible options (I have 500+ licences on my iLok dongle, none of that stuff is accessible). I can't say I miss iLok, but I do miss Softube (not that it's available on Linux, iLok or not).
I made a few 100+ track mixes on my ThinkPad with Reaper and the above combo of plugins; it worked just fine.
But Linux is still Linux, and 30 years later it still annoys me with typical 'Linux problems', which generally boil down to 'lack of care'. UI is still laggy, compositors be damned. While Reaper is butter-smooth on a Mac, where the audio thread never interferes with the UI (and vice versa), it can get quite choppy on Linux. If you allow your laptop to go to sleep with a DAW open, chances are good that upon resuming you'll have to restart it, as it will lose sound.

And there are a lot of smaller annoyances that are just lack of polish and/or persistent bugs, which I'm sadly used to on Linux (want to switch users on Linux Mint? The lock screen can get hella confused and require a lot of tinkering to get the desktop back). But overall, it's a million miles away from the hobbyist endeavour that Linux audio used to be until recently. I could get actual work done with Linux this time around.
I have not had any UI issues in at least a decade on Slackware. The few times I tried Mint over the years, it was filled with random annoyances like you mention.
Edit: This is not advocating using Slackware for audio work; it works great, but it is Slackware, and most don't get along with the Slackware way. But there is a DAW module for AlienBob's Slackware Live Edition[0]. It worked alright when I tried it, as well as any other live distro.
But generally that's my point: 'it works if you go and edit this obscure line in this obscure config file'. Mac has had a stable CoreAudio backend for a quarter of a century now (counterpoint: Windows is also a mess). I wish Linux would stabilise its userland a bit more and stop rewriting stuff every few years.
Sometimes I wish there were a commercial company behind 'Linux for audio' that would give me a finely tuned Linux distro on a finely tuned desktop machine, based on whatever distro, I don't really care. But have it all released/patched at their own pace, as long as everything 'just works'. I'd be happy to pay for that. The whole 'OS is due an upgrade; is anything going to work tomorrow? I have a session' problem is still unsolved on _every_ OS/platform. Most busy studio heads go years without installing/upgrading _anything_ for fear of having a lemon after said upgrade, with clients waiting at the door.
That would be Ubuntu Studio or KXStudio. Mint is quite far from what you want unless you are willing to put in the time to set it up right. Most any distro which ships PipeWire with JACK support enabled will probably perform better than your Mint setup running ALSA. PipeWire speaks JACK, so if it's built with JACK support, any JACK-aware application will connect to PipeWire with no need to start JACK or anything; it just works.
It is not an obscure file that I had to edit; it is the file (script) which starts the server. It took maybe 5 minutes to get everything working flawlessly, and that includes compiling PipeWire (since Slackware does not build it with JACK support) and a Google search to find out why resume killed the audio.
What a strange jab. People are militant about _freedom_, not getting things for free. If you don't care about freedom then just use Logic or Ableton or whatever, they're probably better than anything on Linux and they're industry standards. But they completely trash your freedoms as a user and that's what many people can't stomach. Plenty of software that respects the users freedoms is sold.
It's rather customisable, reasonably priced, and just works great as a DAW for electronic music.
It even comes with a demo and its own VST for use inside other DAWs: https://renoise.com/download
Anyway, thanks for the feedback regarding the site itself, both positive and negative. This is a 1-2 man project, just for fun, but any feedback is always welcome :)
AMA
https://kx.studio/
Zynthian's install recipes directory may yield a less fancy, but probably more comprehensive list: https://github.com/zynthian/zynthian-sys/tree/oram/scripts/r...
With Zynthian OS up and running, the full list of plugins shows up in its webconf page; it's so long that they have to hide most of the plugins from the main on-device UI.
Roughly speaking, if it's open source, most likely it will work. If it's proprietary, assume that only Pianoteq and a small number of u-he plugins will work. Most commercial products with binary-only distribution don't feel like RPi devices are a large enough market for them to build binaries for it. Even if they otherwise offer ARM builds for Apple Silicon and Linux builds for x86.
And see also monome (https://monome.org/) as yet another such example.
Just to point out how flourishing the Linux-based DAW sub-culture really is.