I want to try to build a hardware midi keyboard using mechanical hall effect switches like this: https://mechanicalkeyboards.com/products/wuque-studio-dash-5...
There are a few physical keyboards like this out there, but many are extremely expensive.
I've been using it frequently for a few years and can confirm it's quite fun to use, though I don't have much experience playing other keyboard instruments and haven't tried to learn it seriously enough to make proper use of chords, etc.
Demo of me playing here, though try not to mind the messy desk and bad recording setup (one-handed, MIDI going through my phone due to lack of speakers): https://drive.google.com/file/d/1qZuFlK9GybX5aP5tGi54AvTTFKd...
The layout I use is not a regular piano layout, but a B-griff layout, as used on a bayan (Russian accordion). I made a demo site for playing with the layout (keyboard or touchscreen), but without velocity sensitivity: https://maxdamantus.gitlab.io/bayan/
The B-griff/C-griff layouts naturally fit computer keyboards, and I'd argue they're better in general than piano layouts for various reasons. There's also this cool video of someone using two Commodores for similar effect: https://youtube.com/watch?v=EBCYvoC4muc
EDIT: turns out the Terpstra keyboard is also able to work using B-griff: http://terpstrakeyboard.com/web-app/keys.htm?fundamental=261...
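The geometry that makes B-griff/C-griff layouts fit computer keyboards so well is simple: buttons within a row are a minor third (3 semitones) apart, and adjacent rows are offset by a semitone, so three rows cover the full chromatic scale. A rough sketch of that mapping (not the demo site's actual code; the base note and row direction are assumptions for illustration):

```javascript
// Map a (row, column) button position to a MIDI note on a 3-row
// chromatic (B-griff-style) layout. Within a row, adjacent buttons
// are a minor third (3 semitones) apart; adjacent rows are offset
// by a semitone. baseNote = 48 (C3, where 60 = middle C) is an
// assumed reference point, not the demo site's actual value.
function buttonToMidi(row, col, baseNote = 48) {
  return baseNote + col * 3 + row;
}

// The first three columns of row 0 ascend in minor thirds:
console.log([0, 1, 2].map(c => buttonToMidi(0, c))); // [48, 51, 54]
```

Since col*3 + row hits every residue mod 3, any chromatic note is reachable, which is why the same fingering shape works for a chord in all twelve keys.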
Nice! Is the velocity code on the firmware side? Is that why it's not included on the demo site?
I've been meaning to look into adding Wooting support, just haven't got round to it. This would require either WebUSB or WebHID which are still "experimental", and I'm not sure if there will be latency issues.
The program I created for computing velocity and emitting MIDI is here [1]. When it's listening for keyboard data, it does so in another thread so that it can attach fairly accurate timestamps to the key events without being subject to pauses from Gtk or something. If there's inconsistency between the key event times and the relative timestamps, the velocity detection can be noticeably off. I've noticed this in Wooting's own MIDI application [2]. Hopefully WebUSB/WebHID include timestamps in the data, so it doesn't have to be subject to JS GC pauses.
As well as the Rust program above (possibly Linux-only, though it should be easily adaptable to Windows/macOS), I've written an Android app that does the same thing so I can use it without a computer (though on Android you have to take control of the USB device rather than just read the HID data), but I haven't uploaded that code anywhere.
[0] https://gitlab.com/Maxdamantus/bayan
[1] https://gist.github.com/Maxdamantus/c5d5133cab1ef3596ac589d2...
[2] https://github.com/WootingKb/wooting-analog-midi ... as well as the issues with timestamp jitter, this application seems a bit bloated for my taste (a web-based app using Tauri), and it's inconvenient since it continues to generate MIDI while you're focused on other applications, so you'd have to close and reopen it (slow startup time, too) each time you want to use it. The timestamp jitter occurs because the Wooting SDK (at least at the time; I haven't looked at it recently) worked by running a thread that updated a shared HashMap which you then had to poll; it wasn't possible to see every update, and there were no timestamps attached. I avoided the SDK and instead read the data directly from the "hidraw" device.
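The timestamp-to-velocity idea above can be sketched roughly as follows. This is a hypothetical illustration, not the Rust program's actual algorithm: the thresholds and scaling constants are invented, and a real implementation would tune them per keyboard.

```javascript
// Hypothetical sketch of timestamp-based velocity detection: given two
// timestamped analog readings taken as the key crosses a shallow and a
// deep travel threshold, the travel time between them estimates press
// speed, which is mapped linearly onto MIDI velocity 1..127.
// fastestMs/slowestMs are made-up bounds, not the real program's values.
function midiVelocity(shallow, deep, fastestMs = 2, slowestMs = 60) {
  const dt = deep.timeMs - shallow.timeMs; // travel time between thresholds
  const clamped = Math.min(Math.max(dt, fastestMs), slowestMs);
  // Linear map: fastest press -> 127, slowest -> 1.
  const v = 127 - ((clamped - fastestMs) / (slowestMs - fastestMs)) * 126;
  return Math.round(v);
}

console.log(midiVelocity({ timeMs: 0 }, { timeMs: 2 }));  // 127 (fast press)
console.log(midiVelocity({ timeMs: 0 }, { timeMs: 60 })); // 1 (slow press)
```

This also shows why jittery timestamps hurt: a few milliseconds of noise in `dt` swings the computed velocity substantially at the fast end of the range.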
This in turn is why the Netherlands is so big when it comes to dairy and cheese; grass was one of the few things that would grow consistently and survive the floods, cows eat grass, cheese and butter can be preserved to last through winter, etc.
Things like TerpSON, TerpSmith, Terp….
One time while voting, the lady working there butchered my name 8 times; she literally could not get it right.
https://www.apmreports.org/episode/2019/08/22/whats-wrong-ho...
I've always felt that many native English speakers can't really parse a text properly. They seem to react to certain keywords, and when the text says something they didn't expect, they often miss it or get confused.
I thought it might be a side effect of being monolingual and hence having a less explicit understanding of language, but seeing how they are taught to read, things make perfect sense.
It is crazy how much staying power bogus science has in education. It reminds me how the idea of individual learning styles is still popular even though it lacks empirical evidence.
Eh, they get their revenge. As any Australian of a certain age can tell you, TelSTRA could not get it right, for any value of right, without expending the equivalent effort of moving a mountain.
In German, Frisur (and in South Slavic languages, Фризура) means hairstyle, which is also related to the English word frizz.
We also have at least one -stra river in Norway, Vinstra. Don't know if anyone has it as a last name, though.
Maybe it also helps that I've got headphones plugged into the jack instead of the Bluetooth I'm used to from listening to things during the day, but even so, my touchscreen just doesn't feel this responsive normally, including in games. It's on my list of things to measure the delay of, but since my phone's 960 fps camera is the equipment I use for that, it's a bit tricky.
Edit: ontouchstart calls noteOn method https://github.com/wcgbg/terpstrakeyboard/blob/03d526a0ed5c7...
This eventually calls into the Web Audio API's start() with 0 delay https://github.com/wcgbg/terpstrakeyboard/blob/03d526a0ed5c7...
The data to play was retrieved from the server (based on the instrument you selected) and parsed ahead of time https://github.com/wcgbg/terpstrakeyboard/blob/03d526a0ed5c7...
It loads samples at four frequencies and plays the nearest one at the appropriate rate (proportional to the target frequency) https://github.com/wcgbg/terpstrakeyboard/blob/03d526a0ed5c7...
The settings.audioContext variable is simply an instance of the Web Audio API's AudioContext, as expected (with a fallback to support Safari/WebKit) https://github.com/wcgbg/terpstrakeyboard/blob/03d526a0ed5c7...
The gainNode it connects to seems to just control playback volume https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioCo... and "audioContext.destination", MDN says, can be thought of as a speaker device.
So nothing too special here: preloading the server data ahead of time and then simply creating a new sound source in ontouchstart is enough. I guess the delay in other software comes from the extra cruft it carries, such as big libraries (and Bluetooth, in the cases where I use that), not from any inherent delay in the touchscreen or audio rendering.
It's also all just vanilla JS, which probably helps keep it snappy.
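The nearest-sample trick described above (a few preloaded base frequencies, with playbackRate proportional to the target frequency) can be sketched like this. The four base frequencies here are made up for illustration; the actual app's sample set may differ.

```javascript
// Pick the preloaded sample whose base frequency is nearest the target
// pitch, and compute the playbackRate needed to shift it to that pitch.
// These base frequencies are assumptions, one per octave for illustration.
const sampleFreqs = [110, 220, 440, 880]; // Hz

function pickSample(targetHz) {
  const nearest = sampleFreqs.reduce((a, b) =>
    Math.abs(b - targetHz) < Math.abs(a - targetHz) ? b : a);
  return { baseHz: nearest, playbackRate: targetHz / nearest };
}

// C5 (~523.25 Hz) would use the 440 Hz sample at a rate of about 1.19:
console.log(pickSample(523.25));
```

In the Web Audio API the returned rate would go straight onto an AudioBufferSourceNode's playbackRate before calling start(0); keeping the nearest base sample minimizes the pitch-shift artifacts from resampling.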
With a foot-activated MIDI controller, the number of keys will always be rather limited. Therefore, I think the virtual layout should be customized to the music you play, or even per song. E.g. you could map only notes of a certain scale and range in order to save space.
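The scale-restricted mapping suggested above might look something like this sketch, where a handful of switches covers a whole octave by skipping non-scale notes. The root, pedal count, and interval set are assumptions for illustration:

```javascript
// Sketch: map a small number of foot switches onto only the notes of a
// chosen scale, so a limited controller still covers a useful range.
// Intervals of the major scale in semitones from the root:
const MAJOR = [0, 2, 4, 5, 7, 9, 11];

function pedalLayout(rootMidi, pedalCount, intervals = MAJOR) {
  return Array.from({ length: pedalCount }, (_, i) =>
    rootMidi +
    12 * Math.floor(i / intervals.length) + // jump an octave per full cycle
    intervals[i % intervals.length]);
}

// 8 pedals starting from C3 (MIDI 48) cover a full octave of C major:
console.log(pedalLayout(48, 8)); // [48, 50, 52, 53, 55, 57, 59, 60]
```

Swapping in a pentatonic interval set ([0, 2, 4, 7, 9]) would stretch the same 8 switches across more than an octave and a half.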
If you're considering building something custom: I've found that outputting hardware MIDI from an Arduino is delightfully simple. (The hard part of my own project is finding the motivation to figure out how to read velocity from piezos and isolate vibrations.)
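Part of why hardware MIDI is so simple is that the wire protocol is just bytes at 31250 baud: a Note On message is one status byte (0x90 OR'd with the channel) followed by a note and a velocity byte. A sketch of that framing (on an Arduino you'd Serial.write() these three bytes; the channel/note/velocity values below are arbitrary examples):

```javascript
// Build a 3-byte MIDI Note On message: status byte 0x90 | channel,
// then note and velocity, each masked to the 7-bit data-byte range.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}

// Middle C on channel 1 (0-indexed channel 0) at velocity 100:
console.log(noteOn(0, 60, 100)); // [144, 60, 100]
```

Note Off is the same shape with status 0x80, so the whole note-triggering side of a controller is a few lines; the analog sensing is where the real work lives.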
The number of keys is the real problem: I don't want to be playing single notes in one octave, I want to play my chord progression, and that means I need separate buttons for the major, minor, and 7th (which 7th...) chords that go with the song.
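One way around the button-count problem is to compute chords in the controller rather than dedicating a button per chord. A sketch, using the standard interval shapes for the qualities mentioned above (the dominant 7th is chosen here; "which 7th" is exactly the ambiguity the original comment notes):

```javascript
// Derive the notes of a chord from a root note, so one button plus a
// quality selector can trigger a whole chord. Intervals in semitones:
const CHORDS = {
  major: [0, 4, 7],
  minor: [0, 3, 7],
  dom7:  [0, 4, 7, 10],
};

function chordNotes(rootMidi, quality) {
  return CHORDS[quality].map(i => rootMidi + i);
}

// A C7 chord from middle C (MIDI 60):
console.log(chordNotes(60, 'dom7')); // [60, 64, 67, 70]
```

With a layout like that, a progression needs one button per chord root plus a couple of quality switches, rather than one button per root-and-quality combination.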
I'm personally fond of this one though: https://en.wikipedia.org/wiki/Moog_Taurus
See also this classic instrument, familiar from the late great FAO Schwarz on Fifth Avenue: https://en.wikipedia.org/wiki/Walking_Piano
Mike Battaglia plays Scarborough Fair in 31-tone equal temperament