It has up to 6 voices, offers the BRAIDS macro-oscillators, exposing 40+ different sound engines, and adds an experimental engine whose wavetables are created by an async brain named Nallely. Nallely is a small modular environment written in Python that runs on a Raspberry Pi and is built for exploring emergent behaviors. You program it by patching independent autonomous modules together.
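To give a feel for the "patching autonomous modules" idea, here is a minimal conceptual sketch: two modules, each running in its own thread at its own pace, connected by a queue acting as the patch cable. This is not Nallely's actual API, just an illustration of the model; all names (LFO, Printer, the rates) are made up for the example.

```python
import math
import queue
import threading
import time

class LFO(threading.Thread):
    """Autonomous module: emits a slow sine wave on its output patch point."""
    def __init__(self, out_q, freq=0.5, rate=100):
        super().__init__(daemon=True)
        self.out_q, self.freq, self.rate = out_q, freq, rate
        self.running = True

    def run(self):
        t = 0.0
        while self.running:
            self.out_q.put(math.sin(2 * math.pi * self.freq * t))
            t += 1.0 / self.rate
            time.sleep(1.0 / self.rate)  # local pacing only, no shared clock

class Printer(threading.Thread):
    """Autonomous module: consumes a few samples from its input patch point."""
    def __init__(self, in_q, n=5):
        super().__init__(daemon=True)
        self.in_q, self.n = in_q, n
        self.received = []

    def run(self):
        for _ in range(self.n):
            self.received.append(self.in_q.get())

patch = queue.Queue()            # the "cable" between the two modules
lfo, sink = LFO(patch), Printer(patch)
lfo.start(); sink.start()
sink.join()
lfo.running = False
print(sink.received)
```

Because each module sleeps and schedules itself independently, the exact timing of values depends on the OS scheduler, which is where the drift described below comes from.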
How does it work? The brain generates signals that are streamed over MIDI (via pitchwheel on 4 different channels, for 14-bit resolution) at slow speed into 4 circular wavetables in the synth. LISA lets you play while the wavetables are constantly rewritten in real time. The brain's execution model is a fully async hybrid actor model based on independent threads; no global clock or synchronization is enforced. Consequently, because of CPU load, temperature, the OS scheduler, the network, and message passing, the modules constantly drift unpredictably, either lightly or harshly depending on the topology of your patch. The signals produced by Nallely can be used as waveforms for the wavetables, as note sequences, or as a CV equivalent; there is no distinction in what the signals represent, and the topology of the patch determines the final piece.
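The 14-bit trick relies on the fact that a standard MIDI pitch-bend message already carries a 14-bit value split across two 7-bit data bytes. A sketch of the encoding, assuming samples in [-1, 1] and channel selecting the target wavetable (the mapping and scaling here are illustrative, not LISA's exact protocol):

```python
def sample_to_pitchbend(sample: float, channel: int) -> bytes:
    """Encode one sample as a 3-byte MIDI pitch-bend message.

    Pitch bend carries a 14-bit value (0..16383, center 8192) split into
    two 7-bit data bytes: LSB first, then MSB.
    """
    value = round((sample + 1.0) / 2.0 * 16383)  # map [-1, 1] -> [0, 16383]
    value = max(0, min(16383, value))            # clamp, just in case
    lsb = value & 0x7F                           # low 7 bits
    msb = (value >> 7) & 0x7F                    # high 7 bits
    return bytes([0xE0 | channel, lsb, msb])     # status byte selects channel

print(sample_to_pitchbend(0.0, 0).hex())  # center value on channel 1
```

Streaming a wavetable is then just emitting one such message per sample on the channel mapped to that table.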
In the demo video, I built a harmonic oscillator using 2 integrators patched in feedback, which feeds one of the wavetables. This oscillator is then connected to other modules that derive other wavetables and functions, which are patched into the remaining wavetables and the synth parameters.
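The two-integrators-in-feedback patch is the classic discrete harmonic oscillator (x' = y, y' = -w²x). A minimal sketch of how such a patch fills one cycle of a wavetable, using semi-implicit Euler so the amplitude stays stable; the function name and table size are illustrative, not taken from the project:

```python
import math

def harmonic_wavetable(size=256):
    """Fill one wavetable cycle from two integrators patched in feedback."""
    w = 2 * math.pi / size  # angular step: one full cycle over the table
    x, y = 1.0, 0.0         # states of the two integrators (position, velocity)
    table = []
    for _ in range(size):
        table.append(x)     # record the current sample
        y -= w * w * x      # integrator 1: feedback of -w^2 * x into velocity
        x += y              # integrator 2: velocity accumulated into position
    return table

table = harmonic_wavetable()
```

The result is a near-perfect cosine cycle; in a live patch, perturbing either integrator's state mid-stream is what makes the wavetable drift instead of staying a pure tone.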
https://www.youtube.com/watch?v=fxvfnqQKWsY
The LISA firmware is written in C/C++ and runs on an RP2040, while Nallely is written purely in Python and can run on a Raspberry Pi. Nallely has been successfully tested on a Pi Zero 2, a Pi 3, and a Pi 5.
You don't have to use Nallely to use LISA: it's a standalone MIDI synth. And you don't have to use LISA to use Nallely: it's a generic modular brain that happens to speak MIDI. But LISA coupled with Nallely becomes the Fodongo synth: a synth that lets you sculpt your wavetables in real time.
I'm just starting to experiment with this, trying to explore what can be done with slow CV-rate signals feeding wavetables to create sounds. So far I can get a nice variety of sounds, from a very pure sine when using LFOs, to very harsh, drifting, phasing sawtooth sounds, to massive organ-like sounds.
It fits well for drones, especially using the envelope: the release can go up to 5 s, emphasizing all the micro-drifts and variations in the wavetables as sounds overlap, change, and fade.
LISA and Nallely are free open-source projects:
Nallely: https://github.com/dr-schlange/nallely-midi
LISA: https://github.com/dr-schlange/LISA