New neural LV2 plugin from Aida DSP, based extensively on the existing NeuralPi but cut down to the bone

This is expected. Both browsepi and mod-ui need to be patched to add a new file manager entry; the path you created is correct. I can share the patches here before the PR, OK? But is the plugin still running, or is it crashing because of this? If it's crashing, you can:

  1. comment out the mod:fileTypes entry in the TTL
  2. change the default model in the TTL (at the bottom, under the state section) so that you can switch between the available models by hand
4 Likes

New demo. From the sound perspective I've only changed the IR, using a Bassman this time.

Idea: running a Dirac delta through the plugin's cab + EQ section should give me the companion cab + EQ IR for the model.

Problem: the resulting IR files sound really bad and introduce a lot of gain. In case you want to try them, the IRs are here.

The procedure I used to capture the IR:

  1. in the plugin, disabled everything except the cab and the equalizer
  2. in Reaper, I have a track with a Dirac delta routed to another track where I've instantiated the plugin; I hit “render track” and obtain a track that looks like nothing but silence
  3. with peak normalization an impulse becomes visible, but it is delayed by roughly 1024 samples
  4. I trim the silence from the IR waveform to obtain an 8192-sample file, which I export as 32-bit float WAV

Any clue what I’m doing wrong?
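For reference, steps 2-4 of my procedure can be sketched in code. This is a minimal NumPy sketch; the lengths and the latency handling are assumptions from my setup, and writing the 32-bit float WAV (e.g. with soundfile) is left out:

```python
import numpy as np

SR = 48000       # sample rate used throughout
IR_LEN = 8192    # final IR length in samples

def make_dirac(length: int = IR_LEN) -> np.ndarray:
    """Unit impulse (Dirac delta) to run through the cab + EQ section."""
    d = np.zeros(length, dtype=np.float32)
    d[0] = 1.0
    return d

def trim_ir(rendered: np.ndarray, length: int = IR_LEN) -> np.ndarray:
    """Locate the impulse delayed by the render latency (about 1024
    samples in my case) and keep a fixed-length IR from that point."""
    start = int(np.argmax(np.abs(rendered)))  # position of the peak
    return rendered[start:start + length].astype(np.float32)
```

Locating the peak with `argmax` avoids hardcoding the 1024-sample delay, in case the render latency changes with buffer settings.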

4 Likes

I don’t know what’s causing the crash. I tried commenting out

#    mod:fileTypes "modelsim" ;

but nothing changed.

3 Likes

I tried the IRs with the IR Loader Cabsim and they are loud and delayed. Something is definitely wrong there…
A 48 kHz, 1024-sample file is more than enough for cab IRs.

2 Likes

Is journalctl | grep jackd showing anything useful?

3 Likes

Thanks, now I see:

Jan 01 02:15:03 moddwarf mod-jackd[315]: lilv_lib_open(): error: Failed to open library /root/.lv2/rt-neural-generic.lv2/rt-neural-generic.so (/lib64/libm.so.6: version `GLIBC_2.29' not found (required by /root/.lv2/rt-neural-generic.lv2/rt-neural-generic.so))

We have libc-2.27.so on the Dwarf.
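For anyone hitting a similar mismatch, the GLIBC symbol versions a shared object requires can be listed like this. A sketch in Python, assuming binutils' `objdump` is available on the build machine:

```python
import re
import subprocess

def parse_glibc_versions(objdump_output: str) -> set[str]:
    """Extract the GLIBC symbol versions referenced in `objdump -T` output."""
    return set(re.findall(r"GLIBC_[0-9]+\.[0-9.]+", objdump_output))

def required_glibc(path: str) -> set[str]:
    """List the GLIBC versions a shared object needs (shells out to objdump)."""
    out = subprocess.run(["objdump", "-T", path],
                         capture_output=True, text=True, check=True).stdout
    return parse_glibc_versions(out)
```

Anything newer than GLIBC_2.27 in that set will fail to load on the Dwarf; the real fix is building the plugin with the device's cross toolchain instead of the host one.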

2 Likes

Okay, I can try to put together a .mk file, but I can’t build at the moment. Who can help, so we can release a beta for the Duo?

Regarding the IRs, I think I’ve fixed them now; can you double-check?

6 Likes

The IRs are now working perfectly :sunglasses:

8 Likes

Guys… This is A.MA.ZING !

Blown away!!!

9 Likes

Hi, I’ve sent some PRs to moddevices (@falkTX FYI), plus the necessary changes to the file browser feature to support device model files on MOD.

The models in the plugin have been renamed from the originals to avoid licensing issues. For now, I was able to train the following models:

Moon

  • MoonAcoustic.json Neural DSP Archetype Plini High Life Acoustic Preset
  • MoonCrunch.json Neural DSP Archetype Plini Electric Sunrise Crunch Preset
  • MoonJazz.json Neural DSP Archetype Plini Sunhead JazzClean Preset
  • MoonRiff.json Neural DSP Archetype Plini Electric Sunrise Riff Preset

Vintage

  • VintageEdgeOfBreakup.json Neural DSP Tone King Imperial MKII Pete Thorn Preset
  • VintagePedalPlatform.json Neural DSP Tone King Imperial MKII Pete Thorn Preset
  • VintageRockRythm.json Neural DSP Tone King Imperial MKII Pete Thorn Preset
  • VintageRockSolo.json Neural DSP Tone King Imperial MKII Pete Thorn Preset
  • VintageUnderWater.json Neural DSP Tone King Imperial MKII Pete Thorn Preset

IRs

  • I’ve captured impulse responses for the cabs used in the presets; I still need to figure out how to package them, but for now they’re in the repo

I don’t know what the next steps are on the MOD side; meanwhile I’ll continue developing the plugin, in particular adding new models. Any preferences?

Last but not least, if you think I’ve earned a beer, I’ve added a donation button to the repo.

10 Likes

Thanks a lot! I’ll try it ASAP. I’d like to have more crunchy amps: Fender Blackface, Princeton Reverb, Tweed Bassman, Matchless, Vox AC30. On the high-gain side I’d like a Custom Audio OD-100, a Soldano SLO-100 and a Plexi (all with no cab). Maybe it would be worth ripping some of them from a Fractal Audio device :sweat_smile:

4 Likes
CMake Error at rt-neural-generic/CMakeLists.txt:14 (add_subdirectory):
  The source directory

    /home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/modules/RTNeural

  does not contain a CMakeLists.txt file.


-- Found PkgConfig: /home/user/mod-workdir/moddwarf/host/usr/bin/pkg-config (found version "0.28") 
-- Checking for module 'lv2>=1.10.0'
--   Found lv2, version 1.18.2
-- Configuring incomplete, errors occurred!
See also "/home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/CMakeFiles/CMakeOutput.log".
make: *** [package/pkg-generic.mk:188: /home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/.stamp_configured] Error 1

1 Like

This one should be easy; I need to tweak the .mk file to fetch the repo with its submodules, in recursive mode.
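If MOD's build system is plain Buildroot (the error above comes from its pkg-generic.mk), the fix might look like this. A hypothetical fragment, where the package prefix and URL are illustrative, using Buildroot's `<PKG>_GIT_SUBMODULES` switch:

```makefile
# Hypothetical sketch: fetch the plugin repo via git so Buildroot
# also pulls in the RTNeural submodule (names/URL are illustrative).
AIDADSPLV2_VERSION = 2963a6d4996ab291e270c312c8e1ef4380aa6c9a
AIDADSPLV2_SITE = https://github.com/AidaDSP/aidadsp-lv2.git
AIDADSPLV2_SITE_METHOD = git
AIDADSPLV2_GIT_SUBMODULES = YES
```

With `_SITE_METHOD = git` and `_GIT_SUBMODULES = YES`, Buildroot clones the repo recursively instead of downloading the GitHub tarball, which never contains submodules; that would explain the empty `modules/RTNeural` directory in the CMake error.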

4 Likes

This thread is amazing to watch—please don’t forget your bass-playing brothers and sisters!

9 Likes

Oh! You’ve been part of the game from the beginning. In the dataset I’ve created, as well as in the one mentioned in the paper, there are two tracks: bass and guitar. It seems a portion of bass helps the NN learn the modeled device, so an equal portion of bass is inserted into the train, test and validation sets. I’d be curious to train on a bass amp with the very same dataset. I guess Neural DSP has some bass stuff too, don’t they?

The problem here is that we need to create a virtual room where people with gear (real amps and effects) can meet people with a Colab subscription, like me. Otherwise, once I get back to my everyday job, this thread could freeze for a while.

Right now we have a neural LV2 plugin without all the unnecessary JUCE stuff, plus a very small list of trained models, plus companion IRs to be used with the available IR loaders (MOD Cabsim, though I prefer LSP), and file manager integration. We need to decide what to do next. I think the training process could be automated; IK Multimedia recently introduced TONEX. Honestly, I would prefer a real person to follow the training process of a model, but an open-source version of a utility where you feed in files and get JSON models out would be awesome.
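The "equal portion of bass in train, test and val" idea can be sketched like this. A toy NumPy sketch, where the split ratios and function names are my own assumptions, not the actual training script:

```python
import numpy as np

def build_split(guitar: np.ndarray, bass: np.ndarray,
                train: float = 0.8, val: float = 0.1):
    """Give each of train/val/test the same proportional share of bass
    material, so the network also sees low-frequency content.
    (Ratios are illustrative assumptions.)"""
    def split(x: np.ndarray):
        n = len(x)
        a, b = int(n * train), int(n * (train + val))
        return x[:a], x[a:b], x[b:]
    g_tr, g_va, g_te = split(guitar)
    b_tr, b_va, b_te = split(bass)
    return (np.concatenate([g_tr, b_tr]),
            np.concatenate([g_va, b_va]),
            np.concatenate([g_te, b_te]))
```

Each split then contains guitar followed by bass in the same 80/10/10 proportion, so no subset is guitar-only.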

5 Likes

Hello @madmaxwell

I’m loving this thread, but as a total noob at this deep learning stuff I need to understand a few things.

Will we be able to train the plugin to obtain our own new models, or will the plugin only “play” existing models?

I would love to have my equipment modeled and uploaded to my MOD… :wink:

3 Likes

Same here! I’ve got an Eden WT-500 and an Eden D-410XLT that I’d love to get modeled so I never have to move them again.

2 Likes

For now the training workflow is entirely external to the plugin; it’s done by invoking a Python script on a Colab instance running with a GPU. I think it’s time for a separate thread on how to train new models; here I’d like to keep to plugin development. I couldn’t create that thread until now because I’ve been running a lot of tests to better understand the workflow, which is now rather simple: you record two audio files at 48 kHz, you give them to the Python script, and you take a break for the next two hours. The neural network learns to imitate the amp, with a precision that is expressed as ESR; at the moment, with the depth of the network I’m using, I’m obtaining an ESR of 0.008-0.011, which is pretty good.

The awkward thing is that until now I’ve trained against Neural DSP’s plugins, which are neural models in their own right. This is mostly because I’m a bit of a Neural fanboy, despite hating everything about iLok, PACE and so on.

The idea of a plugin that does the training itself is cool, but if the plugin is running on an embedded device without neural accelerators, I see it as very hard to pursue. Instead, I’d like to provide model generation as a web service: the user uploads the recorded files through a web interface, selects the network type, and the server sends the job to a cluster or a Colab instance. The device could still act as an audio interface, recording the dataset through its hardware, but external hardware would still be necessary (for a real amp you need a reactive load), and in my case, with the plugin running in my DAW, it wouldn’t make sense to do the recording on the device.
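For the curious, the ESR metric mentioned above (error-to-signal ratio) is just the energy of the error divided by the energy of the target. A minimal NumPy sketch with a toy example; training pipelines often apply a pre-emphasis filter before taking the ratio, which is omitted here:

```python
import numpy as np

def esr(target: np.ndarray, pred: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the error divided by the
    energy of the target signal. Lower is better."""
    err = target - pred
    return float(np.sum(err ** 2) / np.sum(target ** 2))

# Toy example: a prediction that is a pure 1% gain error.
t = np.sin(np.linspace(0, 2 * np.pi * 100, 48000))
p = 0.99 * t
print(esr(t, p))  # ≈ 0.0001
```

So an ESR of 0.008-0.011 means the residual carries roughly 1% of the target's energy, which is why it sounds so close.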

7 Likes

I heard you!

I’m no dev, but I’ll keep following this thread.
An ex-student of mine is involved with AI and deep learning; he is the CEO of Somma.ai.
I’ll try to reach out to him and maybe learn more about all of this.

Thanks a lot for the explanation.
You definitely deserve a beer! (or several) :grin:
Cheers

5 Likes

Yep, me too. Thanks for the explanation.
Do the audio files have to be recorded for real, or can I just send pink noise or a special EQ sweep through my setup and then to the script?

I have an Azure budget of around €150 per month. Do you think it’s possible to use that for the script?

7 Likes