Optimized neural-amp-modeler-lv2. BIG difference

I have compiled neural-amp-modeler-lv2 with quite a few optimizations and have managed to get the CPU usage down a lot. On my Libre La Frite SoC/SBC (512MB RAM), which I believe is quite similar in speed to an RPi3, I can now use a WaveNet Standard model, whereas with my earlier optimized builds I only managed to use feather models. The feather model used to take ~90% CPU, but with this new build it only uses 48%. Standard models use 89-90%.

I have built the plugin under Debian 12/Bookworm arm64/aarch64, so I believe it should work on the latest Modep, but I would like to hear your feedback.

neural-amp-modeler-lv2_git20240326_linux_aarch64_o3_lto_pgo_openacc_glibc236.tar.bz2
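For context, the filename encodes the main ingredients: -O3, link-time optimization (LTO), profile-guided optimization (PGO) and OpenACC, built against glibc 2.36. Very roughly, an -O3 + LTO + PGO build with GCC looks something like the sketch below. This is a simplified illustration rather than my exact commands (the repo URL and cmake invocation are just the upstream defaults, the profiling step is only outlined, and I've left OpenACC out of it), so see my build instructions for the real recipe.

```sh
# Simplified sketch of an -O3 + LTO + PGO build -- not the exact recipe.
# Assumes GCC, CMake and the upstream repo layout.

git clone --recurse-submodules https://github.com/mikeoliphant/neural-amp-modeler-lv2
cd neural-amp-modeler-lv2
mkdir -p build && cd build

# Pass 1: -O3 + LTO, instrumented for profile collection
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_C_FLAGS="-O3 -flto -fprofile-generate" \
         -DCMAKE_CXX_FLAGS="-O3 -flto -fprofile-generate"
make -j"$(nproc)"

# ...load the instrumented plugin and run it on representative audio so
# GCC writes its .gcda profile data, then come back here...

# Pass 2: rebuild using the collected profiles
cmake .. -DCMAKE_C_FLAGS="-O3 -flto -fprofile-use" \
         -DCMAKE_CXX_FLAGS="-O3 -flto -fprofile-use"
make clean && make -j"$(nproc)"
```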

My Didthis project page

14 Likes

That sounds impressive already;
I’m sure @gianfranco would be interested to read what you’ve been doing :wink:

2 Likes

That is something we need to ask @falkTX, but I'm quite sure our builds are highly optimized, so I'd be very curious to see the results too :slight_smile:

3 Likes

Would you mind letting us know which flags you used? That would make it much easier to give feedback.

1 Like

Would be really nice to see NAM being this performant on the Dwarf.

Build instructions are now located at coginthemachine.ddns.net:80/mnt/nam/ under software_src_misc. This makes it easier for me if/when I update them.

1 Like

If you are using Debian Bullseye (arm64, glibc 2.31), I have a build for that now on my web server.
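If you're not sure which build you need, checking the glibc version on your unit should tell you, for example:

```sh
# Print the glibc version -- 2.31 means Bullseye-era, 2.36 means Bookworm.
ldd --version | head -n 1
```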