So, a few tests with the Duo X (initial model), using the jack_iodelay
tool to measure physical out-to-in latency.
This measures full input and output latency combined.
Current defaults (128 frames, 2 periods per buffer, async mode):
  406.723 frames, 8.473 ms total roundtrip latency
  extra loopback latency: 22 frames

Safer sync mode (128 frames, 3 periods per buffer, sync mode):
  405.718 frames, 8.452 ms total roundtrip latency
  extra loopback latency: 21 frames

Defaults with sync mode (128 frames, 2 periods per buffer, sync mode):
  277.724 frames, 5.786 ms total roundtrip latency
  extra loopback latency: 21 frames

Safer 64 frames mode (64 frames, 3 periods per buffer, async mode):
  277.723 frames, 5.786 ms total roundtrip latency
  extra loopback latency: 21 frames

Safer 64 frames sync mode (64 frames, 3 periods per buffer, sync mode):
  214.723 frames, 4.473 ms total roundtrip latency
  extra loopback latency: 22 frames

Defaults with 64 frames mode (64 frames, 2 periods per buffer, async mode):
  failed, cannot run

Defaults with 64 frames sync mode (64 frames, 2 periods per buffer, sync mode):
  failed, cannot run
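For reference, the frame counts and millisecond figures above are consistent with a 48 kHz sample rate (an assumption on my part, since the rate isn't shown in the quoted output):

```python
# Convert jack_iodelay frame counts to milliseconds.
SAMPLE_RATE = 48000  # assumption: 48 kHz sample rate

def frames_to_ms(frames, rate=SAMPLE_RATE):
    return frames / rate * 1000

for frames in (406.723, 405.718, 277.724, 214.723):
    print(f"{frames} frames -> {frames_to_ms(frames):.3f} ms")
# -> 8.473, 8.452, 5.786, 4.473 ms, matching the measurements above
```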
So, contrary to the Duo, the Duo X is able to run at 64 frames quite well (as long as 3 periods per buffer is also enabled).
Doing so reduces the latency by around 2.7ms.
Instead of reducing the buffer size, sync mode can be used to lower latency without impacting CPU load.
Doing so reduces the latency by around 0.02 ms for 3 periods per buffer, or 2.7 ms for 2 periods per buffer (the same amount as the 64 frames reduction).
Combining both the lower buffer size and sync mode reduces latency by 4 ms (this requires 3 periods per buffer though, otherwise the audio/i2s cannot cope with it).
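The reduction figures quoted here fall straight out of the roundtrip measurements; a quick sanity check:

```python
# Roundtrip latencies measured above, in ms
BASELINE = 8.473  # 128 frames, 2 periods, async (current defaults)
SYNC_3P  = 8.452  # 128 frames, 3 periods, sync
SYNC_2P  = 5.786  # 128 frames, 2 periods, sync
F64_3P   = 5.786  # 64 frames, 3 periods, async
BOTH     = 4.473  # 64 frames, 3 periods, sync

print(f"64 frames alone:  -{BASELINE - F64_3P:.2f} ms")   # ~2.7 ms
print(f"sync, 3 periods:  -{BASELINE - SYNC_3P:.2f} ms")  # ~0.02 ms
print(f"sync, 2 periods:  -{BASELINE - SYNC_2P:.2f} ms")  # ~2.7 ms
print(f"64 frames + sync: -{BASELINE - BOTH:.2f} ms")     # 4.0 ms
```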
PS: For those wondering what “sync/async mode” is…
Basically, the audio engine we use (JACK2) uses an async audio model by default, where audio renders into a non-active buffer, and plugins that were able to finish rendering on time get their buffer copied into the real/active one. This prevents misbehaving plugins from causing audio glitches: the audio from such plugins simply is not used. So on a parallel chain of plugins, audio keeps running except for the chain that includes the bad plugin.
The latency added by this async mode is the same as one audio period.
When using sync mode, the plugins render directly into the active audio buffer. This has lower latency, but makes xruns much more noticeable (one bad plugin can ruin the entire audio graph, even if disconnected).
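Putting that explanation together with the measurements above, the numbers fit a simple back-of-the-envelope model (my own fit to this data, not an official JACK formula): roundtrip ≈ (periods, plus 1 extra period in async mode) × buffer size, plus the ~21-22 frames of extra loopback latency.

```python
# Back-of-the-envelope model, fitted to the measurements above:
# roundtrip ≈ (periods + (1 if async else 0)) * buffer + loopback extra
def model(buffer, periods, is_async, loopback):
    return (periods + (1 if is_async else 0)) * buffer + loopback

# (buffer, periods, async?, loopback extra, measured frames)
measurements = [
    (128, 2, True,  22, 406.723),
    (128, 3, False, 21, 405.718),
    (128, 2, False, 21, 277.724),
    ( 64, 3, True,  21, 277.723),
    ( 64, 3, False, 22, 214.723),
]

for buf, per, asy, loop, measured in measurements:
    predicted = model(buf, per, asy, loop)
    print(f"{buf:3}/{per}/{'async' if asy else 'sync '}: "
          f"predicted {predicted}, measured {measured}")
# every prediction lands within 1 frame of the measured value
```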