On Mon, 2012-05-28 at 19:48 +0000, Fons Adriaensen wrote:
Sorry, that term isn't clear. By 'signals' I meant basically a single
numeric value (float), whatever the rate.
As opposed to an 'event' (also an ambiguous term) which actually says
"hey, this control changed to 4.0 now" when it changes - and only when
it changes. An 'event' could also have ramp information for
interpolation and whatever else.
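To make the distinction concrete, here is a rough sketch in C. The types
are hypothetical, not from any real plugin API:

```c
#include <stdint.h>

/* A 'signal': one float per period, sampled whether or not it changed. */
typedef float ControlSignal;

/* An 'event': emitted only when the value changes, stamped with the
   frame it applies to, optionally carrying ramp information for
   interpolation.  Hypothetical struct, purely illustrative. */
typedef struct {
    uint32_t frame;        /* frame this change applies to */
    float    value;        /* "this control changed to 4.0 now" */
    uint32_t ramp_frames;  /* 0 = step; else interpolate over this span */
} ControlEvent;
```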
> > It's better in many cases to be explicitly told when the control value
Well, in the event case we are *not* regularly sampling a single value
(which is a signal), we are describing a parameter change over time. If
it didn't change this time, it's the same as last time, but of course:
> , and assuming you
(The following is my perspective on the issue with respect to the plugin
interface, not a rebuttal: I agree completely)
Right, you need not only 'now' information in the event, but 'future'
information in that event to make it work. I see these issues as more
evidence that a single value just isn't good enough, since getting the
requisite information when the control itself is just a single float
involves mandated latency (i.e. the value this cycle is actually for the
future, and the value now is actually one that came earlier). With the
right rules, that can work, but it's screwy: the control value at frame
32 doesn't actually correspond to frame 32, the 'synchronous' feel of
run() has been ruined, and more importantly, sample accurate offline
rendering just breaks outright (unless you start tacking on a bunch of
additional crap to work around that with special cases).
I think the same requirements can be better met by moving past a single
float. Or: potentially flexible buffer size is not the error in LV2;
the use of a single float for controls is. The buffer size feature is
mostly orthogonal and simple to add; ControlPort is the thing that
probably shouldn't have been inherited from LADSPA at all.
To reframe slightly what you've said: in order for interpolation to work
correctly the value must be known for 'now', and at 'now plus 1
block' (see below for more on 'block'). This is basically the event-based
equivalent to saying you need a "regular period", but slightly different
to a sampled signal for a control. Most notably, the value described
now indeed represents *now*.
Of course, in some cases (like the ones you are thinking of, I believe)
you may not be able to actually affect the sample at t=32 with the value
at t=32; the interpolation or other factors might add some additional
hidden control latency by necessity. This is fine. The important
distinction is that what the host is describing is a single synchronous
stream of control and audio, where t=32 is t=32 for both control and
audio.
The big tangible difference is that the latency is *optional*. With
this scheme, you could e.g. write a trivial amplifier plugin that simply
multiplies by the gain control, and there would be no additional latency
whatsoever. The host can describe a step-wise function and have it
actually applied to the audio (in whatever way) without latency.
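A minimal sketch of that amplifier, assuming sorted step events of a
hypothetical type (this is not a real LV2 run() signature):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical event: a step change to `value` taking effect at `frame`. */
typedef struct { uint32_t frame; float value; } GainEvent;

/* Trivial amplifier: apply each step-wise gain change exactly at the
   frame it is stamped with, so there is no additional latency.
   Events must be sorted by frame. */
static void run_amp(const float* in, float* out, uint32_t n_frames,
                    float gain, const GainEvent* evs, size_t n_evs)
{
    size_t e = 0;
    for (uint32_t t = 0; t < n_frames; ++t) {
        while (e < n_evs && evs[e].frame == t)
            gain = evs[e++].value;   /* change takes effect at frame t */
        out[t] = in[t] * gain;
    }
}
```

With a step to 0.5 at frame 2, the output drops at exactly frame 2 — the
value at t=32 really is the value for t=32.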
This is crucial if you want to build stuff out of lower-level plugins in
modular synths and such (but is not limited to synthesis), a case where
a "float value with guaranteed control latency" scheme completely fails.
(There is the question of what to do in live situations where future
values do not exist, but there are only two possible things to do, and
the solution is obvious, so I will ignore that and stay on topic.)
Relating back to buffer size, there is one time parameter here: how far
into the future the plugin must know controls in advance (L).
This raises a big question: ignoring convolution and such, is that value
actually related to the audio block size at all? If you have a
description of a continuous parameter where you know values at least L
frames in advance (for sufficiently large L), you can interpolate that
parameter smoothly, in whatever way is appropriate for the plugin.
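For instance, with the next breakpoint known at least L frames ahead, a
plugin holding the previous breakpoint can interpolate between them.
Linear interpolation is just one obvious choice here; a plugin could use
any curve it likes:

```c
#include <stdint.h>

/* Interpolate a control value at frame t, given the breakpoint (t0, v0)
   at or before t and the next breakpoint (t1, v1), which is available
   because values are known >= L frames in advance.  Sketch only. */
static float interp_control(uint32_t t, uint32_t t0, float v0,
                            uint32_t t1, float v1)
{
    if (t1 <= t0)
        return v1;                          /* degenerate: coincident points */
    float a = (float)(t - t0) / (float)(t1 - t0);
    return v0 + a * (v1 - v0);              /* linear ramp from v0 to v1 */
}
```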
It seems the desire for a restricted block size (in cases where it is
not otherwise necessary) is actually just a consequence of the control
description (i.e. a single float) sucking.
I think the actual problem is inadequate controls, and fixing that seems
like a dramatically better solution than restricting the block size and
mandating latency to work around it. The latter is more of a kludge
than an ideal solution and has many downsides, not the least of which is
that it's dramatically less feasible. The only thing every person
in this thread has agreed on is that restricting the buffer size (to
make controls work properly) is a massive difficulty for hosts. As far
as I can tell, we don't have to.
Linux-audio-dev mailing list