There’s plenty of literature on limit-cycle oscillations. But simple and generic methods to characterise their properties are rare: since limit cycles often arise from nonlinear feedbacks and delays, general analytical results on their properties are usually not easy to obtain, save for a few exceptions.
Squinting our eyes, the two main characteristics of an oscillation are its period and its amplitude. If you have read the title of this post, you won't be surprised which of the two we're going to talk about.
Consider a delayed negative-feedback system of the form

$$\dot{x}(t) = f\big(x(t-\tau)\big) - x(t), \tag{1}$$

where $x$ is the oscillating variable, $f$ is a negative-feedback function (usually monotonically decreasing and sufficiently nonlinear to generate oscillations) and $\tau$ is the feedback delay.
The famous Mackey–Glass equations are of this form.
Method. It turns out that, provided $f$ satisfies reasonable conditions on smoothness, monotonicity and steepness, it's surprisingly simple to constrain the amplitude of any limit-cycle oscillation generated by this system: Define the function

$$g(x) = f\big(f(x)\big) - x \tag{2}$$
and find its smallest and largest real zeroes. That’s it. Those values bound the amplitude of the limit cycle of Eq. (1) from above and below.
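For concreteness, here is a minimal numerical sketch of this recipe. The Hill-type feedback $f(x) = a/(1+x^n)$ below is an assumed example chosen for illustration (the post's concrete function is not reproduced here); the recipe itself only needs $f$, a grid scan for sign changes, and bisection:

```python
# Sketch of the method, assuming a Hill-type feedback f(x) = a / (1 + x^n).
# (Illustrative choice; any smooth, steep, monotonically decreasing f works.)

def f(x, a=4.0, n=4):
    """Assumed monotonically decreasing negative-feedback function."""
    return a / (1.0 + x**n)

def g(x):
    """The function from Eq. (2): g(x) = f(f(x)) - x."""
    return f(f(x)) - x

def amplitude_bounds(lo=0.0, hi=10.0, steps=10_000):
    """Scan [lo, hi] for sign changes of g and refine each root by bisection."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    roots = []
    for x0, x1 in zip(xs, xs[1:]):
        if g(x0) * g(x1) <= 0:          # sign change: a root lies in [x0, x1]
            left, right = x0, x1
            for _ in range(60):         # plain bisection
                mid = 0.5 * (left + right)
                if g(left) * g(mid) <= 0:
                    right = mid
                else:
                    left = mid
            roots.append(0.5 * (left + right))
    return min(roots), max(roots)       # smallest and largest real zeroes

x_lo, x_hi = amplitude_bounds()
print(x_lo, x_hi)
```

For this particular choice of $f$, the smallest and largest zeroes come out near 0.016 and 4.0, with the (unstable) fixed point of Eq. (1) sitting in between.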
where the free parameter characterises the feedback strength. This function is smooth and monotonically decreasing. If the feedback strength is large enough, the system exhibits oscillations; see below for examples.
All we need to do is plug this expression into Eq. (2) and solve for its zeroes. (If you're lazy, like I am: your favourite computer algebra system or WolframAlpha can do this; see here.) We're lucky: the zeroes that have a chance to be real (and to be extrema among the zeroes) have simple analytical forms, given in Eq. (4).
These expressions are real only if the feedback strength exceeds a critical value. At that critical value, the expression under the square root vanishes, and there's no limit cycle, just a fixed point. The same is true below it.
Let's look for some numerical confirmation. In panel (a), the dots show peak and trough values of limit cycles as determined from numerical simulations with varying feedback strength. Different dots correspond to different delays (whence the different amplitudes); the curves show the analytical bounds (4). In panel (b), we see a few example time series for different parameters, starting from an arbitrary initial condition. As soon as the system has settled onto its limit cycle, the solution stays within the bounds (shaded areas).
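A simulation of this kind is easy to reproduce. Below is a minimal sketch that integrates a delay equation of the form of Eq. (1) with forward Euler and a constant history; the Hill feedback $f(x) = a/(1+x^n)$ with $a=4$, $n=4$ is an assumed example, not necessarily the parameters behind the figure:

```python
# Minimal Euler integration of x'(t) = f(x(t - tau)) - x(t).
# The Hill feedback and all parameter values are illustrative assumptions.

def f(x, a=4.0, n=4):
    return a / (1.0 + x**n)

def simulate(tau=10.0, dt=0.01, t_end=400.0, x0=0.5):
    """Forward Euler; the history on [-tau, 0] is held constant at x0."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)               # constant history
    for _ in range(int(t_end / dt)):
        x_now = xs[-1]
        x_delayed = xs[-1 - lag]        # x(t - tau)
        xs.append(x_now + dt * (f(x_delayed) - x_now))
    return xs

xs = simulate()
tail = xs[len(xs) // 2:]                # discard the transient
print(min(tail), max(tail))
```

With a long delay the oscillation saturates, and the trough and peak of the settled solution land just inside the smallest and largest zeroes of $f(f(x)) - x$ (about 0.016 and 4.0 for this choice of $f$).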
Why does this work? There are several ways to think about it. Let's make this really trivial. Eq. (1) has the form $\dot{x} = s - x$, where $s = f(x(t-\tau))$ is a 'source' function, which happens to depend on $x$ itself. If $s$ is a constant input, $x$ will converge exponentially towards that value. If $s$ is not constant, $x$ will follow the motion of $s$; whether it is able to catch up with $s$ depends on how quickly $s$ is changing. Since $s(t) = f(x(t-\tau))$, $x$ will constantly try to catch up with a transformed version of itself in the past. Well, that's just what a negative-feedback oscillator is.
Now, consider saturating oscillations, in which the system remains at its extrema for an extended period of time, like the oscillations with long delays in panel (b) of the figure above. If the system manages to remain at a maximum $x_{\max}$ for a time longer than the delay $\tau$, it will respond to it: the term $f(x(t-\tau))$ now takes the value $f(x_{\max})$ for an extended time, and therefore $f(x_{\max})$ must be the minimum towards which the system converges, $x_{\min} = f(x_{\max})$. But the same logic applies vice versa: $x_{\max} = f(x_{\min})$. It's pretty straightforward if you think about it.
Plugging these two expressions into each other, we get $x_{\max} = f\big(f(x_{\max})\big)$ (and correspondingly for $x_{\min}$). The function defined in Eq. (2) is constructed precisely such that its zeroes satisfy this condition; it's just a convenient way to find $x_{\min}$ and $x_{\max}$.
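Seen this way, the pair $(x_{\min}, x_{\max})$ is nothing but a period-2 orbit of the map $f$. When $f$ is steep enough for that orbit to be attracting, you don't even need a root finder: iterating $f \circ f$ converges to it. A minimal sketch, using a Hill-type feedback $f(x) = a/(1+x^n)$ purely as an assumed example:

```python
# x_min = f(x_max) and x_max = f(x_min) form a period-2 orbit of the map f,
# so (when that orbit is attracting) iterating f twice finds it.
# The Hill feedback below is an assumed example, not a function from the post.

def f(x, a=4.0, n=4):
    return a / (1.0 + x**n)

x = 0.0
for _ in range(100):                    # iterate x -> f(f(x)) until it settles
    x = f(f(x))

x_min, x_max = sorted((x, f(x)))
print(x_min, x_max)
```

If the iteration does not converge (shallow $f$), fall back on finding the zeroes of $f(f(x)) - x$ directly.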
But it's far from a proof. Which requirements must $f$ satisfy for this to work? What happens if oscillations are not saturating? How sharp are the bounds? And can this method be generalised to oscillators other than the specific delay system of Eq. (1), e.g., the repressilator and other multi-component systems? If you really must know all of this, have a look at my paper.
Should you find this useful, or should you be aware of similar or related methods, feel free to get in touch! (firstname.lastname@example.org)