We have so far studied the simplest thing a solution of a differential equation can do: be attracted or repelled by a fixed point. The next simplest thing it can do is to behave periodically in time, that is, to trace out a closed curve, called a cycle. We have already met examples of periodic solutions in the opening part of this ebook. In particular, we have seen an example of a stable or attracting limit cycle. It corresponds to self-sustained oscillations which are robust under slight perturbations. Such limit cycles appear in physical, biological and chemical systems. We shall see that these objects only occur in nonlinear systems.
Periodic solutions and cycles
Consider a two-dimensional system
$$ \begin{cases} \dot{x}=f(x,y)\\ \dot{y}=g(x,y) \end{cases} $$
where $(x,y)\in\mathbb{R}^2$ and $f,g:\mathbb{R}^2\to\mathbb{R}$ are continuously differentiable functions. We say that a solution $(x(t),y(t))$ is periodic if there exists some number $T>0$ such that $$ x(t)=x(t+T) \quad \text{and}\quad y(t)=y(t+T)\quad\text{for all}\thinspace t. $$
The period of this solution is defined to be the minimum such $T$. The trajectory of a periodic solution is a closed curve in the $xy$-plane, called a cycle. Notice that a constant solution (that is, a fixed point) is not considered a periodic solution.

Remark. To define periodic solutions, it is enough to require that there exists a time $T>0$ such that $x(0)=x(T)$ and $y(0)=y(T)$. Then, by the uniqueness theorem for solutions of differential equations, we must have $x(t)=x(t+T)$ and $y(t)=y(t+T)$ for all $t$.
Cycles cannot be isolated for linear systems.
The simplest example we have met is the harmonic oscillator described by the linear system
$$ \begin{cases} \dot{x}= y \\ \dot{y}=-x. \end{cases} $$
Its trajectories are concentric circles centered at the origin, so none of the cycles is isolated. This is in fact a general feature of linear systems. Indeed, suppose we are given a linear system $\dot{\boldsymbol{x}}=A\boldsymbol{x}$ that has a periodic solution $\boldsymbol{x}(t)$. Then, by linearity, $c\thinspace\boldsymbol{x}(t)$ is also a periodic solution for any constant $c\neq 0$. Hence the cycle associated with $\boldsymbol{x}(t)$ is surrounded by a one-parameter family of cycles. In particular, the amplitude of a periodic oscillation is set entirely by the initial condition $\boldsymbol{x}_0$, and any slight disturbance of the amplitude will persist forever. In contrast, we are going to see that, in nonlinear systems, it is possible to have isolated cycles that control the behavior of nearby trajectories.
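To make this concrete, here is a minimal numerical sketch (assuming NumPy, SciPy and Matplotlib are available; the initial radii are chosen by us for illustration) that integrates the harmonic oscillator from several initial conditions and plots the resulting concentric cycles:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Harmonic oscillator: x' = y, y' = -x.
def harmonic(t, state):
    x, y = state
    return [y, -x]

t_span = (0.0, 2 * np.pi)            # one full period is 2*pi
t_eval = np.linspace(*t_span, 200)

# Each initial condition (r0, 0) traces out the circle of radius r0:
# the amplitude is set entirely by the initial condition.
for r0 in [0.5, 1.0, 1.5, 2.0]:
    sol = solve_ivp(harmonic, t_span, [r0, 0.0], t_eval=t_eval)
    plt.plot(sol.y[0], sol.y[1])

plt.gca().set_aspect("equal")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Concentric cycles of the harmonic oscillator")
plt.show()
```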
Limit cycles
Suppose we have a nonlinear system $\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x})$ with an isolated cycle $C$. By ‘isolated’ we mean that, at least in a sufficiently small ‘annular’ neighborhood of $C$, there is no other cycle. We expect that sufficiently nearby trajectories will ‘feel’ its presence. What can happen? Well, they can either spiral toward $C$ or spiral away from it. Then $C$ is called a limit cycle.
It is natural to say that $C$ is stable or attracting if nearby trajectories spiral toward $C$ on both sides. When they spiral away from $C$ on both sides, then we say that $C$ is unstable or repelling. In the mixed case, we say that $C$ is semi-stable. This third case is not generic.
Building a limit cycle from scratch. We want to cook up a simple example of a limit cycle, some kind of toy model. Let us look for a system having the circle of radius one centered at $(0,0)$ as an attracting limit cycle. The trick is to think in terms of polar coordinates $(r,\theta)$. The simplest situation would be to have uncoupled equations for the radial and angular motions:
$$ \begin{cases} \dot{r}=f(r)\\ \dot{\theta}=g(\theta). \end{cases} $$
In the $r$-direction, we want $f(1)=0$, that is $r=1$ to be a fixed point. We also want to have $r(t)$ decreasing down to $1$ if $r(0)>1$, and increasing up to $1$ if $r(0)<1$. Let us take the logistic equation, that is $$ \dot{r}=r(1-r). $$
In the $\theta$-direction, we can simply take a rotation at constant angular velocity, for instance $\dot{\theta}=-1$ for a clockwise rotation. Combining the radial and the angular motions, we get spiralling trajectories toward the circle, from both sides.
Coming back to Cartesian coordinates, this gives much more complicated equations, which are now coupled:
$$ \begin{cases} \dot{x}= y +x\big(1-\sqrt{x^2+y^2}\big)\\ \dot{y}=-x+ y\big(1-\sqrt{x^2+y^2}\big). \end{cases} $$
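For the reader who wants to check this, write $x=r\cos\theta$ and $y=r\sin\theta$. Then $$ \dot{x}=\dot{r}\cos\theta-r\dot{\theta}\sin\theta = r(1-r)\,\frac{x}{r}+y = y+x\big(1-\sqrt{x^2+y^2}\big), $$ using $\dot{r}=r(1-r)$, $\dot{\theta}=-1$ and $r=\sqrt{x^2+y^2}$; the computation for $\dot{y}$ is analogous.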
With the previous recipe, we can cook up examples with, say, three limit cycles: $$ \begin{cases} \dot{r}= r(1-r)(2-r)(3-r)\\ \dot{\theta}=-1. \end{cases} $$
The circles of radius $1$ and $3$ are stable limit cycles, whereas the circle of radius $2$ is an unstable one. (To see this, check the sign of $\dot{r}$ between consecutive roots of $r(1-r)(2-r)(3-r)$.)
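As a quick sanity check, here is a small numerical sketch (assuming SciPy is available; the initial radii are chosen by us for illustration) that integrates the radial equation: trajectories starting near $1$ or $3$ converge to those radii, while those starting near $2$ are pushed away.

```python
from scipy.integrate import solve_ivp

# Radial equation with fixed points at r = 0, 1, 2, 3.
def radial(t, r):
    return r * (1 - r) * (2 - r) * (3 - r)

# Initial radii straddling each circle; r = 2 should repel.
for r0 in [0.5, 1.5, 1.9, 2.1, 3.5]:
    sol = solve_ivp(radial, (0.0, 20.0), [r0])
    print(f"r(0) = {r0}  ->  r(20) ~ {sol.y[0, -1]:.4f}")
```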
So we have easily made examples of limit cycles from scratch. But, in practice, we face the reverse problem: given a vector field $\boldsymbol{f}$, how do we know that it generates a limit cycle? Can we predict how many limit cycles it will have? Of course, we can use numerical experiments. Mathematically speaking, proving the existence of a limit cycle can be difficult. Predicting how many limit cycles there can be turns out to be an open problem, even for polynomial vector fields!
This is the second part of Hilbert’s sixteenth problem. All that is known is that a given polynomial vector field in the plane has only finitely many limit cycles. As we show later on, there exist smooth ($C^\infty$) vector fields on the plane with infinitely many concentric limit cycles.
Proving the presence of a limit cycle
Although limit cycles show up quite naturally in physical, biological or chemical models, it is not easy to mathematically prove the existence of a limit cycle. The main tool for doing this is the so-called Poincaré-Bendixson theory. We deliberately state the following theorem in a friendly way, and not in its most general form.
Poincaré-Bendixson theorem. Consider a system
$$ \begin{cases} \dot{x}=f(x,y)\\ \dot{y}=g(x,y) \end{cases} $$
where $f$ and $g$ have continuous partial derivatives, and such that solutions exist for all $t$. Let $R$ denote a closed, bounded region of the $xy$-plane which contains no fixed points. Suppose that no solution may leave $R$. Then the system has a periodic solution in the region $R$.

It is common to call the region $R$ a trapping region since, if and when a solution enters $R$, it can never leave it. So, to apply the theorem, our task is to find a region such that, if we travel along its boundary, we are always pushed into the region. Mathematically, this means that the vector field points inwards along the curve delimiting $R$. Finding a trapping region is in general difficult. We shall see examples in the sequel.
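As an illustration, take the toy model built above, $\dot{r}=r(1-r)$, $\dot{\theta}=-1$, and let $R$ be the annulus $\tfrac{1}{2}\le r\le 2$. It contains no fixed points (the only one is the origin), and the vector field points inwards on both boundary circles: on $r=\tfrac{1}{2}$ we have $\dot{r}=\tfrac{1}{4}>0$, while on $r=2$ we have $\dot{r}=-2<0$. The theorem then guarantees a periodic solution inside $R$; here, of course, we already know it: the unit circle.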
A consequence of the Poincaré-Bendixson theorem is that if there exists a closed trajectory in a two-dimensional system, then its interior must contain a fixed point.
From an intuitive viewpoint, the Poincaré-Bendixson theorem is a rather plausible result. Indeed, every solution starting in $R$ stays there forever, and it cannot approach a fixed point; nor can it tend to infinity, since $R$ is bounded by assumption. Now pick a point $(x_0,y_0)$ inside $R$ and run the corresponding solution. If at some time $t>0$ we have $(x(t),y(t))=(x_0,y_0)$, then we have a periodic solution. If no such time exists, there are two scenarios, since a trajectory cannot self-intersect: either the solution eventually rotates in a fixed sense, say clockwise, or it must change its sense of rotation infinitely many times. In the former case, it has to ‘pile up’ on a closed trajectory. In the latter case, since the trajectory cannot cross itself, the turns must become sharper and sharper; in the long run, this contradicts the differentiability of the vector field. However, a rigorous proof of the Poincaré-Bendixson theorem is fairly lengthy and requires the full force of an innocent-looking theorem of topology, namely the Jordan curve theorem.
The Jordan curve theorem asserts that a simple closed curve divides the Euclidean plane into two disjoint regions, the ‘inside’ and the ‘outside’. Although intuitively obvious, this theorem is surprisingly hard to prove. The point is that we do not usually think about badly behaved curves, like nowhere differentiable curves.
Warning. The Poincaré-Bendixson theorem does not generalise to three or more dimensions. We will see later on the far-reaching consequences of this fact.
Ruling out the presence of a limit cycle
We now present tools that can be used to eliminate the possibility of having cycles.
Bendixson’s theorem. Consider a system
$$ \begin{cases} \dot{x}=f(x,y)\\ \dot{y}=g(x,y) \end{cases} $$
where $f$ and $g$ have continuous partial derivatives on some simply connected domain $D$ of the $xy$-plane. (By ‘simply connected’ we mean that the domain is in one piece and has no ‘holes’.) Then, if the quantity $$ \frac{\partial f}{\partial x}(x,y)+\frac{\partial g}{\partial y}(x,y) $$
is not identically zero (i.e., it does not vanish at every point of $D$) and does not change sign in $D$, there are no cycles inside $D$.
Let us give the proof of this theorem, which is by contradiction. Suppose that we do have a closed trajectory $C$, with interior $I$, contained in $D$. We apply Green's theorem, which says that
$$ \oint_C (f\boldsymbol{i}+g\boldsymbol{j})\cdot \boldsymbol{n}\, \text{d}s \equiv \oint_C f\text{d}y-g\text{d}x = \iint_I \left( \frac{\partial f}{\partial x}+\frac{\partial g}{\partial y}\right) \, \text{d}x\text{d}y $$
where $\boldsymbol{i}$ (resp. $\boldsymbol{j}$) is the unit vector in the $x$-direction (resp. in the $y$-direction), and $\boldsymbol{n}$ is the outward normal to the curve $C$. By hypothesis, the integrand on the right-hand side is continuous, does not change sign, and is not identically zero; the right-hand side is therefore either strictly positive or strictly negative.

On the other hand, the left-hand side must be zero. Indeed, since $C$ is a closed trajectory, it is everywhere tangent to the vector field $f\boldsymbol{i}+g\boldsymbol{j}$. This means that $\boldsymbol{n}$ is always perpendicular to $f\boldsymbol{i}+g\boldsymbol{j}$, so that the integrand $(f\boldsymbol{i}+g\boldsymbol{j})\cdot \boldsymbol{n}$ is identically zero. We have thus arrived at a contradiction, and we conclude that there cannot exist any closed trajectory in $D$.
Remark.
The quantity appearing in Bendixson's theorem is nothing but the divergence of the vector field. Remember our interpretation of a differential equation as describing an imaginary fluid flow: the vector field gives the fluid velocity at each point $(x,y)$. The divergence of the vector field is then the extent to which the flow behaves like a source or a sink at $(x,y)$: it is a local measure of how much more flow exits an infinitesimal region of space than enters it. One can also see that the divergence of the vector field at the point $(x,y)$ is the trace of the Jacobian matrix at that point.
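In passing, this last identity is easy to check symbolically. Here is a minimal SymPy sketch (assuming SymPy is available), using the toy limit-cycle system built earlier as the example field:

```python
import sympy as sp

x, y = sp.symbols("x y")
r = sp.sqrt(x**2 + y**2)

# The toy limit-cycle field built earlier.
f = y + x * (1 - r)
g = -x + y * (1 - r)

# Divergence of the field (f, g).
divergence = sp.diff(f, x) + sp.diff(g, y)

# Trace of the Jacobian matrix of (f, g).
J = sp.Matrix([f, g]).jacobian(sp.Matrix([x, y]))

print(sp.simplify(divergence - J.trace()))  # prints 0: the two agree
```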
Example. Consider a general two-dimensional linear system
$$ \begin{cases} \dot{x}=ax+by\\ \dot{y}= cx+dy \end{cases} $$
where $a,b,c,d$ are real parameters. Applying Bendixson's theorem, we find that if $a+d\neq 0$ then there are no closed trajectories. If $a+d=0$, the theorem says nothing, but we can easily conclude by using what we have learnt about linear systems. The characteristic equation of the system is $$ \lambda^2 -(a+d)\lambda +(ad-bc)=0. $$
If $a+d=0$, the eigenvalues are real with opposite signs when $ad-bc<0$, and the origin is a saddle. When $ad-bc>0$, the eigenvalues are purely imaginary and the origin is a center. In the special case $a=d=0$, $b=1$, $c=-1$, we get the harmonic oscillator $\dot{x}=y$, $\dot{y}=-x$, for which we know that there is a continuum of closed trajectories. In conclusion, the above linear system has cycles if, and only if, $a+d=0$ and $ad-bc>0$.

Unfortunately, Bendixson's theorem can fail even for simple systems. But Dulac rescued it by making the following simple but deep observation: consider the system
$$ \begin{cases} \dot{x}= \varphi(x,y)\, f(x,y)\\ \dot{y}= \varphi(x,y)\, g(x,y) \end{cases} $$
where $\varphi$ is a continuously differentiable, real-valued function; it shares the same phase portrait as the system $$ \begin{cases} \dot{x}= f(x,y)\\ \dot{y}= g(x,y). \end{cases} $$
Think of the case when $\varphi(x,y)>0$ for all $(x,y)$. Then the above equation for $\dot{x}$ can be written as
$$
\frac{\text{d} x}{\text{d}\tau}=f(x,y)\quad\text{with}\quad \text{d}\tau=\varphi(x,y)\,\text{d}t,
$$
and similarly for $\dot{y}$: the trajectories are traversed at a different speed, but they are the same curves. Hence, if we can disprove the existence of a closed trajectory of the modified system for some $\varphi$, we disprove the existence of a closed trajectory of the original system.
Dulac’s theorem.
Consider the same assumptions as in Bendixson's theorem. If, in addition, one can find a continuously differentiable function $\varphi:\mathbb{R}^2\to\mathbb{R}$ such that
$$ \frac{\partial (\varphi f)}{\partial x}(x,y)+\frac{\partial (\varphi g)}{\partial y}(x,y) $$
is not identically zero (i.e., it does not vanish at every point of $D$) and does not change sign in $D$, then there are no cycles inside $D$.

The function $\varphi$ is usually called a Dulac function. Unfortunately, there is no general algorithm for finding such functions. Let us apply the Bendixson-Dulac theorem to two previously studied examples for which we observed the absence of cycles.
More sharks & sardines. This is the system
$$ \begin{cases} \dot{x}= x\, (1-x-y)\\ \dot{y}= \beta\, (x-\alpha)y \end{cases} $$
where $\alpha,\beta>0$. Recall that we study this system in the interior of the positive quadrant.
Let
$$ \varphi(x,y)=\frac{1}{xy} $$
which is well-defined for $x,y>0$. We have $$ \frac{\partial (\varphi f)}{\partial x}(x,y)+\frac{\partial (\varphi g)}{\partial y}(x,y)=-\frac{1}{y}. $$
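Indeed, writing $f$ and $g$ for the right-hand sides of the system, $$ \varphi f=\frac{1-x-y}{y},\qquad \varphi g=\frac{\beta(x-\alpha)}{x}, $$ so that $\frac{\partial (\varphi f)}{\partial x}=-\frac{1}{y}$ and $\frac{\partial (\varphi g)}{\partial y}=0$.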
This is strictly negative for all $x,y>0$; thus there cannot be cycles in this model.

Back to our model of two competing populations. We have the equations
$$ \begin{cases} \dot{x}= x\, (1-x-a_{12}y)\\ \dot{y}= \rho y\, (1-y-a_{21}x) \end{cases} $$
where $\rho,a_{12},a_{21}>0$. Again, we only consider the first quadrant, and we use the same Dulac function as before. We get $$ \frac{\partial (\varphi f)}{\partial x}(x,y)+\frac{\partial (\varphi g)}{\partial y}(x,y)=-\left(\frac{1}{y}+\frac{\rho}{x}\right). $$
Since this is a strictly negative quantity for all $x,y>0$, there is no cycle in this model.
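For the skeptical reader, here is a short SymPy sketch (assuming SymPy is available; the symbol names mirror the parameters in the text) that verifies both Dulac computations:

```python
import sympy as sp

x, y, alpha, beta, rho, a12, a21 = sp.symbols(
    "x y alpha beta rho a12 a21", positive=True
)
phi = 1 / (x * y)  # the Dulac function used in both examples

def dulac(f, g):
    """Divergence of (phi*f, phi*g)."""
    return sp.simplify(sp.diff(phi * f, x) + sp.diff(phi * g, y))

# Sharks & sardines: expect -1/y.
print(dulac(x * (1 - x - y), beta * (x - alpha) * y))

# Competing populations: expect -(1/y + rho/x).
print(dulac(x * (1 - x - a12 * y), rho * y * (1 - y - a21 * x)))
```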