Part II: Two-dimensional systems: flows in the plane - Chapter 10

Stability of Fixed Points & the Lyapunov method

We investigate the stability and instability properties of fixed points. In other words, what happens if we perturb a system that is sitting at a fixed point? As the reader may guess, we should use the linear approximation of the system, but then we expect only to be able to deal with ‘small’ perturbations. This is why we shall also present another, somewhat more geometric, technique: the method of Lyapunov. Moreover, this method gives us a grasp on the size of the basin of attraction of a fixed point (when it is asymptotically stable).

What does it mean for a fixed point to be ‘stable’?

Consider a general two-dimensional system given by

$$ \dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x}) $$

where $\boldsymbol{x}=(x,y)$ and $\boldsymbol{f}(\boldsymbol{x})=(f(\boldsymbol{x}),g(\boldsymbol{x}))$. Suppose it has a fixed point $\bar{\boldsymbol{x}}=(\bar{x},\bar{y})$, that is, a point at which the vector field vanishes:

$$ \boldsymbol{f}(\bar{\boldsymbol{x}})=\boldsymbol{0}. $$

The first notion of stability is the following: a fixed point is stable if starting close enough guarantees staying close. Here is a precise mathematical formulation:

Definition. The fixed point $\bar{\boldsymbol{x}}$ is said to be stable if, given $\epsilon>0$, there is a $\delta>0$ (depending only on $\epsilon$) such that, for every $\boldsymbol{x}_0$ for which $\|\boldsymbol{x}_0-\bar{\boldsymbol{x}}\|<\delta$, the solution $\boldsymbol{x}(t)$ such that $\boldsymbol{x}(0)=\boldsymbol{x}_0$ satisfies the inequality

$$ \|\boldsymbol{x}(t)-\bar{\boldsymbol{x}} \| \leq \epsilon\quad\text{for all}\thinspace t\geq 0. $$

The fixed point is said to be unstable if it is not stable.


To spell out what instability means: there is an $\eta>0$ such that, for any $\delta>0$, there are an $\boldsymbol{x}_0$ with $\|\boldsymbol{x}_0-\bar{\boldsymbol{x}}\|<\delta$ and a time $t_{\boldsymbol{x}_0}>0$ such that

$$ \|\boldsymbol{x}(t_{\boldsymbol{x}_0})-\bar{\boldsymbol{x}} \|= \eta. $$
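As a sanity check on the definition, here is a minimal numerical sketch (assuming Python with NumPy). For the linear center $\dot{x}=y$, $\dot{y}=-x$, solutions are rigid rotations: one may take $\delta=\epsilon$ in the definition, so the origin is stable, yet it is not attracting.

```python
import numpy as np

# A center (x' = y, y' = -x): solutions rotate on circles, so the
# origin is stable (trajectories stay close) but not attracting.
# Exact solution through (x0, y0): a rigid rotation of the plane.
def solution(x0, y0, t):
    return (x0*np.cos(t) + y0*np.sin(t), -x0*np.sin(t) + y0*np.cos(t))

t = np.linspace(0.0, 50.0, 2001)
x, y = solution(0.1, 0.0, t)
r = np.hypot(x, y)
print(r.max())  # the distance to the origin stays at 0.1 for all time
```

This also previews the discussion below: a stable fixed point need not be attracting, and conversely.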



Next, we define the notion of attracting point.

Definition. The fixed point $\bar{\boldsymbol{x}}$ is said to be attracting if there is a $\delta>0$ such that

$$ \lim_{t\to\infty} \boldsymbol{x}(t)=\bar{\boldsymbol{x}} $$

whenever $\|\boldsymbol{x}(0)-\bar{\boldsymbol{x}}\|<\delta$.

In other words, any solution that starts within a distance $\delta$ of $\bar{\boldsymbol{x}}$ is guaranteed to converge to $\bar{\boldsymbol{x}}$ eventually.

Lastly, there is the notion of asymptotic stability, which combines the two previous notions.

Definition. The fixed point $\bar{\boldsymbol{x}}$ is said to be asymptotically stable if it is both stable and attracting.

The reader is probably wondering: Why bother with the last definition? Indeed, it seems intuitively clear that an attracting fixed point is necessarily stable. This is true for one-dimensional systems but false in general, as the following example shows.

Attractiveness does not imply stability: Vinograd’s example

The following, somewhat pathological, example illustrates why it is essential to require stability in the definition of asymptotic stability. This section can be skipped at first reading.

Consider the system

$$ \begin{cases} \dot{x} = \frac{x^2(y-x)+y^5}{r^2(1+r^4)}\\ \dot{y} = \frac{y^2(y-2x)}{r^2(1+r^4)} \end{cases} $$

where $r^2=x^2+y^2$. The origin is the only fixed point.


Indeed, $\dot{y}=0$ implies either $y=0$ or $y=2x$. If $y=0$, then $\dot{x}=-x/(1+x^4)$, which vanishes only at $x=0$. If $y=2x$ and $\dot{x}=0$ as well, then

$$ x^3+ 32 x^5=0\quad \Rightarrow \quad x=0\quad \text{or}\quad x^2=-1/32. $$

So the only real solution is $x=y=0$.


One can see that the fixed point $(0,0)$ is attracting, but not stable! Indeed, many solutions starting in a small ball centered at $(0,0)$ leave the ball of radius $1/2$ centered at $(0,0)$, no matter how small the former ball is. Notice that proving the attractiveness of $(0,0)$ and its instability is not easy! The only thing that is easy to check is that $(0,0)$ attracts all points on the $x$-axis. Indeed, $\dot{y}|_{y=0}=0$, so the $x$-axis is invariant, and, on this line, $\dot{x}=-x/(1+x^4)$, which implies that $x(t)$ moves monotonically toward $(0,0)$.
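The claims about the $x$-axis can be verified symbolically. The sketch below (assuming SymPy) checks that the vertical component vanishes on the axis, that the horizontal component reduces to $-x/(1+x^4)$ there, and that the numerator of $\dot{x}$ along $y=2x$ is $x^3+32x^5$ as stated above.

```python
import sympy as sp

# Vinograd's vector field
x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2
f = (x**2*(y - x) + y**5) / (r2*(1 + r2**2))
g = y**2*(y - 2*x) / (r2*(1 + r2**2))

# On the x-axis (y = 0) the vertical component vanishes, so the axis
# is invariant; the horizontal component reduces to -x/(1 + x^4).
print(g.subs(y, 0))               # 0
print(sp.simplify(f.subs(y, 0)))  # -x/(x**4 + 1)

# Along y = 2x the numerator of x-dot is x^3 + 32*x^5, as in the text.
print(sp.expand(x**2*(2*x - x) + (2*x)**5))
```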

Asymptotic stability from linearization

With our previous study of linear systems, and their use in determining what the phase portrait looks like in a small neighborhood of a fixed point, it is not too surprising that we can deduce some consequences for stability.

Let’s start with linear systems, that is, systems of the form

$$ \dot{\boldsymbol{x}}=A\boldsymbol{x} $$

where $A$ is a two-by-two matrix. Such a system always has $\boldsymbol{0}$ as a fixed point. We have the following result.

Theorem. If $\boldsymbol{0}$ is a sink (i.e., all the eigenvalues of $A$ have negative real parts), then it is asymptotically stable. If $\boldsymbol{0}$ is a source or a saddle, then it is unstable.
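As a quick numerical sketch of this classification (the three matrices below are our own toy choices, not from the text), one can read off the stability type from the signs of the real parts of the eigenvalues with NumPy:

```python
import numpy as np

# Three linear systems x' = A x: a sink, a source, and a saddle,
# classified by the signs of the real parts of the eigenvalues of A.
examples = {
    "sink":   np.array([[-2.0, 1.0], [0.0, -1.0]]),
    "source": np.array([[ 2.0, 1.0], [0.0,  1.0]]),
    "saddle": np.array([[ 1.0, 0.0], [0.0, -1.0]]),
}
for name, A in examples.items():
    print(name, np.linalg.eigvals(A).real)
# sink: both real parts negative   -> asymptotically stable
# source and saddle: some positive -> unstable
```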

It is evident in the definitions above that the stability type of a fixed point is a local property. Consequently, it is reasonable to expect that, under certain conditions, the stability type of a fixed point $\bar{\boldsymbol{x}}$ of a nonlinear system can be determined from its linearized version, which is the linear system obtained by replacing the vector field by the Jacobian matrix evaluated at $\bar{\boldsymbol{x}}$.

Theorem. Let $\bar{\boldsymbol{x}}$ be a fixed point of $\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x})$ where $\boldsymbol{f}$ is continuously differentiable, that is, all its partial derivatives exist and are continuous. Then
- if all the eigenvalues of the Jacobian matrix at $\bar{\boldsymbol{x}}$ have negative real parts, then $\bar{\boldsymbol{x}}$ is asymptotically stable;
- if at least one of the eigenvalues has positive real part, then $\bar{\boldsymbol{x}}$ is unstable.

In view of the Hartman–Grobman theorem, this theorem can be rephrased by saying that, if $\bar{\boldsymbol{x}}$ is a sink, then it is asymptotically stable, whereas, if it is a source or a saddle, it is unstable.
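For instance, here is a hedged sketch applying the criterion to the damped pendulum $\dot{x}=y$, $\dot{y}=-\sin x-\mu y$ studied later in this chapter (the value $\mu=0.5$ is an arbitrary choice):

```python
import numpy as np

# Classify the fixed point (0, 0) of x' = y, y' = -sin(x) - mu*y
# from its linearization, i.e. the Jacobian matrix at the origin.
mu = 0.5
J = np.array([[0.0, 1.0],
              [-np.cos(0.0), -mu]])   # d/dx(-sin x) = -cos x
eigs = np.linalg.eigvals(J)
print(eigs.real)  # both negative: (0, 0) is a sink, hence asymptotically stable
```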

Although the previous theorem is by no means surprising, its proof is somewhat involved, so we refrain from giving it. Now, suppose that we have a sink. Can we say something quantitative about how fast a solution starting in a neighborhood of the fixed point approaches it? The answer is yes:

Theorem. Let $\bar{\boldsymbol{x}}$ be a sink. This means there exists $\alpha>0$ such that the eigenvalues of the Jacobian matrix at $\bar{\boldsymbol{x}}$ both have real parts strictly less than $-\alpha$. Then, given $\epsilon>0$, there exists a $\delta>0$ such that for every $\boldsymbol{x}_0$ for which $\| \boldsymbol{x}_0-\bar{\boldsymbol{x}}\|<\delta$, the solution $\boldsymbol{x}(t)$ such that $\boldsymbol{x}(0)=\boldsymbol{x}_0$ satisfies

$$ \|\boldsymbol{x}(t)-\bar{\boldsymbol{x}}\| \leq \epsilon\, e^{-\alpha t} $$

for all $t\geq 0$.
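A minimal numerical check of this bound, for a diagonal linear sink with eigenvalues $-1$ and $-2$ (a toy choice of ours, with $\alpha=0.9$):

```python
import numpy as np

# For the linear sink x' = A x with A = diag(-1, -2), every eigenvalue
# has real part < -alpha for alpha = 0.9, and the theorem predicts
# ||x(t)|| <= eps * exp(-alpha * t) for initial data close enough to 0.
alpha, eps = 0.9, 0.1
x0 = np.array([0.05, -0.05])                # ||x0|| < eps
t = np.linspace(0.0, 20.0, 500)
traj = np.array([x0 * np.exp(np.array([-1.0, -2.0]) * s) for s in t])
norms = np.linalg.norm(traj, axis=1)
print(np.all(norms <= eps * np.exp(-alpha * t)))  # True
```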

Thus, if we start close enough to a sink, we converge to it exponentially fast. Practically, this means that, after a short time, we can no longer distinguish the solution from the sink. This is what we observe in numerical experiments.

Let us close this section by noting that the concepts of stability we introduced and the theorems we stated are valid in any dimension.

Stability via Lyapunov’s method

For an asymptotically stable fixed point, it is of considerable practical importance to obtain good estimates of its basin of attraction, that is, the subset of $\mathbb{R}^2$ consisting of the initial data $\boldsymbol{x}_0$ with the property that $\boldsymbol{x}(t)\to \bar{\boldsymbol{x}}$ as $t\to \infty$, where $\boldsymbol{x}(t)$ is the solution such that $\boldsymbol{x}(0)=\boldsymbol{x}_0$. The above theorem would yield only a crude estimate, because the nonlinear terms are roughly estimated. This is why we introduce the so-called Lyapunov method, which makes explicit use of the nonlinear terms and usually gives a better estimate of the basin of attraction of an asymptotically stable fixed point. Unfortunately, this method is tricky to apply.

A starting example: the pendulum

We come back to the idealized pendulum:

$$ \begin{cases} \dot{x}=y\\ \dot{y}=-\sin x-\mu y. \end{cases} $$

The equilibrium position of the pendulum hanging straight down is the fixed point $(0,0)$. It is physically obvious that it is stable in the frictionless case ($\mu=0$), whereas it becomes asymptotically stable if we add friction. (By contrast, the fixed points $(-\pi,0)$ and $(\pi,0)$, which both correspond to the pendulum standing straight up, are unstable, with or without friction.)

In fact, we can visualize what happens by adopting a more abstract point of view. Why do that? Because it suggests a powerful strategy for proving stability of fixed points in systems that have nothing to do with mechanics.


Now that you have visualized what is going on, let us go further. Consider the total energy of the pendulum, $L(x,y)=\frac{1}{2}y^2+(1-\cos x)$ (kinetic plus potential energy). Using the formula for the total derivative of $L$ with respect to $t$ yields

$$ \dot{L}(x,y)=\frac{\text{d} L(x,y)}{\text{d}t} = \frac{\partial L}{\partial x} \frac{\text{d}x}{\text{d} t}+\frac{\partial L}{\partial y} \frac{\text{d}y}{\text{d} t} =\frac{\partial L}{\partial x}\, \dot{x}+\frac{\partial L}{\partial y}\, \dot{y}\,. $$


Notice that one can rewrite $\dot{L}(x,y)$ as the scalar product of the gradient of $L$ and the vector field $\boldsymbol{f}$ evaluated at $\boldsymbol{x}=(x,y)$. This is called the Lie derivative of $L$ with respect to $\boldsymbol{f}$.

For any solution of the frictionless pendulum ($\mu=0$) passing through the point $(x,y)$ at time $t=0$, we have $\dot{x}=y$ and $\dot{y}=-\sin x$. Hence

$$ \dot{L}(x,y)=(\sin x)\, y +y\, (-\sin x)=0. $$

Therefore, $L$ is constant along every solution, as claimed.

Proceeding as before, but now taking into account the friction term $\mu>0$ (that is, $\dot{y}=-\sin x -\mu y$), we find

$$ \dot{L}(x,y)=-\mu y^2 \leq 0. $$

Hence the energy is non-increasing along every solution, and it strictly decreases as long as $y\neq 0$.
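Both computations can be double-checked symbolically. The sketch below (assuming SymPy, and taking $L$ to be the pendulum's total energy $\tfrac{1}{2}y^2+1-\cos x$) recovers $\dot{L}=0$ without friction and $\dot{L}=-\mu y^2$ with friction:

```python
import sympy as sp

x, y, mu = sp.symbols('x y mu', real=True)
L = y**2/2 + 1 - sp.cos(x)   # total energy of the pendulum

def lie_derivative(L, f, g):
    # Time derivative of L along solutions of x' = f, y' = g
    return sp.simplify(sp.diff(L, x)*f + sp.diff(L, y)*g)

print(lie_derivative(L, y, -sp.sin(x)))          # 0 (frictionless case)
print(lie_derivative(L, y, -sp.sin(x) - mu*y))   # -mu*y**2
```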

To sum up, we found a real-valued function $L(x,y)$ attaining its minimum at the fixed point $(0,0)$. When the time derivative of $L$ along every solution equals $0$, the fixed point $(0,0)$ is stable. When this derivative is strictly negative except at $(0,0)$, it is asymptotically stable.
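The same facts can be observed numerically. Here is a sketch using a hand-rolled classical Runge–Kutta (RK4) step, with the pendulum's energy $L(x,y)=\tfrac{1}{2}y^2+1-\cos x$; the step size, time span, and initial condition are arbitrary choices of ours:

```python
import numpy as np

# Integrate the pendulum x' = y, y' = -sin(x) - mu*y with RK4 and
# track the energy L(x, y) = y**2/2 + 1 - cos(x) along the solution.
def rk4(mu, z0, dt=4e-3, steps=5000):
    def f(z):
        x, y = z
        return np.array([y, -np.sin(x) - mu*y])
    z = np.array(z0, dtype=float)
    out = [z.copy()]
    for _ in range(steps):
        k1 = f(z); k2 = f(z + dt/2*k1); k3 = f(z + dt/2*k2); k4 = f(z + dt*k3)
        z = z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
        out.append(z.copy())
    return np.array(out)

def energy(traj):
    return traj[:, 1]**2/2 + 1 - np.cos(traj[:, 0])

E0 = energy(rk4(0.0, (1.0, 0.0)))   # frictionless: L is conserved
E1 = energy(rk4(0.5, (1.0, 0.0)))   # with friction: L decreases
print(E0.max() - E0.min())  # tiny (only the integration error)
print(E1[-1] < E1[0])       # True
```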

Lyapunov functions and Lyapunov’s theorem

Motivated by the pendulum, one may ask if, for systems having nothing to do with mechanics, we can find an energy-like function that decreases along solutions. Such a function is called a Lyapunov function. If such a function exists, then we expect to draw conclusions on stability or asymptotic stability.

To be more precise, consider a system $\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x})$ with a fixed point at $\bar{\boldsymbol{x}}$. (We purposely use a vector notation because what we are about to do works for any finite-dimensional system, not only for a two-dimensional one.)
We have the following plausible theorem.

Lyapunov’s theorem.
Suppose that we can find a continuously differentiable, real-valued function $L(\boldsymbol{x})$ such that $L(\boldsymbol{x})>L(\bar{\boldsymbol{x}})$ for all $\boldsymbol{x}$ in some neighborhood $U$ of $\bar{\boldsymbol{x}}$.
If $\dot{L}(\boldsymbol{x})\leq 0$ for all $\boldsymbol{x}\in U$, then $\bar{\boldsymbol{x}}$ is stable.
If $\dot{L}(\boldsymbol{x})< 0$ for all $\boldsymbol{x}\in U\backslash\{\bar{\boldsymbol{x}}\}$, then $\bar{\boldsymbol{x}}$ is asymptotically stable.
If $\dot{L}(\boldsymbol{x})> 0\thinspace \text{for all}\thinspace \boldsymbol{x}\in U\backslash\{\bar{\boldsymbol{x}}\}$, then $\bar{\boldsymbol{x}}$ is unstable.

When $\bar{\boldsymbol{x}}$ is asymptotically stable, it should be clear that we wish to take the neighborhood $U$ as large as possible. Indeed, $U$ is contained in the basin of attraction of $\bar{\boldsymbol{x}}$. As one expects, the basin of attraction is defined as the set of initial data $\boldsymbol{x}_0$ with the property that $\boldsymbol{x}(t)\to \bar{\boldsymbol{x}}$ as $t\to \infty$, where $\boldsymbol{x}(t)$ is the solution such that $\boldsymbol{x}(0)=\boldsymbol{x}_0$. For the pendulum with friction ($\mu>0$), essentially every solution comes to rest at the bottom: if we identify angles that differ by $2\pi$, the basin of attraction of $(0,0)$ is the whole $xy$-plane except for the exceptional trajectories that approach an upright equilibrium. It is then natural to say that $(0,0)$ is (almost) globally asymptotically stable.

There is a serious drawback with Lyapunov’s theorem: there is no systematic way to construct Lyapunov functions. Sums of squares occasionally work, as the next examples show. Let us mention that we have presented only a tiny part of the method of Lyapunov.

A toy example

We illustrate Lyapunov’s theorem with a simple example.

$$ \begin{cases} \dot{x}= y\\ \dot{y}=-x+\mu x^2 y \end{cases} $$

where $\mu$ is a real parameter. When $\mu=0$, one recognizes the harmonic oscillator (see Chapter 1). But the way we perturb it has no physical interpretation. The only fixed point is $(0,0)$. When $\mu=0$, we know that the phase plane is filled with closed trajectories, namely circles centered at the origin. So the function $L(x,y)=x^2+y^2$ is a natural candidate for a Lyapunov function.


Let’s compute its time derivative along the solutions of the system:

$$ \begin{aligned} \dot{L}(x,y) &= 2x\dot{x}+ 2y\dot{y}\\ & =2xy-2xy+2\mu x^2 y^2\\ &= 2\mu x^2 y^2. \end{aligned} $$

Let’s apply Lyapunov's theorem. If $\mu<0$, then $\dot{L}(x,y)\leq 0$ for all $(x,y)$, so $(0,0)$ is stable; moreover, $\dot{L}$ vanishes only on the coordinate axes, and one can check that no solution other than the fixed point itself stays on them, so $(0,0)$ is in fact (globally) asymptotically stable. If $\mu>0$, then $\dot{L}(x,y)\geq 0$, and $L$ strictly increases along every solution off the coordinate axes; the same kind of argument shows that $(0,0)$ is unstable.
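These two behaviors are easy to watch numerically. A sketch with a hand-rolled RK4 integrator (step size, time span, and the values $\mu=\pm 0.5$ are arbitrary choices of ours), tracking the distance to the origin, that is $\sqrt{L(x,y)}$:

```python
import numpy as np

# Integrate the toy system x' = y, y' = -x + mu*x^2*y with RK4
# and compare the final distance to the origin with the initial one.
def simulate(mu, z0=(1.0, 0.0), dt=1e-2, steps=500):
    def f(z):
        x, y = z
        return np.array([y, -x + mu*x**2*y])
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        k1 = f(z); k2 = f(z + dt/2*k1); k3 = f(z + dt/2*k2); k4 = f(z + dt*k3)
        z = z + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return z

print(np.hypot(*simulate(-0.5)))  # below 1: the solution spirals inward
print(np.hypot(*simulate(+0.5)))  # above 1: the solution spirals outward
```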

Resilience in a model of two competing populations

We come back to the following model:

$$ \begin{cases} \dot{x}=x\, (1-x-a_{12}y) \\ \dot{y}=\rho y\, (1-y-a_{21}x). \end{cases} $$

We saw that there is a coexistence regime which corresponds to the fixed point

$$ (\bar{x},\bar{y})=\left(\frac{1-a_{12}}{1-a_{12}a_{21}},\frac{1-a_{21}}{1-a_{12}a_{21}} \right). $$

If the population densities are exactly given by $\bar{x}$ and $\bar{y}$, they remain so forever. Now suppose that an extrinsic influence suddenly pulls the system away from these values. Will this perturbation be promptly damped by the dynamics? In other words, is the fixed point $(\bar{x},\bar{y})$ asymptotically stable? In ecology, the capacity of an ecosystem to respond to a perturbation and recover quickly is called resilience.


The numerical experiment clearly shows that $(\bar{x},\bar{y})$ is asymptotically stable. Linearization about this point would confirm this observation mathematically, but only locally. What seems to be true is that $(\bar{x},\bar{y})$ is globally asymptotically stable: for any initial densities $(x_0,y_0)$ that are strictly positive, the system rapidly tends to $(\bar{x},\bar{y})$. Can we prove this rigorously? Yes, we can, because there is a Lyapunov function! It is given by the quadratic form

$$ L(x,y)= \rho\left[ a_{21} (x-\bar{x})^2+2a_{12}a_{21} (x-\bar{x})(y-\bar{y})+a_{12}(y-\bar{y})^2 \right]. $$

How did people find it? By trying a sum of squares and adjusting it by trial and error. It is left to the reader to verify that the above function $L$ satisfies the required properties to conclude that $(\bar{x},\bar{y})$ is asymptotically stable.
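A reader who prefers to let a computer do the algebra can check $L$ symbolically. The sketch below (assuming SymPy) computes $\dot{L}$ and verifies the closed form $\dot{L}=-2\rho\left[a_{21}\,x\,(u+a_{12}v)^2+\rho\,a_{12}\,y\,(v+a_{21}u)^2\right]$, where $u=x-\bar{x}$ and $v=y-\bar{y}$; this closed form is our own computation, not from the text, and it is manifestly $\leq 0$ throughout the positive quadrant.

```python
import sympy as sp

x, y, rho, a12, a21 = sp.symbols('x y rho a12 a21', positive=True)

# Coexistence fixed point of the competition model
xbar = (1 - a12) / (1 - a12*a21)
ybar = (1 - a21) / (1 - a12*a21)
u, v = x - xbar, y - ybar

# Vector field
f = x*(1 - x - a12*y)
g = rho*y*(1 - y - a21*x)

# Candidate Lyapunov function and its derivative along solutions
L = rho*(a21*u**2 + 2*a12*a21*u*v + a12*v**2)
Ldot = sp.diff(L, x)*f + sp.diff(L, y)*g

# Conjectured closed form: a combination of squares with negative
# coefficients wherever x, y >= 0
closed = -2*rho*(a21*x*(u + a12*v)**2 + rho*a12*y*(v + a21*u)**2)
print(sp.simplify(Ldot - closed))  # 0
```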