
# Definition of stability and stability conditions

Because of its feedback structure, a control system can become unstable; for example, oscillations with increasing amplitude can occur in the signals. In section 5.1 a signal-based definition of stability is established, which relies on the boundedness of the input-output signals. In this section we focus on a definition of stability for linear systems that is independent of the input-output signals. First the following definition is introduced:

A linear time-invariant system according to Eq. (3.3) is called (asymptotically) stable, if its weighting function $g(t)$ decays to zero, i.e. if

$$\lim_{t \to \infty} g(t) = 0 \qquad (5.1)$$

is valid. If the modulus of the weighting function increases with increasing to infinity, the system is called unstable.

A special case is a system where the modulus of the weighting function does not exceed a finite value as $t \to \infty$, or for which it approaches a finite value. Such systems are called critically stable. Examples are undamped S and I elements, see sections 4.4.2 and 4.4.7.
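As a numerical illustration of these three cases, the following sketch samples the weighting functions of three hypothetical first-order examples; the concrete functions $e^{-t}$ (decaying), $1$ (bounded constant) and $e^{t}$ (growing) are assumptions chosen for illustration, not taken from the text:

```python
import math

# Hypothetical sampled weighting functions (impulse responses) on t in [0, 10].
# The time grid and the three example functions are assumptions for illustration.
ts = [0.1 * k for k in range(101)]

g_stable   = [math.exp(-t) for t in ts]   # decays to zero       -> asymptotically stable
g_critical = [1.0 for t in ts]            # bounded, no decay    -> critically stable (I element)
g_unstable = [math.exp(t) for t in ts]    # grows without bound  -> unstable

print(abs(g_stable[-1]))    # close to zero at t = 10
print(abs(g_critical[-1]))  # stays at 1
print(abs(g_unstable[-1]))  # very large
```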

This definition shows that stability is a system property for linear systems. If Eq. (5.1) is valid, then there exists no initial condition and no bounded input signal which drives the output to infinity. The definition can be applied directly to the stability analysis of linear systems by determining the value of the weighting function for $t \to \infty$. If this limit exists, and if it is zero, the system is stable. However, in most cases the weighting function is not given in an explicit analytic form, and it is therefore laborious to determine the final value. The transfer function $G(s)$ of a system is often known, and as it is the Laplace transform of the weighting function $g(t)$, there is an equivalent stability condition for $G(s)$ corresponding to Eq. (5.1). The analysis of this condition - see section A.5 - shows that for the stability analysis it is sufficient to check the poles of the transfer function of the system, that is, the roots of its characteristic equation

$$a_n s^n + a_{n-1} s^{n-1} + \dots + a_1 s + a_0 = 0 \qquad (5.2)$$
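This pole-based condition can be checked numerically. The following sketch assumes an example third-order system with characteristic polynomial $(s+1)(s+2)(s+3)$ (a hypothetical system, not one from the text), computes the roots of the characteristic equation with NumPy, and tests whether they all lie in the left-half $s$ plane:

```python
import numpy as np

# Assumed example: characteristic equation
#   s^3 + 6 s^2 + 11 s + 6 = 0,  i.e.  (s+1)(s+2)(s+3) = 0,
# with coefficients given in descending powers of s.
coeffs = [1.0, 6.0, 11.0, 6.0]

poles = np.roots(coeffs)                 # roots of the characteristic equation
print(poles)                             # the poles -1, -2, -3 (in some order)

# Stability condition: all poles in the open left-half s plane
print(all(p.real < 0 for p in poles))    # True -> asymptotically stable
```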

Now the following necessary and sufficient stability conditions can be formulated:

a)
Asymptotic stability

A linear system is asymptotically stable if and only if, for the roots $s_i$ of its characteristic equation,

$$\operatorname{Re}\{s_i\} < 0 \qquad \text{for all } i = 1, 2, \dots, n$$

is valid, or in other words, if all poles of its transfer function lie in the left-half $s$ plane.
b)
Instability

A linear system is unstable if and only if at least one pole of its transfer function lies in the right-half $s$ plane, or if at least one multiple pole (multiplicity greater than one) lies on the imaginary axis of the $s$ plane.

c)
Critical stability

A linear system is critically stable, if at least one simple pole exists on the imaginary axis, no pole of the transfer function lies in the right-half $s$ plane, and in addition no multiple poles lie on the imaginary axis.
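Conditions a) to c) can be combined into a small classifier. The sketch below is an illustration under stated assumptions (the function name, the numerical tolerances, and the example polynomials are all hypothetical): it computes the roots of the characteristic equation and applies the three conditions in turn.

```python
import numpy as np

def classify_stability(coeffs, tol=1e-9):
    """Classify a linear system from the coefficients of its characteristic
    equation (descending powers of s), following conditions a)-c).
    A sketch; the tolerance `tol` for 'on the imaginary axis' is an assumption."""
    poles = np.roots(coeffs)
    # b) at least one pole in the right-half s plane -> unstable
    if any(p.real > tol for p in poles):
        return "unstable"
    axis = [p for p in poles if abs(p.real) <= tol]
    # a) all poles strictly in the left-half s plane -> asymptotically stable
    if not axis:
        return "asymptotically stable"
    # b) a multiple pole on the imaginary axis -> unstable
    for i, p in enumerate(axis):
        if any(abs(p - q) <= 1e-6 for j, q in enumerate(axis) if i != j):
            return "unstable"
    # c) only simple poles on the imaginary axis, none in the right half
    return "critically stable"

print(classify_stability([1, 3, 2]))   # poles -1, -2          -> asymptotically stable
print(classify_stability([1, 0, 1]))   # simple poles at +-j   -> critically stable
print(classify_stability([1, 0, 0]))   # double pole at s = 0  -> unstable
print(classify_stability([1, -1]))     # pole at s = +1        -> unstable
```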

It has been shown above that the stability of linear systems can be assessed from the distribution of the roots of the characteristic equation in the $s$ plane (Figure 5.2). For control problems there is often no need to know these roots with high precision; for a stability analysis it is sufficient to know whether or not all roots of the characteristic equation lie in the left-half plane. Therefore, simple criteria are available for easily checking stability, called stability criteria. These are partly algebraic and partly graphical in form.