4.4.3 Nonlinear systems, singular controls, and Lie brackets

Let us now investigate whether the preceding results can be extended beyond the class of linear control systems. Regarding the bang-bang principle cited in the previous paragraph, the hope that it might be true for general nonlinear systems is quickly shattered by the following example.

A distinguishing feature of the above example is that the function $\langle p(t), g(x^*(t))\rangle$, whose sign determines the value of the optimal control $u^*(t)$, identically vanishes. Consequently, the Hamiltonian maximization condition alone does not give us enough information to find $u^*$. In problems where this situation occurs on some interval of time, the optimal control on that interval is called *singular*, and the corresponding piece of the optimal state trajectory is called a *singular arc*.

Example 4.1 should not be taken to suggest, however, that we must give up hope of formulating a bang-bang principle for nonlinear systems. After all, we saw in Section 4.4.2 that even for linear systems, to be able to prove that all time-optimal controls are bang-bang we need the normality assumption. It is conceivable that the bang-bang property of time-optimal controls for certain nonlinear systems can be guaranteed under an appropriate nonlinear counterpart of that assumption.

Motivated by these remarks, our goal now is to better formalize the phenomenon of singularity--and reach a deeper understanding of its reasons--for a class of systems that includes the linear systems considered in Section 4.4.2 as well as the nonlinear system (4.58). This class is composed of nonlinear systems affine in controls, defined as
\[
\dot x = f(x) + G(x)u = f(x) + \sum_{i=1}^{m} g_i(x) u_i, \tag{4.59}
\]
where $x \in \mathbb{R}^n$, $f, g_1, \dots, g_m : \mathbb{R}^n \to \mathbb{R}^n$, $G(x)$ is an $n \times m$ matrix with columns $g_1(x), \dots, g_m(x)$, and for the control set we again take the hypercube (4.52). The Hamiltonian for the time-optimal control problem is
\[
H(x, u, p, p_0) = \bigl\langle p, f(x) + G(x)u \bigr\rangle + p_0 .
\]
From the Hamiltonian maximization condition we obtain, completely analogously to Section 4.4.2, that the components of the optimal control are determined by the signs of the functions $\varphi_i(t) := \langle p(t), g_i(x^*(t))\rangle$, $i = 1, \dots, m$. These functions of time (always associated with a specific optimal trajectory) are called the *switching functions*.

In order to simplify calculations, from this point on we assume that $m = 1$, so that the input $u$ is scalar and we have only one switching function
\[
\varphi(t) := \bigl\langle p(t), g(x^*(t)) \bigr\rangle. \tag{4.60}
\]
The optimal control satisfies
\[
u^*(t) = \operatorname{sgn} \varphi(t) =
\begin{cases}
1 & \text{if } \varphi(t) > 0, \\
-1 & \text{if } \varphi(t) < 0.
\end{cases} \tag{4.61}
\]
The canonical equations are $\dot x = f(x) + g(x)u$ and
\[
\dot p = -\bigl(f_x(x) + u\, g_x(x)\bigr)^T p,
\]
where $f_x$ and $g_x$ are the Jacobian matrices of $f$ and $g$. Let us now compute the derivative of $\varphi$:
\[
\begin{aligned}
\dot\varphi &= \langle \dot p, g(x) \rangle + \langle p, g_x(x)\dot x \rangle \\
&= -\bigl\langle \bigl(f_x(x) + u\, g_x(x)\bigr)^T p,\; g(x) \bigr\rangle + \bigl\langle p,\; g_x(x)\bigl(f(x) + g(x)u\bigr) \bigr\rangle \\
&= \bigl\langle p,\; g_x(x) f(x) - f_x(x) g(x) \bigr\rangle.
\end{aligned} \tag{4.62}
\]
We see that $\dot\varphi$ is the inner product of $p$ with the vector $g_x f - f_x g$. Perhaps the vector field $g_x f - f_x g$, which we have not encountered up to now, has some significant meaning?

In general, the *Lie bracket* of two differentiable vector fields $f$ and $g$ is defined as
\[
[f, g](x) := g_x(x) f(x) - f_x(x) g(x).
\]
Note that the definitions of the Lie bracket for matrices (in linear algebra) and for vector fields (in differential geometry) usually follow the opposite sign conventions. The geometric meaning of the Lie bracket--which justifies its alternative name ``commutator"--is as follows (see Figure 4.16). Suppose that, starting at some point $x$, we move along the vector field $f$ for $\varepsilon$ units of time, then along the vector field $g$ for $\varepsilon$ units of time, after that along $-f$ (backward along $f$) for $\varepsilon$ units of time, and finally along $-g$ for $\varepsilon$ units of time. It is straightforward (although quite tedious) to check that for small $\varepsilon$ the resulting motion is approximated, up to terms of higher order in $\varepsilon$, by $\varepsilon^2 [f, g](x)$. In particular, we will return to $x$ if $[f, g] \equiv 0$ in a neighborhood of $x$, in which case we say that $f$ and $g$ *commute*.
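Both the bracket formula and the commuting-flows interpretation are easy to check symbolically. The sketch below (Python with sympy; the two vector fields are chosen here purely for illustration and are not from the text) composes the four exact flows and confirms that the first-order terms cancel while the $\varepsilon^2$ coefficient of the displacement is exactly $[f, g](x)$.

```python
import sympy as sp

x1, x2, a, b, eps = sp.symbols('x1 x2 a b epsilon')
x = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    # [f, g] = g_x f - f_x g  (the sign convention used in the text)
    return (g.jacobian(x) * f - f.jacobian(x) * g).expand()

# Illustrative vector fields (chosen only for this check).
f = sp.Matrix([x2, 0])
g = sp.Matrix([0, x1])
br = lie_bracket(f, g)          # Matrix([[-x1], [x2]])

# Exact time-t flows of f and g (both fields are linear, so this is easy).
def flow_f(pt, t):              # along f: x1 grows at rate x2
    return sp.Matrix([pt[0] + pt[1] * t, pt[1]])

def flow_g(pt, t):              # along g: x2 grows at rate x1
    return sp.Matrix([pt[0], pt[1] + pt[0] * t])

# Move along f, then g, then backward along f, then backward along g.
p0 = sp.Matrix([a, b])
p4 = flow_g(flow_f(flow_g(flow_f(p0, eps), eps), -eps), -eps)
displacement = (p4 - p0).expand()

# First-order terms cancel; the eps^2 coefficient is exactly [f, g](p0).
lead = sp.Matrix([c.coeff(eps, 2) for c in displacement])
print(lead == br.subs({x1: a, x2: b}))  # True
```

Here the leftover terms of order $\varepsilon^3$ and higher are the "higher order" corrections mentioned above; for commuting fields they, too, would vanish.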

We can now write the result of the calculation (4.62) more informatively as
\[
\dot\varphi(t) = \bigl\langle p(t), [f, g](x^*(t)) \bigr\rangle. \tag{4.63}
\]
Coupled with the law (4.61), this equation reveals a fundamental connection between Lie brackets and optimal control.

Lie brackets can help us shed light on the bang-bang property. For a singular optimal control to exist, $\varphi$ must identically vanish on some time interval. In view of (4.60) and (4.63), this can happen only if $p$ stays orthogonal to both $g$ and $[f, g]$ along the corresponding portion of the optimal trajectory. We have seen that in time-optimal problems $p(t) \neq 0$ for all $t$. Thus for planar systems ($n = 2$) we can rule out singularity if $g$ and $[f, g]$ are linearly independent along the optimal trajectory.
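This independence test is mechanical to apply. As a quick illustration (for a hypothetical pendulum-like system chosen here, not taken from the text), one can check symbolically whether $g$ and $[f, g]$ are everywhere linearly independent:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    return (g.jacobian(x) * f - f.jacobian(x) * g).expand()

# Hypothetical planar system (illustration only): a pendulum with torque
# control, xdot1 = x2, xdot2 = -sin(x1) + u.
f = sp.Matrix([x2, -sp.sin(x1)])
g = sp.Matrix([0, 1])

# g and [f,g] are linearly independent iff det[g | [f,g]] is nonzero.
D = sp.Matrix.hstack(g, lie_bracket(f, g)).det()
print(sp.simplify(D))  # 1
```

Since the determinant is identically $1$, a nonzero $p$ can never be orthogonal to both fields, so this system admits no singular arcs.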

If $n > 2$, or if $g$ and $[f, g]$ are not linearly independent, then we have to look at higher derivatives of $\varphi$ and see what it takes for them to vanish as well. Rather than differentiating (4.63) again, let us revisit our derivation of (4.62) and try to see a general pattern in it. Consider an arbitrary differentiable function $h : \mathbb{R}^n \to \mathbb{R}^n$. Following the same calculation steps as in (4.62) and using the definition of the Lie bracket, we easily arrive at
\[
\frac{d}{dt} \bigl\langle p(t), h(x(t)) \bigr\rangle = \bigl\langle p(t), [f + ug, h](x(t)) \bigr\rangle. \tag{4.64}
\]
The formula (4.63) for $\dot\varphi$ is recovered from this result as a special case by setting $h = g$, which gives $[f + ug, g] = [f, g] + u[g, g] = [f, g]$. Now, if we want to compute $\ddot\varphi$, we only need to set $h = [f, g]$ to obtain the following expression in terms of iterated Lie brackets of $f$ and $g$:
\[
\ddot\varphi(t) = \bigl\langle p(t), [f, [f, g]](x(t)) \bigr\rangle + u(t) \bigl\langle p(t), [g, [f, g]](x(t)) \bigr\rangle. \tag{4.65}
\]
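The identity (4.64), and with it the expression (4.65), can be spot-checked symbolically for a concrete case. The sketch below assumes for concreteness the planar system $\dot x_1 = 1 - x_2^2$, $\dot x_2 = u$ of Example 4.1 and takes $h = [f, g]$, confirming that $\frac{d}{dt}\langle p, h(x)\rangle$ agrees with $\langle p, [f + ug, h](x)\rangle$ once $\dot x$ and $\dot p$ are eliminated via the canonical equations.

```python
import sympy as sp

t, u, x1, x2 = sp.symbols('t u x1 x2')
xs = sp.Matrix([x1, x2])

def lie_bracket(a_, b_):
    return (b_.jacobian(xs) * a_ - a_.jacobian(xs) * b_).expand()

# Assumed system of Example 4.1: f = (1 - x2^2, 0), g = (0, 1).
f = sp.Matrix([1 - x2**2, 0])
g = sp.Matrix([0, 1])
F = f + g * u                  # closed-loop right-hand side (u constant in t)
h = lie_bracket(f, g)          # take h = [f, g]

# Trajectory and costate as unknown functions of t.
X = sp.Matrix([sp.Function('X1')(t), sp.Function('X2')(t)])
P = sp.Matrix([sp.Function('P1')(t), sp.Function('P2')(t)])
on_traj = dict(zip(xs, X))

Xdot = F.subs(on_traj)                          # canonical equation for x
Pdot = -(F.jacobian(xs).subs(on_traj)).T * P    # canonical equation for p

# Left side: d/dt <p, h(x)>, with xdot and pdot eliminated.
lhs = sp.diff((P.T * h.subs(on_traj))[0, 0], t)
for var, val in list(zip(X, Xdot)) + list(zip(P, Pdot)):
    lhs = lhs.subs(sp.Derivative(var, t), val)

# Right side: <p, [f + u g, h](x)>.
rhs = (P.T * lie_bracket(F, h).subs(on_traj))[0, 0]

print(sp.simplify(lhs - rhs))  # 0
```

For this particular $h$ both sides reduce to $2 u p_1$, which is exactly the second term of (4.65) here, the first term vanishing for this system.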
A singular optimal control must make $\ddot\varphi$ vanish as well. The control
\[
u = -\frac{\bigl\langle p, [f, [f, g]](x) \bigr\rangle}{\bigl\langle p, [g, [f, g]](x) \bigr\rangle} \tag{4.66}
\]
can potentially be singular if $\langle p, [g, [f, g]](x)\rangle \neq 0$. However, it should meet the magnitude constraint $|u| \le 1$. If, on the other hand, the relation
\[
\bigl\langle p(t), [g, [f, g]](x(t)) \bigr\rangle = 0
\]
holds for all $t$, then (4.66) is not defined, and by (4.65) a singular control can exist only if the function $\langle p(t), [f, [f, g]](x(t))\rangle$ vanishes as well. To investigate the possibility that this last function does vanish, we need to consider its derivative, given by (4.64) with $h = [f, [f, g]]$, and so on.

We are now in a position to gain a better insight into our earlier observations by using the language of Lie brackets.

LINEAR SYSTEMS (SECTION 4.4.2). In the single-input case, we have $f(x) = Ax$ and $g(x) = b$. Calculating the relevant Lie brackets, we obtain $[f, g] = -Ab$, $[f, [f, g]] = A^2 b$, $[g, [f, g]] = 0$, $[f, [f, [f, g]]] = -A^3 b$, etc. A crucial consequence of linearity is that iterated Lie brackets containing two $g$'s are 0, which makes the derivatives of the switching function independent of $u$. It is easy to see that $\varphi$ cannot identically vanish (together with its derivatives) if the vectors $b, Ab, \dots, A^{n-1}b$ span $\mathbb{R}^n$, which is precisely the controllability condition.
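These bracket identities are straightforward to confirm symbolically; here is a minimal sketch with a concrete pair $(A, b)$ chosen arbitrarily for the check:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

def lie_bracket(f, g):
    return (g.jacobian(x) * f - f.jacobian(x) * g).expand()

# An arbitrary concrete controllable pair for the check.
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [-1, -2, -3]])
b = sp.Matrix([0, 0, 1])

f, g = A * x, sp.Matrix(b)                   # f(x) = Ax, g(x) = b

fg = lie_bracket(f, g)
assert fg == -A * b                          # [f, g] = -Ab
assert lie_bracket(f, fg) == A**2 * b        # [f, [f, g]] = A^2 b
assert lie_bracket(g, fg) == sp.zeros(3, 1)  # [g, [f, g]] = 0 (two g's)
# b, Ab, A^2 b span R^3, so the switching function cannot vanish identically.
assert sp.Matrix.hstack(b, A*b, A**2*b).rank() == 3
```

The last assertion is the controllability condition for this $(A, b)$; any constant vector field $g(x) = b$ has zero Jacobian, which is why every bracket containing two $g$'s collapses to 0.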

EXAMPLE 4.1 REVISITED. For the system (4.58), we have $f(x) = \begin{pmatrix} 1 - x_2^2 \\ 0 \end{pmatrix}$, $g(x) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. The first Lie bracket is
\[
[f, g](x) = g_x(x) f(x) - f_x(x) g(x) = \begin{pmatrix} 2x_2 \\ 0 \end{pmatrix}.
\]
On the $x_1$-axis, where the singular optimal trajectory lives, $[f, g]$ vanishes and so $g$ and $[f, g]$ do not span $\mathbb{R}^2$. In fact, $[f, g](x) = 0$ exactly when $x_2 = 0$. The next Lie bracket that we should then calculate is
\[
[g, [f, g]](x) = \begin{pmatrix} 2 \\ 0 \end{pmatrix}.
\]
Since $[f, [f, g]] = 0$, (4.65) gives $\ddot\varphi = 2u p_1$. All higher-order Lie brackets are obviously 0, hence a singular control must satisfy $2u p_1 \equiv 0$; as $p_1 \neq 0$ along the singular arc, this forces $u \equiv 0$, which is exactly the singular control of Example 4.1. We see that all information about the singularity is indeed encoded in the Lie brackets.
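The bracket computations for this example can be reproduced mechanically (assuming, as in Example 4.1, the system $\dot x_1 = 1 - x_2^2$, $\dot x_2 = u$ for (4.58)):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    return (g.jacobian(x) * f - f.jacobian(x) * g).expand()

# Assumed system (4.58): xdot1 = 1 - x2**2, xdot2 = u, i.e. xdot = f + g*u.
f = sp.Matrix([1 - x2**2, 0])
g = sp.Matrix([0, 1])

fg = lie_bracket(f, g)
print(fg)                    # [f, g] = (2*x2, 0): vanishes on the x1-axis
print(lie_bracket(g, fg))    # [g, [f, g]] = (2, 0)
print(lie_bracket(f, fg))    # [f, [f, g]] = (0, 0)
```

The output matches the brackets above: $[f,g]$ degenerates exactly on the singular arc, and the only nonvanishing iterated bracket is $[g,[f,g]]$, whose pairing with $p$ multiplies $u$ in (4.65).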

It is worth noting that singular controls are not necessarily complex. The optimal control $u^* \equiv 0$ in Example 4.1 is actually quite simple. For single-input planar systems $\dot x = f(x) + g(x)u$, $x \in \mathbb{R}^2$, $|u| \le 1$, with $f$ and $g$ real analytic, it can be shown that all time-optimal trajectories are concatenations of a *finite* number of ``bang" pieces (each corresponding to either $u \equiv 1$ or $u \equiv -1$) and real analytic singular arcs. It is natural to ask whether a similar claim holds for other optimal control problems in $\mathbb{R}^2$ or for time-optimal control problems in $\mathbb{R}^3$. We are about to see that these two questions are related and that the answer to both is negative.