next up previous contents index
Next: 4.3.1 Changes of variables Up: 4. The Maximum Principle Previous: 4.2.10 Transversality condition   Contents   Index

4.3 Discussion of the maximum principle

Our main objective in the remainder of this chapter is to gain a better understanding of the maximum principle by discussing and interpreting its statement and by applying it to specific classes of problems. We begin this task here by making a few technical remarks.

One should always remember that the maximum principle provides necessary conditions for optimality. Thus it only helps single out optimal control candidates, each of which needs to be further analyzed to determine whether it is indeed optimal. The reader should also keep in mind that an optimal control may not even exist (the existence issue will be addressed in detail in Section 4.5). For many problems of interest, however, the optimal solution does exist and the conditions provided by the maximum principle are strong enough to help identify it, either directly or after a routine additional elimination process. We already saw an example supporting this claim in Exercise 4.1 and will study other important examples in Section 4.4.

When stating the maximum principle, we ignored the distinction between different kinds of local minima by working with a globally optimal control $u^*$, i.e., by assuming that $J(u^*)\le J(u)$ for all other admissible controls $u$ that produce state trajectories satisfying the given endpoint constraint. However, it is clear from the proof that global optimality was not used. The control perturbations used in the proof produced controls $u$ which differ from $u^*$ on a small interval of length of order $\varepsilon$, making the $\mathcal L_1$ norm of the difference, $\int_{t_0}^{t_f}|u(t)-u^*(t)|\,dt$, small for small $\varepsilon$. The resulting perturbed trajectory $x$, on the other hand, was close to the optimal trajectory $x^*$ in the sense of the $0$-norm, i.e., $\max_{t_0\le t\le t_f}|x(t)-x^*(t)|$ was small for small $\varepsilon$ (as is clear from the calculations given in Sections 4.2.2-4.2.4). It can be shown that the conditions of the maximum principle are in fact necessary for local optimality when closeness in the $(x,u)$-space is measured by the $0$-norm for $x$ and the $\mathcal L_1$ norm for $u$; we stress that the Hamiltonian maximization condition (statement 2 of the maximum principle) remains global. At this point it may be instructive to think of the system $\dot x=u$ as an example and to recall the discussion in Section 3.4.5 related to Figure 3.6. In that context, the notion of a local minimum with respect to the norm just described lies in between the notions of weak and strong minima; indeed, weak minima are defined with respect to the $0$-norm for both $x$ and $u$, while strong minima are defined with respect to the $0$-norm for $x$ with no constraints on $u$. For strong minima, the necessary conditions provided by the maximum principle are still valid. This is not the case for weak minima, because in a needle perturbation the control value $w$ is no longer arbitrary: it must be close to $u^*(b)$.
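The norm comparison above can be made concrete with a minimal numerical sketch for the system $\dot x=u$. Assumptions (for illustration only, not from the text): $u^*\equiv 0$ on $[0,1]$, so $x^*\equiv 0$, and the needle perturbation takes the value $w=5$ on an interval of width $\varepsilon$. The sketch confirms that both the $\mathcal L_1$ norm of $u-u^*$ and the $0$-norm of $x-x^*$ are of order $\varepsilon|w|$, even though $\max_t|u(t)-u^*(t)|=|w|$ is not small.

```python
# Needle perturbation for the scalar system xdot = u, with u* = 0 and x* = 0
# on [0, 1]. Illustrative choices: needle value w applied on [0, eps).

def norms_of_perturbation(eps, w, n=100000):
    """Return (L1 norm of u - u*, sup norm of x - x*) for the needle-
    perturbed control, using a forward Euler integration of xdot = u."""
    dt = 1.0 / n
    l1 = 0.0    # accumulates integral of |u(t) - u*(t)| dt
    x = 0.0     # perturbed state, x(0) = x*(0) = 0
    sup = 0.0   # running max of |x(t) - x*(t)|
    for i in range(n):
        t = i * dt
        u = w if t < eps else 0.0   # needle on [0, eps), u* = 0 elsewhere
        l1 += abs(u) * dt
        x += u * dt                 # Euler step for xdot = u
        sup = max(sup, abs(x))
    return l1, sup

l1, sup = norms_of_perturbation(eps=0.01, w=5.0)
# Both norms come out close to eps * |w| = 0.05, while max |u - u*| = 5.
```

This is why the needle perturbation is admissible for strong (and $\mathcal L_1$-local) minima but not for weak ones: shrinking $\varepsilon$ shrinks both computed norms, but never the $0$-norm of $u-u^*$.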

The statement of the maximum principle contains the condition (justified in Section 4.2.8) that $(p_0^*,p^*(t))\ne (0,0)$ for all $t$. In fact, since the origin in $\mathbb{R}^{n+1}$ is an equilibrium of the linear adjoint equation (4.31), if $p_0^*$ and $p^*(t)$ vanish for some $t$ then they must vanish for all $t$. Thus, the above condition could be equivalently stated as $(p_0^*,p^*(t))\ne (0,0)$ for some $t$. This condition is sometimes called the nontriviality condition, because with $(p_0^*,p^*)\equiv (0,0)$ all the statements of the maximum principle are trivially satisfied. In some cases, it is possible to show that the adjoint vector itself, $p^*(t)$, is nonzero for all $t$. For example, suppose that the running cost $L$ is everywhere nonzero (this is true, for instance, in time-optimal control problems, where $L\equiv 1$). The Hamiltonian satisfies $\left.H\right\vert _{*}=\langle p^*,\left.f\right\vert _{*}\rangle +p_0^*\left.L\right\vert _{*}\equiv 0$ (by statement 3 of the maximum principle). If $p^*(t)=0$ for some $t$, then we have $p_0^*\left.L\right\vert _{*}(t)=0$, hence $p_0^*=0$, and we reach a contradiction with the nontriviality condition. We will give another example later involving a terminal cost; see Exercise 4.7 below. As for the abnormal multiplier $p_0^*$, since it is the vertical coordinate of the normal to the separating hyperplane, $p_0^*=0$ corresponds to the case when the separating hyperplane is vertical (and cannot be tilted). The projection of such a hyperplane onto the $x$-space is a hyperplane in $\mathbb{R}^n$, and all perturbed controls must bring the state $x$ to the same side of this projected hyperplane. In the majority of control problems this does not happen and we can set $p_0^*=-1$. We also know that the separating hyperplane cannot be vertical, hence $p_0^*$ cannot be 0, in the free-endpoint case (see the end of Section 4.2.10).
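The linearity argument behind the nontriviality condition (a solution of a linear ODE that vanishes at one time vanishes identically, and one that is nonzero at one time is nonzero for all time) can be checked on a toy numerical sketch. The $2\times 2$ system below is an arbitrary illustrative choice standing in for the adjoint equation, not an equation from the text.

```python
# Toy check of uniqueness for a linear ODE pdot = B(t) p: the zero solution
# stays identically zero, and a solution nonzero at the initial time stays
# nonzero. B(t) = [[0, 1], [-1, -t]] is an arbitrary illustrative choice.

def simulate(p0, T=1.0, n=100000):
    """Euler-integrate pdot = B(t) p from initial condition p0 over [0, T]."""
    dt = T / n
    p = list(p0)
    for i in range(n):
        t = i * dt
        dp0 = p[1]                 # first row of B(t) p
        dp1 = -p[0] - t * p[1]     # second row of B(t) p
        p = [p[0] + dt * dp0, p[1] + dt * dp1]
    return p

p_zero = simulate([0.0, 0.0])   # zero initial condition: stays exactly zero
p_one = simulate([1.0, 0.0])    # nonzero initial condition: stays nonzero
```

By the same reasoning applied to the adjoint equation (4.31), checking $(p_0^*,p^*(t))\ne(0,0)$ at a single time is equivalent to checking it at every time.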

Daniel 2010-12-20