
3.4.5 Critique of the variational approach and preview of the maximum principle

The variational approach presented in Sections 3.4.1-3.4.3 has led us, quite quickly, to the necessary conditions for optimality expressed by the canonical equations and the Hamiltonian maximization property (we did not actually prove the latter property, but we will see that it is indeed correct). While it helps us build intuition for what the correct statement of the maximum principle should look like, the variational approach has several limitations which, upon closer inspection, turn out to be quite severe.

CONTROL SET. Recall that our starting point was to consider perturbed controls of the form (3.24). Such perturbations make sense when the values of $ u^*$ are interior points of the control set $ U$ . This may not be the case, though, if $ U$ has a boundary, and bounded (or even finite) control sets are common in control applications. As we will see in the next chapter, the statement that the function $ u\mapsto H(t,x^*(t),u,p^*(t))$ must have a maximum at $ u^*(t)$ is true even in such situations and, moreover, this maximum is in fact global. However, we cannot hope to establish this fact using the variational approach, because $ \left.{H}_{u}\right\vert _{*}$ need not be 0 when the maximum is achieved at a boundary point of $ U$ .
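To see the difficulty in the simplest possible setting, consider a hypothetical scalar example (not taken from the text): let the Hamiltonian be $ H=pu$ and the control set $ U=[-1,1]$ . For $ p\ne0$ the maximum of $ H$ over $ U$ is attained at the boundary point $ u=\mathrm{sign}(p)$ , where

$\displaystyle \left.{H}_{u}\right\vert _{u=\mathrm{sign}(p)}=p\ne0,$

so the stationarity condition $ H_u=0$ fails to detect the maximizer.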

FINAL STATE. In the preceding we treated the case when the final state $ x_f$ is free, but we know (see Section 3.3.3) that in general we may have some target set $ S$ . Consider, for example, the case of a fixed endpoint: $ S=\{t_1\}\times \{x_1\}$ . Then the control perturbation $ \xi $ is no longer arbitrary, since the resulting state perturbation $ \eta $ must satisfy $ \eta(t_1)=0$ . In view of the fact that $ \eta $ and $ \xi $ are related by the system (3.28) with $ \eta(t_0)=0$ , it is easy to show that admissible perturbations $ \xi $ must satisfy the constraint

$\displaystyle \int_{t_0}^{t_1}\Phi_*(t_1,t)B(t)\xi(t)\,dt=0$

where $ \Phi_*(\cdot,\cdot)$ is the transition matrix for $ A_*(\cdot)$ in (3.28). The second equation in (3.38) needs to hold only for admissible perturbations, and not for all $ \xi $ . This condition is no longer strong enough to let us conclude that $ \left.{H}_{u}\right\vert _{*}\equiv0$ . We see that the prospects of extending the variational approach beyond free-endpoint problems do not look very promising.
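For completeness, here is the short calculation behind the above constraint. Since $ \eta $ satisfies $ \dot\eta=A_*(t)\eta+B(t)\xi$ with $ \eta(t_0)=0$ , the variation of constants formula gives

$\displaystyle \eta(t_1)=\int_{t_0}^{t_1}\Phi_*(t_1,t)B(t)\xi(t)\,dt,$

and the fixed-endpoint requirement $ \eta(t_1)=0$ is exactly the displayed integral constraint.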

DIFFERENTIABILITY. When developing the first variation, we were tacitly assuming that $ H$ is differentiable with respect to $ u$ (as well as $ x$ ). Since the Hamiltonian $ H$ is defined via (3.29), both $ f$ and $ L$ must thus be differentiable with respect to $ u$ . The reader can readily check that differentiability of $ f$ with respect to $ u$ was not one of the assumptions we made in Section 3.3.1 to ensure existence and uniqueness of solutions for our control system. In other words, the variational approach requires extra regularity assumptions to be imposed on the system. Having to assume differentiability of $ L$ with respect to $ u$ is also undesirable, as it rules out otherwise quite reasonable cost functionals like $ J(u)=\int_{t_0}^{t_1} \vert u(t)\vert \,dt$ . Furthermore, the analysis based on the second variation--which is needed to distinguish between a minimum and a maximum--involves second-order partial derivatives of $ H$ with respect to $ u$ . It is clear that the variational approach would take us on a path of overly restrictive regularity assumptions. Instead, we would like to establish the Hamiltonian maximization property more directly, not by working with derivatives.
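To make the nondifferentiability issue concrete, consider a hypothetical scalar example with $ \dot x=u$ and $ L=\vert u\vert$ , so that (with $ H$ defined as in (3.29)) the Hamiltonian is

$\displaystyle H=pu-\vert u\vert.$

This function is not differentiable in $ u$ at $ u=0$ , yet for $ \vert p\vert<1$ its unique maximum over $ u\in\mathbb{R}$ is attained precisely at that kink; maximization makes perfect sense even where the derivative does not exist.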

CONTROL PERTURBATIONS. When considering the control and state perturbations as in (3.24) and (3.25) with $ \alpha$ near 0, we are allowing only small deviations in both $ x$ and $ u$ . For the system $ \dot x=u$ , this would correspond exactly to the notion of a weak minimum from calculus of variations. However, as we already discussed as early as Section 2.2.1 (see in particular Example 2.1), we would like to have a larger family of control perturbations. More precisely, we want to capture optimality with respect to control perturbations that may be large, as long as the corresponding state trajectories are close to the given one. For example, Figure 3.6 illustrates a particular control perturbation (for controls that switch between only two values) which is very reasonable but falls outside the scope of the variational approach. As we will soon see, working with a richer perturbation family is crucial for obtaining sharper necessary conditions for optimality.
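One way to formalize such a perturbation (previewing a construction that will appear in the proof of the maximum principle) is the so-called needle perturbation: fix a time $ \tau$ , a value $ w\in U$ , and for small $ \varepsilon>0$ define

$\displaystyle u_\varepsilon(t)=\begin{cases}w &\text{if } t\in[\tau,\tau+\varepsilon),\\ u^*(t) &\text{otherwise.}\end{cases}$

The deviation of $ u_\varepsilon$ from $ u^*$ need not be small in magnitude, but it acts only over a short time interval, so the resulting state trajectory remains close to $ x^*$ .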

Figure: A control perturbation

In summary, while the basic form of the necessary conditions provided by the maximum principle will be similar to what we obtained using the variational approach, several shortcomings of the variational approach must be overcome in order to obtain a more satisfactory result. Specifically, we need to accommodate constraints on the control set, constraints on the final state, and weaker differentiability assumptions. A less restrictive notion of ``closeness'' of controls will be the key to achieving these goals. Borrowing a colorful expression from [PB94], we can describe the task ahead of us as ``the cutting of the umbilical cord between the calculus of variations and optimal control theory.'' The maximum principle is a very nontrivial extension of the variational approach, and was developed many years later. The proof of the maximum principle is quite different from the argument given in this section; in particular, it is much more geometric in nature. We are now ready, in terms of both technical preparation and conceptual motivation, to tackle this proof in the next chapter.

Daniel 2010-12-20