

2.5.1 Integral constraints

Suppose that we augment the Basic Calculus of Variations Problem with an additional constraint of the form

$\displaystyle C(y):=\int_a^b M(x,y(x),y'(x))dx=C_0$ (2.44)

where $ C$ stands for the ``constraint'' functional, $ M$ is a function from the same class as $ L$ , and $ C_0$ is a given constant. In other words, the problem is to minimize the functional given by (2.9) over $ \mathcal C^1$ curves $ y(\cdot)$ satisfying the boundary conditions (2.8) and subject to the integral constraint (2.44). For simplicity, we are considering the case of only one constraint. We already saw examples of such constrained problems, namely, Dido's problem and the catenary problem.

Assume that a given curve $ y$ is an extremum. What follows is a heuristic argument motivated by our earlier derivation of the first-order necessary condition for constrained optimality in the finite-dimensional case (involving Lagrange multipliers). Let us consider perturbed curves of the familiar form

$\displaystyle y+\alpha\eta.$

To be admissible, the perturbation $ \eta $ must preserve the constraint (in addition to vanishing at the endpoints as before). In other words, we must have $ C(y+\alpha\eta)=C_0$ for all $ \alpha$ sufficiently close to 0. In terms of the first variation of $ C$ , this property is easily seen to imply that

$\displaystyle \left.\delta C\right\vert _{y}(\eta)=0.$ (2.45)
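
The implication deserves a one-line justification. Define the scalar function $ g(\alpha):=C(y+\alpha\eta)$ ; since $ g$ is constant (equal to $ C_0$ ) for all $ \alpha$ near 0, its derivative at $ \alpha=0$ vanishes, and this derivative is by definition the first variation:

$\displaystyle 0=g'(0)=\left.\frac{d}{d\alpha}\right\vert _{\alpha=0}C(y+\alpha\eta)=\left.\delta C\right\vert _{y}(\eta).$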

Repeating the same calculation as in our original derivation of the Euler-Lagrange equation, we obtain from this that

$\displaystyle \int_a^b\Big({M}_{y}(x,y(x),y'(x))- \frac d{dx}{M}_{y'}(x,y(x),y'(x))\Big)\eta(x)dx=0.$ (2.46)

Now our basic first-order necessary condition (1.37) implies that for every $ \eta $ satisfying (2.46), we must have

$\displaystyle \left.\delta J\right\vert _{y}(\eta)=\int_a^b\Big({L}_{y}(x,y(x),y'(x))-\frac d{dx}{L}_{y'}(x,y(x),y'(x))\Big)\eta(x)dx=0.$

This conclusion can be summarized as follows:

$\displaystyle \int_a^b\left({L}_{y}-\frac d{dx}{L}_{y'}\right)\eta(x)dx=0 \qquad \forall\,\eta\ $    such that $\displaystyle \int_a^b\left({M}_{y}-\frac d{dx}{M}_{y'}\right)\eta(x)dx=0.$ (2.47)

The reader will note that (2.47) is quite similar to the condition (1.21) on page [*]. It also has a similar consequence, namely, that there exists a constant $ \lambda^*$ (a Lagrange multiplier) such that

$\displaystyle \left({L}_{y}-\frac d{dx}{L}_{y'}\right)+\lambda^*\left({M}_{y}-\frac d{dx}{M}_{y'}\right)=0$ (2.48)

for all $ x\in [a,b]$ . Rearranging terms, we see that this is equivalent to

$\displaystyle {(L+\lambda^* M)}_{y}=\frac d{dx}{(L+\lambda^* M)}_{y'}$

which amounts to saying that the Euler-Lagrange equation holds for the augmented Lagrangian $ L+\lambda^* M$ . In other words, $ y$ is an extremal of the augmented cost functional

$\displaystyle (J+\lambda^* C)(y)=\int_a^b \left(L(x,y(x),y'(x))+\lambda^* M(x,y(x),y'(x))\right)dx.$ (2.49)
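
To see the necessary condition at work on a concrete instance, the following sketch uses sympy on an illustrative problem not taken from the text: minimize $ \int_0^1 (y')^2dx$ subject to $ \int_0^1 y\,dx=C_0$ and $ y(0)=y(1)=0$ . It forms the augmented Lagrangian, solves its Euler-Lagrange equation, and then determines the integration constants and $ \lambda^*$ from the boundary conditions and the constraint.

```python
import sympy as sp

x, lam, C0 = sp.symbols('x lambda_star C_0')
y = sp.Function('y')

# Illustrative problem (not from the text): minimize J = int_0^1 (y')^2 dx
# subject to C = int_0^1 y dx = C0 and y(0) = y(1) = 0.
# Augmented Lagrangian L + lambda* M with L = (y')^2 and M = y:
L_aug = sp.Derivative(y(x), x)**2 + lam * y(x)

# Euler-Lagrange equation for the augmented Lagrangian
# (here it reduces to lambda* - 2 y'' = 0)
el = sp.euler_equations(L_aug, y(x), x)[0]
sol = sp.dsolve(el, y(x)).rhs          # quadratic in x, constants C1, C2

# Fix the two integration constants from the boundary conditions
C1, C2 = sp.symbols('C1 C2')
consts = sp.solve([sol.subs(x, 0), sol.subs(x, 1)], [C1, C2])
sol = sol.subs(consts)

# Fix lambda* from the integral constraint int_0^1 y dx = C0
lam_val = sp.solve(sp.integrate(sol, (x, 0, 1)) - C0, lam)[0]
extremal = sp.simplify(sol.subs(lam, lam_val))
print(extremal)
```

As expected, once the two boundary conditions and the integral constraint are imposed, a unique extremal remains (here a parabola proportional to $ C_0$ ).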

A closer inspection of the above argument reveals, however, that we left a couple of gaps. First, we did not justify the step of passing from (2.47) to (2.48). In the finite-dimensional case, we had to make the corresponding step of passing from (1.21) to (1.22) which then gave (1.25); we would need to construct a similar reasoning here, treating the integrals in (2.47) as inner products of $ \eta $ with the functions in parentheses (inner products in $ \mathcal L_2$ ). Second, there was actually a more serious logical flaw: the condition (2.45) is necessary for the perturbation $ \eta $ to preserve the constraint (2.44), but we do not know whether it is sufficient. Without this sufficiency, the validity of (2.47) is in serious doubt. In the finite-dimensional case, to reach (1.21) we used the fact that (1.20) was a necessary and sufficient condition for $ d$ to be a tangent vector; we did not, however, give a proof of the sufficiency part (which is not trivial).

It is also important to recall that in the finite-dimensional case studied in Section 1.2.2, the first-order necessary condition for constrained optimality in terms of Lagrange multipliers is valid only when an additional technical assumption holds, namely, the extremum must be a regular point of the constraint surface. This assumption is needed to rule out degenerate situations (see Exercise 1.2); in fact, it enables precisely the sufficiency part mentioned in the previous paragraph. It turns out that in the present case, a degenerate situation arises when the test curve $ y$ satisfies the constraint but all nearby curves violate it. This can happen if $ y$ is an extremal of the constraint functional $ C$ , i.e., satisfies the Euler-Lagrange equation for $ M$ . For example, consider the length constraint $ C(y):=\int_0^1 \sqrt{1+(y')^2}dx=1$ together with the boundary conditions $ y(0)=y(1)=0$ . Clearly, $ y\equiv 0$ is the only admissible curve (it is the unique global minimum of the constraint functional), hence it automatically solves our constrained problem no matter what $ J$ is. The second integral in (2.47) is 0 for every $ \eta $ since $ y$ is an extremal of $ C$ . Thus if (2.47) were true, it would imply that $ y$ must be an extremal of $ J$ , but as we just explained this is not necessary. We see that if we hope for (2.47) to be a necessary condition for constrained optimality, we need to assume that $ y$ is not an extremal of $ C$ , so that there exist nearby curves at which $ C$ takes values both larger and smaller than $ C_0$ .
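
The claim that $ y\equiv 0$ is an extremal of this constraint functional can be verified directly. For $ M(x,y,y')=\sqrt{1+(y')^2}$ we have

$\displaystyle {M}_{y}=0,\qquad {M}_{y'}=\frac{y'}{\sqrt{1+(y')^2}},$

and along $ y\equiv 0$ (so that $ y'\equiv 0$ ) this gives $ {M}_{y}-\frac d{dx}{M}_{y'}=0-\frac d{dx}0=0$ , i.e., the Euler-Lagrange equation for $ M$ holds.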

We can now conjecture the following first-order necessary condition for constrained optimality: If $ y(\cdot)$ is an extremum for the constrained problem and is not an extremal of the constraint functional $ C$ (i.e., does not satisfy the Euler-Lagrange equation for $ M$ ), then it is an extremal of the augmented cost functional (2.49) for some $ \lambda^*\in\mathbb{R}$ . We can also state this condition more succinctly, combining the nondegeneracy assumption and the conclusion into one statement: $ y$ must satisfy the Euler-Lagrange equation for $ \lambda^*_0L+\lambda^* M$ , where $ \lambda^*_0$ and $ \lambda^*$ are constants (not both 0). Indeed, this means that either $ \lambda^*_0=0$ and $ y$ is an extremal of $ C$ , or $ \lambda^*_0\ne 0$ and $ y$ is an extremal of $ J+(\lambda^*/\lambda^*_0) C$ . The number $ \lambda^*_0$ is called the abnormal multiplier (it also has an analog in optimal control which will appear in Section 4.1).

It turns out that this conjecture is correct. However, rather than fixing the above faulty argument, it is easier to give an alternative proof by proceeding along the lines of the second proof in Section 1.2.2.


\begin{Exercise}
Write down a correct proof of the ...
...ing the Inverse Function Theorem.
\end{Exercise}

In the unconstrained case, as we noted earlier, the general solution of the second-order Euler-Lagrange differential equation depends on two arbitrary constants whose values are to be determined from the two boundary conditions. Here we have one additional parameter $ \lambda^*$ but also one additional constraint (2.44), so generically we still expect to obtain a unique extremal.

The generalization of the above necessary condition to problems with several constraints is straightforward: we need one Lagrange multiplier for each constraint (cf. Section 1.2.2). The multiple-degrees-of-freedom setting also presents no complications.
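
Concretely, with $ m$ integral constraints $ C_i(y):=\int_a^b M_i(x,y(x),y'(x))dx=C_{i,0}$ , $ i=1,\dots,m$ , the (normal) necessary condition reads: $ y$ satisfies the Euler-Lagrange equation for the augmented Lagrangian

$\displaystyle L+\sum_{i=1}^m\lambda^*_i M_i$

for some constants $ \lambda^*_1,\dots,\lambda^*_m$ .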

Similarly to the finite-dimensional case, Lagrange's original intuition was to replace constrained minimization of $ J$ with respect to $ y$ by unconstrained minimization of

$\displaystyle \int_a^b L\,dx +\lambda \left(\int_a^b M\,dx-C_0\right)$ (2.50)

with respect to $ y$ and $ \lambda$ . For curves satisfying the constraint, the values of the two functionals coincide. However, for the same reasons as in the discussion on page [*], considering this augmented cost (which matches (2.49) except for an additive constant) does not lead to a rigorous justification of the necessary condition.
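
Indeed, the difference between the two costs is the constant $ -\lambda C_0$ :

$\displaystyle \int_a^b L\,dx+\lambda\left(\int_a^b M\,dx-C_0\right)=J(y)+\lambda C(y)-\lambda C_0=(J+\lambda C)(y)-\lambda C_0.$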

Equipped with the above necessary condition for the case of integral constraints as well as our previous experience with the Euler-Lagrange equation, we can now study Dido's isoperimetric problem and the catenary problem.
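
As a symbolic sanity check for the catenary problem, the sketch below uses sympy, assuming the standard formulation (minimize $ \int y\sqrt{1+(y')^2}dx$ subject to a fixed-length constraint $ \int\sqrt{1+(y')^2}dx$ ), to verify that a shifted catenary satisfies the Euler-Lagrange equation for $ L+\lambda^* M$ :

```python
import sympy as sp

x, c, x0, lam = sp.symbols('x c x_0 lambda_star', positive=True)
y = sp.Function('y')

# Catenary problem (standard formulation, assumed here):
# L = y*sqrt(1 + y'^2)  (cost),  M = sqrt(1 + y'^2)  (length constraint).
# Augmented Lagrangian:
L_aug = (y(x) + lam) * sp.sqrt(1 + sp.Derivative(y(x), x)**2)

# Euler-Lagrange equation for the augmented Lagrangian
el = sp.euler_equations(L_aug, y(x), x)[0]

# Candidate extremal: a shifted catenary, y + lambda* = c*cosh((x - x_0)/c)
candidate = c * sp.cosh((x - x0) / c) - lam

# Substitute the candidate and evaluate the residual of the equation
residual = el.lhs.subs(y(x), candidate).doit()
print(sp.simplify(residual))
```

The residual vanishes identically, so the shifted catenary is an extremal of the augmented functional; the constants $ c$ , $ x_0$ and the multiplier $ \lambda^*$ are then determined by the boundary conditions and the length constraint.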


\begin{Exercise}
Show that optimal curves for Dido's problem\index{Dido's isoper...
...enary\index{catenary} problem
satisfy~\eqref{e-catenary-solution}.\end{Exercise}


Daniel 2010-12-20