
## 3.3.1 Control system

In the notation announced at the end of the previous section, control systems that we want to study take the form

$$\dot x = f(t,x,u), \qquad x(t_0) = x_0, \tag{3.18}$$

which is an exact copy of (1.1). Here $x \in \mathbb{R}^n$ is the state, $u \in U \subseteq \mathbb{R}^m$ is the control, $t \in \mathbb{R}$ is the time, $t_0$ is the initial time, and $x_0$ is the initial state. Both $x$ and $u$ are functions of time: $x = x(t)$, $u = u(t)$. The control set $U$ is usually a closed subset of $\mathbb{R}^m$ and can be the entire $\mathbb{R}^m$; in principle it can also vary with time, but here we take it to be fixed.

We want to know that for every choice of the initial data $(t_0,x_0)$ and every admissible control $u(\cdot)$, the system (3.18) has a unique solution $x(\cdot)$ on some time interval $[t_0,t_1]$. If this property holds, we will say that the system is well posed. To guarantee local existence and uniqueness of solutions for (3.18), we need to impose some regularity conditions on the right-hand side $f$ and on the admissible controls $u$. For the sake of simplicity, we will usually be willing to make slightly stronger assumptions than necessary; when we do this, we will briefly indicate how our assumptions can be relaxed.
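As a concrete numerical illustration (not part of the text), the sketch below integrates a scalar system $\dot x = -x + u(t)$ with a piecewise-constant control, using a simple fixed-step Euler scheme. All names and the particular right-hand side are hypothetical choices for this example; in practice one would use a production ODE solver.

```python
# Hypothetical sketch: simulate xdot = f(t, x, u(t)) for a chosen admissible
# control, using a fixed-step forward-Euler scheme.

def f(t, x, u):
    # Example right-hand side: linear in the state, affine in the control.
    return -x + u

def simulate(f, u, t0, x0, t1, steps=10000):
    """Forward-Euler approximation of the solution on [t0, t1]."""
    dt = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        x += dt * f(t, x, u(t))
        t += dt
    return x

# A piecewise-constant (hence piecewise-continuous) admissible control.
u = lambda t: 1.0 if t < 1.0 else 0.0

x_final = simulate(f, u, t0=0.0, x0=0.0, t1=2.0)
```

For this example the exact solution is $x(2) = (1 - e^{-1})e^{-1} \approx 0.2325$, which the Euler approximation matches to a few decimal places.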

Let us first consider the case of no controls:

$$\dot x = f(t,x), \qquad x(t_0) = x_0. \tag{3.19}$$

First, to assure sufficient regularity of $f$ with respect to $t$, we take $f(\cdot,x)$ to be piecewise continuous for each fixed $x$. Here, by a piecewise continuous function we mean a function having at most a finite number of discontinuities on every bounded interval, and possessing the limits from the right and from the left at each of these discontinuities. For convenience, we assume that the value of such a function at each discontinuity is equal to one of these one-sided limits (i.e., the function is either left-continuous or right-continuous at each point of discontinuity). The assumption of a finite number of discontinuities on each bounded interval is actually not crucial; we can allow the discontinuities to have accumulation points, as long as the function remains locally bounded (or at least locally integrable).
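A standard example of such a function (supplied here for illustration, with hypothetical names) is a square wave that is right-continuous at each of its jumps; both one-sided limits exist at every discontinuity, and the value at the jump equals the right-hand limit.

```python
# Hypothetical example of a piecewise-continuous function: a square wave
# that is right-continuous at each of its (finitely many per bounded
# interval) discontinuities.

def square_wave(t):
    """Equals +1 on [2k, 2k+1) and -1 on [2k+1, 2k+2) for t >= 0.

    Note: int() truncates toward zero, so this formula is only meant
    for nonnegative t in this illustration.
    """
    return 1.0 if int(t) % 2 == 0 else -1.0

# One-sided limits at the discontinuity t = 1:
eps = 1e-9
left_limit = square_wave(1 - eps)    # limit from the left
right_limit = square_wave(1 + eps)   # limit from the right
value = square_wave(1.0)             # equals the right-hand limit
```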

Second, we need to specify how regular $f$ should be with respect to $x$. A standard assumption in this regard is that $f$ is locally Lipschitz in $x$, uniformly over $t$. Namely, for every $x_0 \in \mathbb{R}^n$ there should exist a constant $L$ such that we have

$$|f(t,x) - f(t,y)| \le L\,|x - y|$$

for all $t$ and for all $x$ and $y$ in some neighborhood of $x_0$ in $\mathbb{R}^n$. We can in fact be more generous and assume the following: $f(t,\cdot)$ is $\mathcal{C}^1$ for each fixed $t$, and $f_x(\cdot,x)$ is piecewise continuous for each fixed $x$. It is easy to verify using the Mean Value Theorem that such a function $f$ satisfies the previous Lipschitz condition. Note that here and below, we extend our earlier notation to the vector case, so that $f_x$ stands for the Jacobian matrix of partial derivatives of $f$ with respect to $x$.
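For completeness, the Mean Value Theorem argument behind this claim can be written out as follows (a standard computation, not spelled out in the text): for fixed $t$, and for $x$, $y$ in a convex neighborhood on which $f(t,\cdot)$ is $\mathcal{C}^1$,

```latex
% Fundamental theorem of calculus along the segment from y to x:
f(t,x) - f(t,y)
   = \int_0^1 \frac{d}{ds}\, f\bigl(t,\; y + s(x-y)\bigr)\, ds
   = \int_0^1 f_x\bigl(t,\; y + s(x-y)\bigr)\,(x-y)\, ds,
% hence, bounding the Jacobian along the segment:
|f(t,x) - f(t,y)| \;\le\; \Bigl(\sup_{z}\, \bigl\| f_x(t,z) \bigr\|\Bigr)\, |x-y|,
```

where the supremum is taken over the segment joining $x$ and $y$; it is finite locally because $f_x$ is continuous in $x$ and locally bounded (being piecewise continuous) in $t$, and it serves as the constant $L$.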

If $f$ satisfies the above regularity assumptions, then on some interval $[t_0,t_1]$ there exists a unique solution $x(\cdot)$ of the system (3.19). Since we did not assume that $f$ is continuous with respect to $t$, some care is needed in interpreting what we mean by a solution of (3.19). In the present situation, it is reasonable to call a function $x(\cdot)$ a solution of (3.19) if it is continuous everywhere, $\mathcal{C}^1$ almost everywhere, and satisfies the corresponding integral equation

$$x(t) = x_0 + \int_{t_0}^{t} f(s, x(s))\,ds.$$
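The integral form also suggests the classical method of successive approximations (Picard iteration), which underlies the existence proof: repeatedly apply the integral operator to an initial guess. The sketch below (hypothetical names, a deliberately simple scalar example) applies it to $\dot x = x$, $x(0) = 1$, whose solution is $e^t$, discretizing the integral with the trapezoid rule.

```python
# Hypothetical sketch: Picard iteration for the integral equation
# x(t) = x0 + int_{t0}^t f(s, x(s)) ds, applied to xdot = x, x(0) = 1.

def picard_step(x_vals, ts, x0):
    """One Picard iterate on a grid, using the trapezoid rule for the integral."""
    f_vals = x_vals  # f(s, x) = x for this example
    new = [x0]
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        new.append(new[-1] + 0.5 * dt * (f_vals[i - 1] + f_vals[i]))
    return new

n = 1000
ts = [i / n for i in range(n + 1)]   # grid on [0, 1]
x = [1.0] * len(ts)                  # initial guess: the constant function x0
for _ in range(20):                  # iterate the integral operator
    x = picard_step(x, ts, x0=1.0)

approx_e = x[-1]                     # approximates e = 2.71828...
```

Each iterate adds one more term of the Taylor series of $e^t$, so twenty iterations are far more than enough for convergence on $[0,1]$.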

A function $x(\cdot)$ that can be represented as an integral of another function $g$, i.e., $x(t) = x(t_0) + \int_{t_0}^{t} g(s)\,ds$, and thus automatically satisfies $\dot x = g$ almost everywhere, is called absolutely continuous. This class of functions generalizes the piecewise $\mathcal{C}^1$ functions that we considered earlier. Basically, the extra generality here is that the derivative $\dot x$ can be discontinuous on a set of points that has measure zero (e.g., a countable set) rather than at a finite number of points on a bounded interval, and $\dot x$ can approach infinity near these points. If we insist that the derivative be locally bounded, we arrive at the slightly smaller class of locally Lipschitz functions.
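A standard example of this extra generality (supplied here for illustration, not taken from the text) is $x(t) = \sqrt{t}$ on $[0,1]$: its derivative $1/(2\sqrt{t})$ blows up at $t = 0$, so $x$ is not locally Lipschitz there, but the derivative is still integrable and integrating it recovers $x$. The numeric check below uses a midpoint rule, which conveniently avoids evaluating the derivative at the singular endpoint.

```python
# Hypothetical numeric illustration: x(t) = sqrt(t) is absolutely continuous
# on [0, 1] -- its derivative 1/(2 sqrt(t)) is unbounded near t = 0 but
# integrable, and its integral recovers x(1) - x(0) = 1.

import math

def deriv(s):
    return 1.0 / (2.0 * math.sqrt(s))

# Midpoint-rule approximation of int_0^1 deriv(s) ds (never samples s = 0).
n = 200000
total = sum(deriv((i + 0.5) / n) for i in range(n)) / n
# total should be close to 1.0
```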

We are now ready to go back to the control system (3.18). To guarantee local existence and uniqueness of its solutions, we can impose assumptions on $f$ and $u$ that would let us invoke the previous existence and uniqueness result for the right-hand side

$$F(t,x) := f(t, x, u(t)). \tag{3.20}$$

Here is one such set of assumptions which, although not the weakest possible, is adequate for our purposes: $f(t,x,u)$ is continuous in $t$ and $u$ and $\mathcal{C}^1$ in $x$; $f_x(t,x,u)$ is continuous in $t$ and $u$; and $u(\cdot)$ is piecewise continuous as a function of $t$. Another, weaker set of hypotheses is obtained by replacing the assumptions of existence of $f_x$ and its continuity with respect to all variables with the following Lipschitz property: for every bounded subset $D$ of $\mathbb{R} \times \mathbb{R}^n \times U$, there exists an $L$ such that we have

$$|f(t,x,u) - f(t,y,u)| \le L\,|x - y|$$

for all $(t,x,u), (t,y,u) \in D$. Note that in either case, differentiability of $f$ with respect to $u$ is not assumed.
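In code, the substitution behind this reduction is just a closure: once an admissible control is fixed, $f(t,x,u(t))$ becomes an ordinary time-dependent right-hand side. A minimal sketch, with hypothetical names and an illustrative scalar example:

```python
# Hypothetical sketch: fixing an admissible control u(.) reduces the control
# system xdot = f(t, x, u) to an ordinary ODE xdot = F(t, x), where
# F(t, x) = f(t, x, u(t)).

def make_rhs(f, u):
    """Close over a fixed control to obtain the right-hand side F(t, x)."""
    def F(t, x):
        return f(t, x, u(t))
    return F

# Illustrative data: a scalar system with a switching control.
f = lambda t, x, u: u - 0.5 * x
u = lambda t: 1.0 if t < 0.5 else -1.0   # piecewise continuous in t
F = make_rhs(f, u)

# F inherits piecewise continuity in t from u and the regularity in x from f.
val = F(0.0, 2.0)   # f(0, 2, u(0)) = 1.0 - 1.0 = 0.0
```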

When the right-hand side does not explicitly depend on time, i.e., when we have $\dot x = f(x,u)$, a convenient way to guarantee that the above conditions hold is to assume that $f$ is locally Lipschitz (as a function from $\mathbb{R}^n \times U$ to $\mathbb{R}^n$). In general, the right-hand side $f(t,x,u(t))$ depends on $t$ in two ways: directly through the $t$-argument of $f$, and indirectly through $u(t)$. Regarding the first dependence, we are willing to be generous by assuming that $f$ is at least continuous in $t$. In fact, we can always eliminate the direct dependence of $f$ on $t$ by introducing the extra state variable $x_{n+1} := t$, with the dynamics $\dot x_{n+1} = 1$. Note that in order for the new system obtained in this way to satisfy our conditions for existence and uniqueness of solutions, continuity of $f$ in $t$ is not enough and we need $f$ to be $\mathcal{C}^1$ in $t$ (or satisfy an appropriate Lipschitz condition).
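The state-augmentation trick can be sketched mechanically: append one extra state component playing the role of $t$, with unit dynamics. The example below is illustrative only (names and the particular $f$ are hypothetical), representing the vector field as a Python list.

```python
# Hypothetical sketch of state augmentation: append x_{n+1} = t with
# dynamics xdot_{n+1} = 1, making a time-varying system autonomous.

def augment(f):
    """Turn xdot = f(t, x, u) into an autonomous system in (x, x_{n+1})."""
    def f_aug(state, u):
        x, tau = state[:-1], state[-1]   # tau plays the role of t
        return f(tau, x, u) + [1.0]      # appended dynamics: taudot = 1
    return f_aug

# Illustrative example: scalar system xdot = t + u, as a list-valued map.
f = lambda t, x, u: [t + u]
f_aug = augment(f)

out = f_aug([2.0, 3.0], u=0.5)   # x = [2.0], tau = 3.0  ->  [3.5, 1.0]
```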

On the other hand, as far as the regularity of the control $u$ with respect to $t$ is concerned, it would be too restrictive to assume anything stronger than piecewise continuity. In fact, occasionally we may even want to relax this assumption and allow $u$ to be a locally bounded function with countably many discontinuities. More precisely, the class of admissible controls can be defined to consist of functions $u$ that are measurable$^{3.2}$ and locally bounded. In view of the remarks made earlier about the system (3.19), local existence and uniqueness of solutions is still guaranteed for this larger class of controls. We will rarely need this level of generality, and piecewise continuous controls will be adequate for most of our purposes (with the exception of some of the material to be discussed in Sections 4.4 and 4.5--specifically, Fuller's problem and Filippov's theorem). Similarly to the case of the system (3.19), by a solution of (3.18) we mean an absolutely continuous function $x(\cdot)$ satisfying

$$x(t) = x_0 + \int_{t_0}^{t} f(s, x(s), u(s))\,ds.$$

In what follows, we will always assume that the property of local existence and uniqueness of solutions holds for a given control system. This of course does not guarantee that solutions exist globally in time. Typically, we will consider a candidate optimal trajectory defined on some time interval $[t_0,t_1]$, and then existence over the same time interval will be automatically ensured for nearby trajectories.

Daniel 2010-12-20