

3.3.3 Target set

As we noted before, the cost functional (3.21) depends on the choice of $t_0$, $x_0$, and $t_f$. We take the initial time $t_0$ and the initial state $x_0$ to be fixed as part of the control system (3.18). We now need to explain how to define the final time $t_f$ (which in turn determines the corresponding final state $x_f$). Depending on the control objective, the final time and final state can be free or fixed, or can belong to some set. All these possibilities are captured by introducing a target set $S\subset[t_0,\infty)\times\mathbb{R}^n$ and letting $t_f$ be the smallest time such that $(t_f,x_f)\in S$. It is clear that $t_f$ defined in this way in general depends on the choice of the control $u$. We will take $S$ to be a closed set; hence, if $(t,x(t))$ ever enters $S$, the time $t_f$ is well defined. If a trajectory is such that $(t,x(t))$ does not belong to $S$ for any $t$, then we consider its cost as being infinite (or undefined). Note that here we do not allow the option that $t_f=\infty$ may give a valid finite cost, although we will study such "infinite-horizon" problems later (in Chapters 5 and 6). Below are some examples of target sets that we will encounter in the sequel.
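
In other words, $t_f$ is the first time at which the trajectory reaches the target set. Since $x(\cdot)$ is continuous and $S$ is closed, the set of such times is closed, and we can write

$ t_f:=\min\{t\ge t_0: (t,x(t))\in S\},\qquad x_f:=x(t_f), $

the minimum being attained whenever the set over which it is taken is nonempty.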

The target set $S=[t_0,\infty)\times\{x_1\}$, where $x_1$ is a fixed point in $\mathbb{R}^n$, gives a free-time, fixed-endpoint problem. A generalization of this is to consider a target set of the form $S=[t_0,\infty)\times S_1$, where $S_1$ is a surface (manifold) in $\mathbb{R}^n$. Another natural target set is $S=\{t_1\}\times\mathbb{R}^n$, where $t_1$ is a fixed time in $[t_0,\infty)$; this gives a fixed-time, free-endpoint problem. It is useful to observe that if we start with a fixed-time, free-endpoint problem and consider again the auxiliary state $x_{n+1}:=t$, we recover the previous case with $S_1\subset\mathbb{R}^{n+1}$ given by $\{x\in\mathbb{R}^{n+1}: x_{n+1}=t_1\}$. A target set $S=T\times S_1$, where $T$ is some subset of $[t_0,\infty)$ and $S_1$ is some surface in $\mathbb{R}^n$, includes as special cases all the target sets mentioned above. It also includes target sets of the form $S=\{t_1\}\times\{x_1\}$, which correspond to the most restrictive case of a fixed-time, fixed-endpoint problem. At the opposite extreme, we can have $S=[t_0,\infty)\times\mathbb{R}^n$, i.e., a free-time, free-endpoint problem. (The reader may wonder whether this latter problem formulation makes sense: when will the motion stop, or why would it even begin in the first place? To answer these questions, we have to keep in mind that the control objective is to minimize the cost (3.21). In the case of the Mayer problem, for example, $(t_f,x_f)$ will be a point where the terminal cost is minimized, and if this minimum is unique then we do not need to specify a target set a priori. In the presence of a running cost $L$ taking both positive and negative values, it is clear that remaining at rest at the initial state may not be optimal; recall also that we can always bring such a problem to the Mayer form. When one says "cost" one may implicitly think of a positive quantity, but this need not be the case; we may be making a "profit" instead.)

Many other target sets can be envisioned. For example, $S=\{(t,g(t)):t\in[t_0,\infty)\}$ for some continuous function $g:\mathbb{R}\to\mathbb{R}^n$ corresponds to hitting a moving target. A point target can be generalized to a set by making $g$ set-valued. The familiar trick of incorporating time as an extra state variable allows us to reduce such target sets to the ones we already discussed, and so we will not specifically consider them.
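
To spell out this time-augmentation trick in the notation used above (this is only a restatement, in the augmented coordinates, of the reduction just described): introduce the extra state $x_{n+1}$ with dynamics $\dot x_{n+1}=1$ and initial condition $x_{n+1}(t_0)=t_0$, so that $x_{n+1}(t)\equiv t$ along every trajectory. The fixed-time target $S=\{t_1\}\times\mathbb{R}^n$ then becomes the free-time, surface target

$ S=[t_0,\infty)\times S_1,\qquad S_1=\{x\in\mathbb{R}^{n+1}: x_{n+1}=t_1\}, $

while the moving target $S=\{(t,g(t)):t\in[t_0,\infty)\}$ becomes the fixed surface $S_1=\{x\in\mathbb{R}^{n+1}: (x_1,\dots,x_n)^T=g(x_{n+1})\}$.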

We now have a refined formulation of the optimal control problem: given a control system (3.18) satisfying the assumptions of Section 3.3.1, a cost functional given by (3.21), and a target set $S\subset[t_0,\infty)\times\mathbb{R}^n$, find a control $u(\cdot)$ that minimizes the cost. Unlike in calculus of variations, we will usually interpret optimality in the global sense. (The necessary conditions for optimality furnished by the maximum principle apply to locally optimal controls as well, provided that we work with an appropriate norm; see Section 4.3 for details.)
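
Schematically, with $f$, $U$, $L$, and $K$ denoting, respectively, the right-hand side of (3.18), the control set, the running cost, and the terminal cost introduced in Sections 3.3.1 and 3.3.2, the problem reads:

$ \text{minimize}\quad J(u)=\int_{t_0}^{t_f}L(t,x(t),u(t))\,dt+K(t_f,x_f) $

$ \text{subject to}\quad \dot x=f(t,x,u),\quad x(t_0)=x_0,\quad u(t)\in U,\quad (t_f,x_f)\in S, $

where $t_f$ is the first time at which $(t,x(t))$ enters $S$, as defined at the beginning of this subsection.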

