

5.1 Dynamic programming and the HJB equation

Right around the time when the maximum principle was being developed in the Soviet Union, on the other side of the Atlantic Ocean (and of the Iron Curtain) Bellman wrote the following in his book [Bel57]: ``In place of determining the optimal sequence of decisions from the fixed state of the system, we wish to determine the optimal decision to be made at any state of the system. Only if we know the latter, do we understand the intrinsic structure of the solution.'' The approach realizing this idea, known as dynamic programming, leads to necessary as well as sufficient conditions for optimality expressed in terms of the so-called Hamilton-Jacobi-Bellman (HJB) partial differential equation for the optimal cost. These concepts are the subject of the present chapter. Developed independently from--even, to some degree, in competition with--the maximum principle during the Cold War era, the resulting theory is very different from the one presented in Chapter 4. Nevertheless, both theories have their roots in the calculus of variations, and there are important connections between the two, as we will explain in Section 5.2 (see also Section 7.2).


