6.1 Finite-horizon LQR problem

In this chapter we will focus on the special case when the system dynamics are linear and the cost is quadratic. While this additional structure certainly makes the optimal control problem more tractable, our goal is not merely to specialize our earlier results to this simpler setting. Rather, we want to go further and develop a more complete understanding of optimal solutions compared with what we were able to achieve for the general scenarios treated in the previous chapters. (We could have followed a different path in our studies and started with this specific problem class before tackling more difficult problems. From the pedagogical point of view, each approach has its own merits. Historically, however, the general nonlinear results--having their origins in calculus of variations--appeared first.)

The (finite-horizon) *Linear Quadratic Regulator (LQR)* problem
is the optimal control problem from Section 3.3 with
the following additional assumptions: the control system is a linear time-varying system

$$\dot x(t) = A(t)x(t) + B(t)u(t), \qquad x(t_0) = x_0$$

with $x(t)\in\mathbb R^n$ and $u(t)\in\mathbb R^m$ (the control is unconstrained); the target set is $S=\{t_1\}\times\mathbb R^n$, where $t_1$ is a fixed time (so this is a fixed-time, free-endpoint problem); and the cost functional is

$$J(u) = \int_{t_0}^{t_1}\left(x^T(t)Q(t)x(t) + u^T(t)R(t)u(t)\right)dt + x^T(t_1)Mx(t_1)$$

where $Q(\cdot)$, $R(\cdot)$, $M$ are matrices of appropriate dimensions satisfying $Q(t)=Q^T(t)\ge 0$ (symmetric positive semidefinite), $M=M^T\ge 0$ (symmetric positive semidefinite), and $R(t)=R^T(t)>0$ (symmetric positive definite).
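To make the cost functional concrete, here is a minimal numerical sketch that evaluates $J(u)$ by forward-Euler quadrature for a hypothetical time-invariant double integrator; the particular choices of $A$, $B$, $Q$, $R$, $M$, the horizon, and the (non-optimal) feedback are purely illustrative and not part of the text.

```python
import numpy as np

# Illustrative LQR cost evaluation:
#   J(u) = int_{t0}^{t1} (x'Qx + u'Ru) dt + x(t1)' M x(t1)
# All numerical choices below are assumptions for the sake of the example.

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # double-integrator dynamics (illustrative)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                # Q = Q' >= 0 (running state weight)
R = np.array([[1.0]])        # R = R' > 0 (running control weight)
M = 2.0 * np.eye(2)          # M = M' >= 0 (terminal state weight)

t0, t1, dt = 0.0, 2.0, 1e-3
x = np.array([1.0, 0.0])     # initial state x(t0) = x0
J = 0.0
for _ in range(int((t1 - t0) / dt)):
    u = np.array([-x[0] - x[1]])          # an arbitrary, NOT optimal, feedback
    J += (x @ Q @ x + u @ R @ u) * dt     # accumulate running cost
    x = x + (A @ x + B @ u) * dt          # Euler step of xdot = Ax + Bu
J += x @ M @ x                            # terminal cost x(t1)' M x(t1)

print(float(J))
```

Since $Q$, $M$ are positive semidefinite and $R$ is positive definite, $J(u)\ge 0$ for every control; the subsections below develop the feedback law that minimizes it.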

- 6.1.1 Candidate optimal feedback law
- 6.1.2 Riccati differential equation
- 6.1.3 Value function and optimality
- 6.1.4 Global existence of solution for the RDE