
7.3.2 $ \mathcal H_\infty $ control problem

We saw in the previous subsection how $ u$ can play the role of a disturbance input, in contrast with the standard LQR setting of Chapter 6 where, as in the rest of this book, it plays the role of a control input. Now, let us make the situation more interesting by allowing both types of inputs to be present in the system, the task of the control being to stabilize the system and attenuate the unknown disturbance in some sense. Such control problems fall into the general framework of robust control theory, which deals with control design methods for providing a desired behavior in the presence of uncertainty. (We can also think of the control and the disturbance as two opposing players in a differential game.) In the specific problem considered in this subsection, the level of disturbance attenuation will be measured by the $ \mathcal L_2$ gain (or $ \mathcal H_\infty $ norm) of the closed-loop system.

The $ \mathcal H_\infty $ control problem concerns itself with a system of the form

$\displaystyle \dot x = Ax+Bu+Dw,\qquad y=Cx,\qquad z=Ex$    

where $ u$ is the control input, $ w$ is the disturbance input, $ y$ is the measured output (the quantity available for feedback), and $ z$ is the controlled output (the quantity to be regulated); for simplicity, we assume here that there are no feedthrough terms from $ u$ and $ w$ to $ y$ and $ z$ . The corresponding feedback control diagram is shown in Figure 7.4. We will first consider the simpler case of state feedback, obtained by setting $ C:=I$ so that $ y=x$ . In this case, we also restrict our attention to controllers that take the static linear state feedback form $ u=Kx$ .

Figure 7.4: $ \mathcal H_\infty $ control problem setting

The control objective is to stabilize the internal dynamics and attenuate $ w$ in the $ \mathcal H_\infty $ sense. More specifically, we want to design the feedback gain matrix $ K$ so that the following two properties hold:


  1. The closed-loop system matrix $ A_{\text{cl}}:=A+BK$ is Hurwitz.
  2. The $ \mathcal L_2$ gain of the closed-loop system from $ w$ to $ z$ (or, what is the same, the $ \mathcal H_\infty $ norm of the closed-loop transfer matrix $ G(s)=E(sI-A_{\text{cl}})^{-1}D$ ) does not exceed a prespecified value $ \gamma>0$ .

Note that this is not an optimal control problem because we are not asking to minimize the gain $ \gamma$ (although ideally of course we would like $ \gamma$ to be as small as possible). Controls solving problems of this kind are known as suboptimal.
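As a concrete illustration, here is a minimal numerical sketch (with hypothetical system matrices and a hypothetical gain $ K$ , none taken from the text) of how one might check the two properties above: property 1 via the eigenvalues of $ A+BK$ , and property 2 only approximately, by evaluating the largest singular value of $ E(j\omega I-A_{\text{cl}})^{-1}D$ on a finite frequency grid.

```python
import numpy as np

# Hypothetical example data (not from the text).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.0], [1.0]])
E = np.array([[1.0, 0.0]])
K = np.array([[-1.0, -1.0]])   # hypothetical state-feedback gain
gamma = 2.0                    # prescribed attenuation level

A_cl = A + B @ K

# Property 1: all eigenvalues of A_cl in the open left half-plane.
hurwitz = np.all(np.linalg.eigvals(A_cl).real < 0)

# Property 2 (approximate): sup_w sigma_max(E (jwI - A_cl)^{-1} D) <= gamma,
# evaluated on a finite frequency grid, so this only gives an estimate.
I = np.eye(A.shape[0])
freqs = np.logspace(-3, 3, 2000)
gain_est = max(
    np.linalg.svd(E @ np.linalg.solve(1j * w * I - A_cl, D), compute_uv=False)[0]
    for w in freqs
)

print("A_cl Hurwitz:", hurwitz)
print("estimated closed-loop L2 gain:", gain_est, "<= gamma:", gain_est <= gamma)
```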

It follows from the results of the previous subsection--applied with the change of notation from $ A,B,C,u,y$ to $ A_{\text{cl}},D,E,w,z$ , respectively--that to have property 2 (the bound on the $ \mathcal L_2$ gain) it is sufficient to find a positive semidefinite solution of the Riccati inequality

$\displaystyle PA_{\text{cl}}+A_{\text{cl}}^TP+\frac 1\gamma E^TE+\frac 1\gamma PDD^TP\le 0.$ (7.27)
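The inequality (7.27) is easy to test numerically for a candidate $ P$ : its left-hand side is a symmetric matrix, so one can simply check that all of its eigenvalues are nonpositive. A small sketch with illustrative (hypothetical) data:

```python
import numpy as np

def riccati_ineq_holds(P, A_cl, D, E, gamma, tol=1e-9):
    # Left-hand side of (7.27); symmetrize to guard against round-off.
    lhs = (P @ A_cl + A_cl.T @ P
           + (E.T @ E) / gamma
           + (P @ D @ D.T @ P) / gamma)
    lhs = (lhs + lhs.T) / 2
    return np.all(np.linalg.eigvalsh(lhs) <= tol)

# Hypothetical example data (not from the text).
A_cl = np.array([[0.0, 1.0], [-3.0, -2.0]])
D = np.array([[0.0], [1.0]])
E = np.array([[1.0, 0.0]])
gamma = 2.0
P = np.array([[1.0, 0.2], [0.2, 0.3]])   # candidate P = P^T > 0

print("P >= 0:", np.all(np.linalg.eigvalsh(P) >= 0))
print("(7.27) holds:", riccati_ineq_holds(P, A_cl, D, E, gamma))
```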

To also guarantee property 1 (internal stability of the closed-loop system) requires slightly stronger conditions: $ P$ needs to be positive definite and the inequality in (7.27) needs to be strict. The latter condition can be encoded via the Riccati equation

$\displaystyle PA_{\text{cl}}+A_{\text{cl}}^TP+\frac 1\gamma E^TE+\frac 1\gamma PDD^TP+\varepsilon Q= 0$ (7.28)

where $ Q=Q^T>0$ and $ \varepsilon >0$ . (Since the remaining terms on the left-hand side of (7.28) add up to a positive definite matrix, this implies $ PA_{\text{cl}}+A_{\text{cl}}^TP<0$ , which is the well-known Lyapunov condition for $ A_{\text{cl}}$ to be a Hurwitz matrix.) Next, we need to convert (7.28) into a condition that is verifiable in terms of the original open-loop system data. Introducing a matrix $ R=R^T>0$ as another design parameter (in addition to $ Q$ and $ \varepsilon $ ), suppose that there exists a solution $ P>0$ to the Riccati equation

$\displaystyle PA+A^TP+\frac1\gamma E^TE+\frac1\gamma PDD^TP-\frac1\varepsilon PBR^{-1}B^TP+\varepsilon Q=0.$ (7.29)

Then, if we let $ K:=-\frac1{2\varepsilon }R^{-1}B^TP$ , a straightforward calculation shows that the feedback law $ u=Kx$ enforces (7.28) and thus achieves both of our control objectives. Conversely, it can be shown that if the system is stabilizable with an $ \mathcal L_2$ gain less than $ \gamma$ , then (7.29) is solvable for $ P>0$ .
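The calculation just mentioned can be spot-checked numerically: for any symmetric $ P$ and randomly generated (hypothetical) data, substituting $ K=-\frac1{2\varepsilon }R^{-1}B^TP$ into the left-hand side of (7.28) reproduces the left-hand side of (7.29) up to round-off. A sketch:

```python
import numpy as np

# Randomly generated hypothetical data (not from the text).
rng = np.random.default_rng(0)
n, m, d, p = 3, 2, 2, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
D = rng.standard_normal((n, d))
E = rng.standard_normal((p, n))
gamma, eps = 1.5, 0.5
M = rng.standard_normal((m, m)); R = M @ M.T + np.eye(m)   # R = R^T > 0
M = rng.standard_normal((n, n)); Q = M @ M.T + np.eye(n)   # Q = Q^T > 0
M = rng.standard_normal((n, n)); P = M + M.T               # any symmetric P

# Gain from the text's formula, and the resulting closed-loop matrix.
K = -(1.0 / (2 * eps)) * np.linalg.solve(R, B.T @ P)
A_cl = A + B @ K

# Left-hand sides of (7.28) and (7.29); they should agree to machine precision.
lhs_728 = (P @ A_cl + A_cl.T @ P + (E.T @ E) / gamma
           + (P @ D @ D.T @ P) / gamma + eps * Q)
lhs_729 = (P @ A + A.T @ P + (E.T @ E) / gamma
           + (P @ D @ D.T @ P) / gamma
           - (P @ B @ np.linalg.solve(R, B.T @ P)) / eps + eps * Q)

print("max |difference|:", np.abs(lhs_728 - lhs_729).max())   # ~ 1e-15
```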

The general case--when the full state is not measured and the controller is a dynamic output feedback--is more interesting and more complicated. Without going into details, we mention that a complete solution to this problem, in the form of necessary and sufficient conditions for the existence of a controller achieving an $ \mathcal L_2$ gain less than $ \gamma$ , is available and consists of the following ingredients:


  1. Finding a solution $ P_1$ of a Riccati equation from the state feedback case.

  2. Finding a solution $ P_2$ of another Riccati equation obtained from the first one by the substitutions $ B\rightarrow C$ and $ D\leftrightarrow E$ .

  3. Checking that the spectral radius (the largest eigenvalue magnitude) of the product $ P_1P_2$ is less than $ \gamma^2$ .


These elegant conditions in terms of two coupled Riccati equations yield a controller that can be interpreted as a state feedback law combined with an estimator (observer).
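For hypothetical candidate solutions $ P_1$ and $ P_2$ (not computed here), the coupling condition in item 3 amounts to an eigenvalue computation; since $ P_1$ and $ P_2$ are positive semidefinite, the eigenvalues of $ P_1P_2$ are real and nonnegative. A sketch:

```python
import numpy as np

# Hypothetical candidate solutions of the two Riccati equations (not from the text).
gamma = 2.0
P1 = np.array([[1.0, 0.2], [0.2, 0.5]])
P2 = np.array([[0.8, 0.1], [0.1, 0.4]])

# Coupling condition: spectral radius of P1 P2 less than gamma^2.
rho = max(abs(np.linalg.eigvals(P1 @ P2)))
print("rho(P1 P2) < gamma^2:", rho < gamma**2)
```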

