

1.2 Some background on finite-dimensional optimization

Consider a function $f:\mathbb{R}^n\to\mathbb{R}$. Let $D$ be some subset of $\mathbb{R}^n$, which could be the entire $\mathbb{R}^n$. We denote by $|\cdot|$ the standard Euclidean norm on $\mathbb{R}^n$.

A point $x^*\in D$ is a local minimum of $f$ over $D$ if there exists an $\varepsilon>0$ such that for all $x\in D$ satisfying $|x-x^*|<\varepsilon$ we have

$\displaystyle f(x^*)\le f(x).$ (1.3)

In other words, $x^*$ is a local minimum if in some ball around it, $f$ does not attain a value smaller than $f(x^*)$. Note that this refers only to points in $D$; the behavior of $f$ outside $D$ is irrelevant, and in fact we could have taken the domain of $f$ to be $D$ rather than $\mathbb{R}^n$.
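To make the definition concrete, here is a small numerical sketch (an illustration of our own, not from the text; the helper name and the sampling scheme are arbitrary choices) that tests condition (1.3) by sampling points of $D$ in the $\varepsilon$-ball around a candidate $x^*$. Such a test can only gather evidence for a local minimum, never prove one.

    import numpy as np

    def looks_like_local_min(f, x_star, in_D, eps=1e-2, n_samples=10_000, seed=0):
        # Sample random points x of D with |x - x_star| < eps and check f(x_star) <= f(x),
        # i.e., condition (1.3) on a finite sample.
        rng = np.random.default_rng(seed)
        x_star = np.asarray(x_star, dtype=float)
        f_star = f(x_star)
        for _ in range(n_samples):
            v = rng.normal(size=x_star.shape)                  # random direction
            x = x_star + eps * rng.random() * v / np.linalg.norm(v)
            if in_D(x) and f(x) < f_star:
                return False                                   # nearby point of D with a smaller value
        return True

    # Example: x* = 0 for f(x) = |x|^2 over D = R^2.
    f = lambda x: float(np.dot(x, x))
    print(looks_like_local_min(f, [0.0, 0.0], lambda x: True))  # True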

If the inequality in (1.3) is strict for $x\ne x^*$, then we have a strict local minimum. If (1.3) holds for all $x\in D$, then the minimum is global over $D$. By default, when we say ``a minimum'' we mean a local minimum. Obviously, a minimum need not be unique unless it is both strict and global.
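For instance (an illustration of our own, not from the text), $f(x)=(x^2-1)^2$ on $\mathbb{R}$ has strict local minima at $x=\pm 1$; both are global with value $0$, so the global minimum is attained at two distinct points and hence is not unique. A quick numerical check:

    import numpy as np

    f = lambda x: (x**2 - 1.0)**2
    print(f(1.0), f(-1.0))            # both 0.0: two distinct global minimizers
    xs = np.linspace(-2.0, 2.0, 4001)
    print(xs[np.argmin(f(xs))])       # a grid search reports only one of them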

The notions of a (local, strict, global) maximum are defined similarly. If a point is either a maximum or a minimum, it is called an extremum. Observe that maxima of $f$ are minima of $-f$, so there is no need to develop separate results for both. We focus on minima, i.e., we view $f$ as a cost function to be minimized (rather than a profit to be maximized).
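A practical consequence of this observation is that any numerical minimizer can also be used to maximize: simply feed it $-f$. A minimal sketch using SciPy (the test function here is our own choice):

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: -(x[0] - 2.0)**2 + 3.0          # concave, maximized at x = 2 with value 3
    res = minimize(lambda x: -f(x), x0=np.array([0.0]))
    print(res.x, f(res.x))                        # maximizer near 2.0, maximum value near 3.0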


