# 9.16. Subgradients

Primary references for this section are [7, 9].

Throughout this section, we assume that \(\VV, \WW\) are finite dimensional real vector spaces. Wherever necessary, they are equipped with a norm \(\| \cdot \|\) or a real inner product \(\langle \cdot, \cdot \rangle\). They are also equipped with a metric \(d(\bx, \by) = \| \bx - \by \|\) as needed.

## 9.16.1. Subgradients

(Subgradient)

Let \(f : \VV \to \RERL\) be a
proper function.
Let \(\bx \in \dom f\).
A vector \(\bg \in \VV^*\) is called a *subgradient* of \(f\) at \(\bx\)
if

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle \Forall \by \in \VV. \]

This inequality is known as the *subgradient inequality*.

Here, we use the extended-value extension of \(f\) wherever required; i.e., \(f(\bx) = \infty\) for all \(\bx \notin \dom f\).

As discussed in Theorem 4.106, the vector spaces \(\VV\) and \(\VV^*\) are isomorphic. Therefore, we follow the convention that both \(\VV\) and \(\VV^*\) have exactly the same elements. The primary difference between \(\VV\) and \(\VV^*\) comes from the computation of norm. If \(\VV\) is endowed with a norm \(\| \cdot \|\) then \(\VV^*\) is endowed with a dual norm \(\| \cdot \|_*\).

In the arguments below \(B[\ba, r]\) or \(B_{\| \cdot \|}[\ba, r]\) denotes the closed ball of radius \(r\) in the normed space \((\VV, \| \cdot \|)\). The closed ball of radius \(r\) in the dual space \((\VV, \| \cdot \|_*)\) shall be denoted by \(B_*[\ba, r]\) or \(B_{\| \cdot \|_*}[\ba, r]\). Open balls shall be denoted similarly.

(Global affine underestimator)

If \(\bg\) is a subgradient of \(f\) at some \(\bx \in \dom f\), then the affine function \(a : \VV \to \RR\) given by

\[ a(\by) = \langle \by, \bg \rangle + f(\bx) - \langle \bx, \bg \rangle \Forall \by \in \VV \]

is a global affine underestimator for \(f\). Note that the term \(f(\bx) - \langle \bx, \bg \rangle\) is a constant since \(\bx \in \VV\) and \(\bg \in \VV^*\) are fixed. This comes from the subgradient inequality:

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle = \langle \by, \bg \rangle + f(\bx) - \langle \bx, \bg \rangle = a(\by) \Forall \by \in \VV. \]
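As a quick numerical sketch (our illustration, not from the text; the choice \(f(x) = x^2\) is an assumption), the following snippet checks the affine underestimator property, using the fact that the unique subgradient of \(x^2\) at \(x_0\) is \(2 x_0\):

```python
# Illustrative check: for f(x) = x^2, g = 2*x0 is the (unique) subgradient
# at x0, so a(y) = f(x0) + g*(y - x0) must under-estimate f everywhere.
def f(x):
    return x * x

def affine_underestimator(x0):
    g = 2.0 * x0  # subgradient (here: gradient) of x^2 at x0
    return lambda y: f(x0) + g * (y - x0)

a = affine_underestimator(1.0)
points = [i / 10.0 for i in range(-50, 51)]
assert all(a(y) <= f(y) + 1e-12 for y in points)
```

Equality holds exactly at \(y = x_0\), where the affine function touches the graph of \(f\).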

(Subgradient inequality alternative form)

For \(\by \notin \dom f\), \(f(\by) = \infty\). Thus, the subgradient inequality is trivially satisfied for all \(\by \notin \dom f\). An alternative form of the inequality is

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle \Forall \by \in \dom f. \]

### 9.16.1.1. Geometric Interpretation

(Subgradient and supporting hyperplane)

Let \(f: \VV \to \RERL\) be a proper function. Then \(\bg\) is a subgradient of \(f\) at \(\bx\) if and only if \(\epi f\) has a supporting hyperplane at \((\bx, f(\bx))\) with normal \((-\bg, 1)\).

Let \(H\) be a supporting hyperplane of \(\epi f\) at \((\bx, f(\bx))\) with the normal \((-\bg, 1)\).

Then

\[ H = \{ (\by, t) \ST \langle \by, -\bg \rangle + t = \langle \bx, -\bg \rangle + f(\bx) \}. \]

For any \((\by, f(\by)) \in \epi f\), we must have

\[\begin{split} & \langle \by, -\bg \rangle + f(\by) \geq \langle \bx, -\bg \rangle + f(\bx) \\ \iff & f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle. \end{split}\]

Then \(\bg\) is a subgradient of \(f\) at \(\bx\).

Now let \(\bg\) be a subgradient of \(f\) at \(\bx\).

Let \((\by, t) \in \epi f\).

Then we have

\[ t \geq f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle. \]

Rearranging the terms, we have

\[ \langle \by, -\bg \rangle + t \geq \langle \bx, -\bg \rangle + f(\bx) \]

for every \((\by, t) \in \epi f\).

Then the hyperplane

\[ H = \{ (\by, t) \ST \langle \by, -\bg \rangle + t = \langle \bx, -\bg \rangle + f(\bx) \} \]

is indeed a supporting hyperplane for \(\epi f\).

The normal vector for this hyperplane is \((-\bg, 1)\) and it passes through the point \((\bx, f(\bx))\).

## 9.16.2. Subdifferential

At a point \(\bx \in \dom f\), there may be more than one subgradient. It is thus natural to introduce the notion of the set of all subgradients of \(f\) at a specific point \(\bx \in \dom f\).

(Subdifferential set)

Let \(f : \VV \to \RERL\) be a proper function.
The set of all subgradients of \(f\) at a point \(\bx \in \dom f\)
is called the *subdifferential* of \(f\) at \(\bx\) and is denoted by
\(\partial f(\bx)\).

For all \(\bx \notin \dom f\), we define \(\partial f(\bx) = \EmptySet\).

(Subdifferential of norm at origin)

Let \(f : \VV \to \RR\) be given by

\[ f(\bx) = \| \bx \| \]

where \(\| \cdot \|\) is the norm endowed on \(\VV\). Then, the subdifferential of \(f\) at \(\bx = \bzero\) is given by the dual norm unit ball:

\[ \partial f(\bzero) = B_{\| \cdot \|_*}[\bzero, 1] = \{ \bg \in \VV^* \ST \| \bg \|_* \leq 1 \}. \]

Proof. \(\bg \in \partial f (\bzero)\) if and only if

\[ \| \by \| = f(\by) \geq f(\bzero) + \langle \by - \bzero, \bg \rangle \Forall \by \in \VV. \]

This reduces to:

\[ \| \by \| \geq \langle \by, \bg \rangle \Forall \by \in \VV. \]

Maximizing both sides of this inequality over the set \(\{ \by \ST \| \by \| \leq 1 \}\), we obtain:

\[ 1 \geq \sup_{\| \by \| \leq 1} \langle \by, \bg \rangle = \| \bg \|_*. \]

Thus, \(\| \bg \|_* \leq 1\) is a necessary condition.

We now show that \(\| \bg \|_* \leq 1\) is sufficient too. By the generalized Cauchy-Schwarz inequality:

\[ \langle \by, \bg \rangle \leq \| \by \| \| \bg \|_* \leq \| \by \| \Forall \by \in \VV. \]

Thus, if \(\| \bg \|_* \leq 1\), then \(\bg\) is a subgradient.

Thus, the vectors that satisfy the subgradient inequality are exactly the same as those in \( B_{\| \cdot \|_*}[\bzero, 1]\).
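A small numerical sketch of this result (an illustrative assumption on our part: \(\VV = \RR^2\) under the \(\ell_1\) norm, so the dual norm is \(\ell_\infty\)) checks both directions of the characterization on a grid of sample points:

```python
# Illustrative check: g is a subgradient of ||.||_1 at 0 iff
# ||y||_1 >= <y, g> for all y, which holds exactly when ||g||_inf <= 1.
def norm1(y):
    return sum(abs(c) for c in y)

def inner(y, g):
    return sum(a * b for a, b in zip(y, g))

def is_subgradient_at_zero(g, samples):
    return all(norm1(y) >= inner(y, g) - 1e-12 for y in samples)

samples = [(i / 5.0, j / 5.0) for i in range(-10, 11) for j in range(-10, 11)]
assert is_subgradient_at_zero((1.0, -1.0), samples)     # ||g||_inf = 1: inside the dual ball
assert not is_subgradient_at_zero((1.5, 0.0), samples)  # ||g||_inf > 1: inequality fails
```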

The subdifferential of a function \(f\) may be empty at specific points \(\bx \in \VV\).

(Subdifferentiability)

A proper function \(f : \VV \to \RERL\) is called *subdifferentiable*
at some \(\bx \in \dom f\) if \(\partial f(\bx) \neq \EmptySet\).

(Domain of subdifferentiability)

The set of points at which a proper function \(f : \VV \to \RERL\) is subdifferentiable, denoted by \(\dom (\partial f)\), is defined as:

\[ \dom (\partial f) \triangleq \{ \bx \in \VV \ST \partial f(\bx) \neq \EmptySet \}. \]
### 9.16.2.1. Closedness and Convexity

(Closedness and convexity of the subdifferential set)

Let \(f: \VV \to \RERL\) be a proper function. Then the set \(\partial f(\bx)\) is closed and convex for any \(\bx \in \VV\).

Proof. Let \(\bx \in \VV\) be fixed. For any \(\by \in \VV\), define the set

\[ H_{\by} = \{ \bg \in \VV^* \ST \langle \by - \bx, \bg \rangle \leq f(\by) - f(\bx) \}. \]

Note that \(H_{\by}\) is a closed half space in \(\VV^*\).

It is easy to see that

\[ \partial f(\bx) = \bigcap_{\by \in \VV} H_{\by}. \]
Thus, \(\partial f(\bx)\) is an infinite intersection of closed and convex sets. Hence \(\partial f(\bx)\) is closed and convex.
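As a numerical sketch of the convexity claim (our illustration, with \(f(x) = |x|\) on \(\RR\), where \(\partial f(0) = [-1, 1]\)), every convex combination of the two subgradients \(-1\) and \(1\) at \(0\) is again a subgradient:

```python
# Illustrative check: convex combinations of subgradients of f(x) = |x|
# at x = 0 remain subgradients, consistent with convexity of the
# subdifferential set.
def is_subgradient(g, x0, samples):
    return all(abs(y) >= abs(x0) + g * (y - x0) - 1e-12 for y in samples)

samples = [i / 10.0 for i in range(-30, 31)]
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    g = (1 - t) * (-1.0) + t * 1.0  # convex combination of -1 and 1
    assert is_subgradient(g, 0.0, samples)
```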

### 9.16.2.2. Subdifferentiability and Convex Domain

(Subdifferentiability + Convex domain \(\implies\) Convexity)

Let \(f: \VV \to \RERL\) be a proper function. Assume that \(\dom f\) is convex. If \(f\) is subdifferentiable at every \(\bx \in \dom f\), then \(f\) is convex.

In other words, subdifferentiability of \(f\) over a convex domain implies convexity of \(f\).

Proof. Let \(\bx, \by \in \dom f\). Let \(t \in [0,1]\). Let \(\bz = (1- t) \bx + t \by\).

Since \(\dom f\) is convex, hence \(\bz \in \dom f\).

By hypothesis, \(f\) is subdifferentiable at \(\bz\).

Thus, there exists \(\bg \in \partial f(\bz)\).

By subgradient inequality (9.7)

\[\begin{split} f(\by) \geq f(\bz) + \langle \by - \bz, \bg \rangle = f(\bz) + (1-t) \langle \by - \bx, \bg \rangle\\ f(\bx) \geq f(\bz) + \langle \bx - \bz, \bg \rangle = f(\bz) - t \langle \by - \bx, \bg \rangle. \end{split}\]

Multiplying the first inequality by \(t\), the second by \((1-t)\), and adding, we get:

\[ t f(\by) + (1-t) f(\bx) \geq f(\bz). \]

Thus,

\[ f((1-t)\bx + t \by) = f(\bz) \leq t f(\by) + (1-t) f(\bx) \]

holds true for any \(\bx, \by \in \dom f\) and any \(t \in [0,1]\).

Thus, \(f\) is convex.

A convex function need not be subdifferentiable at every point in its domain. The problem usually occurs at the boundary points of the domain if the domain is not open.
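A standard illustration of this boundary failure (our example, not from the text) is \(f(x) = -\sqrt{x}\) with \(\dom f = [0, \infty)\): \(f\) is convex, but no \(g\) can satisfy the subgradient inequality at \(x = 0\), since \(-\sqrt{y} \geq g y\) fails for small \(y > 0\) whatever \(g\) is. A numerical sketch:

```python
import math

# Illustrative check: f(x) = -sqrt(x) on [0, inf) has no subgradient at 0.
# The subgradient inequality at 0 would demand -sqrt(y) >= g*y for all y >= 0.
def violates_at_zero(g):
    # search for a feasible y where the subgradient inequality fails
    return any(-math.sqrt(y) < g * y for y in [10.0 ** (-k) for k in range(1, 12)])

assert all(violates_at_zero(g) for g in [-100.0, -1.0, 0.0, 1.0, 100.0])
```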

### 9.16.2.3. Positive Scaling

(Multiplication by a positive scalar)

Let \(f: \VV \to \RERL\) be a proper function. Let \(\bx \in \dom f\). For any \(\alpha > 0\),

\[ \partial (\alpha f)(\bx) = \alpha \partial f(\bx). \]
Proof. Let \(\bg \in \partial f(\bx)\).

By subgradient inequality (9.7)

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle \Forall \by \in \dom f. \]

Multiplying by \(\alpha\), we get:

\[ (\alpha f)(\by) \geq (\alpha f)(\bx) + \langle \by - \bx, \alpha \bg \rangle \Forall \by \in \dom (\alpha f). \]

Thus, \(\alpha \bg \in \partial (\alpha f)(\bx)\).

Thus, \(\alpha \partial f(\bx) \subseteq \partial (\alpha f)(\bx)\).

Applying the same argument to \(\alpha f\) with the positive scalar \(\frac{1}{\alpha}\) gives the reverse inclusion \(\partial (\alpha f)(\bx) \subseteq \alpha \partial f(\bx)\). Hence

\[ \partial (\alpha f)(\bx) = \alpha \partial f(\bx). \]
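A numerical sketch of the scaling rule (our illustration, assuming \(f(x) = |x|\) and \(\alpha = 2\), so the rule predicts \(\partial (\alpha f)(0) = [-2, 2]\)):

```python
# Illustrative check: for f(x) = |x| and alpha = 2, the scaling rule
# predicts partial (2f)(0) = 2 * [-1, 1] = [-2, 2].
def is_subgradient(g, samples, func):
    return all(func(y) >= func(0.0) + g * y - 1e-12 for y in samples)

alpha_f = lambda x: 2.0 * abs(x)
samples = [i / 10.0 for i in range(-30, 31)]
assert is_subgradient(2.0, samples, alpha_f)      # 2 * 1
assert is_subgradient(-2.0, samples, alpha_f)     # 2 * (-1)
assert not is_subgradient(2.5, samples, alpha_f)  # outside 2 * [-1, 1]
```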

## 9.16.3. Proper Convex Functions

In this subsection, we discuss the properties of the subdifferential sets for proper convex functions.

A proper convex function may not be subdifferentiable at every point in its domain. However, it is indeed subdifferentiable at the interior points and relative interior points of its domain.

### 9.16.3.1. Nonemptiness and Boundedness at Interior Points

(Nonemptiness and boundedness of the subdifferential at interior points)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Let \(\ba \in \interior S\). Then, \(\partial f (\ba)\) is nonempty and bounded.

In other words, for a proper convex function, the subdifferential at the interior points of its domain is nonempty and bounded. We have

\[ \interior \dom f \subseteq \dom (\partial f). \]
Proof. Outline of the proof:

Identify a supporting hyperplane for the epigraph of \(f\) at \((\ba, f(\ba))\).

Make use of the local Lipschitz continuity of the convex function at its interior points.

Show that the normal to the supporting hyperplane leads to a subgradient at \(\ba\).

Show that the subgradients are bounded by using the local Lipschitz continuity inequality and the subgradient inequality.

Consider the direct sum vector space \(\VV \oplus \RR\).

\(\epi f \subseteq \VV \oplus \RR\).

Since \(f\) is convex, hence \(\epi f\) is convex.

For some \(\ba \in \interior S\), consider the point \((\ba, f(\ba)) \in \VV \oplus \RR\).

Since \(f\) is convex, hence \((\ba, f(\ba)) \in \boundary \epi f\).

By supporting hyperplane theorem, there exists a vector \((\bp, -\alpha) \in \VV^* \oplus \RR\) such that

\[ \langle \bx, \bp \rangle - t \alpha \leq \langle \ba, \bp \rangle - f(\ba) \alpha \Forall (\bx, t) \in \epi f. \]

We shall show that \(\alpha > 0\) must hold true and \(\bg = \frac{\bp}{\alpha}\) is indeed a subgradient at \(\ba\).

We note that \((\ba, f(\ba) + 1) \in \epi f\). Putting it in,

\[\begin{split} & \langle \ba, \bp \rangle - (f(\ba) + 1) \alpha \leq \langle \ba, \bp \rangle - \alpha f(\ba)\\ &\iff -\alpha \leq 0 \\ &\iff \alpha \geq 0. \end{split}\]

Thus, \(\alpha \geq 0\).

Recall from Theorem 9.174 that \(f\) is locally Lipschitz continuous at \(\ba \in \interior \dom f\).

Thus, there exists \(r > 0\) and \(L > 0\) such that \(B[\ba, r] \subseteq S\) and

\[ |f(\bx) - f(\ba)| \leq L \| \bx - \ba \| \Forall \bx \in B[\ba, r]. \]

Since \(B[\ba, r] \subseteq S\), hence \((\bx, f(\bx)) \in \epi f\) for every \(\bx \in B[\ba, r]\).

Plugging \(t = f(\bx)\) in the supporting hyperplane inequality, we get

\[ \langle \bx, \bp \rangle - f(\bx) \alpha \leq \langle \ba, \bp \rangle - f(\ba) \alpha \Forall \bx \in B[\ba, r]. \]

Rearranging the terms,

\[ \langle \bx - \ba, \bp \rangle \leq \alpha (f(\bx) - f(\ba)) \Forall \bx \in B[\ba, r]. \]

Using the local Lipschitz property,

\[ \langle \bx - \ba, \bp \rangle \leq \alpha L \| \bx - \ba \| \Forall \bx \in B[\ba, r]. \]

Recall that the dual norm for \(\bp \in \VV^*\) is given by

\[ \| \bp \|_* = \sup \{ |\langle \bx, \bp \rangle | \ST \bx \in \VV, \| \bx \| \leq 1 \}. \]

Let \(\bp^{\dag} \in \VV\) with \(\| \bp^{\dag} \| = 1\) be the vector at which the supremum is attained.

Then, \(\| \bp \|_* = \langle \bp^{\dag}, \bp \rangle\) (since \(\VV\) is real).

Since \(\bp^{\dag}\) is a unit vector, hence \(\ba + r \bp^{\dag} \in B[\ba, r]\).

Plugging \(\bx = \ba + r \bp^{\dag}\) in the inequality above, we get

\[ r \langle \bp^{\dag}, \bp \rangle \leq \alpha L \| r \bp^{\dag} \|. \]

Simplifying,

\[ r \| \bp \|_* \leq \alpha L r. \]

This means that \(\alpha > 0\) must be true. If \(\alpha = 0\), then this inequality would require \(\bp = \bzero\). But \((\bp, -\alpha)\) is a nonzero vector describing the supporting hyperplane. Hence \(\alpha \neq 0\), and since \(\alpha \geq 0\), we have \(\alpha > 0\).

Going back to the supporting hyperplane inequality and putting \(t=f(\bx)\), we have

\[ \langle \bx, \bp \rangle - f(\bx) \alpha \leq \langle \ba, \bp \rangle - f(\ba) \alpha \Forall \bx \in S. \]

Rearranging the terms, we get

\[ \alpha (f(\bx) - f(\ba)) \geq \langle \bx - \ba, \bp \rangle \Forall \bx \in S. \]

Letting \(\bg = \frac{1}{\alpha} \bp\) and dividing on both sides by \(\alpha\) (which is positive), we obtain

\[ f(\bx) - f(\ba) \geq \langle \bx - \ba, \bg \rangle \Forall \bx \in S. \]

Rearranging again,

\[ f(\bx) \geq f(\ba) + \langle \bx - \ba, \bg \rangle \Forall \bx \in S \]

which is the subgradient inequality.

Thus, \(\bg \in \partial f(\ba)\).

Thus, \(\partial f(\ba)\) is nonempty.

We next show the boundedness of \(\partial f(\ba)\).

Let \(\bg \in \partial f(\ba)\).

Let \(\bg^{\dag} \in \VV\) such that \(\| \bg^{\dag} \| = 1\) and

\[ \| \bg \|_* = \langle \bg^{\dag}, \bg \rangle. \]

Let \(\bx = \ba + r \bg^{\dag}\). Then \(\bx \in B[\ba, r]\).

Applying the subgradient inequality on \(\bx\), we get:

\[ f(\bx) \geq f(\ba) + \langle r \bg^{\dag}, \bg \rangle = f(\ba) + r \| \bg \|_*. \]

Thus,

\[ r \| \bg \|_* \leq f(\bx) - f(\ba) \leq L \| \bx - \ba \| = L \| r \bg^{\dag} \| = L r. \]

Thus, \(\| \bg \|_* \leq L\) for every \(\bg \in \partial f(\ba)\).

Thus, \(\partial f(\ba)\) is bounded.

If \(f\) is a proper convex function, then the only points at which \(f\) may not be subdifferentiable (i.e. the subdifferential set is empty) are the points at the frontier of \(\dom f\) (i.e., \(\dom f \setminus \interior \dom f\)). \(f\) may be subdifferentiable on the frontier points too.

(Subdifferentiability of real valued convex functions)

Let \(f: \VV \to \RR\) be a convex function. Then, \(f\) is subdifferentiable over \(\VV\).

Proof. We have \(\dom f = \VV\).

Let \(\bx \in \VV\).

\(\VV\) is open in \((\VV, \| \cdot \|)\).

Thus, \(\bx \in \interior \VV = \interior \dom f\).

By Theorem 9.214, \(\partial f(\bx)\) is nonempty and bounded as \(\bx \in \interior \dom f\).

Hence, \(f\) is subdifferentiable at \(\bx\).

Since this is valid for every \(\bx \in \VV\), hence \(\dom (\partial f) = \VV\).
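For instance (our illustration, not from the text), the real valued convex function \(f(x) = \max(x, 0)\) is subdifferentiable everywhere on \(\RR\); at the kink \(x = 0\) the subdifferential is the full interval \([0, 1]\):

```python
# Illustrative check: f(x) = max(x, 0) has partial f(0) = [0, 1].
def is_subgradient(g, x0, samples, func):
    return all(func(y) >= func(x0) + g * (y - x0) - 1e-12 for y in samples)

f = lambda x: max(x, 0.0)
samples = [i / 10.0 for i in range(-30, 31)]
assert all(is_subgradient(g / 10.0, 0.0, samples, f) for g in range(0, 11))  # [0, 1]
assert not is_subgradient(-0.1, 0.0, samples, f)  # below 0
assert not is_subgradient(1.1, 0.0, samples, f)   # above 1
```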

### 9.16.3.2. Nonempty, Convex and Compact Subdifferentials

(Nonempty, convex and compact subdifferentials for proper convex functions)

Let \(f : \VV \to \RERL\) be a proper and convex function with \(S = \dom f\). Let \(\bx \in \interior S\). Then \(\partial f(\bx)\) is nonempty, convex and compact.

Proof. Let \(\bx \in \interior S\).

By Theorem 9.211, \(\partial f(\bx)\) is closed and convex.

By Theorem 9.214, \(\partial f(\bx)\) is nonempty and bounded.

Since \(\partial f(\bx)\) is closed and bounded, hence it must be compact since \(\VV\) is a finite dimensional normed linear space.

We present an alternative proof based on the min common/max crossing framework developed in Min Common/Max Crossing Duality. This proof can be skipped in first reading.

Proof. This proof is based on the second min common/max crossing theorem Theorem 10.34.

We fix some \(\bx \in \interior S\). We construct the set \(M\) as

\[ M = \{ (\bu, t) \in \VV \oplus \RR \ST f(\bx + \bu) \leq t \}. \]

We first consider the min common problem.

Note that \((\bzero, f(\bx)) \in M\) since \(f(\bx + \bzero) \leq f(\bx)\).

Further, see that the min common value is

\[ p^* = \inf_{(\bzero, p) \in M} p = f(\bx). \]

Hence \(p^*\) is finite.

Note that \(\overline{M} = M\) where

\[ \overline{M} = \{ (\bx, t) \in \VV \oplus \RR \ST \text{ there exists } \bar{t} \in \RR \text{ with } \bar{t} \leq t \text{ and } (\bx, \bar{t}) \in M \}. \]

We claim that \(M\) is convex.

Let \((\bu_1, t_1), (\bu_2, t_2) \in M\) and let \(r \in (0, 1)\).

Let \((\bu, t) = r (\bu_1, t_1) + (1-r) (\bu_2, t_2)\).

We have \(f(\bx + \bu_1) \leq t_1\) and \(f(\bx + \bu_2) \leq t_2\).

Now

\[\begin{split} f(\bx + \bu) &= f(\bx + r \bu_1 + (1-r) \bu_2) \\ &= f(r (\bx + \bu_1) + (1-r)(\bx + \bu_2)) \\ &\leq r f(\bx + \bu_1) + (1-r) f(\bx + \bu_2) \\ &\leq r t_1 + (1-r) t_2 = t. \end{split}\]

Hence \((\bu, t) \in M\).

Hence \(M\) is convex.

Next consider the max crossing problem and strong duality.

We have

\[ q^* = \sup_{\ba \in \VV} q(\ba) \]

where

\[ q(\ba) = \inf_{(\bu, t) \in M} \{ \langle \bu, \ba \rangle + t \}. \]

The set of optimal solutions of the max crossing problem is given by

\[ Q^* = \{ \ba \in \VV \ST q(\ba) = q^* \}. \]

For some \(\ba \in Q^*\), we can attain strong duality with

\[ p^* = q^* = q(\ba) \]

if and only if

\[ f(\bx) = \inf_{(\bu, t) \in M} \{ \langle \bu, \ba \rangle + t \}. \]

Equivalently,

\[ f(\bx) \leq f(\bx + \bu) + \langle \bu, \ba \rangle \Forall \bu \in \VV. \]

Equivalently,

\[ f(\bx + \bu) \geq f(\bx) + \langle \bu, -\ba \rangle \Forall \bu \in \VV. \]

But this is nothing but the subgradient inequality with \(-\ba\) as the subgradient at \(\bx\).

In other words, strong duality is attained at \(\ba\) as a solution of the max crossing problem if and only if \(-\ba\) is a subgradient of \(f\).

Hence \(Q^*\) with strong duality is given by \(-\partial f(\bx)\).

We next establish the conditions for the second min common/max crossing theorem Theorem 10.34

Consider the set

\[ D = \{ \bu \in \VV \ST \text{ there exists } t \in \RR \text{ with } (\bu, t) \in M \}. \]

We show that \(D = S - \bx\).

Consider the set \(T = S - \bx\).

Let \(\bu \in T\).

Then \(\bx + \bu \in S\).

Since \(\bx + \bu \in S\), \(f(\bx + \bu)\) is finite, and trivially \(f(\bx + \bu) \leq f(\bx + \bu)\).

Hence \((\bu, f(\bx + \bu)) \in M\).

Hence \(\bu \in D\).

Hence \(T = S - \bx \subseteq D\).

Let \(\bu \notin T\).

Then \(\bu + \bx \notin S\).

Hence \(f(\bu + \bx) = \infty\).

Hence for every \(t \in \RR\), \(f(\bu + \bx) > t\).

Hence \((\bu, t) \notin M\) for every \(t \in \RR\).

Hence \(\bu \notin D\).

Hence \(D = T = S - \bx\).

Since \(\bx \in \interior S\), hence \(\bzero \in \interior D\).

We see that all the conditions of the second min common/max crossing theorem are satisfied.

\(p^*\) is finite.

\(\overline{M} = M\) is convex.

The set \(D\) contains \(\bzero\) in its interior.

Hence \(-\partial f(\bx)\) is nonempty, convex and compact.

Hence \(\partial f(\bx)\) is also nonempty, convex and compact.

### 9.16.3.3. Subgradients over a Compact Set

(Subgradients over a compact set are nonempty and bounded)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Let \(A \subseteq \interior S\) be a nonempty and compact subset of the interior of the domain of \(f\). Then, the set of subgradients over \(A\) given by

\[ Y = \bigcup_{\bx \in A} \partial f(\bx) \]

is nonempty and bounded.

Proof. We are given that \(A\) is a nonempty and compact subset of the interior of the domain of \(f\).

For any \(\bx \in A\), by Theorem 9.214, \(\partial f(\bx)\) is nonempty and bounded.

Thus, \(Y\) is nonempty.

We next prove that \(Y\) must be bounded also.

Let \(T = \VV \setminus (\interior S)\).

\(T\) is closed and \(A\) is closed. \(A \subseteq \interior S\). Hence, \(A \cap T = \EmptySet\).

Since \(A\) is compact and \(T\) is closed and \(A \cap T = \EmptySet\), hence distance between \(A\) and \(T\) is nonzero due to Theorem 3.128.

\[ r = d(A, T) > 0. \]

Thus,

\[ \| \bx - \by \| \geq r \Forall \bx \in A, \by \notin \interior S. \]

Let \(s = \frac{r}{2}\).

Let \(D = B[\bzero, s]\). \(D\) is a closed and bounded set. Hence, it is compact due to Theorem 4.66.

Let \(E = A + D\). Then \(E \subseteq \interior S\).

Let \(\by \in E\).

Then, there is \(\bx \in A\) and \(\bv \in D\) such that \(\by = \bx + \bv\).

Thus, \(\by - \bx = \bv\).

Hence \(\| \by - \bx \| \leq s < r\).

Since every point outside \(\interior S\) is at a distance of at least \(r\) from \(A\), it follows that \(\by \in \interior S\). Thus, \(E \subseteq \interior S\).

Since both \(A\) and \(D\) are compact, hence \(E\) is compact due to Theorem 4.67.

By Theorem 9.174, \(f\) is locally Lipschitz continuous at every \(\bx \in E\) since \(\bx \in E \subseteq \interior S\).

Then, by Theorem 3.79, \(f\) is Lipschitz continuous on \(E\).

Thus, there exists \(L > 0\) such that

\[ |f(\bx) - f(\by)| \leq L \| \bx - \by \| \Forall \bx, \by \in E. \]

Let \(\bg \in Y\). Then, there is \(\bx \in A\) such that \(\bg \in \partial f(\bx)\).

By the definition of the dual norm, we can choose \(\bg^{\perp} \in \VV\) such that

\[ \| \bg \|_* = \langle \bg^{\perp}, \bg \rangle \text{ and } \| \bg^{\perp} \| = 1. \]

Now, let \(\by = \bx + s \bg^{\perp}\). Then, \(\by \in E\).

\( \| \by - \bx \| = \| s \bg^{\perp}\| = s\).

Thus, \(s \bg^{\perp} \in D\).

Thus, \(\by \in E\) since \(\bx \in A\).

Also, \(\bx \in E\) since \(\bx = \bx + \bzero\) and \(\bzero \in D\).

Consequently, by Lipschitz continuity

\[ |f(\by) - f(\bx)| \leq L \| \by - \bx \| = L s. \]

By the subgradient inequality at \(\bx\),

\[ f(\by) - f(\bx) \geq \langle \by - \bx, \bg \rangle = s \langle \bg^{\perp}, \bg \rangle = s \| \bg \|_*. \]

Using the Lipschitz bound above, we get

\[ s \| \bg \|_* \leq L s. \]

Thus, \(\| \bg \|_* \leq L\).

Since \(\bg\) was chosen arbitrarily, hence \(Y\) is bounded.

We recall from Definition 9.56 that the relative interior of a convex set \(C\) is given by

\[ \relint C = \{ \bx \in C \ST \exists r > 0 \text{ such that } B(\bx, r) \cap \affine C \subseteq C \}. \]

It is the interior of the convex set with respect to the subspace topology of its affine hull.

### 9.16.3.4. Nonempty Subdifferential at Relative Interior Points

(Nonemptiness of the subdifferential at relative interior points)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Let \(\bx \in \relint S\). Then, \(\partial f (\bx)\) is nonempty and has the form

\[ \partial f(\bx) = L^{\perp} + G \]

where \(L\) is the subspace that is parallel to \(\affine S\), \(L^{\perp}\) is its orthogonal complement, and \(G\) is a nonempty and compact set.

In other words, for a proper convex function, the subdifferential at the relative interior points of its domain is nonempty. We have

\[ \relint \dom f \subseteq \dom (\partial f). \]

The proof is based on the min common/max crossing framework developed in Min Common/Max Crossing Duality. It can be skipped in first reading. It follows the proof of Theorem 9.215.

Proof. This proof is based on the second min common/max crossing theorem Theorem 10.34.

We fix some \(\bx \in \relint S\). We construct the set \(M\) as

\[ M = \{ (\bu, t) \in \VV \oplus \RR \ST f(\bx + \bu) \leq t \}. \]

We have already established the following in the proof of Theorem 9.215:

\(M\) is convex.

\(\overline{M} = M\).

\(p^* = f(\bx)\).

\(p^*\) is finite.

\(q^* = \sup_{\ba \in \VV} q(\ba)\) where

\[ q(\ba) = \inf_{(\bu, t) \in M} \{ \langle \bu, \ba \rangle + t \}. \]

\(Q^* = \{ \ba \in \VV \ST q(\ba) = q^* \}\).

When the strong duality holds, then

\[ Q^* = -\partial f(\bx). \]

The set

\[ D = \{ \bu \in \VV \ST \text{ there exists } t \in \RR \text{ with } (\bu, t) \in M \} = S - \bx. \]

Continuing further

Since \(\bx \in \relint S\), hence \(\bzero \in \relint D\).

Hence \(\affine D = \affine S - \bx = L\).

Hence by the second min common/max crossing theorem

\[ -\partial f(\bx) = Q^* = L^{\perp} + \tilde{Q} \]

where \(\tilde{Q}\) is a nonempty, convex and compact set.

Negating on both sides, we obtain

\[ \partial f(\bx) = L^{\perp} + G \]

where \(G = -\tilde{Q}\) is a nonempty, convex and compact set.

(Existence of points with nonempty subdifferential)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Then, there exists \(\bx \in S\) such that \(\partial f(\bx)\) is nonempty.

Proof. The effective domain of a proper convex function is convex and nonempty.

By Theorem 9.142, the relative interior of \(S = \dom f\) is nonempty.

Thus, there exists \(\bx \in \relint S \subseteq S\).

By Theorem 9.217, \(\partial f(\bx)\) is nonempty.

### 9.16.3.5. Unbounded Subdifferential

(Unboundedness condition for subdifferential)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\).

Assume that \(\dim S < \dim \VV\). Let \(\bx \in S\). If \(\partial f(\bx) \neq \EmptySet\), then \(\partial f(\bx)\) is unbounded.

Proof. We proceed as follows.

Let \(n = \dim \VV\).

Let \(A = \affine S\). \(A\) is an affine set.

We have \(\bx \in S \subseteq A\).

Then, \(\WW = A - \bx\) is the subspace parallel to \(A\).

Accordingly, \(m = \dim \WW < \dim \VV = n\).

Then, the orthogonal complement of \(\WW\) is a nontrivial subspace with dimension \(n -m\).

Let \(\bv \in \WW^{\perp}\) be a nonzero vector.

Then, \(\langle \bw, \bv \rangle = 0\) for every \(\bw \in \WW\).

Now let \(\bg \in \partial f(\bx)\) be an arbitrary subgradient at \(\bx\).

By subgradient inequality

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle \Forall \by \in S. \]

Note that both \(\bx \in S\) and \(\by \in S\).

Hence, \(\by - \bx \in \WW\).

Thus, \(\langle \by - \bx, \bv \rangle = 0\).

But then, for any \(\alpha \in \RR\),

\[\begin{split} \langle \by - \bx, (\bg + \alpha \bv) \rangle &= \langle \by - \bx, \bg \rangle + \alpha \langle \by - \bx, \bv \rangle\\ &= \langle \by - \bx, \bg \rangle. \end{split}\]

Thus, if \(\bg \in \partial f(\bx)\), then \(\bg + \alpha \bv \in \partial f(\bx)\) for every \(\alpha \in \RR\).

Thus, \(\partial f(\bx)\) is unbounded.
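A concrete sketch of this unboundedness (our illustration): take \(\VV = \RR^2\) and \(f(x, y) = |x|\) when \(y = 0\), \(\infty\) otherwise, so \(\dom f\) is the \(x\)-axis and \(\dim \dom f = 1 < 2\). Any vector \((g_1, \alpha)\) with \(g_1 \in [-1, 1]\) then satisfies the subgradient inequality at \((0, 0)\), whatever \(\alpha\) is:

```python
# Illustrative check: dom f is the x-axis, so the second coordinate of a
# subgradient at (0, 0) is unconstrained; the subdifferential is unbounded.
def is_subgradient(g, samples):
    # samples enumerate dom f, i.e. points (x, 0); there f(x, 0) = |x|
    return all(abs(x) >= g[0] * x + g[1] * 0.0 - 1e-12 for (x, _) in samples)

samples = [(i / 10.0, 0.0) for i in range(-30, 31)]
assert all(is_subgradient((1.0, alpha), samples) for alpha in [-1e6, 0.0, 1e6])
```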

## 9.16.4. Directional Derivatives

The directional derivative of a proper convex function is closely linked with its subdifferential. To see this, let \(\bx \in \interior \dom f\), let \(\bd \in \VV\) be a nonzero direction and let \(t > 0\). Let \(\bg \in \partial f(\bx)\) and consider the subgradient inequality

\[ f(\bx + t \bd) \geq f(\bx) + \langle t \bd, \bg \rangle. \]

Hence

\[ \frac{f(\bx + t \bd) - f(\bx)}{t} \geq \langle \bd, \bg \rangle. \]

We saw in Observation 9.8 that \(\frac{f(\bx + t \bd) - f(\bx)}{t}\) is a nondecreasing quantity and

\[ f'(\bx; \bd) = \inf_{t > 0} \frac{f(\bx + t \bd) - f(\bx)}{t}. \]

This establishes the basic relation

\[ f'(\bx; \bd) \geq \langle \bd, \bg \rangle \]

for every \(\bg \in \partial f(\bx)\). In fact, a stronger result is available in the form of the max formula.

### 9.16.4.1. Max Formula

The max formula is one of the key results in this section. It connects subgradients with directional derivatives.

(Max formula)

Let \(f: \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Then for any \(\bx \in \interior S\) and \(\bd \in \VV\),

\[ f'(\bx; \bd) = \max \{ \langle \bd, \bg \rangle \ST \bg \in \partial f(\bx) \}. \]
In words, the directional derivative is the supremum of the inner product of the subgradients with the direction.

Proof. Let \(\bx \in \interior S\) and \(\bd \in \VV\).

Let \(t > 0\). Then, by subgradient inequality

\[ f(\bx + t \bd) - f(\bx) \geq \langle t \bd , \bg \rangle \Forall \bg \in \partial f(\bx). \]

Thus,

\[ \frac{f(\bx + t \bd) - f(\bx)}{t} \geq \langle \bd , \bg \rangle \Forall \bg \in \partial f(\bx). \]

Taking the limit,

\[ f'(\bx; \bd) = \lim_{t \to 0^+} \frac{f(\bx + t \bd) - f(\bx)}{t} \geq \lim_{t \to 0^+} \langle \bd , \bg \rangle = \langle \bd , \bg \rangle \]

for every \(\bg \in \partial f(\bx)\).

Taking the supremum over \(\partial f(\bx)\) on the R.H.S., we obtain

\[ f'(\bx;\bd) \geq\sup \{ \langle \bd, \bg \rangle \ST \bg \in \partial f(\bx) \}. \]

We now show that the inequality is indeed an equality.

Let \(h : \VV \to \RR\) be given by

\[ h(\bv) = f'(\bx; \bv). \]By Theorem 9.206, \(h\) is a real valued convex function and nonnegative homogeneous.

By Corollary 9.19, \(h\) is subdifferentiable everywhere in \(\VV\).

In particular, \(h\) is subdifferentiable at \(\bd\).

Let \(\bg \in \partial h(\bd)\).

For any \(\bv \in \VV\) and \(t \geq 0\),

\[ t f'(\bx; \bv) = t h(\bv) = h (t \bv) \]

since \(h\) is nonnegative homogeneous.

By subdifferential inequality

\[\begin{split} t f'(\bx; \bv) &= h(t \bv)\\ &\geq h(\bd) + \langle t \bv - \bd , \bg \rangle\\ &=f'(\bx; \bd) + \langle t \bv - \bd , \bg \rangle. \end{split}\]

Rearranging the terms,

\[ t (f'(\bx; \bv) - \langle \bv, \bg \rangle) \geq f'(\bx; \bd) - \langle \bd, \bg \rangle. \]

Since this inequality is valid for every \(t \geq 0\), the term \(f'(\bx; \bv) - \langle \bv, \bg \rangle\) must be nonnegative. Otherwise, the inequality would be invalidated for large enough \(t\). Thus,

\[ f'(\bx; \bv) \geq \langle \bv, \bg \rangle. \]

By Theorem 9.207, for any \(\by \in S\),

\[ f(\by) \geq f(\bx) + f'(\bx; \by - \bx). \]

From the previous inequality,

\[ f'(\bx; \by - \bx) \geq \langle \by - \bx, \bg \rangle. \]

Thus, for any \(\by \in S\),

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle. \]

But this is the subgradient inequality. Hence, \(\bg \in \partial f(\bx)\).

Taking \(t=0\) in the subgradient inequality for \(h\),

\[ 0 \geq f'(\bx; \bd) - \langle \bd, \bg \rangle. \]

Thus, there exists \(\bg \in \partial f(\bx)\) such that

\[ f'(\bx; \bd) \leq \langle \bd, \bg \rangle. \]

Consequently,

\[ f'(\bx;\bd) \leq\sup \{ \langle \bd, \bg \rangle \ST \bg \in \partial f(\bx) \}. \]

Combining the two inequalities, we obtain the max formula:

\[ f'(\bx; \bd) = \max \{ \langle \bd, \bg \rangle \ST \bg \in \partial f(\bx) \}. \]
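A numerical sketch of the max formula (our illustration, with \(f(x) = |x|\) at \(x = 0\), where \(\partial f(0) = [-1, 1]\) and \(f'(0; d) = |d|\)):

```python
# Illustrative check: f'(0; d) for f = |.| should equal the maximum of
# g * d over a discretization of partial f(0) = [-1, 1].
def directional_derivative(func, x, d, t=1e-8):
    return (func(x + t * d) - func(x)) / t

grid = [i / 100.0 for i in range(-100, 101)]  # discretized [-1, 1]
for d in [-2.0, -0.5, 1.0, 3.0]:
    lhs = directional_derivative(abs, 0.0, d)
    rhs = max(g * d for g in grid)
    assert abs(lhs - rhs) < 1e-6
```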

Recall from Definition 9.51 that the support function for a set \(C\) is given by

\[ \sigma_C (\bx) \triangleq \sup \{ \langle \bx, \by \rangle \ST \by \in C \}. \]

(Max formula as a support function)

The max formula can be written as

\[ f'(\bx; \bd) = \sigma_{\partial f(\bx)} (\bd). \]
## 9.16.5. Differentiability

### 9.16.5.1. Subdifferential and Gradient

(Subdifferential at points of differentiability)

Let \(f : \VV \to \RERL\) be a proper convex function with \(S = \dom f\). Let \(\bx \in \interior S\).

Then \(f\) is differentiable at \(\bx\) if and only if \(\partial f(\bx)\) is a singleton, in which case

\[ \partial f(\bx) = \{ \nabla f(\bx) \}. \]
In other words, if \(f\) is differentiable at \(\bx\) then its subdifferential is a singleton set consisting of the gradient and if the subdifferential at \(\bx\) is a singleton, then \(f\) is differentiable at \(\bx\).

Proof. Assume that \(f\) is differentiable at \(\bx\).

Let \(\bd \in \VV\) be some direction.

By Theorem 9.203,

\[ f'(\bx; \bd) = \langle \bd, \nabla f(\bx) \rangle. \]

By Theorem 9.214, \(f\) is subdifferentiable at \(\bx\) since \(f\) is convex and \(\bx \in \interior S\).

Let \(\bg \in \partial f(\bx)\).

By the max formula (Theorem 9.219):

\[ f'(\bx;\bd) \geq \langle \bd, \bg \rangle \]

as the directional derivative is the supremum of the inner product of the subgradients with the direction.

Thus,

\[ \langle \bd, \nabla f(\bx) \rangle \geq \langle \bd, \bg \rangle. \]

In turn,

\[ \langle \bd, \bg - \nabla f(\bx) \rangle \leq 0. \]

This holds for every \(\bd \in \VV\).

By the definition of dual norm

\[ \| \bg - \nabla f(\bx) \|_* = \underset{\| \bd \| \leq 1}{\sup} \{ \langle \bd, \bg - \nabla f(\bx) \rangle \}. \]

Using the previous inequality,

\[ \| \bg - \nabla f(\bx) \|_* \leq 0. \]

Since the dual norm is a norm, it cannot be negative. Thus,

\[ \| \bg - \nabla f(\bx) \|_* = 0. \]

Moreover, due to the positive definiteness of a norm,

\[ \bg - \nabla f(\bx) = \bzero \]

must hold true.

Thus, \(\bg = \nabla f(\bx)\).

In other words, if \(\bg\) is a subgradient to \(f\) at \(\bx\), then it must equal \(\nabla f(\bx)\).

Thus, the only subgradient for \(f\) at \(\bx\) is \(\nabla f(\bx)\).

Thus,

\[ \partial f(\bx) = \{ \nabla f(\bx) \}. \]

For the converse, assume that \(f\) is subdifferentiable at \(\bx\) with \(\partial f(\bx) = \{ \bg \}\).

By the subgradient inequality

\[ f(\bx + \bu) \geq f(\bx) + \langle \bu, \bg \rangle \Forall \bu \in \VV. \]

Thus,

\[ f(\bx + \bu) - f(\bx) - \langle \bu, \bg \rangle \geq 0 \Forall \bu \in \VV. \]

Define a function \(h : \VV \to \RERL\) as

\[ h(\bu) \triangleq f(\bx + \bu) - f(\bx) - \langle \bu, \bg \rangle. \]

We list some properties of \(h\).

By definition \(h(\bu) \geq 0\) for every \(\bu \in \VV\).

\(h\) is a convex function since \(f(\bx + \bu)\) is convex, \(\langle \bu, \bg \rangle\) is linear and \(f(\bx)\) is a constant (w.r.t. the variable \(\bu\)).

\(\dom h = \dom f - \bx = S - \bx\).

Since \(\bx \in \interior S\), we have \(\bzero = \bx - \bx \in \interior \dom h\).

\(h(\bzero) = f(\bx) - f(\bx) - \langle \bzero, \bg \rangle = 0\).

If we are able to show that

\[ \lim_{\bu \to \bzero}\frac{h(\bu)}{\| \bu \|} = 0 \]

then, by the definition of the gradient (Definition 9.71),

\[ \bg = \nabla f(\bx). \]

We can easily show that \(\partial h(\bzero) = \{ \bzero \}\).

If \(\tilde{\bg}\) is a subgradient of \(h\) at \(\bzero\), then by subgradient inequality

\[ h(\bu) \geq h(\bzero) + \langle \bu, \tilde{\bg} \rangle = \langle \bu, \tilde{\bg} \rangle \Forall \bu \in \VV. \]

Then, \(\tilde{\bg} = \bzero\) satisfies this inequality since \(h(\bu) \geq 0\) by definition.

For contradiction, assume a nonzero \(\tilde{\bg}\) can satisfy this inequality.

Then,

\[\begin{split} & h(\bu) \geq \langle \bu, \tilde{\bg} \rangle \\ & \iff f(\bx + \bu) - f(\bx) - \langle \bu, \bg \rangle \geq \langle \bu, \tilde{\bg} \rangle \\ & \iff f(\bx + \bu) \geq f(\bx) + \langle \bu, \tilde{\bg} + \bg \rangle \\ & \iff \tilde{\bg} + \bg \in \partial f(\bx). \end{split}\]

This contradicts the hypothesis that the subdifferential of \(f\) at \(\bx\) is \(\{ \bg \}\).

Thus, \(\partial h(\bzero) = \{ \bzero \}\).

Then, by the max formula (Theorem 9.219):

\[ h'(\bzero; \bd) = \sigma_{\partial h (\bzero)} (\bd) = \langle \bd, \bzero \rangle = 0. \]

Thus, from the definition of directional derivatives,

\[ 0 = h'(\bzero; \bd) = \lim_{\alpha \to 0^+} \frac{h(\alpha \bd) - h(\bzero)}{\alpha} = \lim_{\alpha \to 0^+} \frac{h(\alpha \bd)}{\alpha}. \]

Let us now introduce an orthonormal basis for \(\VV\) as \(\{\be_1, \dots, \be_n \}\).

Assume that \(\VV\) has been equipped with various \(\ell_p\) norms as described in Remark 9.1.

Since \(\bzero \in \interior \dom h\), there exists \(r \in (0, 1)\) such that

\[ B_1[\bzero, r] \subseteq \dom h. \]

\(B_1[\bzero, r]\) is a cross polytope of radius \(r\) with \(2n\) vertices given by \(\{\pm r \be_i \}_{i=1}^n\):

\[ B_1[\bzero, r] = \convex \{\pm r \be_i \}_{i=1}^n. \]

Let us denote these \(2n\) vectors as \(\bw_1, \dots, \bw_{2n}\).

By Remark 9.1

\[ B[\bzero, s] = B_2[\bzero, s] \subseteq B_1[\bzero, r] \]where \(s = \frac{r}{\sqrt{n}}\).

Let \(\bu \in B[\bzero, s^2]\) be a nonzero vector.

Since \(r < 1\), hence \(s < 1\), hence \(s^2 < s\).

Let \(\bv = s \frac{\bu}{\| \bu \|}\).

Then, \(\bv \in B[\bzero, s] \subseteq B_1[\bzero, r]\).

Thus, \(\bv \in \convex \{\bw_i \}_{i=1}^{2n}\).

Thus, there exists \(\bt \in \Delta_{2 n}\) such that

\[ s \frac{\bu}{\| \bu \|} = \bv = \sum_{i=1}^{2n} t_i \bw_i. \]Then,

\[\begin{split} \frac{h(\bu)}{\| \bu \|} &= \frac{h\left ( \frac{\| \bu \|}{s} s \frac{\bu}{\| \bu \|} \right ) }{\| \bu \|} & \\ &= \frac{h \left ( \sum_{i=1}^{2n} t_i \frac{\| \bu \|}{s} \bw_i \right ) }{\| \bu \|} & \text{convex combination} \\ &\leq \sum_{i=1}^{2n} t_i \frac{h \left ( \| \bu \| \frac{\bw_i}{s} \right ) }{\| \bu \|} & h \text{ is convex and } \bt \in \Delta_{2 n}\\ &\leq \underset{i=1,\dots, 2n}{\max} \left \{ \frac{h \left ( \| \bu \| \frac{\bw_i}{s} \right ) }{\| \bu \|} \right \} & \text{since } \sum t_i = 1. \end{split}\]Note that \(\| \bu \| \frac{\bw_i}{s} \in B[\bzero, s] \subseteq B_1[\bzero, r] \subseteq \dom h\).

Now,

\[ \lim_{\bu \to \bzero} \frac{h \left ( \| \bu \| \frac{\bw_i}{s} \right ) }{\| \bu \|} = \lim_{ \| \bu \| \to 0} \frac{h \left ( \| \bu \| \frac{\bw_i}{s} \right ) }{\| \bu \|} = \lim_{ \alpha \to 0^+} \frac{h \left ( \alpha \frac{\bw_i}{s} \right ) }{\alpha} = 0. \]Thus,

\[ \lim_{\bu \to \bzero} \frac{h(\bu)}{\| \bu \|} = 0. \]Thus, \(\bg = \nabla f(\bx)\) as desired.
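The limiting behavior in this proof can be illustrated numerically. The sketch below (our own example, using NumPy) takes the smooth convex function \(f(\bx) = \langle \bx, \bx \rangle\), whose unique subgradient at \(\bx\) is \(\bg = 2\bx\), and observes that \(h(\bu)/\| \bu \|\) shrinks as \(\bu \to \bzero\):

```python
import numpy as np

# Illustration only: for f(x) = <x, x>, the unique subgradient at x is
# g = 2x, and h(u) = f(x + u) - f(x) - <u, g> satisfies h(u)/||u|| -> 0.
rng = np.random.default_rng(0)

def f(x):
    return x @ x

x = rng.standard_normal(5)
g = 2.0 * x  # gradient of f at x

for scale in [1e-1, 1e-2, 1e-3]:
    u = scale * rng.standard_normal(5)
    h_u = f(x + u) - f(x) - u @ g
    ratio = h_u / np.linalg.norm(u)
    # here h(u) = ||u||^2 exactly, so the ratio equals ||u||
    assert abs(ratio - np.linalg.norm(u)) < 1e-9
```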

## 9.16.6. Subdifferential Calculus#

### 9.16.6.1. Function Sums#

(Subdifferential subadditivity with sum of functions)

Let \(f_1, f_2 : \VV \to \RERL\) be proper functions with \(S_1 = \dom f_1\) and \(S_2 = \dom f_2\). For any \(\bx \in S_1 \cap S_2\)

\[ \partial f_1(\bx) + \partial f_2(\bx) \subseteq \partial (f_1 + f_2)(\bx). \]

Proof. Let \(f = f_1 + f_2\). We note that \(\dom f = \dom (f_1 + f_2) = \dom f_1 \cap \dom f_2 = S_1 \cap S_2\).

Let \(\bx \in S_1 \cap S_2\).

Let \(\bg \in \partial f_1(\bx) + \partial f_2(\bx)\).

Then, there exist \(\bg_1 \in \partial f_1(\bx)\) and \(\bg_2 \in \partial f_2(\bx)\) such that \(\bg = \bg_1 + \bg_2\).

Then, by subgradient inequality, for any \(\by \in S_1 \cap S_2\)

\[\begin{split} &f_1(\by) \geq f_1(\bx) + \langle \by - \bx , \bg_1 \rangle, \\ &f_2(\by) \geq f_2(\bx) + \langle \by - \bx , \bg_2 \rangle. \end{split}\]Summing the two inequalities, we get

\[ f_1(\by) + f_2(\by) \geq f_1(\bx) + f_2(\bx) + \langle \by - \bx , \bg_1 + \bg_2 \rangle. \]Rewriting, for every \(\by \in \dom f\)

\[ (f_1 + f_2)(\by) \geq (f_1 + f_2)(\bx) + \langle \by - \bx , \bg \rangle. \]Thus, \(\bg = \bg_1 + \bg_2 \in \partial (f_1 + f_2)(\bx) = \partial f (\bx)\).

Thus, \(\partial f_1(\bx) + \partial f_2(\bx) \subseteq \partial f(\bx)\).
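The inclusion just proved can be checked numerically. The sketch below (our own example) uses \(f_1 = \| \cdot \|_1\), for which the sign vector of \(\bx\) is a subgradient, and \(f_2(\bx) = \langle \bx, \bx \rangle\) with gradient \(2\bx\), and verifies the subgradient inequality for the sum:

```python
import numpy as np

# Check of the inclusion (our own example): f1 = l1-norm with subgradient
# g1 = sign(x), f2(x) = <x, x> with gradient g2 = 2x; then g1 + g2
# satisfies the subgradient inequality for f1 + f2.
rng = np.random.default_rng(1)

f1 = lambda x: np.sum(np.abs(x))
f2 = lambda x: x @ x
f = lambda x: f1(x) + f2(x)

x = rng.standard_normal(4)
g = np.sign(x) + 2.0 * x  # sign(x) is a valid subgradient even if some x_i = 0

for _ in range(100):
    y = rng.standard_normal(4)
    assert f(y) >= f(x) + g @ (y - x) - 1e-12
```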

We can generalize this result for a finite sum of functions using simple mathematical induction.

(Weak sum rule of subdifferential calculus)

Let \(f_1, \dots, f_m : \VV \to \RERL\) be proper functions. For any \(\bx \in \cap_{i=1}^m \dom f_i\)

\[ \sum_{i=1}^m \partial f_i(\bx) \subseteq \partial \left( \sum_{i=1}^m f_i \right)(\bx). \]

(Subdifferential additivity with sum of convex functions)

Let \(f_1, f_2 : \VV \to \RERL\) be proper convex functions with \(S_1 = \dom f_1\) and \(S_2 = \dom f_2\). For any \(\bx \in \interior S_1 \cap \interior S_2\)

\[ \partial (f_1 + f_2)(\bx) = \partial f_1(\bx) + \partial f_2(\bx). \]

Proof. With \(f = f_1 + f_2\), by Theorem 3.22, \(\interior \dom f = \interior S_1 \cap \interior S_2\).

Let \(\bx \in \interior S_1 \cap \interior S_2\).

Thus, \(\bx \in \interior \dom f\).

By max formula, for any \(\bd \in \VV\),

\[ f'(\bx;\bd) = \sup \{ \langle \bd, \bg \rangle \ST \bg \in \partial f(\bx) \} = \sigma_{\partial f(\bx)}(\bd). \]Since directional derivative is additive, hence

\[ f'(\bx;\bd) = f'_1(\bx;\bd) + f'_2(\bx;\bd). \]Expanding on this

\[\begin{split} \sigma_{\partial f(\bx)} (\bd) &= f'(\bx;\bd)\\ &= f'_1(\bx;\bd) + f'_2(\bx;\bd)\\ &= \sup \{ \langle \bd, \bg_1 \rangle \ST \bg_1 \in \partial f_1(\bx) \} + \sup \{ \langle \bd, \bg_2 \rangle \ST \bg_2 \in \partial f_2(\bx) \}\\ &= \sup \{ \langle \bd, \bg_1 + \bg_2 \rangle \ST \bg_1 \in \partial f_1(\bx) , \bg_2 \in \partial f_2(\bx) \}\\ &= \sup \{ \langle \bd, \bg \rangle \ST \bg \in \partial f_1(\bx) + \partial f_2(\bx) \}\\ &= \sigma_{\partial f_1(\bx) + \partial f_2(\bx)} (\bd). \end{split}\]In summary, for every \(\bd \in \VV\),

\[ \sigma_{\partial f(\bx)} (\bd) = \sigma_{\partial f_1(\bx) + \partial f_2(\bx)} (\bd). \]By Theorem 9.211, \(\partial f(\bx)\), \(\partial f_1(\bx)\) and \(\partial f_2(\bx)\) are closed and convex.

By Theorem 9.214, \(\partial f(\bx)\), \(\partial f_1(\bx)\) and \(\partial f_2(\bx)\) are nonempty and bounded.

Since \(\partial f_1(\bx)\) and \(\partial f_2(\bx)\) are closed and bounded, hence they are compact (\(\VV\) is finite dimensional).

Thus, \(\partial f_1(\bx) + \partial f_2(\bx)\) is also closed, bounded, convex and nonempty.

Thus, both \(\partial f(\bx)\) and \(\partial f_1(\bx) + \partial f_2(\bx)\) are nonempty, convex and closed.

Then, due to Theorem 9.90,

\[ \partial f(\bx) = \partial f_1(\bx) + \partial f_2(\bx). \]

We can generalize this result for a finite sum of proper convex functions using simple mathematical induction.

(Sum rule of subdifferential calculus for proper convex functions at interior points)

Let \(f_1, \dots, f_m : \VV \to \RERL\) be proper convex functions. For any \(\bx \in \cap_{i=1}^m \interior \dom f_i\)

\[ \partial \left( \sum_{i=1}^m f_i \right)(\bx) = \sum_{i=1}^m \partial f_i(\bx). \]

For real valued convex functions, the domain is all of \(\VV\), and the interior of \(\VV\) is \(\VV\) itself.

(Sum rule of subdifferential calculus for real valued convex functions)

Let \(f_1, \dots, f_m : \VV \to \RR\) be real valued convex functions. For any \(\bx \in \VV\)

\[ \partial \left( \sum_{i=1}^m f_i \right)(\bx) = \sum_{i=1}^m \partial f_i(\bx). \]

A more powerful result with less restrictive assumptions than Corollary 9.23 is possible if the intersection of the relative interiors of the domains of the individual functions is nonempty.

(Sum rule of subdifferential calculus for proper convex functions)

Let \(f_1, \dots, f_m : \VV \to \RERL\) be proper convex functions. Assume that \(\bigcap_{i=1}^m \relint \dom f_i \neq \EmptySet\). Then for any \(\bx \in \VV\)

\[ \partial \left( \sum_{i=1}^m f_i \right)(\bx) = \sum_{i=1}^m \partial f_i(\bx). \]

### 9.16.6.2. Linear Transformations#

Our interest here is in compositions of the form \(h = f \circ \bAAA\) where \(\bAAA\) is a linear transformation. In other words \(h (\bx) = f (\bAAA(\bx))\).

If \(\bAAA : \VV \to \WW\) is a linear transformation then \(\bAAA^T : \WW^* \to \VV^*\) is a mapping from \(\WW^*\) to \(\VV^*\) and satisfies the relationship:

\[ \langle \bAAA(\bx), \by \rangle = \langle \bx, \bAAA^T(\by) \rangle \Forall \bx \in \VV, \by \in \WW^*. \]

From the definition of directional derivative, for \(h = f \circ \bAAA\) we have

\[ h'(\bx; \bd) = f'(\bAAA(\bx); \bAAA(\bd)). \]

(Weak linear transformation rule of subdifferential calculus)

Let \(f: \WW \to \RERL\) be a proper function. Let \(\bAAA : \VV \to \WW\) be a linear transformation. Define \(h : \VV \to \RERL\) as

Assume that \(h\) is proper, i.e. \(\dom h\) is not empty:

\[ \dom h = \{ \bx \in \VV \ST \bAAA(\bx) \in \dom f \} \neq \EmptySet. \]

Then, for any \(\bx \in \dom h\)

\[ \bAAA^T \left( \partial f(\bAAA(\bx)) \right) \subseteq \partial h(\bx). \]

Proof. We proceed as follows.

Let \(\bx \in \dom h\).

Let \(\bg \in \partial f(\bAAA(\bx))\).

By Theorem 9.219,

\[ \langle \bz, \bg \rangle \leq f'(\bAAA(\bx); \bz) \Forall \bz \in \WW. \]Choosing \(\bz = \bAAA(\bd)\), we have

\[ \langle \bAAA(\bd), \bg \rangle \leq f'(\bAAA(\bx); \bAAA(\bd)) \Forall \bd \in \VV. \]Equivalently

\[ \langle \bd, \bAAA^T(\bg) \rangle \leq h'(\bx; \bd) \Forall \bd \in \VV. \]Hence \(\bAAA^T(\bg) \in \partial h(\bx)\) due to (9.8).

Hence \(\bAAA^T (\partial f(\bAAA(\bx)) ) \subseteq \partial h(\bx)\).

(Strong linear transformation rule for subdifferential calculus)

Let \(f: \WW \to \RERL\) be a proper convex function. Let \(\bAAA : \VV \to \WW\) be a linear transformation. Define \(h : \VV \to \RERL\) as

Assume that \(h\) is proper, i.e. \(\dom h\) is not empty:

\[ \dom h = \{ \bx \in \VV \ST \bAAA(\bx) \in \dom f \} \neq \EmptySet. \]

Then, for any \(\bx \in \interior \dom h\) such that \(\bAAA(\bx) \in \interior \dom f\), we have:

\[ \partial h(\bx) = \bAAA^T \left( \partial f(\bAAA(\bx)) \right). \]

Proof. We showed \(\bAAA^T (\partial f (\bAAA(\bx))) \subseteq \partial h(\bx)\) in Theorem 9.224. We show the reverse inclusion by contradiction.

Let \(\bx \in \interior \dom h\) such that \(\bAAA(\bx) \in \interior \dom f\).

Assume that there exists \(\bd \in \partial h(\bx)\) such that \(\bd \notin \bAAA^T (\partial f(\bAAA(\bx)) )\).

By Theorem 9.215, the set \(\partial f(\bAAA(\bx))\) is nonempty, convex and compact.

Hence \(\bAAA^T (\partial f(\bAAA(\bx)))\) is also nonempty, convex and compact.

By strict separation theorem (Theorem 9.169), there exists a vector \(\bp\) and a scalar \(c\) such that

\[ \langle \bAAA^T(\bg), \bp \rangle < c < \langle \bd, \bp \rangle \Forall \bg \in \partial f(\bAAA(\bx)). \]Equivalently

\[ \langle \bg, \bAAA(\bp) \rangle < c < \langle \bd, \bp \rangle \Forall \bg \in \partial f(\bAAA(\bx)). \]Taking the supremum over \( \partial f(\bAAA(\bx))\) on the L.H.S., we obtain

\[ \sup_{\bg \in \partial f(\bAAA(\bx))} \langle \bg, \bAAA(\bp) \rangle < \langle \bd, \bp \rangle. \]By the max formula

\[ f'(\bAAA(\bx); \bAAA(\bp)) < \langle \bd, \bp \rangle. \]But this means that

\[ h'(\bx; \bp) < \langle \bd, \bp \rangle. \]This contradicts the assumption that \(\bd \in \partial h(\bx)\).

Hence we must have

\[ \bAAA^T (\partial f(\bAAA(\bx)) ) = \partial h(\bx). \]
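The rule can be sanity checked numerically. In the sketch below (our own randomly generated example), \(f\) is the \(\ell_1\) norm, \(h(\bx) = \| \bAAA \bx \|_1\), and the sign vector of \(\bAAA \bx\) serves as a subgradient of \(f\) at \(\bAAA \bx\):

```python
import numpy as np

# Check of A^T(∂f(Ax)) ⊆ ∂h(x) for f = l1-norm, h(x) = ||A x||_1
# (our own randomly generated example).
rng = np.random.default_rng(2)

A = rng.standard_normal((6, 4))
h = lambda x: np.sum(np.abs(A @ x))

x = rng.standard_normal(4)
g = np.sign(A @ x)   # a subgradient of the l1-norm at A x
gh = A.T @ g         # candidate subgradient of h at x

for _ in range(100):
    y = rng.standard_normal(4)
    assert h(y) >= h(x) + gh @ (y - x) - 1e-12
```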

### 9.16.6.3. Affine Transformations#

(Weak affine transformation rule of subdifferential calculus)

Let \(f: \WW \to \RERL\) be a proper function. Let \(\bAAA : \VV \to \WW\) be a linear transformation. Let \(\bb \in \WW\). Define \(h : \VV \to \RERL\) as

Assume that \(h\) is proper, i.e. \(\dom h\) is not empty:

\[ \dom h = \{ \bx \in \VV \ST \bAAA(\bx) + \bb \in \dom f \} \neq \EmptySet. \]

Then, for any \(\bx \in \dom h\)

\[ \bAAA^T \left( \partial f(\bAAA(\bx) + \bb) \right) \subseteq \partial h(\bx). \]

Proof. We proceed as follows.

Let \(\bx \in \dom h\).

Then, \(\bx' = \bAAA(\bx) + \bb \in \dom f\) such that \(h(\bx) = f(\bx')\).

Let \(\bg \in \bAAA^T (\partial f (\bx'))\).

Then, there is \(\bd \in \WW^*\) such that \(\bg = \bAAA^T (\bd)\) with \(\bd \in \partial f(\bx')\).

Let \(\by \in \dom h\).

Then, \(\by' = \bAAA(\by) + \bb \in \dom f\) such that \(h(\by)= f(\by')\).

Applying subgradient inequality for \(f\) at \(\bx'\) with the subgradient being \(\bd\), we get

\[ f(\by') \geq f(\bx') + \langle \by' - \bx', \bd \rangle. \]We have \(h(\by) = f(\by')\), \(h(\bx) = f(\bx')\) and \(\by' - \bx' = \bAAA(\by - \bx)\).

Thus, the subgradient inequality simplifies to

\[ h(\by) \geq h(\bx) + \langle \bAAA(\by - \bx), \bd \rangle. \]We note that

\[ \langle \bAAA(\by - \bx), \bd \rangle = \langle \by - \bx, \bAAA^T(\bd) \rangle. \]Thus, for any \(\by \in \dom h\), we have

\[ h(\by) \geq h(\bx) + \langle \by - \bx, \bAAA^T(\bd) \rangle. \]Thus, \(\bg = \bAAA^T(\bd) \in \partial h (\bx)\).

Since this is valid for any \(\bx \in \dom h\) and for every \(\bg \in \bAAA^T (\partial f (\bAAA(\bx) + \bb))\), hence

\[ \bAAA^T \left( \partial f(\bAAA(\bx) + \bb) \right) \subseteq \partial h(\bx) \Forall \bx \in \dom h. \]

(Affine transformation rule of subdifferential calculus)

Let \(f: \WW \to \RERL\) be a proper convex function. Let \(\bAAA : \VV \to \WW\) be a linear transformation. Let \(\bb \in \WW\). Define \(h : \VV \to \RERL\) as

Assume that \(h\) is proper, i.e. \(\dom h\) is not empty:

\[ \dom h = \{ \bx \in \VV \ST \bAAA(\bx) + \bb \in \dom f \} \neq \EmptySet. \]

Then, for any \(\bx \in \interior \dom h\) such that \(\bAAA(\bx) + \bb \in \interior \dom f\), we have:

\[ \partial h(\bx) = \bAAA^T \left( \partial f(\bAAA(\bx) + \bb) \right). \]

Proof. We note that \(h\) is a proper convex function since it is a composition of an affine transformation with a proper convex function.

Let \(\bx \in \interior \dom h\) such that \(\bx' = \bAAA(\bx) + \bb \in \interior \dom f\).

Then, for any direction \(\bd \in \VV\), by the max formula,

\[ h'(\bx; \bd) = \sigma_{\partial h(\bx)} (\bd). \]By the definition of the directional derivative, we have

\[\begin{split} h'(\bx; \bd) &= \lim_{\alpha \to 0^+} \frac{h(\bx + \alpha \bd ) - h(\bx)}{\alpha}\\ &= \lim_{\alpha \to 0^+} \frac{f(\bAAA(\bx) + \bb + \alpha \bAAA(\bd) ) - f(\bAAA(\bx) + \bb)}{\alpha}\\ &= f'(\bAAA(\bx) + \bb; \bAAA(\bd)). \end{split}\]Thus,

\[ \sigma_{\partial h(\bx)} (\bd) = f'(\bAAA(\bx) + \bb; \bAAA(\bd)). \]Using the max formula on the R.H.S., we get

\[\begin{split} \sigma_{\partial h(\bx)} (\bd) &= f'(\bAAA(\bx) + \bb; \bAAA(\bd))\\ &= \sup_{\bg \in \partial f(\bAAA(\bx) + \bb)} \langle \bAAA(\bd), \bg \rangle \\ &= \sup_{\bg \in \partial f(\bAAA(\bx) + \bb)} \langle \bd, \bAAA^T (\bg) \rangle \\ &= \sup_{\bg' \in \bAAA^T(\partial f(\bAAA(\bx) + \bb))} \langle \bd, \bg' \rangle \\ &= \sigma_{\bAAA^T(\partial f(\bAAA(\bx) + \bb))}(\bd). \end{split}\]Since \(\bx \in \interior \dom h\), hence by Theorem 9.211 and Theorem 9.214 \(\partial h(\bx)\) is nonempty, closed and convex.

Since \(\bAAA(\bx) + \bb \in \interior \dom f\), hence by Theorem 9.211 and Theorem 9.214 \(\partial f(\bAAA(\bx) + \bb)\) is nonempty, closed and convex.

It follows that \(\bAAA^T(\partial f(\bAAA(\bx) + \bb))\) is nonempty, closed and convex since \(\bAAA^T\) is a linear operator and both \(\VV\) and \(\WW\) are finite dimensional.

Then, due to Theorem 9.90,

\[ \bAAA^T (\partial f (\bAAA(\bx) + \bb)) = \partial h(\bx). \]
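A numeric sketch of the affine rule (our own example) with \(f = \| \cdot \|_2\): away from \(\bAAA \bx + \bb = \bzero\), \(f\) is differentiable, and \(\bAAA^T\) applied to the gradient of \(f\) at \(\bAAA \bx + \bb\) should be a subgradient of \(h\):

```python
import numpy as np

# Check of A^T(∂f(Ax + b)) ⊆ ∂h(x) for f = l2-norm, h(x) = ||A x + b||_2
# at a point where A x + b != 0 (our own randomly generated example).
rng = np.random.default_rng(3)

A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
h = lambda x: np.linalg.norm(A @ x + b)

x = rng.standard_normal(3)
r = A @ x + b
gh = A.T @ (r / np.linalg.norm(r))  # A^T applied to the gradient of f at A x + b

for _ in range(100):
    y = rng.standard_normal(3)
    assert h(y) >= h(x) + gh @ (y - x) - 1e-12
```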

### 9.16.6.4. Composition#

The chain rule is a key principle for computing derivatives of compositions of functions. A chain rule is available for subgradient calculus as well.

We first recall a result on the derivative of composition of real functions.

(Chain rule for real functions)

Let \(f : \RR \to \RR\) be a real function which is continuous on \([a,b]\) with \(a < b\). Assume that \(f'_+(a)\) exists. Let \(g : \RR \to \RR\) be another real function defined on an open interval \(I\) such that \(\range f \subseteq I\). Assume \(g\) is differentiable at \(f(a)\). Then the composite real function \(h : \RR \to \RR\) given by

\[ h(t) = g(f(t)) \]

is right differentiable at \(t=a\). In particular,

\[ h'_+(a) = g'(f(a)) f'_+(a). \]

Proof. We show this by working with the definition of the right hand derivative as a limit.

We can now develop a chain rule for subdifferentials of multidimensional functions with the help of max formula.

(Subdifferential chain rule)

Let \(f : \VV \to \RR\) be convex and let \(g : \RR \to \RR\) be a nondecreasing convex function. Let \(\bx \in \VV\). Assume that \(g\) is differentiable at \(f(\bx)\). Let \(h = g \circ f\). Then

\[ \partial h(\bx) = g'(f(\bx)) \partial f(\bx). \]

Proof. We are given \(\bx \in \VV\) at which \(g\) is differentiable and \(f\) is convex.

Since \(f\) is convex and \(g\) is a nondecreasing convex function, hence \(h\) is also convex.

We now introduce two real functions parametrized on \(\bx\) and an arbitrary direction \(\bd \in \VV\)

\[\begin{split} & f_{\bx, \bd} (t) = f(\bx + t \bd), t \in \RR \\ & h_{\bx, \bd} (t) = h(\bx + t \bd), t \in \RR \end{split}\]It is now easy to see that

\[ h_{\bx, \bd} (t) = h(\bx + t \bd) = g ( f (\bx + t\bd)) = g (f_{\bx, \bd} (t)). \]Thus, \(h_{\bx, \bd} = g \circ f_{\bx, \bd}\).

Since \(f_{\bx, \bd}\) and \( h_{\bx, \bd}\) are restrictions of \(f\) and \(h\) along a line, they are also convex.

Due to Theorem 9.204, the directional derivatives of \(f\) and \(h\) exist in every direction.

By the definition of directional derivative Definition 9.70,

\[\begin{split} & (f_{\bx, \bd})'_+ (0) = f'(\bx; \bd), \\ & (h_{\bx, \bd})'_+ (0) = h'(\bx; \bd). \end{split}\]Also note that \(f_{\bx, \bd} (0) = f(\bx)\) and \(h_{\bx, \bd} (0) = h(\bx)\).

\(f_{\bx, \bd}\) is right differentiable at \(t=0\), and \(g\) is differentiable at \(f(\bx)\).

Hence, by the chain rule in Theorem 9.228,

\[ h'(\bx; \bd) = g'(f(\bx)) f'(\bx; \bd). \]By the max formula in Corollary 9.21,

\[\begin{split} & h'(\bx; \bd) = \sigma_{\partial h (\bx)} (\bd) \\ & f'(\bx; \bd) = \sigma_{\partial f (\bx)} (\bd). \end{split}\]Thus,

\[\begin{split} \sigma_{\partial h (\bx)} (\bd) &= h'(\bx; \bd) \\ &= g'(f(\bx)) f'(\bx; \bd) \\ &= g'(f(\bx)) \sigma_{\partial f (\bx)} (\bd) \\ &= \sigma_{g'(f(\bx)) \partial f (\bx)} (\bd). \end{split}\]The last step is due to Theorem 9.92. Since \(g\) is nondecreasing, hence \(g'(f(\bx)) \geq 0\).

By Theorem 9.211 and Theorem 9.214, the sets \(\partial f(\bx)\) and \(\partial h(\bx)\) are nonempty, closed and convex.

Then, the set \(g'(f(\bx)) \partial f (\bx)\) is also nonempty, closed and convex.

Thus, by Theorem 9.90,

\[ \partial h (\bx) = g'(f(\bx)) \partial f (\bx). \]

Applications of this rule are presented later in Example 9.70.
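As a numerical companion (our own construction), take \(g(t) = [t]_+^2\), which is nondecreasing, convex and differentiable, and \(f = \| \cdot \|_1\). Then \(g'(f(\bx)) = 2 \| \bx \|_1\), and the chain rule suggests that \(2 \| \bx \|_1\) times the sign vector of \(\bx\) is a subgradient of \(h = g \circ f\):

```python
import numpy as np

# Chain rule sketch (our own example): g(t) = max(t, 0)^2 is nondecreasing,
# convex and differentiable; f = l1-norm. Then g'(f(x)) = 2 ||x||_1 and
# 2 ||x||_1 * sign(x) should be a subgradient of h = g ∘ f at x.
rng = np.random.default_rng(4)

f = lambda x: np.sum(np.abs(x))
g = lambda t: max(t, 0.0) ** 2
h = lambda x: g(f(x))

x = rng.standard_normal(4)
gh = 2.0 * f(x) * np.sign(x)  # g'(f(x)) times a subgradient of f at x

for _ in range(100):
    y = rng.standard_normal(4)
    assert h(y) >= h(x) + gh @ (y - x) - 1e-12
```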

### 9.16.6.5. Max Rule#

(Max rule of subdifferential calculus)

Let \(f_1, f_2, \dots, f_m : \VV \to \RERL\) be a set of proper convex functions. Let \(f : \VV \to \RERL\) be given by:

\[ f(\bx) = \max \{ f_1(\bx), \dots, f_m(\bx) \}. \]

Let \(\bx \in \bigcap_{i=1}^m \interior \dom f_i\) be a point common to the interiors of domains of all the functions.

The subdifferential set of \(f\) at \(\bx\) can be obtained from the subdifferentials of \(f_i\) as follows:

\[ \partial f(\bx) = \convex \left( \bigcup_{i \in I(\bx)} \partial f_i(\bx) \right) \]

where \(I(\bx) = \{ i \ST f_i(\bx) = f(\bx)\}\).

Proof. Since \(f_i\) are proper convex, hence their pointwise maximum \(f\) is proper convex.

Let \(I(\bx) = \{ i \in \{1,\dots,m\} \ST f_i(\bx) = f(\bx)\}\).

For any (nonzero) direction, \(\bd \in \VV\), by Theorem 9.209:

\[ f'(\bx; \bd) = \underset{i \in I(\bx)}{\max} f'_i(\bx;\bd). \]Without loss of generality, let us assume that \(I(\bx) = \{1,\dots,k\}\) for some \(k \in \{1,\dots,m\}\). This can be achieved by reordering \(f_i\).

By max formula (Theorem 9.219),

\[ f_i'(\bx;\bd) = \sup \{ \langle \bd, \bg \rangle \ST \bg \in \partial f_i(\bx) \}. \]Thus,

\[ f'(\bx; \bd) = \underset{i \in 1,\dots,k}{\max} \underset{\bg_i \in \partial f_i(\bx)}{\sup} \langle \bd, \bg_i \rangle. \]Recall that for any \(a_1, \dots, a_k \in \RR\), the identity below holds:

\[ \max \{ a_1, \dots, a_k \} = \underset{\bt \in \Delta_k }{\sup} \sum_{i=1}^k t_i a_i. \]Thus, we can expand \(f'(\bx; \bd)\) as:

\[\begin{split} f'(\bx; \bd) &= \underset{i \in 1,\dots,k}{\max} \underset{\bg_i \in \partial f_i(\bx)}{\sup} \langle \bd, \bg_i \rangle\\ &= \underset{\bt \in \Delta_k }{\sup} \left \{ \sum_{i=1}^k t_i \underset{\bg_i \in \partial f_i(\bx)}{\sup} \langle \bd, \bg_i \rangle \right \} \\ &= \underset{\bt \in \Delta_k }{\sup} \left \{ \sum_{i=1}^k \sup \left \langle \bd, t_i \bg_i \right \rangle \ST \bg_i \in \partial f_i(\bx) \right \} \\ &= \sup \left \{ \left \langle \bd, \sum_{i=1}^k t_i \bg_i \right \rangle \ST \bg_i \in \partial f_i(\bx), \bt \in \Delta_k \right \} \\ &= \sup \left \{ \left \langle \bd, \bg \right \rangle \ST \bg \in \convex \left (\bigcup_{i=1}^k \partial f_i(\bx) \right) \right \} \\ &= \sigma_A (\bd) \end{split}\]where \(A = \convex \left (\bigcup_{i=1}^k \partial f_i(\bx) \right )\) and \(\sigma\) denotes the support function.

Since \(\bx \in \interior \dom f\), hence, by the max formula (Corollary 9.21)

\[ f'(\bx;\bd) = \sigma_{\partial f(\bx)} (\bd). \]Thus, we have

\[ \sigma_{\partial f(\bx)} (\bd) = \sigma_A (\bd). \]By Theorem 9.211, \(\partial f(\bx)\) is closed and convex.

By Theorem 9.214, \(\partial f(\bx)\) is nonempty and bounded.

Thus, \(\partial f(\bx)\) is nonempty, closed and convex.

Similarly, \(\partial f_i(\bx)\) are nonempty, closed, convex and bounded.

Thus, \(\bigcup_{i=1}^k \partial f_i(\bx)\) is a finite union of nonempty, closed, convex and bounded sets.

A finite union of nonempty sets is nonempty, a finite union of bounded sets is bounded, and a finite union of closed sets is closed.

Thus, \(\bigcup_{i=1}^k \partial f_i(\bx)\) is nonempty, closed and bounded. Since \(\VV\) is finite dimensional, closed and bounded sets are compact. Thus, \(\bigcup_{i=1}^k \partial f_i(\bx)\) is compact.

Recall from Theorem 9.129 that the convex hull of a compact set is compact. Also, recall that compact sets are closed and bounded.

Since \(A\) is the convex hull of the compact set \(\bigcup_{i=1}^k \partial f_i(\bx)\), hence \(A\) is nonempty, compact (thus closed and bounded) and convex.

Since \(\sigma_{\partial f(\bx)} (\bd) = \sigma_A (\bd)\) holds for every \(\bd \in \VV\), the support functions of the underlying nonempty, closed and convex sets are equal. Hence by Theorem 9.90,

\[ \partial f(\bx) = A = \convex \left ( \bigcup_{i=1}^k \partial f_i(\bx) \right ). \]

Some applications of this rule are presented later in Example 9.75, Example 9.76, Example 9.78, Example 9.79.
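The max rule can also be sanity checked numerically. The sketch below (our own example) uses a pointwise maximum of affine functions \(f_i(\bx) = \ba_i^T \bx + b_i\); the gradient \(\ba_j\) of any piece active at \(\bx\) should be a subgradient of the maximum:

```python
import numpy as np

# Max rule sketch (our own example): f(x) = max_i (a_i^T x + b_i) with
# affine pieces; the gradient a_j of any piece active at x is a
# subgradient of f at x.
rng = np.random.default_rng(5)

m, n = 5, 3
A = rng.standard_normal((m, n))  # row i holds a_i
b = rng.standard_normal(m)
f = lambda x: np.max(A @ x + b)

x = rng.standard_normal(n)
j = int(np.argmax(A @ x + b))  # an index active at x
g = A[j]

for _ in range(100):
    y = rng.standard_normal(n)
    assert f(y) >= f(x) + g @ (y - x) - 1e-12
```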

We now present a weaker version of the max rule which is applicable for pointwise supremum over an arbitrary set of functions.

(Weak max rule of subdifferential calculus)

Let \(I\) be an arbitrary index set and suppose that for every \(i \in I\), there exists a proper convex function \(f_i : \VV \to \RERL\). Let \(f : \VV \to \RERL\) be given by:

\[ f(\bx) = \sup_{i \in I} f_i(\bx). \]

Then for any \(\bx \in \dom f\),

\[ \convex \left( \bigcup_{i \in I(\bx)} \partial f_i(\bx) \right) \subseteq \partial f(\bx) \]

where \(I(\bx) = \{ i \in I \ST f_i(\bx) = f(\bx)\}\).

In words, if \(f_i(\bx) = f(\bx)\), then a subgradient of \(f_i\) at \(\bx\) is also a subgradient of \(f\) at \(\bx\). Also, for all \(i \in I\) such that \(f_i(\bx) = f(\bx)\), any convex combination of their subgradients at \(\bx\) is also a subgradient of \(f\) at \(\bx\).

Proof. Pick some \(\bx \in \dom f\).

Let \(\bz \in \dom f\) be arbitrary.

Let \(I(\bx) = \{ i \in I \ST f_i(\bx) = f(\bx)\}\).

Let \(i \in I(\bx)\) be arbitrary.

Let \(\bg \in \partial f_i(\bx)\) be a subgradient of \(f_i\) at \(\bx\).

Then, by definition of \(f\) and subgradient inequality:

\[ f(\bz) \geq f_i(\bz) \geq f_i(\bx) + \langle \bz - \bx, \bg \rangle = f(\bx) + \langle \bz - \bx, \bg \rangle. \]We used the fact that \(f_i(\bx) = f(\bx)\) for \(i \in I(\bx)\).

Thus, \(\bg \in \partial f(\bx)\). \(\bg\) is a subgradient of \(f\) at \(\bx\).

Since this is valid for every subgradient of \(f_i\) at \(\bx\), hence \(\partial f_i(\bx) \subseteq \partial f(\bx)\).

Since this is valid for every \(i \in I(\bx)\), hence

\[ \bigcup_{i\in I(\bx)} \partial f_i (\bx) \subseteq \partial f(\bx). \]Recall from Theorem 9.211 that \(\partial f(\bx)\) is convex.

Thus, it contains the convex hull of any of its subsets. Hence,

\[ \convex \left ( \bigcup_{i\in I(\bx)} \partial f_i (\bx) \right ) \subseteq \partial f(\bx). \]

Next is an example application of the weak max rule.

(Subgradient of \(\lambda_{\max}(\bA_0 + \sum_{i=1}^m x_i \bA_i)\))

Let \(\bA_0, \bA_1, \dots, \bA_m \in \SS^n\) be \(m+1\) given symmetric matrices. Define an affine transformation \(\bAAA : \RR^m \to \SS^n\) as

\[ \bAAA(\bx) = \bA_0 + \sum_{i=1}^m x_i \bA_i. \]

For every vector \(\bx \in \RR^m\), this mapping defines a symmetric matrix \(\bAAA(\bx)\). We can compute the largest eigenvalue of \(\bAAA(\bx)\). We introduce a function \(f: \RR^m \to \RR\) as

\[ f(\bx) = \lambda_{\max}(\bAAA(\bx)). \]

Our task is to find a subgradient of \(f\) at \(\bx\).

Recall the variational characterization of the largest eigenvalue:

\[ f(\bx) = \underset{\by \in \RR^n; \| \by \|_2 = 1 }{\sup} \by^T \bAAA(\bx) \by. \]For every \(\by \in \RR^n\) such that \(\| \by \|_2 = 1\), we can define a function:

\[ f_{\by}(\bx) \triangleq \by^T \bAAA(\bx) \by. \]Then,

\[ f(\bx) = \underset{\by \in \RR^n; \| \by \|_2 = 1 }{\sup} f_{\by}(\bx). \]The function \(f_{\by}(\bx)\) is affine (in \(\bx\)) for every \(\by\).

Thus, \(f_{\by}\) is convex for every \(\by\).

Thus, \(f\) is a pointwise supremum of a family of functions \(f_{\by}\).

Thus, \(f\) is also convex (see Theorem 9.114).

Consequently, we can use the weak max rule Theorem 9.231 to identify a subgradient of \(f\) at \(\bx\).

Let \(\tilde{\by}\) be a normalized eigenvector of \(\bAAA(\bx)\) corresponding to its largest eigenvalue. Then

\[ f(\bx) = \tilde{\by}^T \bAAA(\bx) \tilde{\by}. \]This means that \(f(\bx) = f_{\tilde{\by}}(\bx)\).

By the weak max rule, a subgradient of \(f_{\tilde{\by}}\) at \(\bx\) is also a subgradient of \(f\) at \(\bx\).

Expanding \(f_{\tilde{\by}}(\bx)\):

\[ f_{\tilde{\by}}(\bx) = \tilde{\by}^T \bAAA(\bx) \tilde{\by} = \tilde{\by}^T \bA_0\tilde{\by} + \sum_{i=1}^m \tilde{\by}^T \bA_i \tilde{\by} x_i. \]Then, the gradient of \(f_{\tilde{\by}}\) at \(\bx\) (computed by taking partial derivatives w.r.t. \(x_i\)) is

\[ \nabla f_{\tilde{\by}}(\bx) = (\tilde{\by}^T \bA_1 \tilde{\by}, \dots, \tilde{\by}^T \bA_m \tilde{\by}). \]Since \(f_{\by}\) is affine (thus convex), hence its gradient is also a subgradient.

Thus,

\[ (\tilde{\by}^T \bA_1 \tilde{\by}, \dots, \tilde{\by}^T \bA_m \tilde{\by}) \in \partial f(\bx). \]
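The subgradient derived above can be verified numerically with NumPy (`numpy.linalg.eigh` returns eigenvalues in ascending order, so the last eigenvector column corresponds to the largest eigenvalue); the matrices below are randomly generated for illustration:

```python
import numpy as np

# Numerical check with randomly generated symmetric matrices:
# f(x) = λ_max(A0 + Σ x_i A_i); if ỹ is a unit eigenvector for the
# largest eigenvalue at x, then g_i = ỹ^T A_i ỹ gives a subgradient.
rng = np.random.default_rng(6)

n, m = 4, 3
sym = lambda M: (M + M.T) / 2.0
A0 = sym(rng.standard_normal((n, n)))
As = [sym(rng.standard_normal((n, n))) for _ in range(m)]

def f(x):
    M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
    return np.linalg.eigvalsh(M)[-1]  # eigenvalues in ascending order

x = rng.standard_normal(m)
M = A0 + sum(xi * Ai for xi, Ai in zip(x, As))
w, V = np.linalg.eigh(M)
y_tilde = V[:, -1]  # unit eigenvector for the largest eigenvalue
g = np.array([y_tilde @ Ai @ y_tilde for Ai in As])

for _ in range(50):
    z = rng.standard_normal(m)
    assert f(z) >= f(x) + g @ (z - x) - 1e-10
```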

## 9.16.7. Lipschitz Continuity#

(Lipschitz continuity and boundedness of the subdifferential sets)

Let \(f : \VV \to \RERL\) be a proper convex function. Suppose that \(X \subseteq \interior \dom f\). Consider the following two claims:

(1) \(| f(\bx) - f(\by) | \leq L \| \bx - \by \|\) for any \(\bx, \by \in X\).

(2) \( \| \bg \|_* \leq L\) for any \(\bg \in \partial f(\bx)\) where \(\bx \in X\).

Then,

(2) implies (1). In other words, if subgradients are bounded then, the function is Lipschitz continuous.

If \(X\) is open, then (1) holds if and only if (2) holds.

In other words, if the subgradients over a set \(X\) are bounded then \(f\) is Lipschitz continuous over \(X\). If \(X\) is open then \(f\) is Lipschitz continuous over \(X\) if and only if the subgradients over \(X\) are bounded.

Proof. (a) We first show that \((2) \implies (1)\).

Assume that (2) is satisfied.

Pick any \(\bx, \by \in X\).

Since \(f\) is proper and convex and \(\bx, \by \in \interior \dom f\), hence due to Theorem 9.214, \(\partial f(\bx)\) and \(\partial f(\by)\) are nonempty.

Let \(\bg_x \in \partial f(\bx)\) and \(\bg_y \in \partial f(\by)\).

By subgradient inequality

\[\begin{split} & f(\by) \geq f(\bx) + \langle \by - \bx, \bg_x \rangle; \\ & f(\bx) \geq f(\by) + \langle \bx - \by, \bg_y \rangle. \end{split}\]We can rewrite this as

\[\begin{split} & f(\bx) - f(\by) \leq \langle \bx - \by , \bg_x \rangle; \\ & f(\by) - f(\bx) \leq \langle \by - \bx , \bg_y \rangle. \end{split}\]By the generalized Cauchy-Schwarz inequality (Theorem 4.108),

\[\begin{split} \langle \bx - \by , \bg_x \rangle \leq \| \bx - \by \| \| \bg_x \|_* \leq L \| \bx - \by \|; \\ \langle \by - \bx , \bg_y \rangle \leq \| \by - \bx \| \| \bg_y \|_* \leq L \| \bx - \by \|. \end{split}\]Combining the two inequalities, we get

\[ | f(\bx) - f(\by) | \leq L \| \bx - \by \|. \]Thus, \((2) \implies (1)\).

(b) If \(X\) is open, then we need to show that \((1) \iff (2)\).

We have already shown that \((2) \implies (1)\).

Assume that \(X\) is open and \((1)\) holds.

Let \(\bx \in X\).

Since \(\bx\) is an interior point of \(\dom f\), hence the subdifferential is nonempty.

Pick any \(\bg \in \partial f(\bx)\).

Let \(\bg^{\dag} \in \VV\) be a vector with \(\| \bg^{\dag} \|=1\) and \(\langle \bg^{\dag}, \bg \rangle = \| \bg \|_*\). Such a vector exists by definition of the dual norm.

Since \(X\) is open, we can choose \(\epsilon > 0\) small enough such that \(\bx + \epsilon \bg^{\dag} \in X\).

By the subgradient inequality, we have:

\[ f(\bx + \epsilon \bg^{\dag}) \geq f(\bx) + \langle \epsilon \bg^{\dag}, \bg \rangle. \]Thus,

\[\begin{split} \epsilon \| \bg \|_* &= \langle \epsilon \bg^{\dag}, \bg \rangle \\ &\leq f(\bx + \epsilon \bg^{\dag}) - f(\bx) \\ &\leq L \| (\bx + \epsilon \bg^{\dag}) - \bx \| & \text{ by hypothesis in (1)} \\ &= L \epsilon \| \bg^{\dag} \| = L \epsilon. \end{split}\]Canceling \(\epsilon\), we get:

\[ \| \bg \|_* \leq L \]holds true for every \(\bg \in \partial f(\bx)\) where \(\bx \in X\) as desired.

(Lipschitz continuity of convex functions over compact domains)

Let \(f: \VV \to \RERL\) be a proper and convex function. Suppose that \(X \subseteq \interior \dom f\) is compact. Then, there exists \(L > 0\) such that

\[ | f(\bx) - f(\by) | \leq L \| \bx - \by \| \Forall \bx, \by \in X. \]

Proof. Recall from Theorem 9.216 that the subgradients of a proper convex function over a compact set are nonempty and bounded.

In other words, the set

\[ Y = \bigcup_{\bx \in X} \partial f(\bx) \]is nonempty and bounded.

Thus, there exists \(L > 0\) such that \(\| \bg \|_* \leq L\) for every \(\bg \in Y\).

Then by Theorem 9.232,

\[ | f(\bx) - f(\by)| \leq L \| \bx - \by \| \Forall \bx, \by \in X. \]Thus, \(f\) is indeed Lipschitz continuous over \(X\).
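As a concrete sketch (our own example): for \(f = \| \cdot \|_1\), every subgradient has entries in \([-1, 1]\), so its \(\ell_{\infty}\) (dual) norm is at most \(1\); the theorem then predicts that \(f\) is \(1\)-Lipschitz with respect to the \(\ell_1\) norm:

```python
import numpy as np

# Check: f = l1-norm is 1-Lipschitz w.r.t. the l1-norm, matching the
# bound ||g||_inf <= 1 on its subgradients (sign-like vectors).
rng = np.random.default_rng(7)

f = lambda x: np.sum(np.abs(x))
L = 1.0

for _ in range(100):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    assert abs(f(x) - f(y)) <= L * np.sum(np.abs(x - y)) + 1e-12
```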

## 9.16.8. \(\epsilon\)-Subgradients#

(\(\epsilon\)-Subgradient)

Let \(f : \VV \to \RERL\) be a proper function.
Let \(\bx \in \dom f\).
A vector \(\bg \in \VV^*\) is called an \(\epsilon\)-*subgradient*
of \(f\) at \(\bx\)
if

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle - \epsilon \Forall \by \in \VV. \]

### 9.16.8.1. Geometric Interpretation#

(\(\epsilon\)-subgradient and supporting hyperplane)

Let \(f: \VV \to \RERL\) be a proper function. Then \(\bg\) is an \(\epsilon\)-subgradient of \(f\) at \(\bx\) if and only if \(\epi f\) is contained in the positive halfspace of the hyperplane with a normal \((-\bg, 1)\) passing through \((\bx, f(\bx) - \epsilon)\).

Proof. Let \(H\) denote the hyperplane

\[ H = \{ (\by, t) \ST \langle \by, -\bg \rangle + t = \langle \bx, -\bg \rangle + f(\bx) - \epsilon \}. \]

The positive halfspace of \(H\) is given by

\[ H_+ = \{ (\by, t) \ST \langle \by, -\bg \rangle + t \geq \langle \bx, -\bg \rangle + f(\bx) - \epsilon \}. \]

Assume that \(\bg\) is an \(\epsilon\)-subgradient of \(f\) at \(\bx\).

For any \((\by, t) \in \epi f\), we have

\[ t \geq f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle - \epsilon. \]This is equivalent to

\[ \langle \by, -\bg \rangle + t \geq \langle \bx, -\bg \rangle + f(\bx) - \epsilon \]for all \((\by, t) \in \epi f\).

Hence \(\epi f \subseteq H_+\).

Now assume that \(\epi f \subseteq H_+\).

Let \((\by, f(\by)) \in \epi f\).

Then we have

\[ \langle \by, -\bg \rangle + f(\by) \geq \langle \bx, -\bg \rangle + f(\bx) - \epsilon. \]Rearranging the terms, we have

\[ f(\by) \geq f(\bx) + \langle \by - \bx, \bg \rangle - \epsilon \Forall \by \in \VV. \]But this means that \(\bg\) is an \(\epsilon\)-subgradient of \(f\) at \(\bx\).
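A small one-dimensional check (our own example, not from the text): \(g = 0\) is an \(\epsilon\)-subgradient of \(f(y) = y^2\) at \(x = 1\) precisely when \(\epsilon \geq 1\), since the defining inequality \(f(y) \geq f(x) + g (y - x) - \epsilon\) must hold for all \(y\) and \(\inf_y f(y) = 0 = f(1) - 1\):

```python
# 1D check (our own example): for f(y) = y^2, g = 0 is an
# eps-subgradient at x = 1 exactly when eps >= 1.
f = lambda y: y * y
x, g = 1.0, 0.0

def is_eps_subgradient(eps, ys):
    # check f(y) >= f(x) + g * (y - x) - eps on a grid of points
    return all(f(y) >= f(x) + g * (y - x) - eps for y in ys)

ys = [i / 100.0 for i in range(-300, 301)]  # grid over [-3, 3]
assert is_eps_subgradient(1.0, ys)
assert not is_eps_subgradient(0.5, ys)  # fails at y = 0: 0 < f(1) - 0.5
```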

### 9.16.8.2. \(\epsilon\)-Subdifferential#

(\(\epsilon\)-subdifferential)

Let \(f : \VV \to \RERL\) be a proper function.
The set of all \(\epsilon\)-subgradients of \(f\) at a point \(\bx \in \dom f\)
is called the \(\epsilon\)-*subdifferential* of \(f\) at \(\bx\) and is denoted by
\(\partial_{\epsilon} f(\bx)\).

For all \(\bx \notin \dom f\), we define \(\partial_{\epsilon} f(\bx) = \EmptySet\).

It is easy to see that

\[ \partial f(\bx) \subseteq \partial_{\epsilon} f(\bx) \Forall \epsilon > 0. \]

Also, if \(\epsilon_2 \geq \epsilon_1 > 0\), then

\[ \partial_{\epsilon_1} f(\bx) \subseteq \partial_{\epsilon_2} f(\bx). \]

## 9.16.9. Optimality Conditions#

A well known result for differentiable functions is that at the point of optimality \(\nabla f(\bx) = \bzero\) (see Theorem 8.1). Subdifferentials are useful in characterizing the minima of a function. The idea of vanishing gradients can be generalized for subgradients also.

(Fermat’s optimality condition)

Let \(f : \VV \to \RERL\) be a proper convex function. Then

\[ \ba \in \argmin \{ f(\bx) \ST \bx \in \VV \} \]

if and only if \(\bzero \in \partial f(\ba)\).

In other words, \(\ba\) is a minimizer of \(f\) if and only if \(\bzero\) is a subgradient of \(f\) at \(\ba\).

Proof. Assume that \(\bzero \in \partial f(\ba)\) where \(\ba \in \dom f\).

By subgradient inequality

\[ f(\bx) \geq f(\ba) + \langle \bx - \ba, \bzero \rangle \Forall \bx \in \VV. \]This simplifies to

\[ f(\bx) \geq f(\ba) \Forall \bx \in \VV. \]Thus, \(\ba \in \argmin \{ f(\bx) \ST \bx \in \VV \}\).

For the converse, assume that \(\ba \in \argmin \{ f(\bx) \ST \bx \in \VV \}\).

Then,

\[ f(\bx) \geq f(\ba) \Forall \bx \in \VV. \]But then

\[\begin{split} & f(\bx) \geq f(\ba) \\ & \iff f(\bx) \geq f(\ba) + 0 \\ & \iff f(\bx) \geq f(\ba) + \langle \bx - \ba, \bzero \rangle \end{split}\]holds true for every \(\bx \in \VV\).

This implies that \(\bzero \in \partial f(\ba)\).
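Fermat's condition can be illustrated with a small one-dimensional example of our own: for \(f(x) = |x| + (x - 1)^2\), the point \(a = 1/2\) is positive, so \(\partial f(a) = \{ 1 + 2(a - 1) \} = \{ 0 \}\), and \(a\) must be a minimizer:

```python
import numpy as np

# Our own 1D example: f(x) = |x| + (x - 1)^2.
# At a = 1/2 > 0, f is differentiable with f'(a) = sign(a) + 2(a - 1) = 0,
# so 0 is a subgradient of f at a and Fermat's condition certifies a minimum.
f = lambda x: abs(x) + (x - 1.0) ** 2
a = 0.5
subgrad_at_a = np.sign(a) + 2.0 * (a - 1.0)
assert subgrad_at_a == 0.0

# f(a) should be the global minimum value; compare against a dense grid
xs = np.linspace(-3.0, 3.0, 10001)
assert np.all(f(xs) >= f(a) - 1e-12)
```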

## 9.16.10. Mean Value Theorem#

The following result is from [48].

(A subgradients based mean value theorem for 1D functions)

Let \(f : \RR \to \RERL\) be a proper closed convex function. Let \([a,b] \subseteq \dom f\) with \(a < b\). Then,

\[ f(b) - f(a) = \int_a^b h(t) dt \]

where \(h : (a, b) \to \RR\) satisfies \(h(t) \in \partial f(t)\) for every \(t \in (a, b)\).
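Assuming the stated mean value identity \(f(b) - f(a) = \int_a^b h(t) dt\), it can be checked numerically on an example of our own: \(f(t) = |t|\) with subgradient selection \(h(t) = \mathrm{sign}(t)\), \(a = -1\), \(b = 2\). A midpoint rule avoids evaluating the sign at the kink \(t = 0\):

```python
import numpy as np

# Midpoint-rule check of f(b) - f(a) = ∫_a^b h(t) dt for f(t) = |t|,
# h(t) = sign(t), a = -1, b = 2 (our own example; midpoints never hit 0).
a, b = -1.0, 2.0
f = lambda t: abs(t)

n = 300000
dt = (b - a) / n
mids = a + dt * (np.arange(n) + 0.5)
integral = float(np.sum(np.sign(mids)) * dt)

# f(b) - f(a) = 2 - 1 = 1, and the integral of sign over [-1, 2] is 1
assert abs((f(b) - f(a)) - integral) < 1e-6
```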

In the remainder of this section, we compute the subgradients and subdifferential sets for a variety of standard functions.

## 9.16.11. Norm Functions#

We recall from Theorem 9.210 that the subdifferential of a norm \(\| \cdot \|: \VV \to \RR\) at \(\bx = \bzero\) is given by:

\[ \partial f(\bzero) = B_{\| \cdot \|_*}[\bzero, 1] = \{ \bg \in \VV^* \ST \| \bg \|_* \leq 1 \}. \]

### 9.16.11.1. \(\ell_1\)-Norm#

(Subdifferential of \(\ell_1\) norm at origin)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_1\). We recall that the dual norm of \(\ell_1\) is \(\ell_{\infty}\). The unit ball of the \(\ell_{\infty}\)-norm at origin is given by

\[ B_{\| \cdot \|_{\infty}}[\bzero, 1] = \{ \bx \in \RR^n \ST |x_i| \leq 1, \; i = 1, \dots, n \}. \]

Following Theorem 9.210, the subdifferential of \(f\) at \(\bx = \bzero\) is given by:

\[ \partial f(\bzero) = B_{\| \cdot \|_{\infty}}[\bzero, 1] = [-1, 1]^n. \]

(Subdifferential of absolute value function at origin)

Let \(g : \RR \to \RR\) be the absolute value function given by

This is a special case of \(\ell_1\) norm for \(\RR^1\). Thus, following Example 9.67, the subdifferential of \(g\) at \(x = 0\) is given by:

\[ \partial g(0) = [-1, 1]. \]

For a complete specification of the subdifferential of \(g\), see Example 9.73 below.

(Subdifferential of \(\ell_1\) norm)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_1\). We can write \(f\) as a sum of \(n\) functions

where

Let \(g (x) = | x |\). Then

Due to affine transformation rule (Theorem 9.227),

The subdifferential of the absolute value function \(g\) is described in Example 9.73 below.

Thus, the subdifferential set of \(f_i\) is given by:

Using the sum rule Corollary 9.24, we have:

We define the index set:

Expanding the sum of subdifferentials,

We can rewrite this as:

We also have a weak result from this:
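The weak result can be exercised numerically. In the sketch below (our own construction), the subgradient is taken as \(\bg = \sgn(\bx)\) componentwise, with \(g_i = 0\) at zero components; such a vector always lies in the subdifferential described above.

```python
import numpy as np

# The vector g with g_i = sgn(x_i) is one subgradient of the l1 norm at x
# (zero components get g_i = 0, which lies in [-1, 1]).
rng = np.random.default_rng(0)
x = np.array([1.5, 0.0, -2.0])
g = np.sign(x)

f = lambda v: np.abs(v).sum()
# verify the subgradient inequality f(y) >= f(x) + <y - x, g> at random points
for _ in range(1000):
    y = rng.uniform(-5, 5, size=3)
    assert f(y) >= f(x) + g @ (y - x) - 1e-12
```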

(Subdifferential of \(\ell_1\) norm squared)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_1\).

Now let \(g(t) = [t]_+^2\). And consider the function \(h = g \circ f\) given by

By subdifferential chain rule (Theorem 9.229):

We have used the subdifferential of \(f\) from Example 9.69.
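A numerical sketch of this chain rule on an instance of our own choosing: since \(\| \bx \|_1 \geq 0\), we have \(h(\bx) = [\| \bx \|_1]_+^2 = \| \bx \|_1^2\), and one subgradient at \(\bx\) is \(2 \| \bx \|_1 \sgn(\bx)\).

```python
import numpy as np

# One subgradient of h(x) = ||x||_1^2 at x: g'(f(x)) * s = 2 ||x||_1 * sgn(x),
# where s = sgn(x) is a subgradient of the l1 norm (chain rule sketch).
rng = np.random.default_rng(1)
x = np.array([1.0, -2.0, 0.5])
h = lambda v: np.abs(v).sum() ** 2
d = 2.0 * np.abs(x).sum() * np.sign(x)

for _ in range(1000):
    y = rng.uniform(-5, 5, size=3)
    assert h(y) >= h(x) + d @ (y - x) - 1e-9
```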

### 9.16.11.2. \(\ell_2\)-Norm#

(Subdifferential of \(\ell_2\) norm at origin)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_2\). We recall that the \(\ell_2\) norm is self dual; its dual norm is \(\ell_2\) itself.

Following Theorem 9.210, the subdifferential of \(f\) at \(\bx = \bzero\) is given by:

(Subdifferential of \(\ell_2\) norm)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_2\). At \(\bx \neq \bzero\), \(f\) is differentiable with the gradient (see Example 5.10):

Since \(f\) is convex and differentiable at \(\bx \neq \bzero\), hence due to Theorem 9.220,

Combining this with the subdifferential of \(f\) at origin from Example 9.71, we obtain:

(Subdifferential of absolute value function)

Let \(g : \RR \to \RR\) be the absolute value function given by

This is a special case of \(\ell_2\) norm for \(\RR^1\). Following Example 9.72,

cf. Example 9.68.
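Both pieces of the \(\ell_2\) subdifferential can be verified numerically. This sketch (our own illustration) checks the gradient \(\bx / \| \bx \|_2\) away from the origin, and an arbitrary \(\bg\) with \(\| \bg \|_2 \leq 1\) at the origin.

```python
import numpy as np

# Subgradients of the l2 norm: x / ||x||_2 away from the origin,
# and any g0 with ||g0||_2 <= 1 at the origin.
rng = np.random.default_rng(2)
f = lambda v: np.linalg.norm(v)

x = np.array([3.0, -4.0])
g = x / np.linalg.norm(x)        # unique subgradient at x != 0
g0 = np.array([0.6, 0.0])        # ||g0||_2 <= 1: a subgradient at 0

for _ in range(1000):
    y = rng.uniform(-5, 5, size=2)
    assert f(y) >= f(x) + g @ (y - x) - 1e-12
    assert f(y) >= g0 @ y - 1e-12            # inequality at the origin
```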

### 9.16.11.3. \(\ell_{\infty}\)-Norm#

(Subdifferential of \(\ell_{\infty}\) norm at origin)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_{\infty}\). We recall that the dual norm of \(\ell_{\infty}\) is \(\ell_{1}\). The unit ball of the \(\ell_{1}\) norm at the origin is given by

Following Theorem 9.210, the subdifferential of \(f\) at \(\bx = \bzero\) is given by:

(Subdifferential of \(\ell_{\infty}\) norm)

Let \(f : \RR^n \to \RR\) be given by \(f(\bx) = \| \bx \|_{\infty}\). Let us compute the subdifferential of \(f\) at \(\bx \neq \bzero\).

We have:

where \(f_i(\bx) = |x_i|\). We define:

Then, following Example 9.69

This is valid since \(\bx \neq \bzero\) implies that \(f(\bx) \neq 0\) which in turn implies that \(x_i \neq 0\) for every \(i \in I(\bx)\).

Then, using the max rule for proper convex functions (Theorem 9.230):

We can rewrite this as:

Combining this with the subdifferential of \(f\) at origin from Example 9.74, we obtain:
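A single vertex of the convex hull describing \(\partial f(\bx)\) at \(\bx \neq \bzero\) already gives one usable subgradient. The sketch below (our own instance) checks \(\sgn(x_i) \be_i\) for an index \(i\) attaining the maximum absolute value.

```python
import numpy as np

# One subgradient of the l_inf norm at x != 0: sgn(x_i) e_i for any index i
# attaining |x_i| = ||x||_inf (a single vertex of the convex hull above).
rng = np.random.default_rng(3)
f = lambda v: np.abs(v).max()

x = np.array([1.0, -3.0, 2.0])
i = int(np.argmax(np.abs(x)))    # i = 1: |x_1| = 3 attains the max
g = np.zeros_like(x)
g[i] = np.sign(x[i])

for _ in range(1000):
    y = rng.uniform(-5, 5, size=3)
    assert f(y) >= f(x) + g @ (y - x) - 1e-12
```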

### 9.16.11.4. \(\ell_1\) Norm over Affine Transformation#

Let \(A \in \RR^{m \times n}\). Let \(b \in \RR^m\). Let \(f : \RR^m \to \RR\) be given by \(f(y) = \| y \|_1\).

Let \(h : \RR^n \to \RR\) be the function \(h(x) = \| A x + b \|_1 = f(A x + b)\).

By affine transformation rule, we have:

Denoting \(i\)-th row of \(A\) as \(a_i^T\), we define the index set:

Then, we have:

In particular, we have the weak result:
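The weak result is easy to test on a random instance of our own making: \(A^T \sgn(A x + b)\) is one subgradient of \(h\).

```python
import numpy as np

# Weak result sketch: A^T sgn(A x + b) is one subgradient of
# h(x) = ||A x + b||_1 (affine rule applied to the sign-vector subgradient).
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
h = lambda v: np.abs(A @ v + b).sum()

x = rng.standard_normal(3)
g = A.T @ np.sign(A @ x + b)

for _ in range(1000):
    y = rng.uniform(-3, 3, size=3)
    assert h(y) >= h(x) + g @ (y - x) - 1e-9
```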

### 9.16.11.5. \(\ell_2\) Norm over Affine Transformation#

Let \(A \in \RR^{m \times n}\). Let \(b \in \RR^m\). Let \(f : \RR^m \to \RR\) be given by \(f(y) = \| y \|_2\).

Let \(h : \RR^n \to \RR\) be the function \(h(x) = \| A x + b \|_2 = f(A x + b)\).

We have:

Applying the affine transformation rule, we get:

For \(x\) with \(A x + b = 0\), we can write this as
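The differentiable case can be checked numerically on a random instance (our own sketch): when \(A x + b \neq 0\), the gradient of \(h\) is \(A^T (A x + b) / \| A x + b \|_2\).

```python
import numpy as np

# When A x + b != 0, h(x) = ||A x + b||_2 is differentiable with
# gradient A^T (A x + b) / ||A x + b||_2.
rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
h = lambda v: np.linalg.norm(A @ v + b)

x = rng.standard_normal(3)
r = A @ x + b                    # residual; nonzero for this random draw
assert np.linalg.norm(r) > 0
g = A.T @ (r / np.linalg.norm(r))

for _ in range(1000):
    y = rng.uniform(-3, 3, size=3)
    assert h(y) >= h(x) + g @ (y - x) - 1e-9
```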

### 9.16.11.6. \(\ell_{\infty}\) Norm over Affine Transformation#

(Subdifferential of \(\| \bA \bx + \bb \|_{\infty}\))

Let \(\bA \in \RR^{m \times n}\). Let \(\bb \in \RR^m\). Let \(f : \RR^m \to \RR\) be given by

Let \(h : \RR^n \to \RR\) be the function

With \(\by = \bA \bx + \bb\), we have \(y_i = \ba_i^T \bx + b_i\) where \(\ba_i^T\) is the \(i\)-th row vector of \(\bA\).

Following Example 9.75

where \(I(\by) = \{i \in [1,\dots,m] \ST |y_i| = f(\by) = \| \by \|_{\infty} \}\).

Due to affine transformation rule (Theorem 9.227),

We have the following cases.

(a) \(\by = \bzero\).

In terms of \(\bx\), the condition \(\by = \bzero\) is equivalent to \(\bA \bx + \bb = \bzero\).

Then,

\[ \partial f (\bA \bx + \bb) = \partial f(\bzero) = B_{\| \cdot \|_1} [\bzero, 1]. \]Thus,

\[ \partial h(\bx) = \bA^T B_{\| \cdot \|_1} [\bzero, 1]. \]

(b) \(\by \neq \bzero\).

In terms of \(\bx\), the condition \(\by \neq \bzero\) is equivalent to \(\bA \bx + \bb \neq \bzero\).

Then,

\[\begin{split} \partial f(\bA \bx + \bb) &= \left \{\sum_{i \in I(\by)} \lambda_i \sgn(y_i) \be_i \ST \sum_{i \in I(\by)} \lambda_i = 1, \lambda_j \geq 0, j \in I(\by) \right \} \\ &= \left \{\sum_{i \in I_x} \lambda_i \sgn(\ba_i^T \bx + b_i) \be_i \ST \sum_{i \in I_x} \lambda_i = 1, \lambda_j \geq 0, j \in I_x \right \} \end{split}\]where

\[ I_x = I(\by) = I(\bA \bx + \bb). \]Note that \(\bA^T \be_i = \ba_i\).

Then,

\[\begin{split} \partial h(\bx) &= \bA^T \partial f (\bA \bx + \bb) \\ &= \left \{\sum_{i \in I_x} \lambda_i \sgn(\ba_i^T \bx + b_i) \ba_i \ST \sum_{i \in I_x} \lambda_i = 1, \lambda_j \geq 0, j \in I_x \right \}. \end{split}\]

Combining the two cases, we get:
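Case (b) can be exercised numerically on a random instance (our own sketch): for \(\by = \bA \bx + \bb \neq \bzero\) and an index \(i\) with \(|y_i| = \| \by \|_{\infty}\), the vector \(\sgn(y_i) \ba_i\) is one subgradient of \(h\).

```python
import numpy as np

# Case (b) sketch: for y = A x + b != 0, pick an index i with
# |y_i| = ||y||_inf; then sgn(y_i) * a_i (a_i = i-th row of A) is one
# subgradient of h(x) = ||A x + b||_inf.
rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
h = lambda v: np.abs(A @ v + b).max()

x = rng.standard_normal(3)
y0 = A @ x + b
i = int(np.argmax(np.abs(y0)))
g = np.sign(y0[i]) * A[i]

for _ in range(1000):
    y = rng.uniform(-3, 3, size=3)
    assert h(y) >= h(x) + g @ (y - x) - 1e-9
```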

## 9.16.12. Indicator Functions#

(Subdifferential of indicator function)

The subdifferential of indicator function for a nonempty set \(S \subset \VV\) at any point \(\bx \in S\) is given by

where \(N_S (\bx)\) is the normal cone of \(S\) at \(\bx\).

Proof. Let \(\bx \in S\) and \(\bg \in \partial I_S(\bx)\). The subgradient inequality (9.7) gives us:

(Subdifferential of the indicator function of the unit ball)

The unit ball at origin is given by:

From Theorem 9.64, the normal cone of \(S\) at \(\bx \in S\) is given by:

For any \(\bx \notin S\), \(N_S(\bx) = \EmptySet\). Combining:
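The normal-cone description can be checked on a concrete boundary point (our own example): for the \(\ell_2\) unit ball, the normal cone at \(\bx\) with \(\| \bx \|_2 = 1\) consists of the nonnegative multiples of \(\bx\).

```python
import numpy as np

# Normal-cone sketch for the l2 unit ball S: at a boundary point x
# (||x||_2 = 1), the normal cone is {lambda * x : lambda >= 0}.
# We check the defining inequality <g, y - x> <= 0 for y in S.
rng = np.random.default_rng(7)
x = np.array([0.6, 0.8])         # boundary point: ||x||_2 = 1
g = 2.5 * x                      # g = lambda * x with lambda = 2.5 >= 0

for _ in range(1000):
    y = rng.uniform(-1, 1, size=2)
    if np.linalg.norm(y) <= 1.0:  # keep only points inside the ball
        assert g @ (y - x) <= 1e-12
```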

## 9.16.13. Maximum Eigen Value Function#

The maximum eigen value function for symmetric matrices, denoted as \(f : \SS^n \to \RR\), is given by:

(Subgradient for maximum eigen value function)

Let \(f : \SS^n \to \RR\) be the maximum eigen value function. Then

\[ \bv \bv^T \in \partial f(\bX), \]

where \(\bv\) is a normalized eigen vector of \(\bX \in \SS^n\) associated with its maximum eigen value.

Proof. Let \(\bX \in \SS^n\) be given. Let \(\bv\) be a normalized eigen vector associated with the largest eigen value of \(\bX\). Then, \(\| \bv \|_2 = 1\).

For any \(\bY \in \SS^n\), we have:

In this derivation, we have used the following results:

The maximum eigen value can be obtained by maximizing \(\bu^T \bY \bu\) over the unit sphere.

For a scalar \(x \in \RR\), \(x = \Trace(x)\).

\(\Trace (AB) = \Trace (BA)\) if both \(AB\) and \(BA\) are well defined.

\(\langle A, B \rangle = \Trace(AB)\) for the space of symmetric matrices.

Thus, \(\bv \bv^T \in \partial f(\bX)\).

We note here that this result only identifies one of the
subgradients of \(f\) at \(\bX\). It doesn’t characterize
the entire subdifferential of \(f\) at \(\bX\). In this sense,
this result is a *weak result*. In contrast, a *strong result*
would characterize the entire subdifferential.
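The weak result can be checked numerically (our own sketch): with \(\bv\) a unit eigenvector of \(\bX\) for its largest eigenvalue, \(\bG = \bv \bv^T\) satisfies the subgradient inequality under the trace inner product.

```python
import numpy as np

# Weak-result check: with v a unit eigenvector of X for its largest
# eigenvalue, G = v v^T satisfies
#   lam_max(Y) >= lam_max(X) + <Y - X, G>   (trace inner product)
rng = np.random.default_rng(9)

def sym(n):
    M = rng.standard_normal((n, n))
    return (M + M.T) / 2.0

lam_max = lambda M: np.linalg.eigvalsh(M)[-1]   # eigenvalues in ascending order

X = sym(5)
_, V = np.linalg.eigh(X)
v = V[:, -1]                     # unit eigenvector for the top eigenvalue
G = np.outer(v, v)

for _ in range(200):
    Y = sym(5)
    assert lam_max(Y) >= lam_max(X) + np.trace((Y - X) @ G) - 1e-9
```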

## 9.16.14. The Max Function#

(Subdifferential of the max function)

Let \(f : \RR^n \to \RR\) be given by:

Let \(f_i(\bx) = x_i = \be_i^T \bx\). Then

We note that \(f_i\) are differentiable and their gradient is given by (see Example 5.4):

Also, \(f_i\) are linear, hence convex. Thus, due to Theorem 9.220:

We denote the index set of functions attaining the value \(f(\bx)\) at \(\bx\) by:

Then, using the max rule for proper convex functions (Theorem 9.230):

As an example, consider the case where \(\bx = \alpha \bone\) for some \(\alpha \in \RR\).

In other words, \(\bx = (\alpha, \dots, \alpha)\).

Then, \(f(\bx) = \alpha\).

\(f_i(\bx) = \alpha = f(\bx)\) for every \(i \in [1,\dots, n]\).

\(I(\bx) = \{1, \dots, n \}\).

\(\nabla f_i(\bx) = \be_i\).

\(\convex ( \bigcup_{i \in I(\bx)} \{ \be_i\} ) = \convex \{ \be_1, \dots, \be_n \}\).

But \(\convex \{ \be_1, \dots, \be_n \} = \Delta_n\).

Thus,

\[ \partial f (\alpha \bone) = \Delta_n \Forall \alpha \in \RR. \]
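The conclusion is easy to verify numerically (our own sketch): at \(\bx = \alpha \bone\), any point of the unit simplex \(\Delta_n\) satisfies the subgradient inequality for \(f(\bx) = \max_i x_i\).

```python
import numpy as np

# At x = alpha * 1, any point of the unit simplex Delta_n is a subgradient
# of f(x) = max_i x_i (numerical check of the result above).
rng = np.random.default_rng(8)
f = lambda v: v.max()

alpha, n = 1.7, 4
x = alpha * np.ones(n)
lam = rng.random(n)
lam /= lam.sum()                 # a random point of Delta_n

for _ in range(1000):
    y = rng.uniform(-5, 5, size=n)
    assert f(y) >= f(x) + lam @ (y - x) - 1e-12
```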

## 9.16.15. Space of Matrices#

Let \(\VV = \RR^{m \times n}\). Let the standard inner product for \(x, y \in \VV\) be \(\langle x, y \rangle = \Trace(x^T y)\).

Let \(f : \VV \to \RERL\) be a proper function. Let \(x \in \interior \dom f\).

The gradient at \(x\), if it exists, is given by:

Let \(H\) be a positive definite matrix and define an inner product for \(\VV\) as:

Then

## 9.16.16. Convex Piecewise Linear Functions#

(Subdifferential of convex piecewise linear functions)

Let a convex piecewise linear function \(f : \RR^n \to \RR\) be given by:

where \(\ba_i \in \RR^n, b_i \in \RR\) for \(i=1,\dots,m\).

We define a set of functions \(f_i : \RR^n \to \RR\) for \(i=1,\dots,m\) as

We can see that \(f\) is a pointwise maximum of these functions.

Clearly,

We define:

Then, using the max rule for proper convex functions (Theorem 9.230):

By Fermat’s optimality condition (Theorem 9.233), \(\bx^*\) is a minimizer of \(f\) if and only if \(\bzero \in \partial f(\bx^*)\).

Thus, \(\bx^*\) is a minimizer if and only if there exists \(\blambda \in \Delta_m\) such that

Note that at any \(\bx\), for every \(j \notin I(\bx)\), we have

Thus, the complementarity condition

encodes the fact that whenever \(\ba_j^T \bx + b_j - f(\bx) < 0\), then \(\lambda_j\) must be zero, and whenever \(\ba_j^T \bx + b_j - f(\bx) = 0\), then \(\lambda_j \geq 0\) is allowed (since \(\blambda \in \Delta_m\)).

If we put together a matrix \(\bA \in \RR^{m \times n}\) whose rows are \(\ba_1^T, \dots, \ba_m^T\), then the optimality condition can be succinctly stated as
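The optimality condition can be exercised on a tiny assumed instance of our own: \(f(x) = \max(x - 1, -x + 1, 0.5 x)\) in one dimension, where pieces 2 and 3 are active at \(x^* = 2/3\) and a \(\blambda \in \Delta_3\) supported on the active set certifies optimality.

```python
import numpy as np

# Optimality-certificate sketch on an assumed tiny instance:
# f(x) = max(x - 1, -x + 1, 0.5 x) in 1D, i.e. a_i in {1, -1, 0.5},
# b_i in {-1, 1, 0}.
a = np.array([1.0, -1.0, 0.5])
b = np.array([-1.0, 1.0, 0.0])
f = lambda t: (a * t + b).max()

x_star = 2.0 / 3.0
assert np.isclose(-x_star + 1.0, 0.5 * x_star)   # pieces 2, 3 tie (active set)

lam = np.array([0.0, 1.0 / 3.0, 2.0 / 3.0])      # lambda_1 = 0 (inactive piece)
assert np.isclose(lam.sum(), 1.0)                # lam in Delta_3
assert np.isclose(a @ lam, 0.0)                  # A^T lam = 0

# cross-check: x_star minimizes f on a fine grid
grid = np.linspace(-3.0, 3.0, 2001)
assert f(x_star) <= min(f(t) for t in grid) + 1e-12
```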

## 9.16.17. Minimization Problems#

Minimization problem:

Dual function:

Assume that for \(\lambda = \lambda_0\) the minimum on the R.H.S. is attained at \(x = x_0\).

Subgradient of the (negative of the) dual function \(-q\):
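Since the surrounding formulas are not shown, the sketch below works a fully assumed instance of our own: minimize \(x^2\) subject to \(x \geq 1\). For a given \(\lambda\), the Lagrangian \(L(x, \lambda) = x^2 + \lambda (1 - x)\) is minimized at \(x_0 = \lambda / 2\), and the negated constraint value at \(x_0\) gives a subgradient of \(-q\).

```python
# Assumed instance: minimize x^2 subject to x >= 1.
# Lagrangian: L(x, lam) = x^2 + lam * (1 - x); its minimizer over x is
# x0 = lam / 2, so q(lam) = lam - lam^2 / 4 and -q is convex.
# The negated constraint value at x0, -(1 - x0), is a subgradient of -q.
def neg_q(lam):
    x0 = lam / 2.0                   # argmin of the Lagrangian for this lam
    return -(x0 ** 2 + lam * (1.0 - x0))

lam0 = 1.0
x0 = lam0 / 2.0
g = -(1.0 - x0)                      # subgradient of -q at lam0

for k in range(0, 41):
    lam = k / 10.0                   # lam >= 0
    assert neg_q(lam) >= neg_q(lam0) + g * (lam - lam0) - 1e-12
```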