7.14. Recession Cones#

The main reference for this section is [5].

Throughout this section, we assume that \(\VV\) is a finite dimensional real vector space. Wherever necessary, it is equipped with a norm \(\| \cdot \| : \VV \to \RR\) or a real inner product \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \RR\). It is also equipped with a metric \(d(\bx, \by) = \| \bx - \by \|\). The norm in general is not necessarily induced by the inner product (if the vector space is indeed endowed with an inner product).

Definition 7.65 (Direction of recession)

Given a nonempty convex set \(C\), we say that a vector \(\by\) is in the direction of recession of \(C\) if \(\bx + \alpha \by \in C\) for all \(\bx \in C\) and \(\alpha \geq 0\).

In words, starting from any point \(\bx \in C\) and going indefinitely along the direction \(\by\), we always stay inside \(C\). We never cross the relative boundary of \(C\) to a point outside \(C\).
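As a simple illustration (a standard example, not specific to [5]), consider a closed half space and work out which directions satisfy the definition.

```latex
% Recession directions of a closed half space. Let a \neq 0 and b \in \RR.
\[
H = \{ \bx \in \RR^n \ST \langle \ba, \bx \rangle \leq b \}, \quad \ba \neq \bzero.
\]
% For any x \in H, y \in \RR^n and \alpha \geq 0,
\[
\langle \ba, \bx + \alpha \by \rangle
= \langle \ba, \bx \rangle + \alpha \langle \ba, \by \rangle.
\]
% This stays \leq b for every x \in H and every \alpha \geq 0
% if and only if \langle a, y \rangle \leq 0. Hence the directions of
% recession of H are exactly the vectors in the half space
\[
\{ \by \in \RR^n \ST \langle \ba, \by \rangle \leq 0 \}.
\]
```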

7.14.1. Recession Cone#

Theorem 7.176 (Recession cone)

Let \(C\) be a nonempty convex set. The set of all directions of recession for \(C\) is a convex cone.

Proof. Let \(R\) be the set of recession directions of \(C\).

  1. Note that \(\bx + \alpha \bzero = \bx \in C\) for every \(\bx \in C\) and every \(\alpha \geq 0\).

  2. Hence \(\bzero \in R\).

  3. Now assume \(\bu \in R\).

  4. Let \(\bx \in C\).

  5. Let \(t > 0\).

  6. Let \(\bv = t \bu\).

  7. Let \(\alpha \geq 0\).

  8. Then \(\bx + \alpha \bv = \bx + (\alpha t) \bu \in C\) since \(\alpha t \geq 0\) and \(\bu\) is a recession direction.

  9. Hence \(\bv\) is also a recession direction.

  10. Thus \(R\) is a cone.

We now show that \(R\) is in fact a convex cone.

  1. Let \(t_1, t_2 \geq 0\) such that \(t_1 + t_2 = 1\) and let \(\bu, \bv \in R\).

  2. Let \(\bw = t_1 \bu + t_2 \bv\) be a convex combination of \(\bu\) and \(\bv\).

  3. Let \(\bx \in C\) and \(\alpha \geq 0\).

  4. Then

    \[\begin{split} \bx + \alpha \bw &= (t_1 + t_2) \bx + \alpha (t_1 \bu + t_2 \bv) \\ &= t_1(\bx + \alpha \bu) + t_2(\bx + \alpha \bv). \end{split}\]
  5. Since \(\bu, \bv \in R\), hence \(\by = \bx + \alpha \bu, \bz = \bx + \alpha \bv \in C\).

  6. But then \(t_1 \by + t_2 \bz\) is a convex combination of \(\by, \bz\).

  7. Hence \(t_1 \by + t_2 \bz \in C\).

  8. Hence \(\bx + \alpha \bw \in C\) for every \(\alpha \geq 0\) and every \(\bx \in C\).

  9. Hence \(\bw \in R\).

  10. Hence \(R\) is convex.

  11. Since \(R\) is also a cone, hence \(R\) is a convex cone.

Definition 7.66 (Recession cone)

Let \(C\) be a nonempty convex set. The set of all directions of recession for \(C\) is called its recession cone and is denoted by \(R_C\).
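The recession cone can be much smaller than the set itself. A standard example is the epigraph of \(t \mapsto t^2\):

```latex
% Recession cone of the epigraph of t -> t^2 in \RR^2.
\[
C = \{ (x_1, x_2) \in \RR^2 \ST x_2 \geq x_1^2 \}.
\]
% Take a direction y = (y_1, y_2). Along the ray x + \alpha y,
% the defining constraint reads
\[
x_2 + \alpha y_2 \geq (x_1 + \alpha y_1)^2.
\]
% If y_1 \neq 0, the right hand side grows quadratically in \alpha
% while the left hand side grows only linearly, so the ray eventually
% leaves C. If y_1 = 0, the ray stays in C if and only if y_2 \geq 0.
% Hence
\[
R_C = \{ (0, y_2) \ST y_2 \geq 0 \},
\]
% a single ray, although C itself is unbounded.
```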

7.14.1.1. Linear Subspaces#

Theorem 7.177 (Recession cone of a linear subspace)

Let \(\VV\) be a real vector space. Let \(S\) be a linear subspace of \(\VV\). Then \(R_S = S\).

Proof. We first show that \(S \subseteq R_S\).

  1. Pick some \(\by \in S\).

  2. Pick any \(\bx \in S\).

  3. Then for all \(\alpha \geq 0\), \(\bx + \alpha \by \in S\) since \(S\) is a subspace and is closed under scalar multiplication and vector addition.

  4. Hence \(\by \in R_S\).

  5. Thus \(S \subseteq R_S\).

We now show that \(R_S \subseteq S\).

  1. Pick some \(\by \in R_S\).

  2. Pick some fixed \(\bx \in S\).

  3. Since \(\by\) is a recession direction, hence for every \(\alpha > 0\), \(\bx + \alpha \by \in S\).

  4. Fix any \(\alpha > 0\) and let \(\bz = \bx + \alpha \by \in S\).

  5. Then \(\by = \frac{1}{\alpha}(\bz - \bx)\).

  6. But that means \(\by \in S\).

  7. Hence \(R_S \subseteq S\).

7.14.1.2. Affine Subspaces#

Theorem 7.178 (Recession cone of an affine subspace)

Let \(\VV\) be a real vector space. Let \(A\) be an affine subspace of \(\VV\). Let \(S\) be the linear subspace parallel to \(A\). Then \(R_A = S\).

Proof. Let \(\ba \in A\) be some fixed point. Then \(S = A - \ba\). We first show that \(S \subseteq R_A\).

  1. Pick any \(\by \in S\).

  2. Then \(\by = \bu - \ba\) for some \(\bu \in A\).

  3. Pick some \(\bx \in A\) and let \(\alpha \geq 0\).

  4. Then

    \[ \bx + \alpha \by = \bx + \alpha (\bu - \ba) = 1 \cdot \bx + \alpha \cdot \bu + (- \alpha) \cdot \ba. \]
  5. Hence \(\bx + \alpha \by\) is an affine combination of \(\bx, \bu, \ba \in A\).

  6. Hence \(\bx + \alpha \by \in A\).

  7. Hence \(\by \in R_A\).

  8. Hence \(S \subseteq R_A\).

We now show that \(R_A \subseteq S\).

  1. Pick some \(\by \in R_A\).

  2. Pick some fixed \(\bx \in A\).

  3. Since \(\by\) is a recession direction, hence for every \(\alpha > 0\), \(\bx + \alpha \by \in A\).

  4. Fix any \(\alpha > 0\) and let \(\bz = \bx + \alpha \by - \ba\).

  5. Then \(\bz \in S\). And \(\bu = \bx - \ba \in S\).

  6. Hence \(\bz = \bu + \alpha \by\).

  7. Hence \(\by = \frac{1}{\alpha} (\bz - \bu)\).

  8. But that means \(\by \in S\).

  9. Hence \(R_A \subseteq S\).

7.14.1.3. Set Negation#

Property 7.38 (Recession cone under set negation)

Let \(C\) be a nonempty and convex set. Then

\[ R_{-C} = -R_{C}. \]

Proof. Let \(D = -C\). Then \(C = -D\).

  1. Let \(\by \in R_{D}\).

  2. Then for any \(\bx \in D\) and every \(\alpha \geq 0\),

    \[ \bx + \alpha \by \in D \]
  3. Hence for every \(\alpha \geq 0\),

    \[ (-\bx) + \alpha (-\by) \in C. \]
  4. Since \(\bx \in D\), hence \(-\bx \in C\).

  5. Hence \(-\by \in R_C\).

  6. The argument runs similarly for the converse.
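A one dimensional example makes the property concrete. Take \(C = [0, \infty) \subseteq \RR\):

```latex
% Recession cone under set negation, in \RR.
\[
C = [0, \infty), \qquad R_C = [0, \infty).
\]
% Negating the set gives
\[
-C = (-\infty, 0], \qquad R_{-C} = (-\infty, 0] = -R_C.
\]
```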

The following results describe the properties of recession cones for nonempty, closed and convex sets.

7.14.1.4. Closedness of Recession Cone#

Property 7.39 (Closedness of recession cone)

Let \(C\) be a nonempty, closed and convex set. Then the recession cone \(R_C\) is a closed and convex cone.

Proof. We established in Theorem 7.176 that \(R_C\) is a convex cone. We next prove the closedness.

  1. Let \(\{ \by_k \}\) be a convergent sequence of \(R_C\).

  2. Let \(\by = \lim_{k \to \infty} \by_k\).

  3. For any \(\bx \in C\) and \(\alpha \geq 0\), we have \(\bz_k = \bx + \alpha \by_k \in C\) since \(\by_k\) is a recession direction.

  4. Now let \(\bz = \lim \bz_k = \bx + \alpha \by\).

  5. Since \(C\) is closed, hence \(\bz \in C\).

  6. Thus, for any \(\bx \in C\) and \(\alpha \geq 0\), \(\bx + \alpha \by \in C\).

  7. Hence \(\by\) is a recession direction of \(C\) and \(\by \in R_C\).

  8. Thus every convergent sequence of \(R_C\) converges in \(R_C\).

  9. Hence \(R_C\) is closed.

7.14.1.5. Recession Directions for Closed and Convex Sets#

Theorem 7.179 (Characterization of recession directions)

Let \(C\) be a nonempty, closed and convex set. A vector \(\by\) belongs to \(R_C\) if and only if there exists a vector \(\bx \in C\) such that \(\bx + \alpha \by \in C\) for all \(\alpha \geq 0\).

Proof. Suppose \(\by \in R_C\). Then by definition, for every \(\bx \in C\) and \(\alpha \geq 0\), \(\bx + \alpha \by \in C\).

Now, for the converse, let \(\by \in \VV\) and assume that there exists a vector \(\bx \in C\) such that for every \(\alpha \geq 0\), \(\bx + \alpha \by \in C\). If \(\by = \bzero\), then there is nothing to prove as \(\bzero \in R_C\) always. Hence, assume that \(\by \neq \bzero\). Let \(\widehat{\bx} \in C\) be arbitrary. We shall first show that \(\widehat{\bx} + \by \in C\). We shall do this by building a converging sequence \(\{ \by_k \}\) that converges to \(\by\) and showing that for sufficiently large \(k\), \(\widehat{\bx} + \by_k \in C\), hence \(\{ \widehat{\bx} + \by_k \}\) is a converging sequence of \(C\) converging in \(C\) due to closedness of \(C\).

  1. By hypothesis, \(\bz_k = \bx + k \by \in C\), where \(k \in \Nat\).

  2. If \(\widehat{\bx} = \bz_k\) for some \(k\), then \(\widehat{\bx} + \by = \bx + (k+1) \by \in C\) and we are done.

  3. Let us then consider the case where \(\widehat{\bx} \neq \bz_k\) for every \(k\).

  4. Define

    \[ \by_k = \frac{\bz_k - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|} \| \by \|. \]
  5. Then

    \[ \widehat{\bx} + \by_k = \frac{\| \by \|}{\| \bz_k - \widehat{\bx} \|} \bz_k + \frac{\| \bz_k - \widehat{\bx} \| - \| \by \|}{\| \bz_k - \widehat{\bx} \|} \widehat{\bx}. \]
  6. We can see that \(\widehat{\bx} + \by_k\) lies on a line that starts at \(\widehat{\bx}\) and passes through \(\bz_k\) since \(\widehat{\bx} + \by_k\) is an affine combination of \(\widehat{\bx}\) and \(\bz_k\).

  7. Also, note that \(\widehat{\bx} + \by_k\) lies between \(\widehat{\bx}\) and \(\bz_k\) for all \(k\) for which \(\| \bz_k - \widehat{\bx} \| > \| \by \|\).

  8. Hence, there exists \(k_0 \in \Nat\) such that for all \(k > k_0\), \(\widehat{\bx} + \by_k \in C\) due to convexity of \(C\).

  9. We have

    \[\begin{split} \frac{\by_k}{\| \by \|} &= \frac{\bz_k - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|}\\ &= \frac{\bz_k - \bx}{\| \bz_k - \widehat{\bx} \|} + \frac{\bx - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|}\\ &= \frac{\| \bz_k - \bx \|}{\| \bz_k - \widehat{\bx} \|} \frac{\bz_k - \bx}{\| \bz_k - \bx \|} + \frac{\bx - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|}\\ &= \frac{\| \bz_k - \bx \|}{\| \bz_k - \widehat{\bx} \|} \frac{\by}{\| \by \|} + \frac{\bx - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|}\\ \end{split}\]
  10. Note that \(\{ \bz_k \}\) is an unbounded sequence since \(\by \neq \bzero\) implies \(\| \bz_k \| = \| \bx + k \by \| \to \infty\).

  11. Hence \(\| \bz_k - \widehat{\bx} \| \to \infty\) and

    \[ \frac{\| \bz_k - \bx \|}{\| \bz_k - \widehat{\bx} \|} \to 1, \quad \frac{\bx - \widehat{\bx}}{\| \bz_k - \widehat{\bx} \|} \to \bzero \]

    as \(k \to \infty\).

  12. Hence \(\by_k \to \by\) as \(k \to \infty\).

  13. After dropping the first \(k_0\) terms, the sequence \(\{\widehat{\bx} + \by_k \}\) is a sequence of \(C\).

  14. Since \(\widehat{\bx} + \by_k \to \widehat{\bx} + \by\) as \(k \to \infty\), hence \(\{\widehat{\bx} + \by_k \}\) is a converging sequence of \(C\) (after the first \(k_0\) terms).

  15. Since \(C\) is closed, hence \( \widehat{\bx} + \by \in C\).

We have established that for every \(\widehat{\bx} \in C\), \(\widehat{\bx} + \by \in C\). We now need to show that \(\widehat{\bx} + \beta \by \in C\) for every \(\beta \geq 0\).

  1. If \(\beta = 0\) then \(\widehat{\bx} \in C\). Hence assume that \(\beta > 0\).

  2. Consider the vector \(\beta \by\).

  3. For every \(\alpha \geq 0\),

    \[ \bx + \alpha (\beta \by) = \bx + (\alpha \beta) \by \in C. \]
  4. Hence due to previous argument, \(\widehat{\bx} + \beta \by \in C\).

  5. This proves that \(\by\) is indeed a recession direction of \(C\).
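The closedness assumption in Theorem 7.179 cannot be dropped. A standard counterexample:

```latex
% A convex set that is not closed, for which the "exists x"
% characterization fails. Consider
\[
C = \{ (x_1, x_2) \in \RR^2 \ST x_1 > 0, x_2 > 0 \} \cup \{ (0, 0) \}.
\]
% C is convex but not closed. Take x = (1, 1) and y = (1, 0).
% Then x + \alpha y = (1 + \alpha, 1) \in C for every \alpha \geq 0.
% However, starting from the origin,
\[
(0, 0) + \alpha (1, 0) = (\alpha, 0) \notin C \quad \text{for every } \alpha > 0.
\]
% Hence y = (1, 0) is not a direction of recession of C, although the
% ray condition holds at the particular point x = (1, 1).
```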

7.14.1.6. Closure of Set#

Theorem 7.180 (Recession cone and set closure)

Let \(C\) be a nonempty and convex set. Then

\[ R_C \subseteq R_{\closure C}. \]

Also

\[ \closure R_C \subseteq R_{\closure C}. \]

Proof. We first show that \(R_C \subseteq R_{\closure C}\).

  1. Let \(\by \in R_C\).

  2. Then for every \(\bx \in C\) and \(\alpha \geq 0\), we have \(\bx + \alpha \by \in C\).

  3. Since \(C \subseteq \closure C\), hence we have for every \(\bx \in C\) and \(\alpha \geq 0\), \(\bx + \alpha \by \in \closure C\).

  4. Since \(C\) is nonempty, there is at least one \(\bx \in \closure C\) such that for every \(\alpha \geq 0\), \(\bx + \alpha \by \in \closure C\).

  5. Since \(\closure C\) is a nonempty, closed and convex set, hence due to Theorem 7.179, \(\by\) is a recession direction of \(\closure C\).

  6. Hence \(\by \in R_{\closure C}\).

  7. Hence \(R_C \subseteq R_{\closure C}\).

We now show that \(\closure R_C \subseteq R_{\closure C}\).

  1. Taking closures on both sides of the previous inclusion, we get

    \[ \closure R_C \subseteq \closure R_{\closure C}. \]
  2. Since \(\closure C\) is nonempty, closed and convex, hence due to Property 7.39, \(R_{\closure C}\) is closed.

  3. Hence \(\closure R_{\closure C} = R_{\closure C}\).

  4. Hence \(\closure R_C \subseteq R_{\closure C}\).

7.14.1.7. Unboundedness#

Theorem 7.181 (Unboundedness = Nonzero recession directions)

Let \(C\) be a nonempty, closed and convex set. Its recession cone \(R_C\) contains a nonzero direction if and only if \(C\) is unbounded.

Proof. If \(R_C\) contains a nonzero direction \(\by\), then for any \(\bx \in C\), the unbounded ray \(\{ \bx + \alpha \by \ST \alpha \geq 0 \}\) lies in \(C\); hence \(C\) must be unbounded. We now show that if \(C\) is unbounded, then \(R_C\) does contain a nonzero recession direction.

  1. Assume that \(C\) is unbounded.

  2. Pick some \(\bx \in C\).

  3. Let \(\{ \bz_k \}\) be an unbounded sequence of \(C\).

  4. Consider the sequence \(\{ \by_k \}\) where

    \[ \by_k = \frac{\bz_k - \bx}{\| \bz_k - \bx \|}. \]
  5. Note that \(\{ \by_k \}\) is a bounded sequence since \(\| \by_k \| = 1\).

  6. Let \(\by\) be a limit point of \(\{ \by_k \}\).

  7. Note that \(\| \by \| = 1\) since \(\| \by_k \| = 1\) for every \(k\); hence \(\by\) is a nonzero vector.

  8. Let \(\alpha \geq 0\) and consider the vector \(\bx + \alpha \by_k\).

    \[\begin{split} \bx + \alpha \by_k &= \bx + \alpha \frac{\bz_k - \bx}{\| \bz_k - \bx \|} \\ &= \frac{\alpha}{\| \bz_k - \bx \|}\bz_k + \frac{\| \bz_k - \bx \| - \alpha}{\| \bz_k - \bx \|} \bx. \end{split}\]
  9. \(\bx + \alpha \by_k\) is an affine combination of \(\bx\) and \(\bz_k\). Hence it lies on the line starting from \(\bx\) passing through \(\bz_k\).

  10. Also since \(\{ \bz_k \}\) is unbounded, hence there exists \(k_0 \in \Nat\) such that for all \(k > k_0\), the inequality \(\| \bz_k - \bx \| > \alpha\) holds true.

  11. Hence for all \(k > k_0\), the point \(\bx + \alpha \by_k\) lies on the line segment between \(\bx\) and \(\bz_k\).

  12. Due to convexity of \(C\), for all \(k > k_0\), we have \(\bx + \alpha \by_k \in C\).

  13. Since \(\bx + \alpha \by\) is a limit point of \(\{ \bx + \alpha \by_k \}\) and \(C\) is closed, hence \(\bx + \alpha \by \in C\).

  14. Thus for a given \(\bx \in C\) there exists a nonzero \(\by\) such that for all \(\alpha \geq 0\), we have \(\bx + \alpha \by \in C\).

  15. By Theorem 7.179, \(\by\) must be a (nonzero) recession direction of \(C\).

Corollary 7.13 (Boundedness and recession cone)

A nonempty, closed and convex set \(C\) is bounded if and only if \(R_C = \{ \bzero \}\).

Recall that in a finite dimensional ambient vector space, closed and bounded sets are compact. Hence a nonempty, compact and convex set has a zero recession cone.
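For a polyhedron \(C = \{ \bx \ST A \bx \leq \bb \}\), which is closed and convex, it is a standard fact that \(R_C = \{ \by \ST A \by \leq \bzero \}\). The following Python sketch illustrates this and Corollary 7.13 numerically; the matrix `A`, vector `b`, and the helper names are hypothetical choices for this illustration, not part of the text.

```python
import numpy as np

# A hypothetical unbounded polyhedron C = {x in R^2 : A x <= b},
# encoding x1 >= 0, x2 >= 0, x2 - x1 <= 1.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [-1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])

def in_C(x, tol=1e-9):
    """Membership test for the polyhedron C."""
    return bool(np.all(A @ x <= b + tol))

def is_recession_direction(y, tol=1e-9):
    """For a polyhedron, y is a recession direction iff A y <= 0."""
    return bool(np.all(A @ y <= tol))

x = np.array([1.0, 1.0])        # a point of C
y_good = np.array([1.0, 1.0])   # A y <= 0: a recession direction
y_bad = np.array([-1.0, 0.0])   # eventually violates x1 >= 0

assert in_C(x)
assert is_recession_direction(y_good)
assert not is_recession_direction(y_bad)
# Rays from x along y_good stay in C; along y_bad they leave C.
assert all(in_C(x + a * y_good) for a in [0.0, 1.0, 10.0, 1000.0])
assert not in_C(x + 10.0 * y_bad)
```

Since `is_recession_direction` accepts a nonzero direction here, the corollary predicts (correctly) that this \(C\) is unbounded.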

7.14.1.8. Relative Interior#

Theorem 7.182 (Recession cone of relative interior)

Let \(C\) be a nonempty, closed and convex set. The recession cones of \(C\) and \(\relint C\) are equal.

Proof. We need to show that \(R_C = R_{\relint C}\). We first show that \(R_{\relint C} \subseteq R_C\).

  1. Recall that since \(C\) is nonempty and convex, hence \(\relint C\) is nonempty and convex (Theorem 7.142).

  2. Let \(\by \in R_{\relint C}\).

  3. Pick some \(\bx \in \relint C \subseteq C\).

  4. Then for all \(\alpha \geq 0\), we have \(\bx + \alpha \by \in \relint C \subseteq C\).

  5. By Theorem 7.179, \(\by\) is also a recession direction for \(C\).

  6. Since this applies for every \(\by \in R_{\relint C}\), hence \(R_{\relint C} \subseteq R_C\).

For the converse, we proceed as follows.

  1. Let \(\by \in R_C\).

  2. Pick some \(\bx \in \relint C\).

  3. Then for any \(\alpha \geq 0\), we have \(\bx + \alpha \by \in C\).

  4. Hence by line segment principle (Theorem 7.143), \(\bx + \alpha \by \in \relint C\) for all \(\alpha \geq 0\).

    1. We have \(\bx \in \relint C\).

    2. Pick any \(\beta > \alpha\).

    3. We also have \(\bx + \beta \by \in C = \closure C\).

    4. Then \(\bx + \alpha \by\) lies on the line segment between \(\bx\) and \(\bx + \beta \by\).

    5. Hence \(\bx + \alpha \by \in \relint C\).

  5. We have established that for any fixed \(\bx \in \relint C\) and all \(\alpha \geq 0\), \(\bx + \alpha \by \in \relint C\).

  6. Hence \(\by \in R_{\relint C}\).

  7. Thus \(R_C \subseteq R_{\relint C}\).

Together we have \(R_C = R_{\relint C}\).

7.14.1.9. Intersection#

Theorem 7.183 (Intersection and recession cones)

Let \(C\) and \(D\) be nonempty, closed and convex sets such that \(C \cap D \neq \EmptySet\). Then

\[ R_{C \cap D} = R_C \cap R_D. \]

More generally, for any collection of closed convex sets \(\{ C_i \}_{i \in I}\), where \(I\) is an arbitrary index set and \(\bigcap_{i \in I} C_i \neq \EmptySet\), we have

\[ R_{\bigcap_{i \in I} C_i} = \bigcap_{i \in I}R_{C_i}. \]

Proof. We first show it for two nonempty, closed and convex sets \(C\) and \(D\) with nonempty intersection.

  1. We can see that \(C \cap D\) is also a nonempty, closed and convex set.

  2. Let \(\by \in R_{C \cap D}\).

  3. Then for every \(\bx \in C \cap D\) and all \(\alpha \geq 0\), we have \(\bx + \alpha \by \in C \cap D\).

  4. Hence, by Theorem 7.179, \(\by \in R_C\) and \(\by \in R_D\).

  5. Hence \(\by \in R_C \cap R_D\).

  6. Hence \(R_{C \cap D} \subseteq R_C \cap R_D\).

  7. Conversely, let \(\by \in R_C \cap R_D\).

  8. Pick any \(\bx \in C \cap D\).

  9. By definition, \(\bx + \alpha \by \in C \cap D\) for all \(\alpha \geq 0\).

  10. Hence \(\by \in R_{C \cap D}\).

  11. Hence \(R_C \cap R_D \subseteq R_{C \cap D}\).

  12. Together we have \(R_{C \cap D} = R_C \cap R_D\).

It is straightforward to generalize this argument for an arbitrary family of nonempty closed and convex sets with a nonempty intersection.

Corollary 7.14 (Recession cones and containment)

Let \(C\) and \(D\) be nonempty, closed and convex sets such that \(C \subseteq D\). Then \(R_C \subseteq R_D\).

Proof. We note that \(C = C \cap D\). Due to Theorem 7.183,

\[ R_C = R_{C \cap D} = R_C \cap R_D. \]

Hence \(R_C \subseteq R_D\).

7.14.1.10. Linear Transformations#

Theorem 7.184 (Recession cone of inverse image under linear transformation)

Let \(\VV\) and \(\WW\) be real, finite dimensional, normed linear spaces. Let \(\bAAA : \VV \to \WW\) be a linear operator. Let \(C \subseteq \VV\) be a nonempty, closed and convex subset of \(\VV\). Let \(D \subseteq \WW\) be a nonempty, compact and convex subset of \(\WW\). Consider the set

\[ E = \{ \bx \in C \ST \bAAA(\bx) \in D \}. \]

Assume that \(E\) is nonempty. Then, \(E\) is closed and convex and its recession cone is given by

\[ R_E = R_C \cap (\nullspace \bAAA) \]

where \(\nullspace \bAAA\) denotes the null space of the linear operator \(\bAAA\). Furthermore, \(E\) is compact if and only if \(R_E = \{ \bzero \}\).

Proof. Consider the set \(F = \{ \bx \in \VV \ST \bAAA(\bx) \in D \}\).

  1. \(\bAAA\) is a continuous (linear) transformation since both \(\VV\) and \(\WW\) are finite dimensional.

  2. \(F\) is the inverse image of a closed and convex set \(D\) under \(\bAAA\).

  3. Hence \(F\) is closed and convex.

  4. We further note that \(E = C \cap F\).

  5. Since \(E\) is nonempty by hypothesis, hence \(F\) is also nonempty.

  6. We can easily show that the recession cone of \(F\) is \(R_F = \nullspace \bAAA\).

    1. Let \(\by \in \nullspace \bAAA\).

    2. Pick any fixed \(\bx \in F\) and \(\alpha \geq 0\).

    3. Then \(\bAAA(\bx + \alpha \by) = \bAAA(\bx) \in D\).

    4. Hence \(\bx + \alpha \by \in F\).

    5. Hence \(\by \in R_F\).

    6. Hence \(\nullspace \bAAA \subseteq R_F\).

    7. Conversely, for contradiction assume that there exists \(\by \in R_F\) such that \(\by \notin \nullspace \bAAA\).

    8. Then \(\bz = \bAAA(\by) \neq \bzero\).

    9. We require that for every \(\bx \in F\) and for every \(\alpha > 0\), we must have

      \[ \bAAA(\bx + \alpha \by) = \bAAA(\bx) + \alpha \bz \in D. \]
    10. However, this can only happen if \(D\) is unbounded.

    11. Since \(D\) is compact, it is bounded, hence we arrive at a contradiction.

    12. This establishes that \(R_F = \nullspace \bAAA\).

  7. Due to Theorem 7.183, since \(E = C \cap F\) and both \(C\) and \(F\) are nonempty, closed and convex, hence

    \[ R_E = R_C \cap R_F = R_C \cap (\nullspace \bAAA). \]

Finally, due to Corollary 7.13, \(E\) is compact if and only if \(R_C \cap (\nullspace \bAAA) = \{ \bzero \}\).
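A small example in \(\RR^2\) (illustrative; the particular choices of \(C\), \(D\) and \(\bAAA\) below are ours) shows the recession cone formula at work.

```latex
% Let V = \RR^2, W = \RR, and define the linear map
\[
\bAAA(x_1, x_2) = x_1.
\]
% Take C to be the closed upper half plane and D a compact interval:
\[
C = \{ (x_1, x_2) \ST x_2 \geq 0 \}, \qquad D = [-1, 1].
\]
% Then
\[
E = \{ \bx \in C \ST \bAAA(\bx) \in D \}
  = \{ (x_1, x_2) \ST -1 \leq x_1 \leq 1, \; x_2 \geq 0 \}.
\]
% Here R_C = C (C is itself a closed convex cone) and
% \nullspace \bAAA = \{ (0, x_2) \ST x_2 \in \RR \}. Hence
\[
R_E = R_C \cap (\nullspace \bAAA) = \{ (0, x_2) \ST x_2 \geq 0 \},
\]
% which is nonzero, and indeed E is an unbounded vertical strip,
% hence not compact.
```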

7.14.2. Lineality Space#

Definition 7.67 (Lineality space)

Let \(C\) be a nonempty and convex set. The lineality space of \(C\), denoted by \(L_C\), is the set of directions of recession \(\by\) whose opposite, \(-\by\), is also a direction of recession.

In other words,

\[ L_C = R_C \cap (- R_C). \]

As a consequence of this definition, if \(\by \in L_C\), then for every \(\bx \in C\), we have \(\bx + \alpha \by \in C\) for every \(\alpha \in \RR\). In other words, the whole line through \(\bx\) in the direction of \(\by\) belongs to \(C\).
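For example, for a slab between two parallel hyperplanes, the recession cone and the lineality space coincide:

```latex
% Lineality space of a slab. For a \neq 0, let
\[
C = \{ \bx \ST -1 \leq \langle \ba, \bx \rangle \leq 1 \}.
\]
% A ray x + \alpha y stays in C for all \alpha \geq 0 only if
% \langle a, y \rangle = 0; otherwise
% | \langle a, x \rangle + \alpha \langle a, y \rangle | grows
% without bound. The condition is symmetric in y and -y, hence
\[
R_C = \{ \by \ST \langle \ba, \by \rangle = 0 \} = L_C,
\]
% i.e., both the recession cone and the lineality space equal the
% hyperplane orthogonal to a.
```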

7.14.2.1. As a Subspace#

For nonempty, closed and convex sets, their lineality space is a subspace of the ambient vector space.

Theorem 7.185 (Lineality space as a subspace)

Let \(\VV\) be an \(n\)-dim real normed linear space. Let \(C \subseteq \VV\) be a nonempty, closed and convex subset of \(\VV\). Then the lineality space of \(C\) is a subspace of \(\VV\).

Proof. Let \(\by_1, \by_2 \in L_C\) and \(\alpha_1, \alpha_2 \in \RR\) be nonzero scalars. (If either scalar is zero, the claim reduces to showing \(\alpha \by \in L_C\) for \(\by \in L_C\), which holds since \(L_C\) is a cone that is symmetric about the origin.)

  1. Then

    \[ \alpha_1 \by_1 + \alpha_2 \by_2 = |\alpha_1 | (\sgn(\alpha_1)) \by_1 + |\alpha_2 | (\sgn(\alpha_2)) \by_2. \]
  2. Let \(c = |\alpha_1 | + |\alpha_2 |\) and define \(t = \frac{ |\alpha_1 |}{|\alpha_1 | + |\alpha_2 |} = \frac{|\alpha_1 |}{c}\).

  3. We note that \(\frac{|\alpha_2 |}{c} = 1 - t\).

  4. Also define \(\bz_1 = (\sgn(\alpha_1)) \by_1\) and \(\bz_2 = (\sgn(\alpha_2)) \by_2\).

  5. Then

    \[\begin{split} \alpha_1 \by_1 + \alpha_2 \by_2 &= |\alpha_1 | (\sgn(\alpha_1)) \by_1 + |\alpha_2 | (\sgn(\alpha_2)) \by_2 \\ &= |\alpha_1 | \bz_1 + |\alpha_2 | \bz_2 \\ &= (|\alpha_1 | + |\alpha_2 |) (t \bz_1 + (1-t) \bz_2) \\ &= c ( t \bz_1 + (1- t) \bz_2). \end{split}\]
  6. Since \(R_C\) is a convex cone, so is \(-R_C\); hence \(L_C = R_C \cap (-R_C)\), being an intersection of convex cones, is also a convex cone.

  7. We can see that \(\bz_1\) and \(\bz_2\) are in \(L_C\) because \(\bz_i\) is either equal to \(\by_i\) or to \(-\by_i\).

  8. By convexity of \(L_C\), \(\bz = t \bz_1 + (1- t) \bz_2 \in L_C\).

  9. Since \(L_C\) is a cone, hence \(c \bz \in L_C\).

  10. Hence \(\alpha_1 \by_1 + \alpha_2 \by_2 \in L_C\).

  11. Hence \(L_C\) is closed under linear combinations.

  12. Hence \(L_C\) is indeed a linear subspace of \(\VV\).

7.14.2.2. Relative Interior#

Theorem 7.186 (Lineality space of relative interior)

Let \(C\) be a nonempty, closed and convex set. The lineality spaces of \(C\) and \(\relint C\) are equal.

Proof. We have

\[ L_{\relint C} = R_{\relint C} \cap (- R_{\relint C}) = R_C \cap (- R_C) = L_C. \]

We used Theorem 7.182.

7.14.2.3. Intersection#

Theorem 7.187 (Intersection and lineality spaces)

Let \(C\) and \(D\) be nonempty, closed and convex sets such that \(C \cap D \neq \EmptySet\). Then

\[ L_{C \cap D} = L_C \cap L_D. \]

More generally, for any collection of closed convex sets \(\{ C_i \}_{i \in I}\), where \(I\) is an arbitrary index set and \(\bigcap_{i \in I} C_i \neq \EmptySet\), we have

\[ L_{\bigcap_{i \in I} C_i} = \bigcap_{i \in I}L_{C_i}. \]

Proof. We have

\[\begin{split} L_{\bigcap_{i \in I} C_i} &= R_{\bigcap_{i \in I} C_i} \bigcap (- R_{\bigcap_{i \in I} C_i}) \\ &= (\bigcap_{i \in I}R_{C_i}) \bigcap (- \bigcap_{i \in I}R_{C_i}) \\ &= \bigcap_{i \in I} (R_{C_i} \cap (-R_{C_i})) \\ &= \bigcap_{i \in I} L_{C_i}. \end{split}\]

We made use of Theorem 7.183.

Corollary 7.15 (Lineality spaces and containment)

Let \(C\) and \(D\) be nonempty, closed and convex sets such that \(C \subseteq D\). Then \(L_C \subseteq L_D\).

Proof. We note that \(C = C \cap D\). Due to Theorem 7.187,

\[ L_C = L_{C \cap D} = L_C \cap L_D. \]

Hence \(L_C \subseteq L_D\).

7.14.2.4. Linear Transformations#

Theorem 7.188

Let \(\VV\) and \(\WW\) be real, finite dimensional, normed linear spaces. Let \(\bAAA : \VV \to \WW\) be a linear operator. Let \(C \subseteq \VV\) be a nonempty, closed and convex subset of \(\VV\). Let \(D \subseteq \WW\) be a nonempty, compact and convex subset of \(\WW\). Consider the set

\[ E = \{ \bx \in C \ST \bAAA(\bx) \in D \}. \]

Assume that \(E\) is nonempty. Then, \(E\) is closed and convex and its lineality space is given by

\[ L_E = L_C \cap (\nullspace \bAAA) \]

where \(\nullspace \bAAA\) denotes the null space of the linear operator \(\bAAA\). Furthermore, \(E\) is compact if and only if \(R_E = \{ \bzero \}\).

Proof. We have

\[\begin{split} L_E &= R_E \cap (-R_E) \\ &= (R_C \cap (\nullspace \bAAA)) \cap (- (R_C \cap (\nullspace \bAAA))) \\ &= (R_C \cap (\nullspace \bAAA)) \cap ((- R_C) \cap (\nullspace \bAAA))) \\ &= (R_C \cap (- R_C)) \cap (\nullspace \bAAA)\\ &= L_C \cap (\nullspace \bAAA). \end{split}\]

We used Theorem 7.184.

7.14.2.5. Decomposition along Lineality Spaces#

Theorem 7.189 (Decomposition of a convex set along subspaces of its lineality space)

Let \(\VV\) be an \(n\)-dim real vector space. Let \(C\) be a nonempty convex subset of \(\VV\). Let \(L_C\) be its lineality space. Then for every linear subspace \(S \subseteq L_C\), we have

\[ C = S + (C \cap S^{\perp}). \]

Note that since \(C\) is not required to be closed, \(L_C\) itself may not be a linear subspace.

Proof. Since \(\VV\) is finite dimensional, hence it supports an orthogonal decomposition (Theorem 4.86)

\[ \VV = S + S^{\perp}. \]

We first show that \(C \subseteq S + C \cap S^{\perp}\).

  1. Let \(\bx \in C\).

  2. Then \(\bx = \by + \bz\) where \(\by \in S\) and \(\bz \in S^{\perp}\).

  3. Since \(\by \in S \subseteq L_C\), hence \(\by\) is a recession direction of \(C\) and so is \(-\by\).

  4. Hence \(\bz = \bx - \by \in C\).

  5. Hence \(\bz \in C \cap S^{\perp}\).

  6. Thus for every \(\bx \in C\), we have \(\bx = \by + \bz\) such that \(\by \in S\) and \(\bz \in C \cap S^{\perp}\).

  7. Hence \(C \subseteq S + C \cap S^{\perp}\).

We now show the converse \(S + C \cap S^{\perp} \subseteq C\).

  1. Let \(\bx \in S + C \cap S^{\perp}\).

  2. Then \(\bx = \by + \bz\) such that \(\by \in S\) and \(\bz \in C \cap S^{\perp}\).

  3. Thus \(\bz \in C\).

  4. Since \(S \subseteq L_C\), hence \(\by\) is a recession direction for \(C\).

  5. Hence \(\bz + \by = \bx \in C\).

  6. Hence \(S + C \cap S^{\perp} \subseteq C\).
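As an illustration of the decomposition, consider the closed upper half plane in \(\RR^2\):

```latex
% Decomposition of C = { (x_1, x_2) | x_2 >= 0 } along its
% lineality space
\[
L_C = \RR \times \{ 0 \}.
\]
% Choose S = L_C, so that S^{\perp} = \{ 0 \} \times \RR. Then
\[
C \cap S^{\perp} = \{ (0, x_2) \ST x_2 \geq 0 \}
\]
% and indeed
\[
S + (C \cap S^{\perp})
= \{ (x_1, 0) + (0, x_2) \ST x_1 \in \RR, \; x_2 \geq 0 \} = C.
\]
```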

7.14.3. Nested Sequences of Closed and Convex Sets#

Recall from Theorem 3.77 that the intersection of a nested sequence of nonempty compact sets is nonempty and compact.

In this subsection, we consider a nested sequence \(\{ C_k \}\) of nonempty, closed and convex sets. We derive a number of conditions under which their intersection \(\bigcap_{k=1}^{\infty}C_k\) is nonempty.

A very good example of a nested sequence of closed and convex sets is a sequence of sublevel sets of a closed and convex function.

Theorem 7.190 (General condition for nonempty intersection)

Let \(\VV\) be an \(n\)-dim real vector space. Let \(\{ C_k \}\) be a sequence of nonempty, closed and convex subsets of \(\VV\) such that \(C_{k + 1} \subseteq C_k\) for all \(k \in \Nat\). Let \(R_k\) and \(L_k\) be the recession cone and lineality space of \(C_k\) respectively for every \(k \in \Nat\). Further, let

\[ R = \bigcap_{k=1}^{\infty} R_k \text{ and } L = \bigcap_{k=1}^{\infty} L_k. \]

Assume that \(R = L\). Then the intersection \(C = \bigcap_{k=1}^{\infty} C_k\) is nonempty and has the form

\[ C = \bigcap_{k=1}^{\infty} C_k = L + \tilde{C} \]

where \(\tilde{C}\) is some nonempty and compact set.

Proof. We note some properties of the sequences \(\{ R_k \}\) and \(\{ L_k \}\).

  1. Since \(C_k\) are nested, hence the recession cones \(R_k\) are also nested (Corollary 7.14).

  2. Since \(C_k\) are nested, hence the lineality spaces \(L_k\) are also nested (Corollary 7.15).

  3. Since \(C_k\) are nonempty closed and convex, hence \(L_k\) are linear subspaces due to Theorem 7.185.

We first show that \(L\) is a linear subspace and for all sufficiently large \(k\), \(L_k = L\).

  1. Since \(L_k\) are nested subspaces, hence \(\dim L_{k+1} \leq \dim L_k\) for every \(k\).

  2. Since \(\bzero \in L_k\) for every \(k\), hence \(\bzero \in L\) and \(L \neq \EmptySet\).

  3. We can also see that \(L\) must be a linear subspace of \(\VV\) since it is an (infinite) intersection of linear subspaces.

  4. Since \(\VV\) is finite dimensional, the nonincreasing sequence of dimensions \(\{ \dim L_k \}\) must eventually be constant; since the \(L_k\) are nested, it follows that for all \(k\) sufficiently large, \(L_k = L\). In other words, there exists \(k_0 \in \Nat\) such that for all \(k > k_0\), \(L_k = L\).

  5. Without loss of generality, assume that

    \[ L_k = L, \Forall k. \]

    We can achieve this by dropping the first \(k_0\) terms from the sequence \(\{ C_k \}\) for which \(L\) is a proper subset of \(L_k\).

We next show that for all sufficiently large \(k\), we have \(R_k \cap L^{\perp} = \{ \bzero \}\).

  1. Since \(\bzero \in R_k\) for every \(k\) and \(\bzero \in L^{\perp}\) (as \(L^{\perp}\) is a linear subspace), hence \(\bzero \in R_k \cap L^{\perp}\) for every \(k\).

  2. For contradiction, assume that for every \(k_0 \in \Nat\) there exists some \(k > k_0\) with \(R_k \cap L^{\perp} \neq \{ \bzero \}\).

  3. Then, since \(R_k\) are nested, hence for each \(k\) there exists \(\by_k \in R_k \cap L^{\perp}\) with \(\| \by_k \| = 1\).

    1. Consider some \(k \in \Nat\).

    2. There exists some \(l > k\) such that \(R_l \cap L^{\perp} \neq \{ \bzero \}\).

    3. Then \(R_l \cap L^{\perp}\) contains a nonzero vector \(\bu\).

    4. Since \(R_l\) is a cone, hence \(\bu\) can be scaled to some \(\bv\) so that \(\| \bv \| = 1\).

    5. Since \(L^{\perp}\) is a linear subspace, hence \(\bv \in L^{\perp}\) also.

    6. Hence \(\bv \in R_l \cap L^{\perp}\).

    7. But since \(R_l \subseteq R_k\), hence \(\bv \in R_k \cap L^{\perp}\).

  4. Now consider the set \(D = \{\by \ST \| \by \| = 1 \}\).

  5. Let \(E_k = D \cap R_k \cap L^{\perp}\) for every \(k\).

  6. Then \(E_k\) is nonempty for every \(k\).

  7. We note that \(E_k\) is also compact.

    1. The set \(D\) is closed and bounded.

    2. \(R_k\) is closed for every \(k\) (Property 7.39).

    3. \(L^{\perp}\) is a subspace of a finite dimensional space, hence it is closed.

    4. \(E_k\) is an intersection of closed sets, hence it is closed.

    5. Since \(D\) is bounded, hence \(E_k \subseteq D\) is also bounded.

    6. Since \(E_k\) is closed and bounded, hence it is compact as \(\VV\) is finite dimensional.

  8. Then \(\{ E_k \}\) is a sequence of nested compact sets since \(E_{k + 1} \subseteq E_k\) for every \(k\).

  9. Hence \(E = \bigcap_{k=1}^{\infty} E_k\) is nonempty (Theorem 3.77).

  10. But expanding \(E\) we see that

    \[\begin{split} E &= \bigcap_{k=1}^{\infty} E_k \\ &= \bigcap_{k=1}^{\infty} (D \cap R_k \cap L^{\perp}) \\ &= D \cap L^{\perp} \cap \left (\bigcap_{k=1}^{\infty} R_k \right ) \\ &= D \cap L^{\perp} \cap R \\ &= D \cap L^{\perp} \cap L \\ &= D \cap \{ \bzero \} \\ &= \EmptySet. \end{split}\]

    We used the facts that

    1. \(R = \bigcap_{k=1}^{\infty} R_k\).

    2. \(L = R\) by hypothesis.

    3. \(L^{\perp} \cap L = \{ \bzero \}\) since they are orthogonal complements.

    4. All members of \(D\) are nonzero since they are unit norm.

  11. We have arrived at a contradiction: \(E\) cannot be both nonempty and empty.

  12. Hence, we conclude that \(R_k \cap L^{\perp} = \{ \bzero \}\) holds true for all sufficiently large \(k\).

  13. Again without loss of generality, we shall assume that

    \[ R_k \cap L^{\perp} = \{ \bzero \}, \quad \Forall k \in \Nat. \]

We have established so far that \(L\) is a linear subspace, \(L_k = L\) and \(R_k \cap L^{\perp} = \{ \bzero \}\) for all \(k\) without loss of generality. We shall now show that \(C =\bigcap_{k=1}^{\infty} C_k\) is nonempty.

  1. Note that \(C_k \cap L^{\perp}\) is not empty.

    1. We have \(\VV = L \oplus L^{\perp}\).

    2. Let \(\bx \in C_k\).

    3. Then \(\bx = \by + \bz\) where \(\by \in L\) and \(\bz \in L^{\perp}\).

    4. But then \(\by \in L_k\) since \(L \subseteq L_k\).

    5. Then \(\by\) and \(-\by\) are recession directions of \(C_k\).

    6. Hence \(\bz = \bx - \by \in C_k\).

    7. Hence \(\bz \in C_k \cap L^{\perp}\).

  2. By Theorem 7.183, the recession cone of \(C_k \cap L^{\perp}\) is given by

    \[ R_{C_k \cap L^{\perp}} = R_k \cap R_{L^{\perp}}. \]
  3. Since \(L^{\perp}\) is a linear subspace of \(\VV\), hence due to Theorem 7.177, \(R_{L^{\perp}} = L^{\perp}\).

  4. Hence

    \[ R_{C_k \cap L^{\perp}} = R_k \cap L^{\perp} = \{ \bzero \}. \]
  5. Then due to Theorem 7.181, \(C_k \cap L^{\perp}\) is bounded for every \(k\).

  6. Since \(C_k\) is closed and \(L^{\perp}\) is closed, hence \(C_k \cap L^{\perp}\) is also closed for every \(k\).

  7. Hence \(C_k \cap L^{\perp}\) is a compact set for every \(k\).

  8. Since \(\{ C_k \}\) is nested, \(\{ C_k \cap L^{\perp} \}\) is also a nested sequence of compact sets.

  9. Then their intersection is nonempty and compact. In other words, the set

    \[ \tilde{C} = \bigcap_{k=1}^{\infty} \left (C_k \cap L^{\perp} \right ) = \left ( \bigcap_{k=1}^{\infty} C_k \right ) \cap L^{\perp} = C \cap L^{\perp} \]

    is nonempty and compact.

  10. Hence \(C\) is also nonempty.

  11. Hence, due to Theorem 7.187, the lineality space of \(C = \bigcap_{k=1}^{\infty} C_k\) is given by \(L = \bigcap_{k=1}^{\infty} L_k\).

  12. By Theorem 7.189,

    \[ C = L + (C \cap L^{\perp}) = L + \tilde{C}. \]

Corollary 7.16 (Compactness of the nested intersection)

Under the assumptions of Theorem 7.190, if \(R = \bigcap_{k=1}^{\infty} R_k = \{ \bzero \}\), then the intersection \(C = \bigcap_{k=1}^{\infty} C_k\) is nonempty and compact.

Proof. Since \(L = R = \{ \bzero \}\), hence \(C = L + \tilde{C} = \tilde{C}\). We have already established that \(\tilde{C}\) is both nonempty and compact.

7.14.3.1. Nested Sequence with Linear Inequality Constraints#

Consider the problem of minimizing a closed and convex function under linear inequality constraints.

  1. The sublevel sets form a nested sequence of closed and convex sets.

  2. The solution set of a set of linear inequality constraints is another convex set (an intersection of closed half spaces).

Theorem 7.191 (Nested sequence with inequality constraints)

Let \(\VV\) be a Euclidean space. Let \(\{ C_k \}\) be a sequence of nonempty, closed and convex subsets of \(\VV\). Let \(R_k\) and \(L_k\) be the recession cone and lineality space of \(C_k\) respectively for every \(k \in \Nat\). Further, let

\[ R = \bigcap_{k=1}^{\infty} R_k \text{ and } L = \bigcap_{k=1}^{\infty} L_k \text{ and } C = \bigcap_{k=1}^{\infty} C_k \]

Let \(X\) be a subset of \(\VV\) specified by linear inequality constraints; i.e.,

\[ X = \{ \bx \in \VV \ST \langle \bx, \ba_i \rangle \leq b_i, i=1,\dots,r \} \]

where \(\ba_i \in \VV\) and \(b_i \in \RR\). Further assume that

  1. \(C_{k+1} \subseteq C_k\) for all \(k\).

  2. The intersection \(X \cap C_k\) is nonempty for all \(k\).

  3. We have \(R_X \cap R \subseteq L\) where \(R_X\) is the recession cone of \(X\).

Then the intersection \(X \cap C\) is nonempty.

7.14.3.2. Quadratic Function with Quadratic Constraints#

Theorem 7.192 (Quadratic function with quadratic constraints)

Let \(\{ C_k \}\) be a sequence of subsets of \(\RR^n\) given by

\[ C_k = \{ \bx \ST \bx^T \bQ \bx + \ba^T \bx + b \leq w_k \} \]

where \(\bQ \in \SS^n_+\) is a symmetric positive semidefinite matrix, \(\ba \in \RR^n\) is a vector, \(b \in \RR\) is a scalar, and \(\{ w_k \}\) is a nonincreasing sequence of scalars that converges to \(0\).

Also, let \(X\) be a subset of \(\RR^n\) of the form

\[ X = \{ \bx \in \RR^n \ST \bx^T \bQ_j \bx + \ba^T_j \bx + b_j \leq 0, j=1,\dots, r \} \]

where \(\bQ_j \in \SS^n_+\), \(\ba_j \in \RR^n\) and \(b_j \in \RR\).

Assume further that \(X \cap C_k \neq \EmptySet\) for all \(k\). Then, the intersection \(X \cap (\bigcap_{k=1}^{\infty} C_k)\) is nonempty.

Note that \(\bx^T \bQ \bx + \ba^T \bx + b\) is a quadratic function and \(C_k\) are its sublevel sets.
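As a sanity check, the theorem can be exercised on a tiny concrete instance. All the data below (a diagonal \(\bQ\) in \(\RR^2\), \(w_k = 1/k\), and a single degenerate quadratic constraint defining \(X\)) is illustrative, not taken from the text:

```python
import numpy as np

# Illustrative instance (assumed data, not from the text):
# C_k = {x in R^2 : x1^2 <= w_k} with Q = diag(1, 0), a = 0, b = 0, w_k = 1/k,
# X   = {x in R^2 : -x2 <= 0}, i.e. Q_1 = 0, a_1 = (0, -1), b_1 = 0.
Q = np.diag([1.0, 0.0])
a = np.zeros(2)
b = 0.0

def in_C(x, w):
    """Membership in the sublevel set {x : x^T Q x + a^T x + b <= w}."""
    return x @ Q @ x + a @ x + b <= w

def in_X(x):
    """Membership in X, written as a (degenerate) quadratic constraint."""
    return -x[1] <= 0.0

# Each X ∩ C_k is nonempty, and the theorem promises the full intersection
# X ∩ (∩_k C_k) is nonempty too.  Indeed x* = (0, 1) lies in all the sets.
x_star = np.array([0.0, 1.0])
assert in_X(x_star)
assert all(in_C(x_star, 1.0 / k) for k in range(1, 1001))
```

Here \(\bigcap_k C_k = \{ \bx \ST x_1 = 0 \}\), so the intersection with \(X\) is the nonnegative \(x_2\)-axis.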

7.14.4. Closedness Under Linear Transformations#

We address the question of guarantees under which the image of a closed and convex set under a linear transformation is also closed from the perspective of recession cones.

If a set \(C\) is closed and bounded, then it is compact in a finite dimensional space. Since a linear operator on a finite dimensional space is continuous, the image of \(C\) is also compact, hence closed. The results in this section are therefore relevant for closed and convex sets which are unbounded.

Theorem 7.193 (Closedness of image under linear transformation)

Let \(\VV\) and \(\WW\) be real, finite dimensional, normed linear spaces. Let \(\bAAA : \VV \to \WW\) be a linear operator. Let \(C \subseteq \VV\) be a nonempty, closed and convex subset of \(\VV\). Let the nullspace of \(\bAAA\) be denoted by \(\nullspace \bAAA\).

If \(R_C \cap \nullspace \bAAA \subseteq L_C\), then the set \(\bAAA(C)\) is closed.

Proof. We shall show that every convergent sequence of points of \(D = \bAAA(C)\) converges in \(D\).

  1. Let \(\{ \by_k \}\) be a sequence of points in \(D\) converging to some point \(\by \in \WW\).

  2. We introduce the sets

    \[ W_k = \{ \bz \in \WW \ST \| \bz - \by \| \leq \| \by_k - \by \| \}. \]
  3. Note that \(W_k\) are closed balls centered at \(\by\) and with radii \(\| \by_k - \by \|\).

  4. Hence \(W_k\) are nonempty, closed and convex.

  5. By taking an appropriate subsequence of \(\{ W_k \}\) if necessary, we can ensure that it is a nested sequence of closed balls.

  6. Further define

    \[ C_k = \{ \bx \in C \ST \bAAA(\bx) \in W_k \}. \]
  7. Since \(\by_k \in W_k\), hence \(C_k\) is nonempty.

  8. Then due to Theorem 7.184, \(C_k\) is closed and convex.

  9. Since \(\{ W_k \}\) is nested, hence \(\{ C_k \}\) is also a nested sequence of nonempty, closed and convex sets.

  10. We note that every \(C_k\) has the same recession cone given by \(R = R_C \cap (\nullspace \bAAA)\) due to Theorem 7.184.

  11. Similarly, every \(C_k\) has the same lineality space given by \(L = L_C \cap (\nullspace \bAAA)\) due to Theorem 7.188.

  12. By hypothesis, \(R_C \cap \nullspace \bAAA \subseteq L_C\).

  13. Hence \(R_C \cap \nullspace \bAAA \subseteq L_C \cap \nullspace \bAAA\).

  14. In other words, \(R \subseteq L\).

  15. By definition \(L \subseteq R\).

  16. Hence \(R = L\).

  17. Consequently, due to Theorem 7.190, the set \(\bigcap_{k=1}^{\infty} C_k\) is nonempty.

  18. Now pick some \(\bx \in \bigcap_{k=1}^{\infty} C_k\).

  19. Then \(\bx \in C\) since \(C_k \subseteq C\) for every \(k\).

  20. Since \(\bx \in C_k\) for every \(k\), hence \(\bAAA(\bx) \in W_k\) for every \(k\).

  21. Hence, \(\bAAA(\bx) \in \bigcap_{k=1}^{\infty} W_k\).

  22. But \(\bigcap_{k=1}^{\infty} W_k = \{ \by \}\).

  23. Hence \(\bAAA(\bx) = \by\).

  24. Since \(\bx \in C\), hence \(\by = \bAAA(\bx) \in D = \bAAA(C)\).

  25. Hence the sequence \(\{ \by_k \}\) converges in \(D\).

  26. Since the sequence was chosen arbitrarily, hence every convergent sequence of \(D\) converges in \(D\).

  27. Hence \(D\) is closed.

In the special case where the closed and convex set \(C\) is also bounded, we have \(R_C = L_C = \{ \bzero \}\), so that \(R_C \cap \nullspace \bAAA = \{ \bzero \} = L_C\). Hence \(\bAAA(C)\) is closed in this case as well.
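The condition \(R_C \cap \nullspace \bAAA \subseteq L_C\) cannot be dropped. A standard counterexample, sketched numerically below with illustrative data (none of it from the text), takes \(C\) to be the closed convex region above a hyperbola and \(\bAAA\) the projection onto the first coordinate; then \(R_C \cap \nullspace \bAAA \not\subseteq L_C\) and the image fails to be closed:

```python
import numpy as np

# Illustrative counterexample (assumed data, not from the text):
# C = {(x1, x2) : x1 > 0, x1 * x2 >= 1} is closed, convex and unbounded,
# with R_C the nonnegative quadrant and L_C = {0}.  A projects onto the
# first coordinate, so null(A) = {(0, t) : t in R} and
# R_C ∩ null(A) = {(0, t) : t >= 0} is NOT contained in L_C.
def in_C(x):
    return x[0] > 0 and x[0] * x[1] >= 1.0

A = np.array([[1.0, 0.0]])  # projection (x1, x2) -> x1

# The points (2^-k, 2^k) lie in C and map to 2^-k, so A(C) contains
# values arbitrarily close to 0 ...
for k in range(1, 30):
    x = np.array([2.0 ** -k, 2.0 ** k])
    assert in_C(x)
    assert (A @ x)[0] == 2.0 ** -k

# ... but no point of C has x1 = 0, so 0 is not in A(C) = (0, inf),
# which is therefore not closed.
```

Powers of two are used so that the products \(2^{-k} \cdot 2^k\) are exact in floating point.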

7.14.4.1. Linear Inequality Constraints#

Theorem 7.194 (Closedness of image of a closed and convex set with linear inequality constraints under linear transformation)

Let \(\VV\) and \(\WW\) be real, finite dimensional, normed linear spaces. Let \(\bAAA : \VV \to \WW\) be a linear operator. Let \(C \subseteq \VV\) be a nonempty, closed and convex subset of \(\VV\). Let the nullspace of \(\bAAA\) be denoted by \(\nullspace \bAAA\).

Let \(X\) be a subset of \(\VV\) specified by linear inequality constraints; i.e.,

\[ X = \{ \bx \in \VV \ST \langle \bx, \ba_i \rangle \leq b_i, i=1,\dots,r \} \]

where \(\ba_i \in \VV\) and \(b_i \in \RR\).

If \(R_X \cap R_C \cap \nullspace \bAAA \subseteq L_C\), then the set \(\bAAA(X \cap C)\) is closed.

Proof. The argument is similar to Theorem 7.193 and uses Theorem 7.191 to establish the nonemptiness of the intersection of the nested sequence of closed and convex sets.

Let \(D = \bAAA(X \cap C)\). We shall show that every convergent sequence of points of \(D\) converges in \(D\).

  1. Let \(\{ \by_k \}\) be a sequence of points in \(D\) converging to some point \(\by \in \WW\).

  2. Let \(W_k\) and \(C_k\) be defined as in Theorem 7.193.

  3. By choosing a suitable subsequence, we are guaranteed that \(C_k\) are nested.

  4. We have \(\by_k \in W_k\) and \(\by_k \in D = \bAAA(X \cap C)\).

  5. Hence there exists \(\bu \in X \cap C\) such that \(\bAAA(\bu) = \by_k\).

  6. Since \(\bAAA(\bu) = \by_k \in W_k\) and \(\bu \in C\), we have \(\bu \in C_k\); in particular \(C_k \subseteq C\) is nonempty.

  7. Since \(\bu \in X\) also, \(\bu \in X \cap C_k\).

  8. Hence \((X \cap C) \cap C_k = X \cap C_k\) is nonempty for every \(k\).

  9. By hypothesis, \(R_X \cap R_C \cap \nullspace \bAAA \subseteq L_C\).

  10. Hence \(R_X \cap R_C \cap \nullspace \bAAA \subseteq L_C \cap \nullspace \bAAA\).

  11. Following the definitions of \(R\) and \(L\) from the proof of Theorem 7.193, we have \(R_X \cap R \subseteq L\).

  12. Thus, all assumptions of Theorem 7.191 are satisfied.

  13. Hence the set \(X \cap (\bigcap_{k=1}^{\infty} C_k)\) is nonempty.

  14. Pick any point \(\bx \in X \cap (\bigcap_{k=1}^{\infty} C_k)\).

  15. Then \(\bx \in X\) and \(\bx \in C\). As in the proof of Theorem 7.193, \(\bAAA(\bx) \in \bigcap_{k=1}^{\infty} W_k = \{ \by \}\), hence \(\bAAA(\bx) = \by\).

  16. Hence \(\by \in \bAAA(X \cap C)\).

  17. Hence the sequence \(\{ \by_k \}\) converges in \(D\).

  18. Since the sequence was chosen arbitrarily, hence every convergent sequence of \(D\) converges in \(D\).

  19. Hence \(D\) is closed.

7.14.4.2. Quadratic Inequality Constraints#

Theorem 7.195 (Closedness of image of a set specified by quadratic inequality constraints under linear transformation)

Let \(\bA \in \RR^{m \times n}\). Let the nullspace of \(\bA\) be denoted by \(\nullspace \bA\). Let \(C\) be a nonempty, closed and convex subset of \(\RR^n\) specified by convex quadratic inequalities

\[ C = \{ \bx \in \RR^n \ST \bx^T \bQ_j \bx + \ba^T_j \bx + b_j \leq 0, j=1,\dots, r \} \]

where \(\bQ_j \in \SS^n_+\) are symmetric positive semidefinite matrices, \(\ba_j \in \RR^n\) are vectors and \(b_j \in \RR\) are scalars.

Then the set \(\bA C\) is closed.

7.14.4.3. Vector Sum of Closed Convex Sets#

Theorem 7.196 (Closedness of vector sum of closed and convex sets)

Let \(\VV\) be an \(n\)-dimensional real vector space. Let \(C_1, \dots, C_m\) be nonempty, closed and convex subsets of \(\VV\) such that the equality \(\by_1 + \dots + \by_m = \bzero\) for vectors \(\by_i \in R_{C_i}\) implies that \(\by_i = \bzero\) for every \(i=1,\dots,m\). Then the vector sum \(C_1 + \dots + C_m\) is a closed set.

Proof. We proceed as follows.

  1. Let \(C\) be the Cartesian product \(C_1 \times \dots \times C_m\).

  2. Let \(\VV^m\) denote the direct sum vector space \(\VV \oplus \dots \oplus \VV\).

  3. Then \(C \subseteq \VV^m\) is a nonempty, closed and convex subset of \(\VV^m\).

  4. Let \(\bAAA : \VV^m \to \VV\) be a linear transformation that maps a vector \((\bx_1, \dots, \bx_m) \in \VV^m\) into \(\bx_1 + \dots + \bx_m\).

    \[ \bAAA(\bx_1, \dots, \bx_m) = \bx_1 + \dots + \bx_m. \]
  5. We have \(R_C = R_{C_1} \times \dots \times R_{C_m}\).

  6. The null space of \(\bAAA\) is given by

    \[ \nullspace \bAAA = \{ (\bx_1, \dots, \bx_m) \in \VV^m \ST \bx_1 + \dots + \bx_m = \bzero, \bx_i \in \VV \}. \]
  7. Under the given hypothesis, we have \(R_C \cap \nullspace \bAAA = \{ \bzero \} \subseteq L_C\).

  8. Hence, due to Theorem 7.193, \(\bAAA(C)\) is closed.

  9. But \(\bAAA(C) = C_1 + \dots + C_m\).

  10. Thus \(C_1 + \dots + C_m\) is closed as desired.

Corollary 7.17 (Closedness of \(C_1 - C_2\) via recession cones)

Let \(\VV\) be an \(n\)-dimensional real vector space. Let \(C_1\) and \(C_2\) be nonempty, closed and convex subsets of \(\VV\). Then \(C_1 - C_2\) is closed if there is no common nonzero direction of recession of \(C_1\) and \(C_2\); i.e.,

\[ R_{C_1} \cap R_{C_2} = \{ \bzero \}. \]

Proof. We proceed as follows.

  1. By Property 7.38,

    \[ R_{C_2} = - R_{-C_2}. \]

  2. We are given that \(R_{C_1} \cap R_{C_2} = \{ \bzero \}\).

  3. Let \(\by_1 \in R_{C_1}\) and \(\by_2 \in R_{-C_2}\) such that \(\by_1 + \by_2 = \bzero\).

  4. But then \(\by_3 = - \by_2 \in R_{C_2}\).

  5. Hence \(\by_1 - \by_3 = \bzero\).

  6. Hence \(\by_1 = \by_3\).

  7. But then \(\by_1 = \by_3 = \bzero\) since \(R_{C_1} \cap R_{C_2} = \{ \bzero \}\).

  8. Hence \(\by_2 = -\by_3 = \bzero\).

  9. Thus \(\by_1 + \by_2 = \bzero\) for any \(\by_1 \in R_{C_1}\) and \(\by_2 \in R_{- C_2}\) implies that \(\by_1 = \by_2 = \bzero\).

  10. Hence by Theorem 7.196, \(C_1 + (- C_2) = C_1 - C_2\) is a closed set.
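The hypothesis \(R_{C_1} \cap R_{C_2} = \{ \bzero \}\) cannot be dropped. A minimal numeric sketch, using illustrative sets that are not from the text: with \(C_1\) the region above a hyperbola and \(C_2\) the \(x_1\)-axis, the two sets share the recession direction \((1, 0)\), and \(C_1 - C_2\) fails to be closed:

```python
# Illustrative counterexample (assumed sets, not from the text):
# C1 = {(x1, x2) : x1 > 0, x1 * x2 >= 1} and C2 = the x1-axis.
# Both are closed and convex, but R_{C1} ∩ R_{C2} = {(s, 0) : s >= 0} != {0}.
def in_C1(x1, x2):
    return x1 > 0 and x1 * x2 >= 1.0

def in_C2(x1, x2):
    return x2 == 0.0

# (2^k, 2^-k) lies in C1 and (2^k, 0) in C2, so their difference
# (0, 2^-k) lies in C1 - C2 for every k ...
for k in range(1, 30):
    assert in_C1(2.0 ** k, 2.0 ** -k)
    assert in_C2(2.0 ** k, 0.0)

# ... and these differences converge to (0, 0).  But (u1, u2) - (v1, 0) = (0, 0)
# would force u2 = 0 with u1 * u2 >= 1: impossible.  Hence
# C1 - C2 = R x (0, inf) is not closed.
```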

Corollary 7.18 (Closedness of \(C_1 - C_2\) via recession cones and lineality spaces)

Let \(\VV\) be an \(n\)-dimensional real vector space. Let \(C_1\) and \(C_2\) be nonempty, closed and convex subsets of \(\VV\). Then \(C_1 - C_2\) is closed if

\[ R_{C_1} \cap R_{C_2} = L_{C_1} \cap L_{C_2}. \]

7.14.5. Convex Functions#

We now develop the concept of recession cones and lineality spaces for proper, closed and convex functions. We develop these concepts along the following lines.

  1. All sublevel sets are closed and convex.

  2. All nonempty sublevel sets share the same recession cone.

  3. The shared recession cone of all nonempty sublevel sets is defined as the recession cone of the function itself.

  4. Similarly, all nonempty sublevel sets share the same lineality space and the same is defined as the lineality space of the function itself.

  5. Along any direction in the lineality space of a proper, closed and convex function, the function doesn’t change its value.

  6. Hence such directions are also known as directions of constancy of the function.

  7. Accordingly, the lineality space of a proper, closed and convex function is also known as its constancy space.

Throughout this subsection, we shall assume that \(\VV\) is a Euclidean space unless otherwise specified.

7.14.5.1. Recession Cones of Sublevel Sets#

Theorem 7.197 (Recession cones of sublevel sets)

Let \(f: \VV \to \RERL\) be a proper, closed and convex function. Consider the sublevel sets \(S_t = \{ \bx \in \VV \ST f(\bx) \leq t \}\) where \(t \in \RR\). Then

  1. All the nonempty sublevel sets \(S_t\) have the same recession cone given by

    \[ R_{S_t} = \{ \by \in \VV \ST (\by, 0) \in R_{\epi f} \} \]

    where \(R_{\epi f}\) is the recession cone of the epigraph of \(f\).

  2. If one nonempty sublevel set is compact, then all the sublevel sets \(S_t\) are compact.

We mention that \((\bzero, 0) \in R_{\epi f}\). Consequently, \(\bzero \in R_{S_t}\).

Proof. (1) Recession cone for sublevel sets

  1. Pick some \(t \in \RR\) such that \(S_t\) is nonempty. Since \(f\) is proper, such a \(t\) exists.

  2. Let \(\by\) be a direction of recession of \(S_t\).

  3. Let \(\bx \in S_t\).

  4. Then \(f(\bx) \leq t\). Hence \((\bx, t) \in \epi f\).

  5. Since \(\by\) is a recession direction, we have \(f(\bx + \alpha \by) \leq t\) for every \(\alpha \geq 0\).

  6. Hence \((\bx + \alpha \by, t) \in \epi f\) for all \(\alpha \geq 0\).

  7. Rewriting \((\bx, t) + \alpha (\by, 0) \in \epi f\) for every \(\alpha \geq 0\).

  8. Since \(\epi f\) is a nonempty, closed and convex set, and there exists one \((\bx, t) \in \epi f\) such that for every \(\alpha \geq 0\), the point \((\bx, t) + \alpha (\by, 0) \in \epi f\), hence due to Theorem 7.179, \((\by, 0) \in R_{\epi f}\).

  9. Since \(\by\) was an arbitrary element of \(R_{S_t}\), we have \((\by, 0) \in R_{\epi f}\) for every \(\by \in R_{S_t}\).

  10. Hence \(R_{S_t} \subseteq \{ \by \in \VV \ST (\by, 0) \in R_{\epi f} \}\).

For the converse, we proceed as follows.

  1. Let \((\by, 0) \in R_{\epi f}\).

  2. Pick some vector \((\bx, t) \in \epi f\).

  3. Then for every \(\alpha \geq 0\), we have \((\bx + \alpha \by, t) \in \epi f\).

  4. Hence \(f(\bx + \alpha \by) \leq t\) for every \(\alpha \geq 0\).

  5. Hence \(\bx + \alpha \by \in S_t\) for every \(\alpha \geq 0\).

  6. Since \(f\) is closed, hence \(S_t\) is closed.

  7. Since \(S_t\) is a nonempty, closed and convex set, \(\bx \in S_t\) and for every \(\alpha \geq 0\), we have \(\bx + \alpha \by \in S_t\), hence by Theorem 7.179, \(\by \in R_{S_t}\).

  8. Hence \(R_{S_t} \supseteq \{ \by \in \VV \ST (\by, 0) \in R_{\epi f} \}\).

(2) Compactness of sublevel sets.

  1. We are given that for some \(t\), the nonempty, closed and convex set \(S_t\) is compact.

  2. Then \(S_t\) is bounded.

  3. Then due to Theorem 7.181, \(R_{S_t} = \{ \bzero \}\).

  4. By part (1), all nonempty sublevel sets have the same recession cone.

  5. Hence \(\{ \bzero \}\) is the recession cone of every nonempty sublevel set.

  6. Hence by Theorem 7.181 every nonempty sublevel set is bounded.

  7. Since \(f\) is closed, hence every nonempty sublevel set is closed.

  8. Since every nonempty sublevel set is closed and bounded and \(\VV\) is Euclidean (finite dimensional), hence every nonempty sublevel set is compact.

7.14.5.2. Recession Cone of a Convex Function#

Definition 7.68 (Recession cone of a convex function)

Let \(f: \VV \to \RERL\) be a proper, closed and convex function. The (common) recession cone of its nonempty sublevel sets is called the recession cone of \(f\) and is denoted by \(R_f\). Each \(\by \in R_f\) is called a direction of recession of \(f\).

  1. The requirement that \(f\) be proper guarantees that \(f(\bx) < \infty\) for at least one \(\bx \in \VV\) and hence there exist some nonempty sublevel sets.

  2. Since \(f\) is also closed and convex, hence Theorem 7.197 guarantees that all the nonempty sublevel sets have an identical recession cone.

7.14.5.3. Lineality Space of a Convex Function#

We can define the lineality space of a function in terms of its recession cone.

Definition 7.69 (Lineality space of a convex function)

Let \(f: \VV \to \RERL\) be a proper, closed and convex function. The lineality space \(L_f\) of the function \(f\) is the set of all \(\by \in \VV\) such that both \(\by\) and \(-\by\) are directions of recession of \(f\).

\[ L_f = R_f \cap (- R_f). \]

Observation 7.5 (Lineality spaces of sublevel sets)

Since the recession cone of every nonempty sublevel set of a proper, closed and convex function \(f\) is the same as \(R_f\), \(\by \in L_f\) if and only if both \(\by\) and \(-\by\) are directions of recession of the nonempty sublevel sets.

Theorem 7.198 (Lineality space and constancy of function)

Let \(f: \VV \to \RERL\) be a proper, closed and convex function. \(\by \in L_f\) if and only if

\[ f(\bx + \alpha \by) = f(\bx), \Forall \bx \in \dom f, \Forall \alpha \in \RR. \]

Proof. We proceed as follows.

  1. Pick some \(\bx \in \dom f\).

  2. Consider the sublevel set \(S = \{\bz \in \VV \ST f(\bz) \leq f(\bx) \}\).

  3. We are given that \(\by \in L_f\).

  4. Then \(\by\) and \(-\by\) are both directions of recession of \(S\).

  5. Pick some \(\alpha > 0\).

  6. Then \(\bu = \bx + \alpha \by \in S\) as well as \(\bv = \bx - \alpha \by \in S\).

  7. Thus \(f(\bu) \leq f(\bx)\) and \(f(\bv) \leq f(\bx)\).

  8. Also \(\bx = \frac{1}{2} \bu + \frac{1}{2} \bv\).

  9. By convexity of \(f\),

    \[ f(\bx) \leq \frac{1}{2} f(\bu) + \frac{1}{2} f(\bv) \leq \frac{1}{2} f(\bx) + \frac{1}{2} f(\bx) = f(\bx). \]

  10. This means that the inequalities must be equalities. Hence \(2 f(\bx) = f(\bu) + f(\bv)\).

  11. If \(f(\bu) < f(\bx)\) then \(f(\bv) > f(\bx)\), a contradiction since \(\bv \in S\).

  12. If \(f(\bv) < f(\bx)\) then \(f(\bu) > f(\bx)\), a contradiction since \(\bu \in S\).

  13. Hence we require that \(f(\bx) = f(\bu) = f(\bv)\).

  14. Since \(\bx\) and \(\alpha\) were arbitrary, hence

    \[ f(\bx + \alpha \by) = f(\bx), \Forall \bx \in \dom f, \Forall \alpha \in \RR. \]
For the converse, assume that \(f(\bx + \alpha \by) = f(\bx)\) for every \(\bx \in \dom f\) and every \(\alpha \in \RR\).

  1. Pick any nonempty sublevel set \(S_t\) and any \(\bx \in S_t\).

  2. Then \(f(\bx + \alpha \by) = f(\bx) \leq t\), hence \(\bx + \alpha \by \in S_t\) for every \(\alpha \in \RR\).

  3. Since \(f\) is closed, \(S_t\) is nonempty, closed and convex. Hence, by Theorem 7.179, both \(\by\) and \(-\by\) are directions of recession of \(S_t\).

  4. Hence \(\by \in L_f\).

Remark 7.12 (Constancy space of a convex function)

As a result of Theorem 7.198, any \(\by \in L_f\) is a direction along which the function \(f\) is constant. Hence \(L_f\) is also known as the constancy space of \(f\).

Example 7.64 (Recession cone and constancy space of linear functions)

Let \(f: \RR^n \to \RR\) be given by

\[ f(\bx) = \bc^T \bx \]

where \(\bc \in \RR^n\) is a given vector.

The recession cone is given by

\[ R_f = \{\by \ST \bc^T \by \leq 0 \}. \]

This is a closed half space (all of \(\RR^n\) when \(\bc = \bzero\)).

  1. Pick some \(t \in \RR\).

  2. Consider the set \(S_t = \sublevel(f, t)\).

  3. Pick some \(\bx \in S_t\).

  4. Then \(\bc^T \bx \leq t\).

  5. For any \(\by \in R_f\) and \(\alpha \geq 0\) we can see that

    \[ f(\bx + \alpha \by) = \bc^T (\bx + \alpha \by) = \bc^T \bx + \alpha (\bc^T \by) \leq t + \alpha 0 = t. \]
  6. Hence \(\bx + \alpha \by \in S_t\).

The constancy space is given by

\[ L_f = \{\by \ST \bc^T \by = 0 \}. \]

It is a hyperplane passing through the origin (all of \(\RR^n\) when \(\bc = \bzero\)).

  1. For any \(\by \in L_f\) and \(\alpha \in \RR\) we can see that

    \[ f(\bx + \alpha \by) = \bc^T (\bx + \alpha \by) = \bc^T \bx + \alpha (\bc^T \by) = \bc^T \bx = f(\bx). \]

  2. Hence \(f\) is constant along \(\by\), and in particular \(\bx + \alpha \by \in S_t\) for every \(\alpha \in \RR\).
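A quick numeric check of this example, with an illustrative choice \(\bc = (1, -2)\) that is not from the text:

```python
import numpy as np

# Numeric check of the linear example with illustrative data c = (1, -2).
c = np.array([1.0, -2.0])
f = lambda x: c @ x

x = np.array([3.0, 5.0])
y_desc = np.array([-1.0, 0.0])   # c^T y = -1 <= 0: a direction of recession
y_const = np.array([2.0, 1.0])   # c^T y = 0: a direction of constancy

# Along a recession direction the value never increases for alpha >= 0 ...
assert all(f(x + t * y_desc) <= f(x) for t in np.linspace(0.0, 10.0, 50))
# ... and along a constancy direction the value is unchanged for ANY alpha.
assert all(abs(f(x + t * y_const) - f(x)) < 1e-9
           for t in np.linspace(-10.0, 10.0, 50))
```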

Example 7.65 (Recession cone and constancy space of quadratic functions)

Let \(f: \RR^n \to \RR\) be given by

\[ f(\bx) = \bx^T \bQ \bx + \bc^T \bx + b \]

where \(\bQ\) is a symmetric positive semidefinite matrix, \(\bc \in \RR^n\) is a vector and \(b \in \RR\) is a scalar.

The recession cone is given by

\[ R_f = \{\by \ST \bQ \by = \bzero, \bc^T \by \leq 0 \}. \]

The constancy space is given by

\[ L_f = \{\by \ST \bQ \by = \bzero, \bc^T \by = 0 \}. \]
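These formulas can be verified numerically on a small illustrative instance (the matrices and vectors below are assumptions for the sketch, not from the text):

```python
import numpy as np

# Numeric check of the formulas above on illustrative data:
# Q = diag(1, 0), c = (1, 1), b = 0.  Then null(Q) = {(0, t)} and
# R_f = {(0, t) : t <= 0}, while L_f = {0} since c^T (0, t) = t.
Q = np.diag([1.0, 0.0])
c = np.array([1.0, 1.0])
f = lambda x: x @ Q @ x + c @ x

y = np.array([0.0, -1.0])
assert np.allclose(Q @ y, 0.0) and c @ y <= 0.0  # y satisfies Qy = 0, c^T y <= 0

# When Q y = 0, f(x + t y) = f(x) + t * (c^T y), so f is nonincreasing
# along y for t >= 0: y is indeed a direction of recession of f.
x = np.array([2.0, 3.0])
vals = [f(x + t * y) for t in np.linspace(0.0, 10.0, 50)]
assert all(v <= f(x) + 1e-9 for v in vals)
assert all(vals[i + 1] <= vals[i] + 1e-9 for i in range(len(vals) - 1))
```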