# 9.4. Cones

Main references for this section are [6, 17, 67].

Throughout this section, \(\VV\) is a real vector space. Some material is specific to \(\RR^n\); the rest applies to any real vector space. Wherever necessary, \(\VV\) is endowed with a norm \(\| \cdot \| : \VV \to \RR\) or a real inner product \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \RR\). It is also equipped with the metric \(d(\bx, \by) = \| \bx - \by \|\). When \(\VV\) carries both a norm and an inner product, the norm is not necessarily the one induced by the inner product.

We start with the basic definition of a cone.

## 9.4.1. Cones

(Cone)

A set \(C\) is called a *cone* or *nonnegative homogeneous*,
if for every \(\bx \in C\)
and \(t \geq 0\), we have \(t \bx \in C\).

In other words, a set is a cone if it is closed under nonnegative scalar multiplication.

By definition we have \(\bzero \in C\).

Some authors ([67]) prefer to restrict the definition to \(t > 0\); thus, the origin is not included in the cone by default.

In our definition, a cone always includes the origin.

Thus, a cone is always nonempty.

When we think of cones, ice-cream cones naturally come to mind. Hence, we tend to think of cones as pointed (at the origin) and emanating in one general direction from there. However, our definition above is more general. It allows lines (passing through the origin) and linear subspaces to be cones too.

Pointed cones are a special class of cones which are discussed below. An ice-cream cone is a pointed cone.
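To make the definition concrete, here is a small numerical sketch in Python; the set \(\{(x, y) \ST y \geq |x| \}\), an ice-cream-like cone in \(\RR^2\), is our own illustrative example.

```python
import numpy as np

def in_cone(v):
    """Membership test for the pointed cone {(x, y) : y >= |x|} in R^2."""
    x, y = v
    return y >= abs(x)

v = np.array([1.0, 2.0])
assert in_cone(v)

# Closure under nonnegative scalar multiplication: t*v stays in the cone.
for t in [0.0, 0.5, 3.0]:
    assert in_cone(t * v)

# This cone is pointed: v and -v are both members only when v = 0.
assert not in_cone(-v)
assert in_cone(np.array([0.0, 0.0]))
```

Note that \(t = 0\) always lands at the origin, matching the convention above that a cone contains \(\bzero\).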

## 9.4.2. Properties of Cones

### 9.4.2.1. Intersection

(Intersection of cones)

Let \(\{ C_ i \ST i \in I \}\) be a collection of cones. Then their intersection \(\bigcap_{i \in I} C_i\) is a cone.

Proof. Let \(\bx \in \bigcap_{i \in I} C_i\) and let \(t \geq 0\).

\(\bx \in C_i\) for every \(i \in I\).

Hence \(t \bx \in C_i\) for every \(i \in I\) since every \(C_i\) is a cone.

Hence \(t \bx \in \bigcap_{i \in I} C_i\).

Hence \(\bigcap_{i \in I} C_i\) is a cone.

### 9.4.2.2. Cartesian Product

(Cartesian product of cones)

Let \(\VV\) and \(\WW\) be real vector spaces. Let \(C_1 \subseteq \VV\) and \(C_2 \subseteq \WW\) be cones. Then their Cartesian product \(C_1 \times C_2\) is a cone.

Proof. Let \(\bx \in C_1 \times C_2\) and \(t \geq 0\).

Then there exist \(\by \in C_1\) and \(\bz \in C_2\) such that \(\bx = (\by, \bz)\).

Then \(t \by \in C_1\) and \(t \bz \in C_2\) since both are cones.

Hence \((t \by, t \bz) \in C_1 \times C_2\).

But \((t \by, t \bz) = t(\by, \bz) = t \bx\).

Hence \(t \bx \in C_1 \times C_2\).

Hence \(C_1 \times C_2\) is a cone.

### 9.4.2.3. Set Addition

(Set addition of cones)

Let \(C_1, C_2 \subseteq \VV\) be cones. Then their vector sum \(C_1 + C_2\) is a cone.

Proof. Let \(\bx \in C_1 + C_2\) and \(t \geq 0\).

Then there exist \(\bx_1 \in C_1\) and \(\bx_2 \in C_2\) such that \(\bx = \bx_1 + \bx_2\).

Then \(t \bx_1 \in C_1\) and \(t \bx_2 \in C_2\) since they are cones.

Hence \(t \bx = t (\bx_1 + \bx_2) = t \bx_1 + t \bx_2 \in C_1 + C_2\).

Hence \(C_1 + C_2\) is a cone.

### 9.4.2.4. Closure

(Closure)

Let \(C \subseteq \VV\) be a cone. Then its closure \(\closure C\) is a cone.

Proof. Let \(\bx \in \closure C\) and \(t \geq 0\).

Then there exists a sequence \(\{ \bx_k \}\) of points of \(C\) such that \(\bx_k \to \bx\).

Then \(t \bx_k \in C\) for every \(k\) since \(C\) is a cone.

Consider the sequence \(\{ t \bx_k \}\) of points of \(C\).

We have \(\lim_{k \to \infty} t \bx_k = t \bx\).

Hence \(t \bx \in \closure C\).

Hence \(\closure C\) is a cone.

### 9.4.2.5. Linear Transformation

(Linear transformation)

The image and the inverse image of a cone under a linear transformation are cones.

Proof. Let \(\VV\) and \(\WW\) be real vector spaces and \(\bT : \VV \to \WW\) be a linear transformation.

Image of a cone is a cone.

Let \(C \subseteq \VV\) be a cone.

Let \(D = \bT(C)\).

Let \(\bv \in D\) and \(t \geq 0\).

Then there exists \(\bu \in C\) such that \(\bv = \bT(\bu)\).

Since \(C\) is a cone, hence \(t \bu \in C\).

Then \(\bT(t \bu) = t \bT(\bu) = t \bv \in D\).

Hence \(D\) is a cone.

Inverse image of a cone is a cone.

Let \(D \subseteq \WW\) be a cone.

Let \(C = \bT^{-1}(D)\).

Let \(\bx \in C\) and \(t \geq 0\).

Then \(\by = \bT (\bx) \in D\).

Also \(\bT (t \bx) = t \bT (\bx) = t \by \in D\) since \(D\) is a cone.

Hence \(t \bx \in C\).

Hence \(C\) is a cone.

## 9.4.3. Convex Cones

(Convex cone)

A set \(C\) is called a *convex cone* if it is convex and a cone.

(Convex cones)

A ray with its base at origin is a convex cone.

A line passing through origin is a convex cone.

A plane passing through origin is a convex cone.

Any subspace is a convex cone.

(Subspace as convex cone)

A linear subspace is a convex cone.

Proof. Let \(V \subseteq \VV\) be a subspace. We know that \(V\) is convex since \(V\) contains all its linear combinations and every convex combination is a linear combination. Now, let \(\bv \in V\). Then, \(t \bv \in V\) for every \(t \geq 0\) since \(V\) is closed under scalar multiplication. Thus, \(V\) is a cone too. Thus, \(V\) is a convex cone.

(Half spaces as convex cone)

Let \(\VV\) be a real vector space. Let \(\ba \in \VV^*\). Consider the set

\[ H = \{ \bx \in \VV \ST \langle \bx, \ba \rangle \leq 0 \}. \]

Then \(H\) is a convex cone.

Proof. Half space as a cone

Let \(\bx \in H\) and \(t \geq 0\).

Then \(\langle t \bx, \ba \rangle = t \langle \bx, \ba \rangle \leq 0\).

Hence \(t \bx \in H\).

Half space as convex

Let \(\bx, \by \in H\) and \(t \in [0,1]\).

Then

\[ \langle t \bx + (1-t) \by, \ba \rangle = t \langle \bx, \ba \rangle + (1-t) \langle \by, \ba \rangle \leq 0. \]

Hence \(t \bx + (1-t) \by \in H\) and \(H\) is convex.

### 9.4.3.1. Characterization

(Convex cone characterization)

A set is a convex cone if and only if it is closed under addition and nonnegative scalar multiplication.

In other words, \(C\) is a convex cone if and only if for every \(\bx_1, \bx_2 \in C\) and \(t_1, t_2 \geq 0\), the following holds true:

\[ t_1 \bx_1 + t_2 \bx_2 \in C. \]

Proof. Let \(C \subseteq \VV\). Let \(\bx_1, \bx_2 \in C\).

If \(C\) is a convex cone, then:

\(\bx = \frac{1}{2} \bx_1 + \frac{1}{2} \bx_2 \in C\).

But then, \(2 \bx = \bx_1 + \bx_2 \in C\) since \(C\) is a cone.

Thus, \(C\) is closed under addition.

\(C\) being a cone, it is closed under nonnegative scalar multiplication.

Combining, we see that \(t_1 \bx_1 + t_2 \bx_2 \in C\).

Now, assume that \(C\) is closed under addition and nonnegative scalar multiplication.

\(C\) is a cone since it is closed under nonnegative scalar multiplication.

In particular \(t \bx_1 \in C\) for all \(t \in [0,1]\) and \((1-t) \bx_2 \in C\) for all \(t \in [0,1]\).

Since \(C\) is closed under addition, hence \(t \bx_1 + (1-t) \bx_2 \in C\) for all \(t \in [0,1]\).

Thus, \(C\) is convex too.
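We can spot-check this characterization numerically on the half space \(H = \{\bx \ST \langle \bx, \ba \rangle \leq 0 \}\) from the earlier example; the specific normal vector \(\ba\) below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([1.0, -2.0, 3.0])            # normal vector defining H

def in_H(x):
    """Membership in the convex cone H = {x : <x, a> <= 0}."""
    return x @ a <= 1e-12                  # small tolerance for floating point

# Closed under addition and nonnegative scaling, so every combination
# t1*x1 + t2*x2 with t1, t2 >= 0 must stay in H.
for _ in range(100):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    if x1 @ a > 0:
        x1 = -x1                           # reflect the sample into H
    if x2 @ a > 0:
        x2 = -x2
    t1, t2 = rng.uniform(0.0, 5.0, size=2)
    assert in_H(t1 * x1 + t2 * x2)
```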

(Set addition characterization)

A cone \(C\) is convex if and only if \(C + C \subseteq C\).

Proof. Assume that a cone \(C\) satisfies \(C + C \subseteq C\).

Let \(\bx, \by \in C\) and \(t \in [0,1]\).

Since \(C\) is a cone, hence \(t \bx\) and \((1-t) \by\) belong to \(C\).

Hence \(t \bx + (1-t) \by \in C + C \subseteq C\).

Hence \(C\) is convex.

For converse, assume that \(C\) is a convex cone.

Let \(\bx, \by \in C\).

Since \(C\) is a cone, hence \(2 \bx, 2 \by \in C\).

Since \(C\) is convex, hence

\[ \frac{1}{2} 2 \bx + \frac{1}{2} 2 \by = \bx + \by \in C. \]

Hence \(C + C \subseteq C\).

### 9.4.3.2. Intersection of Convex Cones

(Intersection of arbitrary collection of convex cones)

Let \(\{ A_i\}_{i \in I}\) be a family of sets such that \(A_i\) is a convex cone for all \(i \in I\). Then \(\cap_{i \in I} A_i\) is a convex cone.

Proof. Let \(\bx_1, \bx_2\) be any two arbitrary elements in \(A = \cap_{i \in I} A_i\) and let \(t_1, t_2 \geq 0\).

Then \(\bx_1, \bx_2 \in A_i\) for every \(i \in I\).

Since every \(A_i\) is a convex cone, hence \(t_1 \bx_1 + t_2 \bx_2 \in A_i\) for every \(i \in I\).

Hence \(t_1 \bx_1 + t_2 \bx_2 \in A\).

Hence \(A\) is a convex cone.

As a consequence, an arbitrary intersection of half-spaces (at origin) and hyperplanes (at origin) is a convex cone. Thus, the solution set of a system of linear equations and inequalities is a convex cone if the equations and inequalities are homogeneous.

### 9.4.3.3. Containing and Contained Subspaces

(Containing and contained subspaces)

Let \(C\) be a convex cone. Then, there is a smallest linear subspace containing \(C\) given by:

\[ C - C = \{ \bx - \by \ST \bx \in C, \by \in C \}. \]

And, there is a largest linear subspace contained in \(C\) given by

\[ (-C) \cap C. \]

Proof. We show that \(U = C - C\) is a subspace containing \(C\).

Since \(\bzero \in C\), hence \(\bzero = \bzero - \bzero \in U\). Also, this means that \(C - \bzero = C \subseteq U\).

Let \(\bu, \bv \in U\).

\(\bu = \bx - \by\) for some \(\bx, \by \in C\).

\(\bv = \bz - \bw\) for some \(\bz, \bw \in C\).

\(\bu + \bv = (\bx + \bz) - (\by + \bw)\).

Since \(C\) is a convex cone, it is closed under addition.

Hence, \(\bx + \bz, \by + \bw \in C\).

Hence, \(\bu + \bv \in U\) (as it is a difference of two vectors in \(C\)).

Thus, \(U\) is closed under vector addition.

Let \(t \in \RR\), \(\bu \in U\).

\(\bu = \bx - \by\) for some \(\bx, \by \in C\).

\( t \bu = t\bx - t \by\).

If \(t \geq 0\), then \(t \bx, t \by \in C\), thus \(t \bu \in U\).

If \(t < 0\), then \(-t\bx, -t\by \in C\). We can write \(t \bu = (-t \by) - (-t \bx)\). Thus, \(t \bu \in U\).

Thus, \(U\) is closed under scalar multiplication.

Next, we show that it is the smallest subspace containing \(C\).

Let \(V\) be a subspace that contains \(C\).

Then \(V\) contains \(-C\) also.

Then, \(V\) contains \(C + (-C) = C - C = U\) also.

Thus, \(U\) is the smallest subspace containing \(C\).

In summary, \(C\) is closed under addition and nonnegative scalar multiplication. \(C\) contains \(\bzero\). To be a subspace, a set must be closed under multiplication by \(-1\) too. \(C - C\) is the smallest such subspace.

Since \(C\) contains \(\bzero\), hence its affine hull contains \(\bzero\) too. Thus, its affine hull must be a linear subspace. The affine hull then is the smallest subspace containing \(C\). Thus, the affine hull of \(C\) is exactly \(C - C\).

We show that \(L = (-C) \cap C\) is a subspace contained in \(C\).

[zero vector]

Since \(\bzero \in C\) and \(\bzero \in -C\), hence \(\bzero \in L\).

[Scalar multiplication]

Let a nonzero \(\bx \in L\). Then, \(\bx \in C\) and \(\bx \in -C\).

\(\bx \in -C \implies -\bx \in C\).

\(\bx \in C \implies -\bx \in -C\).

This means that \(-\bx \in C\) and \(-\bx \in -C\) too.

Thus, \(-\bx \in L\).

In summary, \(\bx \in L\) implies \(\bx, -\bx \in C\) and \(\bx, -\bx \in -C\).

Then, for any \(t \geq 0\)

\(\bx \in C \implies t \bx \in C\).

\(-\bx \in C \implies - t\bx \in C\).

\(-t \bx \in C \implies t \bx \in -C\).

Thus, \(t \bx \in L\).

And, for any \(t < 0\)

\(\bx \in C \implies -t \bx \in C\).

\(-\bx \in C \implies t \bx \in C\).

\(-t \bx \in C \implies t \bx \in -C\).

Thus, \(t \bx \in L\).

Thus, \(L\) is closed under scalar multiplication.

[Vector addition]

Let \(\bu, \bv \in L\).

Then, \(\bu, \bv \in C\) and \(\bu, \bv \in -C\).

Thus, \(-\bu, -\bv \in C\).

Since \(C\) is closed under addition, hence \(\bu + \bv \in C\) and \(-\bu - \bv \in C\).

But then, \(-\bu -\bv \in -C\) and \(\bu + \bv \in -C\).

Thus, \(\bu + \bv \in L\). \(-\bu -\bv \in L\) too.

Thus, \(L\) is closed under addition.

Thus, \(L\) is a vector space contained in \(C\). \(L\) is the largest such subspace since any subspace contained in \(C\) should be contained in \(-C\) too.
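The two subspaces can be seen concretely for the closed upper half-plane \(C = \{(x, y) \ST y \geq 0\}\) in \(\RR^2\) (our own example): \(C - C\) is all of \(\RR^2\), while \((-C) \cap C\) is the \(x\)-axis. A numerical sketch:

```python
import numpy as np

# C = closed upper half-plane, a convex cone in R^2
in_C = lambda v: v[1] >= 0

# C - C = R^2: any vector v splits as v = x - y with x, y in C
v = np.array([3.0, -5.0])
x, y = np.array([3.0, 0.0]), np.array([0.0, 5.0])
assert in_C(x) and in_C(y)
assert np.allclose(x - y, v)

# (-C) ∩ C = the x-axis, the largest subspace inside C
in_L = lambda v: in_C(v) and in_C(-v)
assert in_L(np.array([7.0, 0.0]))
assert not in_L(np.array([0.0, 1.0]))    # (0, 1) is in C but -(0, 1) is not
```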

## 9.4.4. Conic Combinations

(Conic combination)

A point of the form \(t_1 \bx_1 + \dots + t_k \bx_k \) with
\(t_1 , \dots, t_k \geq 0\) is called a *conic combination*
(or a *non-negative linear combination*) of \(\bx_1,\dots, \bx_k\).

A convex cone is closed under non-negative linear/conic combinations.

One way to prove that a set is a convex cone is to show that it contains all its conic combinations.

(Convex cone characterization with conic combinations)

Let \(C\) be a convex cone. Then for every \(\bx_1, \dots, \bx_k \in C\), every conic combination \(t_1 \bx_1 + \dots + t_k \bx_k \) with \(t_i \geq 0\) belongs to \(C\).

Conversely, if a set \(C\) contains all conic combinations of its points, then it is a convex cone.

In other words, \(C\) is a convex cone if and only if it is closed under conic combinations.

Proof. Assume that \(C\) is a convex cone. Then it is closed under addition and nonnegative scalar multiplication.

Let \(\bx_1, \dots, \bx_k \in C\).

Then, \(t_1 \bx_1, \dots, t_k \bx_k \in C\) for all \(t_1,\dots, t_k \geq 0\) since \(C\) is closed under nonnegative scalar multiplication.

Then, \(t_1 \bx_1 + \dots + t_k \bx_k \in C\) since \(C\) is closed under addition.

Thus, \(C\) contains all its conic combinations.

For the converse, assume that \(C\) contains all its conic combinations.

Let \(\bx \in C\).

Then, \(t \bx \in C\) for all \(t \geq 0\) since \(t \bx\) is a conic combination.

Thus, \(C\) is a cone.

Now, let \(\bx, \by \in C\) and \(t \in [0,1]\). Then, \(1-t \in [0,1]\) too.

Thus, \(t \bx + (1-t) \by\) is a conic combination of \(\bx, \by\).

Hence, \(t \bx + (1-t) \by \in C\).

Thus, \(C\) is convex.

Combining, \(C\) is a convex cone.

Here is another proof that a linear subspace is a convex cone using the idea of conic combinations.

Every subspace contains the \(\bzero\) vector.

Every conic combination is also a linear combination.

A subspace is closed under linear combinations.

Hence, it is also closed under conic combinations.

Hence, it is a convex cone.

(Conic combinations and nonnegative orthant)

We recall that the nonnegative orthant of \(\RR^k\) is given by

\[ \RR^k_+ = \{ \bx \in \RR^k \ST x_i \geq 0, \; i = 1, \dots, k \}. \]

Thus, the coefficients for conic combinations of \(k\) points are drawn from \(\RR_+^k\).

A conic combination of conic combinations is a conic combination.

Proof. Let \(S \subseteq \VV\). Note that \(S\) is arbitrary (no convexity or conic structure assumed).

Consider \(n\) points \(\by_i\), \(i=1,\dots, n\), described as below.

Let \(\by_i = \sum_{j=1}^{m_i} t_{i,j} \bx_{i,j}\) be conic combinations of \(m_i\) points:

\(\bx_{i,1}, \dots, \bx_{i,m_i} \in S\),

\(t_{i,j} \geq 0\).

Consider the conic combination \(\by = \sum_{i=1}^n r_i \by_i\) with \(r_i \geq 0\).

We need to show that \(\by\) is a conic combination of points of \(S\).

Towards this:

Consider the terms \(s_{i,j} = r_i t_{i,j}\).

Since \(r_i \geq 0\) and \(t_{i, j} \geq 0\), hence \(s_{i, j} \geq 0\). Hence,

\[ \by = \sum_{i=1}^n r_i \by_i = \sum_{i=1}^n \sum_{j=1}^{m_i} s_{i,j} \bx_{i,j} \]

is a conic combination of points of \(S\).
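The flattening argument can be checked numerically; the random data below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))          # rows are points x_1, ..., x_4 of S

# y_i = sum_j T[i, j] x_j are conic combinations of the x_j (T >= 0)
T = rng.uniform(0.0, 2.0, size=(2, 4))
Y = T @ X

# a conic combination of the y_i
r = np.array([0.5, 3.0])
y = r @ Y

# flattened coefficients s = r T are nonnegative and reproduce y directly,
# so y is a conic combination of the original points of S
s = r @ T
assert np.all(s >= 0)
assert np.allclose(y, s @ X)
```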

The idea of conic combinations can be generalized to infinite sums and integrals.

## 9.4.5. Conic Hulls

(Conic hull)

The *conic hull* of a set \(S\) is the set of all conic combinations
of points in \(S\); i.e.,

\[ \cone S = \left \{ \sum_{i=1}^k t_i \bx_i \ST \bx_i \in S, t_i \geq 0, k \geq 1 \right \}. \]

A conic hull is a convex cone.

Proof. Let \(C\) be the conic hull of a set \(S\).

Let \(\bx, \by \in C\).

Then, \(\bx, \by\) are conic combinations of \(S\).

Let \(\bz = t_1 \bx + t_2 \by\) with \(t_1, t_2 \geq 0\).

Then, \(\bz\) is a conic combination of conic combinations of \(S\).

By Theorem 9.52, \(\bz\) is a conic combination of \(S\).

Since \(C\) contains all conic combinations of \(S\), hence \(C\) contains \(\bz\).

Thus, for any \(\bx, \by \in C\), \(\bz = t_1 \bx + t_2 \by\) with \(t_1, t_2 \geq 0\) is in \(C\).

Thus, \(C\) is a convex cone.

Conic hull of a set is the smallest convex cone that contains the set.

Proof. Let \(S\) be an arbitrary set and \(C\) be its conic hull.

We have already shown that \(C\) is a convex cone.

Assume \(D\) to be a convex cone such that \(S \subseteq D\).

Then, \(D\) contains every conic combination of \(S\) since a convex cone is closed under conic combinations.

Thus, \(C \subseteq D\) since \(C\) is the set of all conic combinations of \(S\).
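For a finite set of generators the conic hull is easy to work with directly. A sketch with two linearly independent generators in \(\RR^2\) (the generators are arbitrary choices):

```python
import numpy as np

# cone({a1, a2}) = {t1 a1 + t2 a2 : t1, t2 >= 0}
A = np.column_stack([[1.0, 0.0], [1.0, 1.0]])   # generators as columns

def in_conic_hull(x):
    """Since the generators are linearly independent, the coefficients of
    x in their basis are unique; membership holds iff both are nonnegative."""
    t = np.linalg.solve(A, x)
    return bool(np.all(t >= -1e-12))

assert in_conic_hull(np.array([2.0, 1.0]))      # 1*a1 + 1*a2
assert in_conic_hull(np.array([0.0, 0.0]))      # every cone contains the origin
assert not in_conic_hull(np.array([-1.0, 0.0]))
```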

(Conic hull of a convex set)

Let \(C\) be a convex set. Let \(K\) be defined as:

\[ K = \{ t \bx \ST \bx \in C, t \geq 0 \}. \]

Then, \(K\) is the conic hull of \(C\).

Proof. Let \(H\) be the conic hull of \(C\); i.e., \(H\) is the set of all conic combinations of \(C\). We show that \(K \subseteq H\) and \(H \subseteq K\).

\(K \subseteq H\)

For any \(\bx \in C\) and \(t \geq 0\), \(t \bx\) is a conic combination of \(C\).

Hence \(t \bx \in H\).

Thus, \(K \subseteq H\).

\(H \subseteq K\)

Let \( \bx = t_1 \bx_1 + \dots + t_k \bx_k\) be a conic combination of \(C\).

Thus, \(t_i \geq 0\) and \(\bx_i \in C\).

By definition of \(K\), \(\bzero \in K\).

If \(t_i = 0\) for every \(i = 1, \dots, k\), then \(\bx = \bzero\). So \(\bx \in K\).

Now consider the case where at least one \(t_i > 0\).

Let \(t = \sum t_i\). Clearly, \(t > 0\).

Consider \(\bz = \frac{t_1}{t} \bx_1 + \dots + \frac{t_k}{t} \bx_k\).

Note that \(\bz\) is a convex combination of \(\bx_1, \dots, \bx_k \in C\).

Since \(C\) is convex, hence \(\bz \in C\).

Then, \(\bx = t \bz \in K\) since \(t > 0\) and \(\bz \in C\).

Thus, \(K\) contains all conic combinations of \(C\).

Thus, \(H \subseteq K\).

(Conic hull of a convex hull)

Let \(S \subseteq \VV\) be a nonempty set. Let \(C = \convex S\). Then

\[ \cone C = \cone S. \]

In other words, the conic hull of the convex hull of a set is same as the conic hull of the set.

Proof. Since \(S \subseteq C\), hence \(\cone S \subseteq \cone C\). We now prove the converse.

Let \(\bx \in \cone C\).

Then \(\bx\) is a nonnegative combination of some vectors in \(C\). There exist a positive integer \(p\), vectors \(\bx_1, \dots, \bx_p \in C\), and nonnegative scalars \(t_1, \dots, t_p\) such that

\[ \bx = \sum_{i=1}^p t_i \bx_i. \]

Each \(\bx_i\) is a convex combination of some vectors in \(S\).

Hence \(\bx\) is a nonnegative combination of some vectors in \(S\).

Hence \(\bx \in \cone S\).

Hence \(\cone C \subseteq \cone S\).

### 9.4.5.1. Unique Conic Representations

Recall from the Carathéodory theorem that in an \(n\)-dimensional vector space, every point in the convex hull of a set can be represented as a convex combination of at most \(n+1\) points belonging to the set.

A similar representation is possible in conic hulls too.

(Conic representation theorem)

Let \(\VV\) be an \(n\)-dimensional real vector space. Let \(S \subseteq \VV\). Let \(\bx \in \ConicHull(S)\). Then, there exist \(k\) linearly independent vectors \(\bx_1, \dots, \bx_k \in S\) such that \(\bx \in \ConicHull(\{ \bx_1, \dots, \bx_k\})\); i.e., there exists \(\bt \in \RR^k_+\) such that

\[ \bx = \sum_{i=1}^k t_i \bx_i. \]

Since the \(k\) vectors are linearly independent, hence \(k \leq n\).

Proof. The proof is similar to Carathéodory theorem.

Let \(\bx \in \ConicHull(S)\).

Then, there exist \(m\) points \(\bx_1, \dots, \bx_m \in S\) and \(\bt \in \RR^m_+\) such that

\[ \bx = \sum_{i=1}^m t_i \bx_i. \]

We can assume that \(t_i > 0\) for every \(i = 1,\dots,m\). Otherwise, we can simply drop the corresponding points from the conic combination.

If \(\bx_1, \dots, \bx_m\) are linearly independent, then \(k=m\), \(m \leq n\) and we are done.

Let us consider the case when they are linearly dependent.

Then, there exists a nontrivial linear combination equal to the zero vector:

\[ r_1 \bx_1 + \dots + r_m \bx_m = \bzero. \]

Then, for any \(\alpha \in \RR\)

\[ \bx = \sum_{i=1}^m t_i \bx_i + \alpha \sum_{i=1}^m r_i \bx_i = \sum_{i=1}^m (t_i + \alpha r_i) \bx_i. \]

This representation is a conic combination if \(t_i + \alpha r_i \geq 0\) for every \(i=1,\dots,m\).

Since \(t_i > 0\) for every \(i\), hence, this set of inequalities is satisfied for all \(\alpha \in A\) where \(A\) is a closed interval with a nonempty interior.

Let \(A_i\) be the solution set for \(i\)-th inequality.

We have \(A = \bigcap_i A_i\).

\(\alpha = 0\) satisfies every inequality. Thus, \(0 \in A_i\) for every \(i\). Thus, \(A \neq \EmptySet\).

If \(r_i = 0\), then \(A_i = \RR\).

If \(r_i > 0\), then \(A_i = [-\frac{t_i}{r_i}, \infty)\).

If \(r_i < 0\), then \(A_i = \left (-\infty, -\frac{t_i}{r_i} \right ]\).

Since not all \(r_i = 0\), hence at least one \(A_i\) has a finite endpoint (either left or right).

Thus, there are three possibilities for \(A\): \([a,b]\), or \([a, \infty)\) or \((-\infty, b]\).

Each of \(a\) and \(b\), when finite, is an endpoint of one of the \(A_i\).

If we pick \(\alpha\) as one of the finite endpoints of \(A\), then \(t_i + \alpha r_i \geq 0\) for every \(i\) and \(t_j + \alpha r_j = 0\) for some \(j \in \{1,\dots,m\}\).

Thus, we obtain a conic representation of \(\bx\) with at most \(m-1\) vectors.

This process can be carried out repeatedly until we obtain a conic representation of \(\bx\) of \(k\) linearly independent vectors.

Since the \(k\) vectors \(\bx_1, \dots, \bx_k\) so obtained are linearly independent, hence \(k \leq n\).
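The reduction in this proof is constructive. Below is a sketch implementation in Python; the use of an SVD null vector for the dependency, the tolerances, and the specific endpoint choice are our own implementation details.

```python
import numpy as np

def conic_caratheodory(X, t, tol=1e-9):
    """Reduce a conic combination x = X @ t (t >= 0) to one whose active
    columns of X are linearly independent, following the proof above."""
    t = np.array(t, dtype=float)
    while True:
        idx = np.where(t > tol)[0]
        if len(idx) == 0 or np.linalg.matrix_rank(X[:, idx]) == len(idx):
            return t
        # nontrivial dependency among the active columns: X[:, idx] @ r ~ 0
        r_active = np.linalg.svd(X[:, idx])[2][-1]
        r = np.zeros_like(t)
        r[idx] = r_active
        if not np.any(r < -tol):
            r = -r                        # ensure a negative entry exists
        neg = r < -tol
        # largest alpha >= 0 keeping t + alpha*r >= 0; one coefficient hits 0
        alpha = np.min(-t[neg] / r[neg])
        t = np.maximum(t + alpha * r, 0.0)

# x is a conic combination of three linearly dependent vectors in R^2
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
t0 = np.array([1.0, 2.0, 3.0])
t1 = conic_caratheodory(X, t0)
assert np.all(t1 >= 0)
assert np.allclose(X @ t1, X @ t0)
assert np.count_nonzero(t1 > 1e-9) <= 2   # at most dim(R^2) vectors remain
```

Each pass zeroes at least one coefficient, so the loop terminates with at most \(n\) active, linearly independent generators.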

## 9.4.6. Pointed Cones

(Pointed cone)

A cone \(C \subset \VV\) is called pointed if \(\bx \in C\) and \(-\bx \in C\) implies \(\bx = \bzero\).

In other words, a pointed cone doesn’t contain a line.

(The nonnegative orthant is a pointed convex cone)

Recall from Definition 9.14 that the nonnegative orthant is defined as:

\[ \RR^n_+ = \{ \bx \in \RR^n \ST x_i \geq 0, \; i = 1, \dots, n \}. \]

In other words, for \(\bx \in \RR^n_+\), every component is non-negative.

Let \(\bx, \by \in \RR^n_+\). Let \(\alpha, \beta \geq 0\) and consider their conic combination \(\bz = \alpha \bx + \beta \by\).

It is obvious that all components of \(\bz\) are also nonnegative. Hence \(\bz \in \RR^n_+\). Thus, \(\RR^n_+\) is closed under conic combinations. Hence, \(\RR^n_+\) is a convex cone.

Finally, \(\RR^n_+\) is pointed as \(\bx \in \RR^n_+\) and \(-\bx \in \RR^n_+\) both hold true only if \(\bx = \bzero\).
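A quick numerical sanity check of both claims; the sample points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
in_orthant = lambda v: bool(np.all(v >= 0))

# closed under conic combinations
x, y = rng.uniform(0.0, 1.0, size=3), rng.uniform(0.0, 1.0, size=3)
assert in_orthant(2.0 * x + 0.5 * y)

# pointed: v and -v both nonnegative forces v = 0
z = np.zeros(3)
assert in_orthant(z) and in_orthant(-z)
assert not (in_orthant(x) and in_orthant(-x))   # x has positive entries
```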

## 9.4.7. Proper Cones

(Proper cone)

A cone \(K \subseteq \VV\) is called a *proper cone* if it satisfies the following:

\(K\) is *convex*.

\(K\) is *closed*.

\(K\) is *solid*; i.e., it has a nonempty interior.

\(K\) is *pointed*.

(Non-empty interior)

Consider the following sets in \(\RR^2\):

Both are closed convex cones. \(C_1\) doesn’t have an interior. All points in \(C_1\) are on the boundary of \(C_1\).

\(C_2\) has a non-empty interior; e.g., the point \((1,1) \in C_2\) is not on the boundary.

## 9.4.8. Norm Cones

(Norm cone)

Let \(\| \cdot \| : \VV \to \RR\) be any norm on \(\VV\).
The *norm cone* associated with the norm \(\| \cdot \|\) is given by the set

\[ C = \{ (\bx, t) \in \VV \times \RR \ST \| \bx \| \leq t \}. \]

\(C\) lies in the product space \(\VV \times \RR\).

If \(\VV = \RR^n\), then a norm cone belongs to \(\RR^{n+1}\).

A norm cone is convex. Moreover, it is a convex cone.

(Second order cone)

The second order cone is the norm cone for the Euclidean norm in the Euclidean space \(\RR^n\); i.e.,

\[ C = \{ (\bx, t) \in \RR^n \times \RR \ST \| \bx \|_2 \leq t \}. \]
From definition, \(C \subseteq \RR^{n+1}\).

This can be rewritten as

\[ C = \{ (\bx, t) \ST \bx^T \bx \leq t^2, \; t \geq 0 \}. \]
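A membership check for the second order cone, together with numerical spot checks of its cone and convexity properties; the sample points are arbitrary.

```python
import numpy as np

def in_soc(x, t):
    """Second order cone membership: ||x||_2 <= t."""
    return bool(np.linalg.norm(x) <= t + 1e-12)

assert in_soc(np.array([3.0, 4.0]), 5.0)       # ||(3, 4)||_2 = 5
assert not in_soc(np.array([3.0, 4.0]), 4.0)

# a cone: scaling a member by s >= 0 gives a member
s = 2.5
assert in_soc(s * np.array([3.0, 4.0]), s * 5.0)

# convex: the midpoint of two members is a member
x1, t1 = np.array([1.0, 0.0]), 2.0
x2, t2 = np.array([0.0, 2.0]), 3.0
assert in_soc(0.5 * (x1 + x2), 0.5 * (t1 + t2))
```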
## 9.4.9. Barrier Cones

(Barrier vector)

Let \(C\) be a convex set of \(\VV\).
A vector \(\bv \in \VV^*\) is called a
*barrier* vector to \(C\) if for some \(\beta \in \RR\),

\[ \langle \bx, \bv \rangle \leq \beta \quad \forall \bx \in C. \]

In other words, the set of inner products of points in \(C\) with \(\bv\) is bounded from above.

(Barrier cone)

The set of all barrier vectors to a convex set \(C\)
is called its *barrier cone*.

The barrier cone of a convex set is a convex cone.

Proof. Let \(C\) be a convex set and let \(B\) be its barrier cone. Let \(\bu, \bv \in B\) and let \(t \geq 0\).

Since \(\langle \bx, \bzero \rangle = 0\) for every \(\bx \in C\), the inner products with \(\bzero\) are bounded from above. Thus, \(\bzero \in B\).

Since \(\bu \in B\), there exists \(\alpha \in \RR\) such that \(\langle \bx, \bu \rangle \leq \alpha\) for every \(\bx \in C\). Then, \(\langle \bx, t \bu \rangle = t \langle \bx, \bu \rangle \leq t \alpha\) for every \(\bx \in C\). Thus, the set of inner products with \(t \bu\) is bounded from above by \(t \alpha\). Thus, \(t \bu \in B\).

Similarly, there exists \(\beta \in \RR\) such that \(\langle \bx, \bv \rangle \leq \beta\) for every \(\bx \in C\). Then, \(\langle \bx, \bu + \bv \rangle = \langle \bx, \bu \rangle + \langle \bx, \bv \rangle \leq \alpha + \beta\) for every \(\bx \in C\). Thus, the set of inner products with \(\bu + \bv\) is bounded from above by \(\alpha + \beta\). Thus, \(\bu + \bv \in B\).

Thus, \(B\) is closed under nonnegative scalar multiplication and vector addition. \(B\) is a convex cone.
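As an illustration (our own example), take \(C = \RR^2_+\). Then \(\langle \bx, \bv \rangle\) is bounded above over \(C\) exactly when both components of \(\bv\) are nonpositive, so the barrier cone is \(-\RR^2_+\). A sampling-based sanity check:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1e3, size=(1000, 2))   # sample points of C = R^2_+

def sample_sup(v):
    """Supremum of <x, v> over the sampled points of C."""
    return float(np.max(X @ v))

u = np.array([-1.0, -2.0])                   # barrier vector: sup <= 0
w = np.array([1.0, -1.0])                    # not a barrier vector
assert sample_sup(u) <= 0.0
assert sample_sup(w) > 1.0                   # grows with the sampling scale

# the barrier cone is itself a convex cone: conic combinations of
# barrier vectors remain barrier vectors
assert sample_sup(2.0 * u + 3.0 * np.array([-0.5, 0.0])) <= 0.0
```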

## 9.4.10. Properties of Convex Cones

We consider operations on convex cones which generate convex cones.

(Closure under set intersection)

If \(K_1\) and \(K_2\) are convex cones, then \(K = K_1 \cap K_2\) is a convex cone.

Proof. We show that \(K\) is closed under nonnegative scalar multiplication.

Let \(\bx \in K\) and \(t \geq 0\).

Then, \(\bx \in K_1\) and \(\bx \in K_2\).

Hence, \(t \bx \in K_1\) and \(t \bx \in K_2\) since both are closed under nonnegative scalar multiplication.

Thus, \(t\bx \in K\).

Hence, \(K\) is closed under nonnegative scalar multiplication.

We show that \(K\) is closed under vector addition too.

Let \(\bx, \by \in K\).

Then, \(\bx, \by \in K_1\) and \(\bx, \by \in K_2\).

But then, \(\bx + \by \in K_1\) and \(\bx + \by \in K_2\) since both are closed under vector addition.

Thus, \(\bx + \by \in K\).

Hence, \(K\) is closed under vector addition.

Thus, \(K\) is closed under nonnegative scalar multiplication and vector addition. \(K\) is a convex cone.

(Closure under set addition)

If \(K_1\) and \(K_2\) are convex cones, then \(K = K_1 + K_2\) is a convex cone.

Proof. We show that \(K\) is closed under nonnegative scalar multiplication.

Let \(\bx \in K\) and \(t \geq 0\).

Then, \(\bx = \bx_1 + \bx_2\) where \(\bx_1 \in K_1\), and \(\bx_2 \in K_2\).

Then, \(t \bx_1 \in K_1\) and \(t \bx_2 \in K_2\) since \(K_1\) and \(K_2\) are cones.

Then, \( t \bx = t(\bx_1 + \bx_2) = t \bx_1 + t \bx_2 \in K\).

Thus, \(K\) is closed under nonnegative scalar multiplication.

We show that \(K\) is closed under vector addition too.

Let \(\bx, \by \in K\).

Then, \(\bx = \bx_1 + \bx_2\) with some \(\bx_1 \in K_1\) and \(\bx_2 \in K_2\).

And, \(\by = \by_1 + \by_2\) with some \(\by_1 \in K_1\) and \(\by_2 \in K_2\).

Then, \(\bx + \by = (\bx_1 + \by_1) + (\bx_2 + \by_2)\).

Now, \(\bx_1 + \by_1 \in K_1\) and \(\bx_2 + \by_2 \in K_2\) since \(K_1\) and \(K_2\) are closed under addition (they are convex cones).

Thus, \(\bx + \by \in K\).

Thus, \(K\) is closed under vector addition.

Thus, \(K\) is closed under nonnegative scalar multiplication and vector addition. \(K\) is a convex cone.

We mention that by Theorem 9.34, \(K\) is convex. Hence, we just needed to show that \(K\) is a cone too.

(Positive scalar multiplication)

Let \(K\) be a convex cone. Then \(t K = K\) for every \(t > 0\).

Proof. We proceed as follows:

Let \(t > 0\).

Let \(\bx \in t K\).

Then, \(\frac{1}{t} \bx \in K\).

But then, \(t \frac{1}{t}\bx = \bx \in K\) too since \(K\) is closed under nonnegative scalar multiplication.

Thus, \(t K \subseteq K\).

Similarly, \(\bx \in K\) implies \(\bx \in tK\).

Thus, \(K \subseteq t K\).

Hence, \(K = t K\) for all \(t > 0\).

Note that \(0 K = \{ \bzero \}\), which differs from \(K\) unless \(K = \{ \bzero \}\).

(Convex hull of the union)

If \(K_1\) and \(K_2\) are convex cones, then

\[ \ConvexHull (K_1 \cup K_2) = K_1 + K_2. \]

Proof. By Corollary 9.3,

\[ \ConvexHull (K_1 \cup K_2) = \bigcup_{t \in [0,1]} \left [ (1-t) K_1 + t K_2 \right ]. \]

Now for \(t \in (0,1)\), by Property 9.13, \((1-t)K_1 = K_1\) and \(t K_2 = K_2\).

Thus, for \(t \in (0,1)\):

\[ (1-t) K_1 + t K_2 = K_1 + K_2. \]

For, \(t=0\) we are left with \(K_1\) and for \(t=1\), we are left with \(K_2\).

Since \(\bzero \in K_1\) and \(\bzero \in K_2\), hence \(K_1 \subseteq K_1 + K_2\) and \(K_2 \subseteq K_1 + K_2\). Thus,

\[ \ConvexHull (K_1 \cup K_2) = \bigcup_{t \in [0,1]} \left [ (1-t) K_1 + t K_2 \right ] = K_1 + K_2. \]

(Intersection as union)

If \(C_1\) and \(C_2\) are convex cones, then

\[ C_1 \cap C_2 = \bigcup \{ t C_1 \cap (1 - t) C_2 \ST t \in [0, 1] \}. \]

Proof. We first show that for every \(t \in (0,1)\)

\[ C_1 \cap C_2 = t C_1 \cap (1 - t) C_2. \]

Let \(\bx \in C_1 \cap C_2\).

Then \(\bx \in C_1\) and \(\bx \in C_2\).

Since \(0 < t < 1\), hence due to Property 9.13, \(C_1 = t C_1\) and \(C_2 = (1-t)C_2\).

Hence \(\bx \in t C_1\) and \(\bx \in (1-t) C_2\).

Hence \(\bx \in t C_1 \cap (1 - t) C_2\).

Hence \(C_1 \cap C_2 \subseteq t C_1 \cap (1 - t) C_2\).

Conversely, let \(\bx \in t C_1 \cap (1 - t) C_2\).

Then \(\bx \in t C_1\) and \(\bx \in (1-t) C_2\).

Hence \(\bx \in C_1\) and \(\bx \in C_2\).

Hence \(\bx \in C_1 \cap C_2\).

Hence \(t C_1 \cap (1 - t) C_2 \subseteq C_1 \cap C_2\).

If \(t=0\) or \(t=1\), then

\[ t C_1 \cap (1 - t) C_2 = \{ \bzero \} \subseteq C_1 \cap C_2 \]

since every convex cone contains the origin. Together,

\[ C_1 \cap C_2 = \bigcup \{ t C_1 \cap (1 - t) C_2 \ST t \in [0, 1] \}. \]

## 9.4.11. Cone Generated by a Convex Set

The direct sum vector space \(\VV \oplus \RR\) has been described in Definition 9.4.

(Convex sets as cross sections of cones)

A convex set \(C \subseteq \VV\) can be regarded as a cross section of some convex cone \(K \subseteq \VV \oplus \RR\).

Let \(K\) be the conic hull of points \((\bx, 1) \in \VV \oplus \RR\) such that \(\bx \in C\). Then,

\[ K = \{ (t \bx, t) \in \VV \oplus \RR \ST \bx \in C, t \geq 0 \}. \]

Now consider the hyperplane in \(\VV \oplus \RR\) given by:

\[ H = \{ (\bx, t) \in \VV \oplus \RR \ST t = 1 \}. \]

The intersection of \(H\) with \(K\) can be regarded as \(C\).

The projection of \(H \cap K\) on \(\VV\) is given by \(C\) (by dropping the last coordinate).

For every convex set \(C \subseteq \VV\), there is precisely one convex cone \(K \subseteq \VV \oplus \RR\) generated by the set \(\{(\bx, 1) \in \VV \oplus \RR \ST \bx \in C \}\) (its conic hull).

These convex cones have only \((\bzero, 0)\) in common with the half space \(\{(\bx, t) \in \VV \oplus \RR \ST t \leq 0 \}\).

We shall call this class of convex cones in \(\VV \oplus \RR\) generated by the convex sets in \(\VV\) as \(\CCC\).

An operation that is closed under the class \(\CCC\) corresponds to an operation on the convex sets in \(\VV\); e.g., if \(C_1\) and \(C_2\) are convex sets with corresponding cones \(K_1\) and \(K_2\), then \(C_1 \cap C_2\) is another convex set corresponding to a different convex cone \(K_3\). It is natural to ask if there is a way to construct \(K_3\) from \(K_1\) and \(K_2\) directly in \(\VV \oplus \RR\).

Each vector \((\bx, t) \in \VV \oplus \RR\) can be split as a direct sum with \(\bx \in \VV\) and \(t \in \RR\). Thus, it is possible to define different kinds of partial sums on \(\VV \oplus \RR\). Recall that partial sums on convex sets preserve the convexity. It turns out that they can do more. We can define partial sums which are closed under the family \(\CCC\) of convex cones in \(\VV \oplus \RR\) generated by the convex sets in \(\VV\).

We can define four types of partial sums:

Addition in \(\VV\), intersection in \(\RR\).

Addition in \(\VV\), addition in \(\RR\).

Intersection in \(\VV\), intersection in \(\RR\).

Intersection in \(\VV\), addition in \(\RR\).

Suppose that \(K_1\) and \(K_2\) are convex cones generated by the convex sets \(C_1\) and \(C_2\) respectively. Let \(K\) be their partial sum. Let us find out what is the corresponding convex set \(C\) in \(\VV\) based on the type of partial sum in \(\VV \oplus \RR\).

[Addition in \(\VV\), intersection in \(\RR\).]

In this case, \((\bx, 1) \in K\) if and only if \(\bx = \bx_1 + \bx_2\) for some \((\bx_1, 1) \in K_1\) and \((\bx_2, 1) \in K_2\).

Thus, the convex set corresponding to \(K\) is \(C = C_1 + C_2\).

[Addition in \(\VV\), addition in \(\RR\).]

\((\bx, 1) \in K\) if and only if \(\bx = \bx_1 + \bx_2\) and \(1 = t_1 + t_2\) with \(t_1 \geq 0\) and \(t_2 \geq 0\) for some \((\bx_1, t_1) \in K_1\) and \((\bx_2, t_2) \in K_2\).

Thus, \(C\) is the union of the sets \(t_1 C_1 + t_2 C_2\) over \(t_1 \geq 0\), \(t_2 \geq 0\) and \(t_1 + t_2 = 1\).

But, this is the same as \(C = \ConvexHull (C_1 \cup C_2)\) as per Theorem 9.38.

[TODO] Clarify this further. It is not obvious.

[Intersection in \(\VV\), intersection in \(\RR\)]

\((\bx, 1) \in K\) if and only if \((\bx, 1) \in K_1\) as well as \((\bx, 1) \in K_2\).

Thus, \(C = C_1 \cap C_2\).

[Intersection in \(\VV\), addition in \(\RR\)]

\((\bx, 1) \in K\) if and only if \((\bx, t_1) \in K_1\) and \((\bx, t_2) \in K_2\) for some \(t_1, t_2 \geq 0\) with \(t_1 + t_2 = 1\).

In this case, we can write \(C\) as:

\[\begin{split} C &= \bigcup \{ t_1 C_1 \cap t_2 C_2 \ST t_1, t_2 \geq 0, t_1 + t_2 =1 \}\\ &= \bigcup \{ (1 - t) C_1 \cap t C_2 \ST t \in [0, 1]\}. \end{split}\]

[TODO] Clarify this further. It is not obvious.

## 9.4.12. Positive Semidefinite Cone

(The convex cone of positive semidefinite matrices)

The set of positive semidefinite matrices \(\SS_+^n\) is a convex cone.

Proof. Let \(\bA, \bB \in \SS_+^n\) and \(\theta_1, \theta_2 \geq 0\). We have to show that \(\theta_1 \bA + \theta_2 \bB \in \SS_+^n\).

Now, for any \(\bv \in \RR^n\),

\[ \bv^T (\theta_1 \bA + \theta_2 \bB) \bv = \theta_1 \bv^T \bA \bv + \theta_2 \bv^T \bB \bv \geq 0, \]

since \(\theta_1, \theta_2 \geq 0\), \(\bv^T \bA \bv \geq 0\) and \(\bv^T \bB \bv \geq 0\).

Hence \(\theta_1 \bA + \theta_2 \bB \in \SS_+^n\).
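A numerical check: matrices of the form \(\bM \bM^T\) are positive semidefinite, and a conic combination of two of them stays in the cone. The sizes and weights below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# random PSD matrices: A = M M^T is always positive semidefinite
M1, M2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
A, B = M1 @ M1.T, M2 @ M2.T

# PSD test via eigenvalues (eigvalsh assumes a symmetric matrix)
is_psd = lambda S: bool(np.all(np.linalg.eigvalsh(S) >= -1e-9))
assert is_psd(A) and is_psd(B)

# conic combination stays in the PSD cone
theta1, theta2 = 0.7, 2.3
assert is_psd(theta1 * A + theta2 * B)
```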

## 9.4.13. Linear System with Nonnegativity Constraints

Consider the system

\[ P = \{ \bx \in \RR^n \ST \bA \bx = \bb, \bx \succeq \bzero \} \]

where \(\bA \in \RR^{m \times n}\) and \(\bb \in \RR^m\). Without loss of generality, we shall assume that the rows of \(\bA\) are linearly independent. This is a linear system \(\bA \bx = \bb\) with the nonnegativity constraint \(\bx \succeq \bzero\). If we write \(\bA\) in the form of column vectors as

\[ \bA = \begin{bmatrix} \ba_1 & \dots & \ba_n \end{bmatrix}, \]

Then, the set \(Q = \{ \bA \bx \ST \bx \in \RR^n_+ \}\) can be written as

\[ Q = \{ t_1 \ba_1 + \dots + t_n \ba_n \ST t_1, \dots, t_n \geq 0 \}. \]

In other words, \(Q\) is the conic hull of the column vectors of \(\bA\). We can think of \(\bA\) as a linear mapping of the nonnegative orthant (a convex cone) \(\RR^n_+\) in \(\RR^n\) to a convex cone in \(\RR^m\) given as the conic hull of the columns of \(\bA\).

We can now see that \(P\) is nonempty if and only if \(\bb \in Q\).

(Basic feasible solution)

Let \(P = \{ \bx \in \RR^n \ST \bA \bx = \bb, \bx \succeq \bzero \}\)
where \(\bA \in \RR^{m \times n}\) and \(\bb \in \RR^m\).
Assume that the rows of \(\bA\) are linearly independent.
Then, \(\bv \in \RR^n\) is a *basic feasible solution* (in short “bfs”)
of \(P\) if the columns of \(\bA\) corresponding to the positive
entries of \(\bv\) are linearly independent.

Consequently, \(\bv\) has at most \(m\) positive entries. All other entries of \(\bv\) are \(0\).
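The definition suggests a brute-force way to list all basic feasible solutions: try every choice of \(m\) columns of \(\bA\), solve the square system they determine, and keep the solution if it is nonnegative. A sketch in NumPy (the system here is an illustrative assumption, not from the text):

```python
import itertools
import numpy as np

def basic_feasible_solutions(A, b, tol=1e-9):
    """Enumerate basic feasible solutions of {x : A x = b, x >= 0}
    by trying every choice of m columns of A."""
    m, n = A.shape
    solutions = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if np.linalg.matrix_rank(B) < m:
            continue  # the chosen columns are linearly dependent
        xB = np.linalg.solve(B, b)
        if np.all(xB >= -tol):  # keep only nonnegative solutions
            v = np.zeros(n)
            v[list(cols)] = xB
            solutions.append(v)
    return solutions

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
for v in basic_feasible_solutions(A, b):
    print(v)
# [2. 0. 3.]
# [0. 2. 1.]
```

Degenerate basic feasible solutions (with fewer than \(m\) positive entries) may be reported more than once by this enumeration, since several column choices can yield the same point.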

(Existence of basic feasible solution)

Let \(P = \{ \bx \in \RR^n \ST \bA \bx = \bb, \bx \succeq \bzero \}\) where \(\bA \in \RR^{m \times n}\) and \(\bb \in \RR^m\). Assume that the rows of \(\bA\) are linearly independent.

If \(P\) is nonempty; i.e. \(P \neq \EmptySet\), then it contains at least one basic feasible solution.

Proof. Recall that

\[ Q = \{ \bA \bx \ST \bx \in \RR^n_+ \} = \left \{ \sum_{i=1}^n x_i \ba_i \ST x_1, \dots, x_n \geq 0 \right \} \]where \(\ba_1, \dots, \ba_n\) are the columns of the matrix \(\bA\).

If \(P \neq \EmptySet\), then \(\bb \in Q\).

In other words, \(\bb\) is a conic combination of columns of \(\bA\).

By the conic representation theorem, there exists a subset of \(k\) linearly independent vectors among \(\{\ba_1, \dots, \ba_n \}\) such that \(\bb\) is their conic combination.

In other words, there exist \(k\) indices \(1 \leq i_1 < \dots < i_k \leq n\) and \(k\) numbers \(v_{i_1}, \dots, v_{i_k} > 0\) such that

\[ \bb = \sum_{j=1}^k v_{i_j} \ba_{i_j} \]and \(\{\ba_{i_1}, \dots, \ba_{i_k}\}\) are linearly independent.

Consequently \(k \leq m\) since columns of \(\bA\) belong to \(\RR^m\).

Let

\[ \bv = \sum_{j=1}^k v_{i_j} \be_{i_j} \]where \(\be_{i_j}\) are unit vectors of \(\RR^n\).

Clearly, \(\bv \succeq \bzero\) and \(\bA \bv = \bb\).

Therefore, \(\bv \in P\). Since the columns of \(\bA\) corresponding to the positive entries of \(\bv\), namely \(\ba_{i_1}, \dots, \ba_{i_k}\), are linearly independent, \(\bv\) is a basic feasible solution.

The basic feasible solutions of \(P\) are the extreme points of \(P\). Recall that a point is an extreme point if it cannot be expressed as a nontrivial convex combination of two distinct points of a set.

(Equivalence between basic feasible solutions and extreme points)

Let \(P = \{ \bx \in \RR^n \ST \bA \bx = \bb, \bx \succeq \bzero \}\) where \(\bA \in \RR^{m \times n}\) and \(\bb \in \RR^m\). Assume that the rows of \(\bA\) are linearly independent.

Then \(\bv\) is a basic feasible solution of \(P\) if and only if \(\bv\) is an extreme point of \(P\).

Proof. Let \(\bv\) be a basic feasible solution of \(P\).

Then \(\bb = \bA \bv\) and \(\bv\) has \(k\) positive entries with \(k \leq m\).

Without loss of generality, assume that the first \(k\) entries of \(\bv\) are positive. This can easily be achieved by shuffling the columns of \(\bA\) in the linear system \(\bA \bx = \bb\).

Therefore, \(v_1, \dots, v_k > 0\) and \(v_{k+1} = \dots = v_n = 0\).

Also, the first \(k\) columns \(\ba_1, \dots, \ba_k\) of the matrix \(\bA\) are linearly independent since \(\bv\) is a basic feasible solution.

For contradiction, assume that \(\bv\) is not an extreme point of \(P\); i.e., \(\bv \notin \extreme P\).

Then, there exist \(\by, \bz \in P\) with \(\by \neq \bz\) and \(t \in (0,1)\) such that \(\bv = t \by + (1-t)\bz\).

Since \(\by, \bz \in P\), hence \(\by \succeq \bzero\) and \(\bz \succeq \bzero\).

Since the last \(n-k\) entries of \(\bv\) are zero, \(t \in (0, 1)\), and the entries of \(\by\) and \(\bz\) are nonnegative, the identity \(v_i = t y_i + (1-t) z_i = 0\) for \(i > k\) forces the last \(n-k\) entries of \(\by\) and \(\bz\) to be zero as well.

Since \(\by, \bz \in P\), hence \(\bA \by = \bb\) and \(\bA \bz = \bb\).

Therefore,

\[ \bb = \sum_{i=1}^k y_i \ba_i = \sum_{i=1}^k z_i \ba_i. \]This implies that

\[ \sum_{i=1}^k (y_i -z_i) \ba_i = \bzero. \]But, \(\ba_1, \dots, \ba_k\) are linearly independent by hypothesis.

Thus, \(y_i = z_i\) for \(i=1,\dots,k\) must hold.

Then, \(\by = \bz\).

This contradicts \(\by \neq \bz\).

Thus, \(\bv\) must be an extreme point of \(P\).

For the converse, assume that \(\bv\) is an extreme point of \(P\).

Again, by contradiction, assume that \(\bv\) is not a basic feasible solution.

Thus, the columns of \(\bA\) corresponding to the positive entries of \(\bv\) are linearly dependent.

Assume that there are \(k\) positive entries in \(\bv\) and WLOG, assume that they correspond to first \(k\) columns of \(\bA\).

Then, since the corresponding columns are linearly dependent, hence there exists a nonzero vector \(\bt \in \RR^k\) such that

\[ \sum_{i=1}^k t_i \ba_i = \bzero. \]We can extend \(\bt\) to \(\RR^n\) by appending \(n-k\) zeros such that \(\bA \bt = \bzero\).

Since the first \(k\) entries of \(\bv\) are positive, we can choose a sufficiently small \(r > 0\) such that \(\by = \bv - r \bt \succeq \bzero\) and \(\bz = \bv + r \bt \succeq \bzero\).

Note that \(\bA \by = \bA \bz = \bb\).

Therefore, \(\by, \bz \in P\).

At the same time, it is easy to see that

\[ \bv = \frac{1}{2} \by + \frac{1}{2} \bz. \]Thus, \(\bv\) is a convex combination of two distinct points of \(P\).

This contradicts our hypothesis that \(\bv\) is an extreme point of \(P\).

Thus, \(\bv\) must be a basic feasible solution.
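The perturbation used in the converse direction is easy to reproduce numerically. In the sketch below (NumPy, with illustrative data not taken from the text), a feasible point whose support columns are linearly dependent is expressed as the midpoint of two distinct feasible points by moving along a null-space direction of \(\bA\):

```python
import numpy as np

# Illustrative system A x = b with x >= 0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

# A feasible point with three positive entries; three columns of a
# 2 x 3 matrix are necessarily linearly dependent, so v is not basic.
v = np.array([1.0, 1.0, 2.0])
assert np.allclose(A @ v, b) and np.all(v > 0)

# A null-space direction t of A (A t = 0), taken from the SVD.
t = np.linalg.svd(A)[2][-1]
assert np.allclose(A @ t, 0, atol=1e-10)

# Both perturbed points stay feasible for small r, and v is their
# midpoint, so v cannot be an extreme point of P.
r = 0.5
y, z = v - r * t, v + r * t
print(np.all(y >= 0) and np.all(z >= 0))  # True
print(np.allclose(v, (y + z) / 2))        # True
```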