4.1. Vector Spaces

4.1.1. Algebraic Structures

In mathematics, the term algebraic structure refers to an arbitrary set with one or more operations defined on it.

  • Simpler algebraic structures include groups, rings, and fields.

  • More complex algebraic structures like vector spaces are built on top of the simpler structures.

We will develop the notion of vector spaces as a progression of these algebraic structures.

4.1.1.1. Groups

A group is a set with a single binary operation. It is one of the simplest algebraic structures.

Definition 4.1 (Group)

Let \(G\) be a set. Let \(* : G \times G \to G\) be a binary operation defined on \(G\) mapping \((g_1, g_2) \mapsto *(g_1, g_2)\) and denoted as

\[ *(g_1, g_2) \triangleq g_1 * g_2. \]

If the binary operation \(*\) satisfies the following properties:

  1. [Closure] The set \(G\) is closed under the binary operation \(*\). i.e.

    \[ \forall g_1, g_2 \in G, g_1 * g_2 \in G. \]
  2. [Associativity] For every \(g_1, g_2, g_3 \in G\)

    \[ g_1 * (g_2 * g_3) = (g_1 * g_2) * g_3 \]
  3. [Identity element] There exists an element \(e \in G\) such that

    \[ g * e = e * g = g \quad \forall g \in G \]
  4. [Inverse element] For every \(g \in G\) there exists an element \(g^{-1} \in G\) such that

    \[ g * g^{-1} = g^{-1} * g = e \]

then the set \(G\) together with the operator \(*\) denoted as \((G, *)\) is known as a group.

The above properties are known as the group axioms. Note that commutativity is not a requirement of a group.

Remark 4.1

Frequently, the group operation is ordinary addition. In those cases, we write \(g_1 * g_2\) as \(g_1 + g_2\). Otherwise, we will write \(g_1 * g_2\) as \(g_1 g_2\).

Often, we may simply write a group \((G, *)\) as \(G\) when the underlying operation \(*\) is clear from the context.
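When the underlying set is finite, the group axioms can be checked exhaustively. The following Python sketch (our illustration, not part of the formal development) verifies closure, associativity, identity, and inverses for the set \(\{0, 1, 2, 3, 4\}\) under addition modulo 5:

```python
# A minimal sketch: exhaustively checking the group axioms for
# the set {0, 1, 2, 3, 4} under addition modulo 5.
from itertools import product

G = range(5)
op = lambda a, b: (a + b) % 5   # the binary operation *

# Closure: g1 * g2 stays in G.
assert all(op(a, b) in G for a, b in product(G, G))
# Associativity: g1 * (g2 * g3) == (g1 * g2) * g3.
assert all(op(a, op(b, c)) == op(op(a, b), c) for a, b, c in product(G, G, G))
# Identity element: e = 0.
assert all(op(a, 0) == op(0, a) == a for a in G)
# Inverse element: the inverse of a is (5 - a) % 5.
assert all(op(a, (5 - a) % 5) == 0 for a in G)
print("Group axioms hold")
```

This particular group is also commutative, which leads to the next definition.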

4.1.1.2. Commutative groups

A commutative group is a richer structure than a group. Its elements also satisfy the commutativity property.

Definition 4.2 (Commutative group)

Let \((G, *)\) be a group such that its binary operation \(*\) satisfies an additional property:

  1. [Commutativity] For every \(g_1, g_2 \in G\)

    \[ g_1 g_2 = g_2 g_1 \]

Then \((G,*)\) is known as a commutative group or an Abelian group.

4.1.1.3. Rings

A ring is a set with two binary operations defined over it with some properties as described below.

Definition 4.3 (Associative ring)

Let \(R\) be a set with two binary operations \(+ : R \times R \to R\) (addition) mapping \((r_1, r_2) \mapsto r_1 + r_2\) and \(\cdot : R \times R \to R\) (multiplication) mapping \((r_1, r_2) \mapsto r_1 \cdot r_2\) such that \((R, +, \cdot)\) satisfies the following properties:

  1. \((R, +)\) is a commutative group.

  2. \(R\) is closed under multiplication.

    \[ r_1 \cdot r_2 \in R \quad \forall r_1, r_2 \in R \]
  3. Multiplication is associative.

    \[ r_1 \cdot (r_2 \cdot r_3) = (r_1 \cdot r_2) \cdot r_3 \quad \forall r_1, r_2, r_3 \in R \]
  4. Multiplication distributes over addition.

    \[\begin{split} \begin{aligned} &r_1 \cdot (r_2 + r_3) = (r_1 \cdot r_2) + (r_1 \cdot r_3) \quad \forall r_1, r_2, r_3 \in R\\ &(r_1 + r_2) \cdot r_3 = (r_1 \cdot r_3) + (r_2 \cdot r_3) \quad \forall r_1, r_2, r_3 \in R \end{aligned} \end{split}\]

Then, \((R, +, \cdot)\) is known as an associative ring. We denote the identity element for \(+\) as \(0\) and call it additive identity.

Remark 4.2

In the sequel we will write \(r_1 \cdot r_2\) as \(r_1 r_2\). We may simply write a ring \((R, +, \cdot)\) as \(R\) when the underlying operations \(+,\cdot\) are clear from the context.

There is a hierarchy of ring like structures. In particular we mention:

  1. Associative ring with identity

  2. Field

Definition 4.4 (Associative ring with identity)

Let \((R, +, \cdot)\) be an associative ring with the property that there exists an element \(1 \in R\) (known as multiplicative identity) such that

\[ 1 \cdot r = r \cdot 1 = r \quad \forall r \in R. \]

Then \((R, +, \cdot)\) is known as an associative ring with identity.

4.1.1.4. Fields

A field is the richest algebraic structure on a single set with two operations.

Definition 4.5 (Field)

Let \(F\) be a set with two binary operations \(+: F \times F \to F\) (addition) mapping \((x_1, x_2) \mapsto x_1 + x_2\) and \(\cdot: F \times F \to F\) (multiplication) mapping \((x_1, x_2) \mapsto x_1 \cdot x_2\) such that \((F, +, \cdot)\) satisfies the following properties:

  1. \((F, +)\) is a commutative group (with additive identity as \(0 \in F\)).

  2. \((F \setminus \{0\}, \cdot)\) is a commutative group (with multiplicative identity as \(1 \in F\)).

  3. Multiplication distributes over addition:

    \[ \alpha \cdot (\beta + \gamma) = (\alpha \cdot \beta) + (\alpha \cdot \gamma) \quad \forall \alpha, \beta, \gamma \in F \]

Then \((F, +, \cdot)\) is known as a field.

The definition above implies a number of properties. For any \(a,b,c \in F\), we have:

  • Closure: \(a + b \in F\), \(a b \in F\).

  • Associativity of addition and multiplication: \(a + (b + c) = (a + b) + c\) and \(a (b c) = (a b) c\).

  • Commutativity of addition and multiplication: \(a + b = b + a\) and \(a b = b a \).

  • Additive identity: \(0 + a = a + 0 = a\).

  • Multiplicative identity: \(1 a = a 1 = a\).

  • Additive inverses: \(a + (-a) = (-a) + a = 0\).

  • Multiplicative inverses for every \(a \neq 0\): \( a a^{-1} = a^{-1} a = 1\).

  • Distributivity: \(a ( b + c) = (a b) + (a c)\).

Example 4.1 (Examples of fields)

  • The set of real numbers \(\RR\) is a field.

  • The set of complex numbers \(\CC\) is a field.

  • The Galois field GF(2) is the set \(\{ 0, 1 \}\) with modulo-2 addition and multiplication (see the sketch below).
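For a finite field like GF(2), the field axioms can likewise be verified exhaustively. A small Python sketch of ours:

```python
# A sketch verifying the field axioms for GF(2): the set {0, 1}
# with modulo-2 addition and multiplication.
from itertools import product

F = [0, 1]
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

# Addition: commutative, associative, identity 0, each element its own inverse.
assert all(add(a, b) == add(b, a) for a, b in product(F, F))
assert all(add(a, add(b, c)) == add(add(a, b), c) for a, b, c in product(F, F, F))
assert all(add(a, 0) == a and add(a, a) == 0 for a in F)
# Multiplication on F \ {0} = {1}: a commutative group with identity 1.
assert mul(1, 1) == 1
# Multiplication distributes over addition.
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a, b, c in product(F, F, F))
print("Field axioms hold for GF(2)")
```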

4.1.2. Vector Spaces

We are now ready to define a vector space. A vector space involves two sets. One set \(\VV\) contains the vectors. The other set \(\FF\) (a field) contains scalars which are used to scale the vectors.

Definition 4.6 (Vector space)

A set \(\VV\) is called a vector space over the field \(\FF\) (or an \(\FF\)-vector space) if there exist two mappings:

  • Vector addition: \(+ : \VV \times \VV \to \VV\) mapping any pair of vectors \((\bv_1, \bv_2)\) with \(\bv_1, \bv_2 \in \VV\) to a vector in \(\VV\) denoted by \(\bv_1 + \bv_2\)

  • Scalar multiplication: \(\cdot : \FF \times \VV \to \VV\) mapping any pair \((\alpha, \bv)\) with \(\alpha \in \FF\) and \(\bv \in \VV\) to a vector in \(\VV\) denoted by \(\alpha \cdot \bv\) or simply \(\alpha \bv\)

which satisfy the following requirements:

  1. \((\VV, +)\) is a commutative group.

  2. Scalar multiplication \(\cdot\) distributes over vector addition \(+\):

    \[ \alpha (\bv_1 + \bv_2) = \alpha \bv_1 + \alpha \bv_2 \quad \forall \alpha \in \FF; \forall \bv_1, \bv_2 \in \VV. \]
  3. Addition in \(\FF\) distributes over scalar multiplication \(\cdot\):

    \[ ( \alpha + \beta) \bv = (\alpha \bv) + (\beta \bv) \quad \forall \alpha, \beta \in \FF; \forall \bv \in \VV. \]
  4. Multiplication in \(\FF\) commutes over scalar multiplication:

    \[ (\alpha \beta) \cdot \bv = \alpha \cdot (\beta \cdot \bv) = \beta \cdot (\alpha \cdot \bv) = (\beta \alpha) \cdot \bv \quad \forall \alpha, \beta \in \FF; \forall \bv \in \VV. \]
  5. Scalar multiplication from multiplicative identity \(1 \in \FF\) satisfies the following:

    \[ 1 \bv = \bv \quad \forall \bv \in \VV. \]

Some remarks are in order:

  • \(\VV\) as defined above is also known as an \(\FF\)-vector space.

  • Elements of \(\VV\) are known as vectors.

  • Elements of \(\FF\) are known as scalars.

  • There are two \(0\)s involved: \(0 \in \FF\) and \(\bzero \in \VV\). It should be clear from context which \(0\) is being referred to.

  • \(\bzero \in \VV\) is known as the zero vector.

  • All vectors in \(\VV \setminus \{\bzero\}\) are non-zero vectors.

  • We will typically denote elements of \(\FF\) by \(\alpha, \beta, \dots\).

  • We will typically denote elements of \(\VV\) by \(\bv_1, \bv_2, \dots\).

We quickly look at some vector spaces which will appear again and again in our discussions.

4.1.2.1. N-Tuples

Example 4.2 (\(n\)-tuples as a vector space)

Let \(\FF\) be some field. The set of all \(n\)-tuples \((a_1, a_2, \dots, a_n)\) with \(a_1, a_2, \dots, a_n \in \FF\) is denoted as \(\FF^n\). This is a vector space over \(\FF\) with the operations of coordinate-wise addition and scalar multiplication.

Let \(\bu, \bv \in \FF^n\) with

\[ \bu = (u_1, \dots, u_n) \]

and

\[ \bv = (v_1, \dots, v_n). \]

Vector addition is defined as

\[ \bu + \bv \triangleq (u_1 + v_1, \dots, u_n + v_n). \]

The zero vector is defined as:

\[ \bzero = (0, \dots, 0). \]

Let \(c \in \FF\). Scalar multiplication is defined as

\[ c \bu \triangleq (c u_1, \dots, c u_n). \]

\(\bu, \bv\) are called equal if \(u_1 = v_1, \dots, u_n = v_n\).

In matrix notation, vectors in \(\FF^n\) can also be written as row vectors:

\[ \bu = \begin{bmatrix} u_1 & \dots & u_n \end{bmatrix} \]

or column vectors:

\[\begin{split} \bu = \begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix}. \end{split}\]
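For \(\FF = \RR\), these coordinate-wise operations are exactly what NumPy arrays implement. A brief sketch (the use of NumPy is our illustrative choice):

```python
# A sketch of the coordinate-wise operations on R^3 using NumPy.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
c = 2.5

print(u + v)        # vector addition: [5. 7. 9.]
print(c * u)        # scalar multiplication: [2.5 5.  7.5]
print(np.zeros(3))  # the zero vector (0, 0, 0)
assert np.array_equal(u + v, v + u)  # addition is commutative
```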

We next verify that \(\FF^n\) equipped with the vector addition and scalar multiplication defined above satisfies all the properties of a vector space. Let us assume \(\bx, \by, \bz \in \FF^n\) and \(\alpha, \beta, \gamma \in \FF\).

  1. \(\FF^n\) is closed under vector addition.

    \[ \bx + \by = (x_1 + y_1, \dots, x_n + y_n). \]

    Since \(x_i + y_i \in \FF\), we have \(\bx + \by \in \FF^n\).

  2. Vector addition is associative.

    \[\begin{split} \bx + (\by + \bz) &= (x_1, \dots, x_n) + ((y_1, \dots, y_n) + (z_1, \dots, z_n))\\ &= (x_1, \dots, x_n) + (y_1 + z_1, \dots, y_n + z_n)\\ &= (x_1 + (y_1 + z_1), \dots, x_n + (y_n + z_n))\\ &= ((x_1 + y_1) + z_1, \dots, (x_n + y_n) + z_n)\\ &= (x_1+y_1, \dots, x_n + y_n) + (z_1, \dots, z_n)\\ &= ((x_1, \dots, x_n) + (y_1, \dots, y_n)) + (z_1, \dots, z_n)\\ &= (\bx + \by) + \bz. \end{split}\]
  3. Vector addition is commutative.

    \[ \bx + \by = (x_1 + y_1, \dots, x_n + y_n) = (y_1 + x_1, \dots, y_n + x_n) = \by + \bx . \]
  4. Additive identity:

    \[ \bx + \bzero = (x_1, \dots, x_n) + (0, \dots, 0) = (x_1, \dots, x_n) = \bx. \]

    Since commutativity is already established, we have:

    \[ \bzero + \bx = \bx + \bzero = \bx. \]

    Thus, \(\bzero\) is the additive identity element.

  5. Additive inverse: Let

    \[ \bu = (-x_1, \dots, -x_n). \]

    Then,

    \[ \bx + \bu = (x_1, \dots, x_n) + (-x_1, \dots, -x_n) = (x_1 + (-x_1), \dots, x_n + (-x_n)) = (0, \dots, 0) = \bzero. \]

    By commutativity \(\bu + \bx = \bx + \bu = \bzero\). Thus, every vector has an additive inverse. We can write \(\bu = -\bx\).

  6. Scalar multiplication distributes over vector addition.

    \[\begin{split} \alpha (\bx + \by) &= \alpha (x_1 + y_1, \dots, x_n + y_n) \\ &= (\alpha (x_1 + y_1), \dots, \alpha (x_n + y_n))\\ &= (\alpha x_1 + \alpha y_1, \dots, \alpha x_n + \alpha y_n)\\ &= (\alpha x_1, \dots, \alpha x_n) + (\alpha y_1, \dots, \alpha y_n)\\ &= \alpha (x_1, \dots, x_n) + \alpha (y_1, \dots, y_n)\\ &= \alpha \bx + \alpha \by. \end{split}\]
  7. Addition in \(\FF\) distributes over scalar multiplication.

    \[\begin{split} (\alpha + \beta) \bx &= (\alpha + \beta) (x_1, \dots, x_n)\\ &= ((\alpha + \beta) x_1, \dots, (\alpha + \beta)x_n)\\ &= (\alpha x_1 + \beta x_1, \dots, \alpha x_n + \beta x_n)\\ &= (\alpha x_1, \dots, \alpha x_n) + (\beta x_1, \dots, \beta x_n)\\ &= \alpha \bx + \beta \bx. \end{split}\]
  8. Multiplication commutes over scalar multiplication:

    \[\begin{split} (\alpha \beta) \bx &= (\alpha \beta) (x_1, \dots, x_n)\\ &= ((\alpha \beta)x_1, \dots, (\alpha \beta)x_n)\\ &= (\alpha (\beta x_1), \dots, \alpha (\beta x_n))\\ &= \alpha (\beta x_1, \dots, \beta x_n)\\ &= \alpha (\beta (x_1, \dots, x_n))\\ &= \alpha (\beta \bx). \end{split}\]

    Similarly,

    \[ (\alpha \beta) \bx = (\beta \alpha) \bx = \beta (\alpha \bx). \]
  9. Scalar multiplication from multiplicative identity of \(\FF\).

    \[ 1 \bx = 1 (x_1, \dots, x_n) = (x_1, \dots, x_n) = \bx. \]

Thus, \(\FF^n\) satisfies all the properties of a vector space. It is indeed a vector space.

Example 4.3 (A field is a vector space)

A field of scalars \(\FF\) is a vector space over \(\FF\) in its own right. One way to see this is by treating the scalar as a tuple with one value (i.e., 1-tuple). So, we can write it as \(\FF^1\).

The real line \(\RR\) is a vector space in its own right as \(\RR^1\). Similarly, the complex plane \(\CC\) is a vector space.

More formally, let \(\VV = \FF\). We will call the element of \(\VV\) as vectors and elements of \(\FF\) as scalars.

  1. For every \(x \in \FF\), let \(\bx\) be the corresponding vector in \(\VV\) with \(\bx = x\).

  2. Define vector addition on \(\VV\) as the corresponding addition in the field \(\FF\):

    \[ \bx + \by = x + y \Forall \bx, \by \in \VV. \]
  3. Define scalar multiplication on \(\VV\) as the corresponding multiplication in the field \(\FF\):

    \[ \alpha \bx = \alpha x \Forall \alpha \in \FF \text{ and } \bx \in \VV. \]
  4. Define zero vector as \(0 \in \FF\): \(\bzero = 0\).

We can now verify all the properties of the vector space:

  1. \(\VV\) is closed under vector addition. Let \(\bx, \by \in \VV\). Then, \(\bx + \by = x + y\). Since \(x + y \in \FF\), we have \(\bx + \by \in \VV\).

  2. Vector addition is associative. For every \(\bx, \by, \bz \in \VV\):

    \[ \bx + (\by + \bz) = x + (y + z) = (x + y) + z = (\bx + \by) + \bz. \]
  3. Vector addition is commutative.

    \[ \bx + \by = x + y = y + x = \by + \bx. \]
  4. Additive identity: for every \(\bx \in \VV\),

    \[ \bx + \bzero = x + 0 = x = \bx \quad \text{and} \quad \bzero + \bx = 0 + x = x = \bx. \]

    Thus, \(\bzero\) is the additive identity element.

  5. Additive inverse: let \(\bx \in \VV\). Define \(\by = -x\). Since \(-x \in \FF\), we have \(\by \in \VV\). Now,

    \[ \bx + \by = x + (-x) = 0 = \bzero. \]

    Thus, every vector has an additive inverse.

  6. Scalar multiplication distributes over vector addition. Let \(\alpha \in \FF\) and \(\bx, \by \in \VV\).

    \[ \alpha (\bx + \by) = \alpha (x + y) = \alpha x + \alpha y = \alpha \bx + \alpha \by. \]
  7. Addition in \(\FF\) distributes over scalar multiplication. Let \(\alpha, \beta \in \FF\) and \(\bx \in \VV\).

    \[ (\alpha + \beta) \bx = (\alpha + \beta) x = \alpha x + \beta x = \alpha \bx + \beta \bx. \]
  8. Multiplication commutes over scalar multiplication:

    \[ (\alpha \beta) \bx = (\alpha \beta) x = \alpha (\beta x) = \alpha (\beta \bx). \]
  9. Scalar multiplication from multiplicative identity of \(\FF\).

    \[ 1 \bx = 1 x = x = \bx. \]

Thus, \(\VV=\FF\) satisfies all requirements of being a vector space. It is indeed a vector space in its own right.

4.1.2.2. Matrices

Example 4.4 (Matrices)

Let \(\FF\) be some field. A matrix is an array of the form

\[\begin{split} \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1N} \\ a_{21} & a_{22} & \dots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M 1} & a_{M 2} & \dots & a_{MN} \\ \end{bmatrix} \end{split}\]

with \(M\) rows and \(N\) columns where \(a_{ij} \in \FF\).

The set of these matrices is denoted as \(\FF^{M \times N}\), which is a vector space with the operations of matrix addition and scalar multiplication.

Let \(A, B \in \FF^{M \times N}\). Matrix addition is defined by

\[ (A + B)_{ij} \triangleq A_{ij} + B_{ij}. \]

Let \(c \in \FF\). Scalar multiplication is defined by

\[ (cA)_{ij} \triangleq c A_{ij}. \]
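For \(\FF = \RR\), a quick illustrative sketch of these entry-wise operations with NumPy (our tooling choice):

```python
# A sketch of matrix addition and scalar multiplication in R^{2x3};
# both operations act entry-wise, as in the definitions above.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
B = np.ones((2, 3))
c = 3.0

print(A + B)   # (A + B)_{ij} = A_{ij} + B_{ij}
print(c * A)   # (cA)_{ij} = c A_{ij}
```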

4.1.2.3. Polynomials

Example 4.5 (Polynomials)

Let \(\FF[x]\) denote the set of all polynomials with coefficients drawn from a field \(\FF\); i.e., if \(f(x) \in \FF[x]\), then it can be written as

\[ f(x) = a_n x^n + a_{n-1}x^{n -1} + \dots + a_1 x + a_0 \]

where \(a_i \in \FF\).

The set \(\FF[x]\) is a vector space with the usual operations of addition and scalar multiplication. For \(f(x)\) as above and \(g(x) = b_n x^n + \dots + b_1 x + b_0\) (padding the polynomial of lower degree with zero coefficients):

\[ f(x) + g(x) = (a_n + b_n)x^n + \dots + (a_1 + b_1 ) x + (a_0 + b_0). \]
\[ c f(x) = c a_n x^n + \dots + c a_1 x + c a_0. \]
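For \(\FF = \RR\), these operations can be illustrated with numpy.polynomial.Polynomial, which stores coefficients from \(a_0\) upward (a sketch with our choice of library):

```python
# A sketch of the vector space operations on R[x];
# coefficients are listed from a_0 upward.
from numpy.polynomial import Polynomial

f = Polynomial([1.0, 2.0, 3.0])   # f(x) = 1 + 2x + 3x^2
g = Polynomial([4.0, 5.0])        # g(x) = 4 + 5x
c = 2.0

print(f + g)   # 5 + 7x + 3x^2: coefficient-wise addition
print(c * f)   # 2 + 4x + 6x^2: coefficient-wise scaling
```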

4.1.2.4. Vector Space Identities

Some useful identities are presented here.

Theorem 4.1 (Uniqueness of additive identity)

The \(\bzero\) vector in a vector space \(\VV\) is unique.

Proof. Assume that there is another vector \(\bo\) satisfying

\[ \bv + \bo = \bo + \bv = \bv \Forall \bv \in \VV. \]

Then, in particular, it satisfies: \(\bzero + \bo = \bzero\). At the same time, we have \(\bo = \bo + \bzero\). Thus,

\[ \bzero = \bzero + \bo = \bo + \bzero = \bo. \]

Thus, \(\bzero\) is unique.

Theorem 4.2 (Cancellation law)

Let \(\VV\) be an \(\FF\) vector space. Let \(\bx, \by, \bz\) be some vectors in \(\VV\) such that \(\bx + \bz = \by + \bz\). Then \(\bx = \by\).

Proof. Since \(\bz \in \VV\), there exists an additive inverse \(\bv \in \VV\) such that \(\bz + \bv = \bzero\).

Now

\[ \bx = \bx + \bzero = \bx + (\bz + \bv) = (\bx + \bz) + \bv = (\by + \bz) + \bv = \by + (\bz + \bv) = \by + \bzero = \by. \]

Corollary 4.1 (Uniqueness of additive inverse)

The additive inverse of a vector \(\bx\) in \(\VV\) is unique.

Proof. Let \(\by\) and \(\bz\) be additive inverses of \(\bx\). Then, we have:

\[ \bx + \by = \bzero = \bx + \bz. \]

By cancellation law, \(\by = \bz\).

Thus, the additive inverse of a vector is unique.

Theorem 4.3

In a vector space \(\VV\) the following statements are true

  1. \(0 \bx = \bzero \Forall \bx \in \VV\).

  2. \(a \bzero = \bzero \Forall a \in \FF\).

  3. \((-a)\bx = - (a \bx) = a(- \bx) \Forall a \in \FF \text{ and } \bx \in \VV\).

Proof. (1) \(0 \bx = \bzero \Forall \bx \in \VV\).

By distributive law:

\[ 0 \bx = (0 + 0) \bx = 0 \bx + 0 \bx. \]

By existence of zero vector:

\[ 0 \bx = \bzero + 0 \bx. \]

Thus,

\[ 0 \bx = 0 \bx + 0 \bx = \bzero + 0 \bx. \]

Now, by cancellation law:

\[ 0 \bx = \bzero. \]

(2) \(a \bzero = \bzero \Forall a \in \FF\).

Due to additive identity and distribution law:

\[ a \bzero = a (\bzero + \bzero) = a \bzero + a \bzero. \]

Due to additive identity:

\[ a \bzero = \bzero + a \bzero. \]

Combining

\[ a \bzero + a \bzero = \bzero + a \bzero. \]

Now applying the cancellation law, we get:

\[ a \bzero = \bzero. \]

(3) \((-a)\bx = - (a \bx) = a(- \bx) \Forall a \in \FF \text{ and } \bx \in \VV\).

We start with

\[ \bzero = 0 \bx = (a + (-a)) \bx = a \bx + (-a)\bx. \]

Thus, \((-a)\bx\) is the additive inverse of \(a \bx\). Thus, \( (-a)\bx = -(a\bx)\).

Similarly,

\[ \bzero = a \bzero = a (\bx + (-\bx)) = a \bx + a (-\bx). \]

Thus, \(a (-\bx)\) is also additive inverse of \(a \bx\). Thus, \(a (-\bx) = - (a \bx)\).

4.1.3. Linear Independence

Definition 4.7 (Linear combination)

A linear combination of two vectors \(\bv_1, \bv_2 \in \VV\) is defined as

\[ \alpha \bv_1 + \beta \bv_2 \]

where \(\alpha, \beta \in \FF\).

A linear combination of \(p\) vectors \(\bv_1,\dots, \bv_p \in \VV\) is defined as

\[ \sum_{i=1}^{p} \alpha_i \bv_i \]

where \(\alpha_i \in \FF\).

Note that a linear combination always consists of a finite number of vectors.

Definition 4.8 (Linear combination vector)

Let \(\VV\) be a vector space and let \(S\) be a nonempty subset of \(\VV\). A vector \(\bv \in \VV\) is called a linear combination of vectors of \(S\) if there exist a finite number of vectors \(\bs_1, \bs_2, \dots, \bs_n \in S\) and scalars \(a_1, a_2, \dots, a_n\) in \(\FF\) such that

\[ \bv = a_1 \bs_1 + a_2 \bs_2 + \dots + a_n \bs_n. \]

We also say that \(\bv\) is a linear combination of \(\bs_1, \bs_2, \dots, \bs_n\) and that \(a_1, a_2, \dots, a_n\) are the coefficients of the linear combination.

Definition 4.9 (Trivial linear combination)

A linear combination \(a_1 \bs_1 + a_2 \bs_2 + \dots + a_n \bs_n\) is called trivial if \(a_1 = a_2 = \dots = a_n = 0\).

\(\bzero\) is a trivial linear combination of any subset of \(\VV\) as:

\[ \bzero = 0 \bs_1 + 0 \bs_2 + \dots + 0 \bs_n. \]

A linear combination may refer to the expression itself or to its value; e.g., two different linear combinations may have the same value.

Definition 4.10 (Linear dependence)

A finite set of non-zero vectors \(\{\bv_1, \cdots, \bv_p\} \subset \VV\) is called linearly dependent if there exist \(\alpha_1,\dots,\alpha_p \in \FF\) not all \(0\) such that

\[ \sum_{i=1}^{p} \alpha_i \bv_i = \bzero. \]

Since the \(\alpha_i\) are not all zero, this is a non-trivial combination.

Definition 4.11 (Linearly dependent set)

A set \(S \subseteq \VV\) is called linearly dependent if there exist a finite number of distinct vectors \(\bu_1, \bu_2, \dots, \bu_n \in S\) and scalars \(a_1, a_2, \dots, a_n \in \FF\) not all zero, such that

\[ a_1 \bu_1 + a_2 \bu_2 + \dots + a_n \bu_n = \bzero. \]

In other words, there exists a non-trivial linear combination which equals \(\bzero\).

Definition 4.12 (Linearly independent set)

A set \(S \subseteq \VV\) is called linearly independent if it is not linearly dependent.

Definition 4.13 (Linearly independent vectors)

More specifically, a finite set of non-zero vectors \(\{\bv_1, \dots, \bv_n\} \subset \VV\) is called linearly independent if

\[ \sum_{i=1}^{n} \alpha_i \bv_i = \bzero \implies \alpha_i = 0 \Forall 1 \leq i \leq n. \]

In other words, the only linear combination giving the \(\bzero\) vector is the trivial linear combination.
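For a finite set of vectors in \(\RR^n\), linear independence can be tested numerically: the set is linearly independent if and only if the matrix with the vectors as columns has rank equal to the number of vectors. A sketch (the helper name is ours):

```python
# A sketch: vectors in R^n are linearly independent iff the matrix
# with those vectors as columns has rank equal to their count.
import numpy as np

def is_linearly_independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
print(is_linearly_independent([v1, v2]))           # True
print(is_linearly_independent([v1, v2, v1 + v2]))  # False: 1*v1 + 1*v2 - 1*(v1 + v2) = 0
```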

Example 4.6 (Examples of linearly dependent and independent sets)

  • The empty set is linearly independent.

  • A set containing a single non-zero vector \(\{\bv\}\) is always linearly independent. Prove!

  • If two vectors are linearly dependent, we say that they are collinear.

  • Conversely, if two vectors are linearly independent, we say that they are not collinear.

  • If a set \(\{\bv_1, \dots, \bv_p\}\) is linearly independent, then any subset of it will be linearly independent. Prove!

  • Adding another vector \(\bv\) to the set may make it linearly dependent. When?

  • It is possible for an infinite set to be linearly independent. Consider the set of polynomials \(\{1, x, x^2, x^3, \dots\}\). This set is infinite, yet linearly independent.

Theorem 4.4

Let \(\VV\) be a vector space. Let \(S_1 \subseteq S_2 \subseteq \VV\). If \(S_1\) is linearly dependent, then \(S_2\) is linearly dependent.

Corollary 4.2

Let \(\VV\) be a vector space. Let \(S_1 \subseteq S_2 \subseteq \VV\). If \(S_2\) is linearly independent, then \(S_1\) is linearly independent.

4.1.4. Span

Vectors can be combined to form other vectors. It makes sense to consider the set of all vectors which can be created by combining a given set of vectors.

Definition 4.14 (Span)

Let \(S \subset \VV\) be a subset of vectors. The span of \(S\) denoted as \(\langle S \rangle\) or \(\span S\) is the set of all possible linear combinations of vectors belonging to \(S\).

\[ \span S = \langle S \rangle \triangleq \{ \bv \in \VV \ST \bv = \sum_{i=1}^{p} \alpha_i \bv_i \quad \text{for some} \quad \bv_i \in S;\; \alpha_i \in \FF; \; p \in \mathbb{N}\} \]

For convenience we define \(\span \EmptySet = \{ \bzero\}\).

Span of a finite set of vectors \(\{\bv_1, \cdots, \bv_p\}\) is denoted by \(\langle \bv_1, \cdots, \bv_p \rangle\).

\[ \langle \bv_1, \cdots, \bv_p \rangle = \left \{\sum_{i=1}^{p} \alpha_i \bv_i \ST \alpha_i \in \FF \right \}. \]

We say that a set of vectors \(S \subseteq \VV\) spans \(\VV\) if \(\langle S \rangle = \VV\).
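Whether a given vector lies in the span of a finite set of vectors in \(\RR^n\) can be checked numerically by examining the least-squares residual of the corresponding linear system. A sketch (function name and tolerance are ours):

```python
# A sketch: b is in span{v_1, ..., v_p} iff the least-squares
# solution of A x = b (A has the v_i as columns) has zero residual.
import numpy as np

def in_span(vectors, b, tol=1e-10):
    A = np.column_stack(vectors)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ x - b) < tol

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
print(in_span([v1, v2], np.array([2.0, -3.0, 0.0])))  # True
print(in_span([v1, v2], np.array([0.0, 0.0, 1.0])))   # False
```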

Proposition 4.1

Let \(S \subseteq \VV\), then \(\Span (S) \subseteq \VV\).

Definition 4.15 (Spanning a vector space)

Let \(S \subset \VV\). We say that \(S\) spans (or generates) \(\VV\) if

\[ \langle S \rangle = \VV. \]

In this case we also say that vectors of \(S\) span (or generate) \(\VV\).

Theorem 4.5

Let \(S\) be a linearly independent subset of a vector space \(\VV\) and let \(\bv \in \VV \setminus S\). Then \(S \cup \{ \bv \}\) is linearly dependent if and only if \(\bv \in \Span(S)\).

4.1.5. Basis

Definition 4.16 (Basis)

A set of linearly independent vectors \(\BBB\) is called a basis of \(\VV\) if \(\langle \BBB \rangle = \VV\); i.e., \(\BBB\) spans \(\VV\).

Example 4.7 (Basis examples)

  • Since \(\Span(\EmptySet) = \{ \bzero \}\) and \(\EmptySet\) is linearly independent, \(\EmptySet\) is a basis for the zero vector space \(\{ \bzero \}\).

  • The basis \(\{ \be_1, \dots, \be_N\}\) with \(\be_1 = (1, 0, \dots, 0)\), \(\be_2 = (0, 1, \dots, 0)\), \(\dots\), \(\be_N = (0, 0, \dots, 1)\), is called the standard basis for \(\FF^N\).

  • The set \(\{1, x, x^2, x^3, \dots\}\) is the standard basis for \(\FF[x]\); indeed, an infinite basis. Note that though the basis itself is infinite, every polynomial \(p \in \FF[x]\) is a linear combination of a finite number of elements from the basis.

We review some properties of bases.

Theorem 4.6 (Unique representation)

Let \(\VV\) be a vector space and \(\BBB = \{ \bv_1, \bv_2, \dots, \bv_n\}\) be a subset of \(\VV\). Then \(\BBB\) is a basis for \(\VV\) if and only if each \(\bv \in \VV\) can be uniquely expressed as a linear combination of vectors of \(\BBB\). In other words:

\[ \bv = a_1 \bv_1 + a_2 \bv_2 + \dots + a_n \bv_n \]

for unique scalars \(a_1, \dots, a_n\).

This theorem states that a basis \(\BBB\) provides a unique representation to each vector \(\bv \in \VV\) where the representation is defined as the \(n\)-tuple \((a_1, a_2, \dots, a_n)\).

Proof. Assume that each \(\bv \in \VV\) can be expressed uniquely as a linear combination of vectors of \(\BBB\).

  1. In particular, \(\bzero\) has a unique representation as a linear combination of vectors of \(\BBB\).

  2. But \(\bzero = 0 \bv_1 + \dots + 0 \bv_n\).

  3. Thus, the trivial combination is the only one equal to \(\bzero\); hence \(\BBB\) is linearly independent.

  4. Also, \(\BBB\) spans \(\VV\) since every vector can be expressed as a linear combination.

  5. Thus, \(\BBB\) is a basis.

Assume that \(\BBB\) is a basis for \(\VV\).

  1. Then, \(\BBB\) is linearly independent and \(\BBB\) spans \(\VV\).

  2. Let \(\bv \in \VV\).

  3. Let \(\bv = \sum_{i=1}^n a_i \bv_i\) and \(\bv = \sum_{i=1}^n b_i \bv_i\) be two representations of \(\bv\) in \(\BBB\).

  4. Then,

    \[\begin{split} &\sum_{i=1}^n a_i \bv_i = \sum_{i=1}^n b_i \bv_i\\ &\iff \sum_{i=1}^n (a_i - b_i) \bv_i = \bzero. \end{split}\]
  5. But \(\BBB\) is linearly independent.

  6. Hence, \(a_i - b_i = 0\) for \(i = 1, \dots, n\).

  7. Thus, \(a_i = b_i\) for \(i=1,\dots,n\).

  8. Hence, every \(\bv \in \VV\) has a unique representation as a linear combination of \(\BBB\).

If the basis is infinite, then the above theorem needs to be modified as follows:

Theorem 4.7 (Unique representation for infinite basis)

Let \(\VV\) be a vector space and \(\BBB\) be a subset of \(\VV\). Then \(\BBB\) is a basis for \(\VV\) if and only if each \(\bv \in \VV\) can be uniquely expressed as a linear combination of vectors of \(\BBB\):

\[ \bv = a_1 \bv_1 + a_2 \bv_2 + \dots + a_n \bv_n \]

for unique scalars \(a_1, \dots, a_n\) and unique vectors \(\bv_1, \bv_2, \dots, \bv_n \in \BBB\).

Theorem 4.8

If a vector space \(\VV\) is spanned by a finite set \(S\), then some subset of \(S\) is a basis for \(\VV\). Hence \(\VV\) has a finite basis.

Theorem 4.9 (Replacement theorem)

Let \(\VV\) be a vector space that is spanned by a set \(G\) containing exactly \(n\) vectors. Let \(L\) be a linearly independent subset of \(\VV\) containing exactly \(m\) vectors.

Then \(m \leq n\) and there exists a subset \(H\) of \(G\) containing exactly \(n-m\) vectors such that \(L \cup H\) spans \(\VV\).

Corollary 4.3

Let \(\VV\) be a vector space having a finite basis. Then, every basis for \(\VV\) contains the same number of vectors.

4.1.6. Dimension

Definition 4.17 (Dimension of vector space)

A vector space \(\VV\) is called finite-dimensional if it has a basis consisting of a finite number of vectors. By Corollary 4.3, every basis \(\BBB\) of \(\VV\) then contains the same number of vectors. This number is called the dimension or dimensionality of the vector space. It is denoted as \(\dim \VV\). We say:

\[ \dim \VV \triangleq |\BBB| \]

If \(\VV\) is not finite-dimensional, then we say that \(\VV\) is infinite-dimensional.

Example 4.8 (Vector space dimensions)

  • Dimension of \(\FF^N\) is \(N\).

  • Dimension of \(\FF^{M \times N}\) is \(MN\).

  • The vector space of polynomials \(\FF[x]\) is infinite dimensional.

Proposition 4.2

Let \(\VV\) be a vector space with dimension \(n\).

  1. Any finite spanning set for \(\VV\) contains at least \(n\) vectors, and a spanning set that contains exactly \(n\) vectors is a basis for \(\VV\).

  2. Any linearly independent subset of \(\VV\) that contains exactly \(n\) vectors is a basis for \(\VV\).

  3. Every linearly independent subset of \(\VV\) can be extended to a basis for \(\VV\).

4.1.7. Ordered Basis

Definition 4.18 (Ordered basis)

For a finite dimensional vector space \(\VV\), an ordered basis is a basis for \(\VV\) with a specific order. In other words, it is a finite sequence (or tuple) of linearly independent vectors in \(\VV\) that spans \(\VV\).

Typically, we will write an ordered basis as \(\BBB = \{ \bv_1, \bv_2, \dots, \bv_n\}\) and assume that the basis vectors are ordered in the order they appear.

With the help of an ordered basis, we can define a coordinate vector.

Definition 4.19 (Coordinate vector)

Let \(\BBB = \{ \bv_1, \dots, \bv_n\}\) be an ordered basis for \(\VV\). For any \(\bx \in \VV\), let \(\alpha_1, \dots, \alpha_n\) be unique scalars such that

\[ \bx = \sum_{i=1}^n \alpha_i \bv_i. \]

The coordinate vector of \(\bx\) relative to \(\BBB\) is defined as

\[\begin{split} [\bx]_{\BBB} = \begin{bmatrix} \alpha_1\\ \vdots\\ \alpha_n \end{bmatrix}. \end{split}\]
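In \(\RR^n\), the coordinate vector can be computed by solving a linear system whose coefficient matrix has the basis vectors as columns. A sketch (the basis here is our worked example):

```python
# A sketch: compute [x]_B by solving V alpha = x, where the
# columns of V are the ordered basis vectors.
import numpy as np

B = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]  # an ordered basis of R^2
x = np.array([3.0, 1.0])

V = np.column_stack(B)
alpha = np.linalg.solve(V, x)
print(alpha)                     # [2. 1.] since x = 2(1,1) + 1(1,-1)
assert np.allclose(V @ alpha, x)
```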

4.1.8. Subspaces

Definition 4.20 (Subspace)

Let \(\WW\) be a subset of \(\VV\). Then \(\WW\) is called a subspace of \(\VV\) if \(\WW\) is a vector space in its own right under the same vector addition \(+\) and scalar multiplication \(\cdot\) operations; i.e.,

\[\begin{split} \begin{aligned} + : &\WW \times \WW \to \WW\\ &(\bw_1, \bw_2) \to \bw_1 + \bw_2 \quad \bw_1, \bw_2 \in \WW \end{aligned} \end{split}\]
\[\begin{split} \begin{aligned} \cdot : &\FF \times \WW \to \WW\\ &(\alpha, \bw) \to \alpha \cdot \bw \triangleq \alpha \bw \quad \alpha \in \FF; \bw \in \WW \end{aligned} \end{split}\]

are defined by restricting \(+ : \VV \times \VV \to \VV\) and \(\cdot : \FF \times \VV \to \VV\) to \(\WW\), and \(\WW\) is closed under these operations.

Example 4.9 (Trivial subspaces)

  • \(\VV\) is a subspace of \(\VV\).

  • \(\{\bzero\}\) is a subspace of any \(\VV\).

Theorem 4.10 (Subspace characterization)

A subset \(\WW \subseteq \VV\) is a subspace of \(\VV\) if and only if

  • \(\bzero \in\WW \).

  • \(\bx + \by \in\WW \) whenever \(\bx, \by \in\WW\).

  • \(\alpha \bx \in\WW \) whenever \(\alpha \in \FF\) and \(\bx \in\WW \).

In other words, a subset of \(\VV\) is a subspace if and only if it contains the zero vector and is closed under vector addition and scalar multiplication.
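For a candidate subspace of \(\RR^n\) given by a membership predicate, these three conditions can at least be spot-checked numerically. The sketch below (entirely our illustration) does this for the line \(\WW = \{ (x, y) \ST y = 2 x \}\) in \(\RR^2\); a finite check is evidence, not a proof:

```python
# A sketch: spot-check the three subspace conditions for
# W = { (x, y) in R^2 : y = 2x } on randomly drawn members.
import numpy as np

in_W = lambda v: np.isclose(v[1], 2 * v[0])

rng = np.random.default_rng(0)
members = [np.array([t, 2 * t]) for t in rng.standard_normal(10)]

assert in_W(np.zeros(2))                                   # contains the zero vector
assert all(in_W(x + y) for x in members for y in members)  # closed under addition
assert all(in_W(3.0 * x) for x in members)                 # closed under scaling
print("W passes the subspace spot-checks")
```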

Example 4.10 (Symmetric matrices)

A matrix \(M \in \FF^{N \times N}\) is symmetric if

\[ M^T = M. \]

The set of symmetric matrices forms a subspace of the set of all \(N \times N\) matrices.

Example 4.11 (Diagonal matrices)

A matrix \(M\) is called diagonal if \(M_{ij} = 0\) whenever \(i \neq j\).

The set of diagonal matrices is a subspace of \(\FF^{M \times N}\).

Definition 4.21 (Proper subspace)

A subspace \(\WW\) of \(\VV\) is called a proper subspace if \(\VV \setminus \WW \neq \EmptySet\).

In other words, there are vectors in \(\VV\) which are not included in \(\WW\).

Theorem 4.11 (Intersection of subspaces)

Any intersection of subspaces of a vector space \(\VV\) is a subspace of \(\VV\).

Proof. Let \(\{W_i \}_{i \in I}\) be a family of subspaces of a vector space \(\VV\) indexed by \(I\).

Let the intersection of the subspaces be:

\[ W = \bigcap_{i \in I} W_i. \]

[Zero vector]

  1. Since \(\bzero \in W_i\) for every \(i \in I\), we have \(\bzero \in W\).

[Vector addition]

  1. Let \(\bx, \by \in W\). Then, \(\bx, \by \in W_i\) for every \(i \in I\).

  2. Then, \(\bx + \by \in W_i\) for every \(i \in I\) since each \(W_i\) is closed under vector addition.

  3. Then, \(\bx + \by \in W\).

  4. Thus, \(W\) is closed under vector addition.

[Scalar multiplication]

  1. Let \(\bx \in W\) and let \(t \in \FF\). Then, \(\bx \in W_i\) for every \(i \in I\).

  2. Then, \(t \bx \in W_i\) for every \(i \in I\) since each \(W_i\) is closed under scalar multiplication.

  3. Thus, \(t \bx \in W\).

  4. Thus, \(W\) is closed under scalar multiplication.

Thus, \(W\) is a subspace of \(\VV\).

We note that a union of subspaces is not necessarily a subspace, since it is in general not closed under addition. For example, the union of the \(x\)-axis and the \(y\)-axis in \(\RR^2\) contains \((1,0)\) and \((0,1)\) but not their sum \((1,1)\).

Theorem 4.12 (Span is a subspace)

Let \(\VV\) be a vector space. The span of a set \(S \subset \VV\) given by \(\span S\) is a subspace of \(\VV\). Moreover, any subspace of \(\VV\) that contains \(S\) must also contain the span of \(S\).

Proof. If \(S = \EmptySet\), then by convention, \(\span S = \{ \bzero \}\). This happens to be the trivial subspace.

We now consider \(S\) to be nonempty (possibly infinite).

[Zero vector]

  1. Since \(S\) is nonempty, there exists some \(\bv \in S\).

  2. Then, \(0 \bv = \bzero \in \span S\) as it is a linear combination of elements of \(S\).

[Scalar multiplication]

  1. Let \(\bv \in \span S\).

  2. Then, \(\bv = \sum_{i=1}^p t_i \bv_i\) where \(\bv_i \in S\), \(t_i \in \FF\), \(p \in \Nat\).

  3. But then for any \(t \in \FF\):

    \[ t \bv = t \left ( \sum_{i=1}^p t_i \bv_i \right ) = \sum_{i=1}^p (t t_i) \bv_i. \]
  4. Thus, \(t\bv\) is also a linear combination of elements of \(S\).

  5. Thus, \(\span S\) is closed under scalar multiplication.

[Vector addition]

  1. Let \(\bx, \by \in \span S\).

  2. Let \(\bx = \sum_{i=1}^p r_i \bx_i\) and \(\by = \sum_{i=1}^q t_i \by_i\) be linear combinations of elements of \(S\).

  3. Then,

    \[ \bx + \by = \sum_{i=1}^p r_i \bx_i + \sum_{i=1}^q t_i \by_i \]

    is also a linear combination of (up to \(p+q\)) elements of \(S\).

  4. Thus, \(\bx + \by \in \span S\).

  5. Thus, \(\span S\) is closed under vector addition.

Combining, \(\span S\) is a linear subspace.

Let \(W\) be a subspace such that \(S \subseteq W\).

  1. Let \(\bv \in \span S\).

  2. Then, \(\bv = \sum_{i=1}^p t_i \bv_i\) is a linear combination of elements of \(S\).

  3. But \(W\) is closed under linear combinations and \(S \subseteq W\).

  4. Thus, \(\bv \in W\).

  5. Thus, \(\span S \subseteq W\).

This theorem is quite useful. It allows us to construct subspaces from a given basis.

Let \(\BBB\) be a basis of an \(n\) dimensional space \(\VV\). There are \(n\) vectors in \(\BBB\). We can create \(2^n\) distinct subsets of \(\BBB\). Since distinct subsets of a linearly independent set have distinct spans, we can easily construct \(2^n\) distinct subspaces of \(\VV\).

Choosing some other basis lets us construct another set of subspaces.

An \(n\)-dimensional vector space over an infinite field such as \(\RR\) has infinitely many bases. Correspondingly, for \(n \geq 2\), there are infinitely many distinct subspaces.

Definition 4.22 (Partial order on subspaces)

Let \(\VV\) be a vector space. If \(\WW_1\) and \(\WW_2\) are two subspaces of \(\VV\) then we say that \(\WW_1\) is smaller than \(\WW_2\) if \(\WW_1 \subset \WW_2\); i.e., \(\WW_1\) is a proper subset of \(\WW_2\). We denote this partial order by \(\WW_1 \prec \WW_2\).

If \(\WW_1 \subseteq \WW_2\), we denote this as \(\WW_1 \preceq \WW_2\).

Theorem 4.13 (Span as the smallest subspace)

Let \(\VV\) be a vector space. Let \(\bv_1, \dots, \bv_p\) be some arbitrary vectors in \(\VV\). Let \(S = \{ \bv_1, \dots, \bv_p \}\). Let the span of \(S\) be denoted as

\[ U = \span S = \langle \bv_1, \dots, \bv_p \rangle. \]

Let \(\WW\) be the smallest subspace containing the vectors in \(S\). Then

\[ \WW = U. \]

i.e. \(\WW\) is the same as the span of \(\{ \bv_1, \dots, \bv_p \}\). In other words, \(\span S\) is the smallest subspace containing \(S\).

Proof. Since \(\WW\) contains \(S\), we have \(\span S \subseteq \WW\) due to Theorem 4.12. At the same time, \(\span S\) is itself a subspace containing \(S\), and \(\WW\) is the smallest such subspace; hence \(\WW \subseteq \span S\). Thus, \(\WW = \span S\).

If the vector space \(\VV\) is not finite dimensional, then its subspaces may or may not be finite dimensional. E.g., if \(W\) is the span of a fixed number \(p\) of vectors from \(\VV\), then it is finite dimensional with \(\dim W \leq p\) irrespective of whether \(\VV\) is finite dimensional or not.

However, if \(\VV\) is finite dimensional, then its subspaces must be finite dimensional.

Theorem 4.14 (Subspaces of a finite dimensional vector space)

Let \(\WW\) be a subspace of a finite-dimensional vector space \(\VV\). Then \(\WW\) is finite dimensional and

\[ \dim \WW \leq \dim \VV. \]

Moreover, if

\[ \dim \WW = \dim \VV, \]

then \(\WW = \VV\).

Proof. If \(\VV\) is finite dimensional, then it has a finite basis of, say, \(n\) vectors. Any linearly independent subset of \(\WW\) is also a linearly independent subset of \(\VV\) and hence, by the replacement theorem (Theorem 4.9), contains at most \(n\) vectors. Thus, \(\WW\) is finite dimensional and \(\dim \WW \leq n = \dim \VV\).

Now, assume that

\[ n = \dim \WW = \dim \VV. \]
  1. Let \(\BBB\) be a basis for \(\WW\).

  2. Then \(\BBB\) spans \(\WW\) and contains \(n\) linearly independent vectors.

  3. Since \(\WW \subseteq \VV\), we have \(\BBB \subseteq \VV\).

  4. But \(n = \dim \VV\).

  5. Hence, any set of \(n\) linearly independent vectors must be a basis for \(\VV\) too.

  6. Thus, \(\BBB\) is a basis for \(\VV\).

  7. Thus, \(\VV = \WW = \span \BBB\).

Corollary 4.4

If \(\WW\) is a subspace of a finite-dimensional vector space \(\VV\), then any basis for \(\WW\) can be extended to a basis for \(\VV\).

Definition 4.23 (Subspace codimension)

Let \(\VV\) be a finite dimensional vector space and \(\WW\) be a subspace of \(\VV\). The codimension of \(\WW\) is defined as

\[ \text{codim} \WW = \dim \VV - \dim \WW. \]
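For example, a plane through the origin in \(\RR^3\) is a two-dimensional subspace and hence has codimension \(3 - 2 = 1\), while a line through the origin has codimension \(3 - 1 = 2\).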

4.1.9. Direct Sum of Vector Spaces

Consider two vector spaces \(\VV\) and \(\WW\) over the same scalar field \(\FF\). Often, it is useful to consider the Cartesian product set \(\VV \times \WW\) and add a vector space structure to it. One common example is \(\VV=\RR^n\), \(\WW=\RR^m\) and \(\VV \times \WW = \RR^{n+m}\).

Definition 4.24 (Direct sum of vector spaces)

Let \(\VV\) and \(\WW\) be two different vector spaces over some field \(\FF\). A vector space structure over the field \(\FF\) can be introduced to the set \(\VV \times \WW\) as per the following definitions.

  1. Let \(\bzero_v \in \VV\) and \(\bzero_w \in \WW\) be the additive identities of \(\VV\) and \(\WW\) respectively. The additive identity of \(\VV \times \WW\) is given by

    \[ \bzero = (\bzero_v, \bzero_w). \]
  2. [Vector addition] Let \((\bx, \bu)\) and \((\by, \bv)\) be in \(\VV \times \WW\). Then, their sum is defined as:

    \[ (\bx, \bu) + (\by, \bv) \triangleq (\bx + \by, \bu + \bv). \]
  3. [Scalar multiplication] Let \((\bx, \bu) \in \VV \times \WW\) and \(\alpha \in \FF\). Then, the scalar multiplication is defined as:

    \[ \alpha (\bx, \bu) \triangleq (\alpha \bx, \alpha \bu). \]

\(\VV \times \WW\) equipped with these definitions is a vector space over \(\FF\) in its own right and is called the direct sum of \(\VV\) and \(\WW\) denoted by \(\VV \oplus \WW\).

Note

Some authors prefer to call \(\VV \oplus \WW\) a direct product (a Cartesian product with vector space structure) and reserve the term direct sum for sums within the same vector space (Definition 4.29 below).

For them, external direct sum is a direct product and internal direct sum is a direct sum.

Example 4.12

With the definition of direct sum given above, we have:

\[ \FF^2 = \FF \oplus \FF \]

or more generally,

\[ \FF^{n} = \FF \oplus \FF \oplus \dots \oplus \FF = \bigoplus_{i=1}^n \FF. \]

Specifically,

\[ \RR^n = \bigoplus_{i=1}^n \RR. \]

Also,

\[ \RR^{n + m} = \RR^n \times \RR^m. \]

4.1.10. Sets in Vector Spaces

4.1.10.1. Set Arithmetic

Definition 4.25 (Arithmetic on sets)

Let \(C\), \(D\) be subsets of an \(\FF\) vector space \(\VV\). Let \(\bz \in \VV\). Let \(\lambda \in \FF\). Let \(\Lambda \subseteq \FF\).

The addition of sets is defined as:

\[ C + D \triangleq \{ \bx + \by \ST \bx \in C, \by \in D \}. \]

The subtraction of sets is defined as:

\[ C - D \triangleq \{ \bx - \by \ST \bx \in C, \by \in D \}. \]

Addition of a set with a vector is defined as:

\[ \bz + C \triangleq \{ \bz + \bx \ST \bx \in C\} = \{ \bz \} + C. \]

Subtraction of a set with a vector is defined as:

\[ C - \bz \triangleq \{ \bx - \bz \ST \bx \in C\} = C - \{ \bz \}. \]

Scalar multiplication of a set with a scalar is defined as:

\[ \lambda C \triangleq \{ \lambda \bx \ST \bx \in C \}. \]

Multiplication of a set of scalars with a set of vectors is defined as:

\[ \Lambda C \triangleq \bigcup_{\lambda \in \Lambda} \lambda C. \]

Multiplication of a set of scalars with a vector is defined as:

\[ \Lambda \bz \triangleq \Lambda \{ \bz \} = \{ \lambda \bz \ST \lambda \in \Lambda\}. \]
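For finite subsets of \(\RR^2\), these operations are easy to compute directly. A sketch (sets represented as Python sets of tuples, our encoding choice):

```python
# A sketch of set arithmetic for finite subsets of R^2.
from itertools import product

C = {(0.0, 0.0), (1.0, 0.0)}
D = {(0.0, 1.0), (0.0, 2.0)}
z = (5.0, 5.0)
Lam = {1.0, -1.0}

add = lambda x, y: tuple(a + b for a, b in zip(x, y))
scale = lambda t, x: tuple(t * a for a in x)

print({add(x, y) for x, y in product(C, D)})  # C + D
print({add(z, x) for x in C})                 # z + C
print({scale(t, x) for t in Lam for x in C})  # Lambda C, the union of lambda C
```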

4.1.10.2. Symmetry

Definition 4.26 (Symmetric reflection)

The symmetric reflection of a set \(C\) across the origin is given by \(-C = (-1) C\).

Definition 4.27 (Symmetric set)

A set \(C\) is said to be symmetric if \(-C = C\) holds true.

Note that a nonempty symmetric set need not contain \(\bzero\); e.g., the set \(\{ -\bv, \bv \}\) is symmetric for any nonzero \(\bv\). However, since \(-\bzero = \bzero\), the singleton \(\{ \bzero \}\) is symmetric.

4.1.10.3. Set Arithmetic Properties

Theorem 4.15 (Set vector arithmetic identities)

Let \(C\), \(D\) be subsets of an \(\FF\) vector space \(\VV\). Let \(\bz \in \VV\). Let \(\lambda \in \FF\). Let \(\Lambda \subseteq \FF\).

\[ C = (C + \bz) - \bz. \]
\[ C \subseteq D \iff C + \bz \subseteq D + \bz. \]

Proof. We show that \(C = (C + \bz) - \bz\).

  1. Let \(\bx \in C\).

  2. Then, \(\bx = (\bx + \bz) - \bz\).

  3. Then, \(\bx + \bz \in C+\bz\).

  4. Then, \((\bx + \bz) - \bz \in (C + \bz) - \bz\).

  5. Thus, \(C \subseteq (C + \bz) - \bz\).

  6. Conversely, let \(\bx \in (C + \bz) - \bz\).

  7. Then, \(\bx + \bz \in C+\bz\).

  8. But then, \(\bx \in C\).

  9. Thus, \((C + \bz) - \bz \subseteq C\).

  10. Combining \(C = (C + \bz) - \bz\).

We show that \(C \subseteq D \iff C + \bz \subseteq D + \bz\).

Assume \(C \subseteq D\).

  1. Let \(\bx \in C + \bz\).

  2. Then, \(\bx - \bz \in C\).

  3. Then, \(\bx - \bz \in D\) since \(C \subseteq D\).

  4. Then, \(\bx \in D + \bz\).

  5. Thus, \(C + \bz \subseteq D + \bz\).

Now, assume \(C + \bz \subseteq D + \bz\).

  1. Let \(\bx \in C\).

  2. Then, \(\bx + \bz \in C + \bz\).

  3. Then, \(\bx + \bz \in D + \bz\) since \(C + \bz \subseteq D + \bz\).

  4. Then, \(\bx \in D\).

  5. Thus, \(C \subseteq D\).

Theorem 4.16 (Properties of set arithmetic)

Let \(\VV\) be a vector space over a field \(\FF\).

Let \(C, D, E \subseteq \VV\) and \(\alpha, \beta \in \FF\).

  1. Set addition is commutative:

    \[ C + D = D + C. \]
  2. Set addition is associative:

    \[ C + (D + E) = (C + D) + E. \]
  3. Scalar multiplication with a set commutes with multiplication in \(\FF\):

    \[ \alpha (\beta C) = (\alpha \beta) C = (\beta \alpha )C = \beta (\alpha C). \]
  4. Scalar multiplication distributes over set addition:

    \[ \alpha (C + D) = (\alpha C) + (\alpha D). \]
  5. The set \(\{ \bzero \}\) is the identity element for set addition:

    \[ C + \{ \bzero \} = \{ \bzero \} + C = C. \]
  6. For nonempty \(C\), the set \(F = C + (-C)\) is symmetric and \(\bzero \in F\).

  7. \( (\alpha + \beta) C \subseteq \alpha C + \beta C\).

Proof. We shall prove some of the properties.

[Commutativity]

  1. Let \(\bu \in C + D\).

  2. Then, there exists, \(\bx \in C\) and \(\by \in D\) such that \(\bu = \bx + \by\).

  3. But then, \(\bu = \by + \bx\) since vector addition is commutative.

  4. Thus, \(\bu \in D + C\).

  5. Thus, \(C + D \subseteq D + C\).

  6. Similarly, \(D + C \subseteq C + D\).

[Associativity]

  1. Let \(\ba \in C + (D + E)\).

  2. Then, there exists \(\bc \in C\) and \(\bb \in D + E\) such that \(\ba = \bc + \bb\).

  3. Then, there exists \(\bd \in D\) and \(\be \in E\) such that \(\bb = \bd + \be\).

  4. Thus, \(\ba = \bc + (\bd + \be)\).

  5. But vector addition is associative.

  6. Thus, \(\ba = (\bc + \bd) + \be\).

  7. Then, \(\bc + \bd \in C + D\).

  8. Thus, \(\ba \in (C + D) + E\).

  9. Thus, \(C + (D + E) \subseteq (C + D) + E\).

  10. Similar reasoning shows that \((C + D) + E \subseteq C + (D + E)\).

\( (\alpha + \beta) C \subseteq \alpha C + \beta C\)

  1. Let \(\bx \in (\alpha + \beta) C\).

  2. Then, there exists \(\by \in C\) such that \(\bx = (\alpha + \beta) \by\).

  3. Then, \(\bx = \alpha \by + \beta \by\).

  4. Then, \(\alpha \by \in \alpha C\) and \(\beta \by \in \beta C\).

  5. Thus, \(\bx \in \alpha C + \beta C\).

  6. Thus, \((\alpha + \beta) C \subseteq \alpha C + \beta C\).

Additive inverses don’t exist for sets containing more than one element: if \(C\) has more than one element, there is no set \(D\) with \(C + D = \{ \bzero \}\).

Theorem 4.17 (Distributive law for set sum over intersection)

Let \(\VV\) be a vector space over a field \(\FF\).

Let \(C, D, E \subseteq \VV\). Then

\[ (C \cap D) + E \subseteq (C + E) \cap (D + E). \]

Proof. We show that \((C \cap D) + E \subseteq (C + E) \cap (D + E)\).

  1. Let \(\bv \in (C \cap D) + E\).

  2. Then, \(\bv = \bx + \by\) such that \(\bx \in (C \cap D)\) and \(\by \in E\).

  3. Then, \(\bx \in C\), \(\bx \in D\) and \(\by \in E\).

  4. Thus, \(\bx + \by \in C + E\) and \(\bx + \by \in D + E\).

  5. Thus, \(\bv = \bx + \by \in (C + E) \cap (D + E)\).

  6. Thus, \((C \cap D) + E \subseteq (C + E) \cap (D + E)\).

We mention that \( (C + E) \cap (D + E) \not\subseteq (C \cap D) + E\) in general. Here is an example.

  1. Consider \(\VV = \RR^2\).

  2. Let \(C\) be the \(x\)-axis.

  3. Let \(D\) be the \(y\)-axis.

  4. Let \(E\) be the line \(\{ (x, y) \ST x = y \}\).

  5. Both \(C+E\) and \(D+E\) are the whole plane \(\RR^2\).

  6. Thus, \((C + E) \cap (D + E) = \RR^2\).

  7. But \(C \cap D = \{ \bzero \}\).

  8. Thus, \((C \cap D) + E = E\).

  9. Clearly, \((C + E) \cap (D + E) \not\subseteq (C \cap D) + E\).

4.1.10.4. Direct Sums

Definition 4.28 ((External) direct sum over two different vector spaces)

Let \(\VV\) and \(\WW\) be vector spaces over \(\FF\). Let \(C \subseteq \VV\) and \(D \subseteq \WW\). Then, the direct sum of \(C\) and \(D\) is defined as

\[ C \oplus D \triangleq \{ \bx = (\by, \bz) \ST \by \in C, \bz \in D \}. \]

\(C \oplus D\) is a subset of the (direct sum) vector space \(\VV \oplus \WW\).

Definition 4.29 ((Internal) direct sum of sets in same vector space)

Let \(C,D\) be subsets of \(\VV\). If each vector \(\bx \in C+D\) can be expressed uniquely in the form

\[ \bx = \by + \bz, \by \in C, \bz \in D; \]

then the sum \(C + D\) is called a direct sum of \(C\) and \(D\) and is denoted by \(C \oplus D\).

The key idea behind a direct sum is that it enables a unique decomposition of a vector into its components belonging to individual sets.

Theorem 4.18

Two subsets \(C, D\) of \(\VV\) are eligible for a direct sum if and only if the symmetric sets \(C - C\) and \(D - D\) have only the zero vector \(\bzero\) in common.

If two subspaces of \(\VV\) have only \(\bzero\) in common, then any vector in their sum can be decomposed uniquely between the two subspaces.

4.1.11. Sums of Subspaces

Definition 4.30 ((Internal) direct sum of subspaces)

If \(\WW_1\) and \(\WW_2\) are two subspaces of \(\VV\) such that \(\WW_1 \cap \WW_2 = \{ \bzero \}\), then the direct sum of \(\WW_1\) and \(\WW_2\) is defined as:

\[ \WW_1 \oplus \WW_2 \triangleq \WW_1 + \WW_2 = \{ \bx_1 + \bx_2 \ST \bx_1 \in \WW_1, \bx_2 \in \WW_2 \}. \]

If, in particular, \(\VV = \WW_1 + \WW_2\), we say that \(\VV\) is a direct sum of its subspaces \(\WW_1\) and \(\WW_2\) and write as \(\VV = \WW_1 \oplus \WW_2\).

Example 4.13 (Vector space as direct sum of spans of basis vectors)

The spans of basis vectors have only \(\bzero\) in common. They indeed offer a unique decomposition of any vector in the space. Thus, for a finite dimensional space \(\VV\) with a basis \(\BBB = \{\bv_1, \dots, \bv_n \}\)

\[ \VV = \bigoplus_{i=1}^n \span \{ \bv_i \}. \]
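A sketch of this decomposition in the familiar case \(\VV = \RR^3\) with the standard basis (our worked example): the components along the coordinate axes recover the vector uniquely.

```python
# A sketch: R^3 is the direct sum of the spans of e1, e2, e3;
# the decomposition of x is unique.
import numpy as np

x = np.array([3.0, -1.0, 4.0])
e = np.eye(3)   # rows are the standard basis vectors

components = [x[i] * e[i] for i in range(3)]  # x_i e_i lies in span{e_i}
assert np.array_equal(sum(components), x)     # x is the sum of its components
print(components)
```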

Theorem 4.19 (Dimension of sum of subspaces)

Let \(W_1\) and \(W_2\) be finite dimensional subspaces of a vector space \(\VV\). Then

\[ \dim (W_1 + W_2) = \dim W_1 + \dim W_2 - \dim (W_1 \cap W_2). \]

Theorem 4.20 (Dimension of direct sum of subspaces)

Let \(W_1\) and \(W_2\) be finite dimensional subspaces of a vector space \(\VV\) such that \(W_1 \cap W_2 = \{\bzero \}\). Then

\[ \dim (W_1 \oplus W_2) = \dim W_1 + \dim W_2. \]

Proof. We note that \(W_1 \cap W_2 = \{\bzero \}\) implies that \(\dim (W_1 \cap W_2) = 0\). The rest follows from the previous result, Theorem 4.19.
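A numerical illustration of these dimension formulas in \(\RR^3\), with subspaces given as column spans (our example):

```python
# A sketch: with W1 = span{e1, e2} and W2 = span{e3, e1 + e2},
# dim(W1 + W2) is the rank of the stacked generators, and
# dim of the intersection follows from Theorem 4.19.
import numpy as np
from numpy.linalg import matrix_rank

W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])   # columns span the xy-plane
W2 = np.array([[0.0, 1.0],
               [0.0, 1.0],
               [1.0, 0.0]])   # columns are e3 and e1 + e2

dim_sum = matrix_rank(np.hstack([W1, W2]))
print(dim_sum)                                      # 3
print(matrix_rank(W1) + matrix_rank(W2) - dim_sum)  # 1: dimension of the intersection
```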

4.1.12. Real Vector Spaces

Definition 4.31 (Real vector space)

A real vector space \(\VV\) is a vector space defined over the real field \(\RR\).

In other words, the scalars come from the field of real numbers.

There are several features associated with the real field.