4.5. Inner Product Spaces#
Inner products are a generalization of the notion of dot product. We restrict our attention to real vector spaces and complex vector spaces. Thus, the field \(\FF\) can be either \(\RR\) or \(\CC\).
4.5.1. Inner Product#
(Inner product)
An inner product over an \(\FF\)-vector space \(\VV\) is any map \(\langle, \rangle : \VV \times \VV \to \FF\) mapping \((\bv_1, \bv_2) \mapsto \langle \bv_1, \bv_2 \rangle\) satisfying the following properties:
[Positive definiteness]
\[ \langle \bv, \bv \rangle \geq 0 \text{ and } \langle \bv, \bv \rangle = 0 \iff \bv = \bzero. \]
[Conjugate symmetry]
\[ \langle \bv_1, \bv_2 \rangle = \overline{\langle \bv_2, \bv_1 \rangle} \quad \forall \bv_1, \bv_2 \in \VV. \]
[Linearity in the first argument]
\[\begin{split} \begin{aligned} &\langle \alpha \bv, \bw \rangle = \alpha \langle \bv, \bw \rangle \quad \forall \bv, \bw \in \VV; \forall \alpha \in \FF\\ &\langle \bv_1 + \bv_2, \bw \rangle = \langle \bv_1, \bw \rangle + \langle \bv_2, \bw \rangle \quad \forall \bv_1, \bv_2,\bw \in \VV \end{aligned} \end{split}\]
(Scaling in second argument)
Let \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \FF\) be an inner product. Then for any \(\bv, \bw \in \VV\) and any \(\alpha \in \FF\):
\[ \langle \bv, \alpha \bw \rangle = \overline{\alpha} \langle \bv, \bw \rangle. \]
Proof. We proceed as follows:
\[ \langle \bv, \alpha \bw \rangle = \overline{\langle \alpha \bw, \bv \rangle} = \overline{\alpha \langle \bw, \bv \rangle} = \overline{\alpha} \; \overline{\langle \bw, \bv \rangle} = \overline{\alpha} \langle \bv, \bw \rangle. \]
(Distribution in second argument)
Let \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \FF\) be an inner product. Then for any \(\bv, \bx, \by \in \VV\):
\[ \langle \bv, \bx + \by \rangle = \langle \bv, \bx \rangle + \langle \bv, \by \rangle. \]
Proof. We proceed as follows:
\[ \langle \bv, \bx + \by \rangle = \overline{\langle \bx + \by, \bv \rangle} = \overline{\langle \bx, \bv \rangle + \langle \by, \bv \rangle} = \overline{\langle \bx, \bv \rangle} + \overline{\langle \by, \bv \rangle} = \langle \bv, \bx \rangle + \langle \bv, \by \rangle. \]
(Inner product with zero)
Let \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \FF\) be an inner product. Then,
\[ \langle \bv, \bzero \rangle = \langle \bzero, \bv \rangle = 0 \quad \forall \bv \in \VV. \]
Proof. We proceed as follows:
\[ \langle \bzero, \bv \rangle = \langle \bzero + \bzero, \bv \rangle = \langle \bzero, \bv \rangle + \langle \bzero, \bv \rangle. \]
By cancelling terms, we get:
\[ \langle \bzero, \bv \rangle = 0. \]
Using the conjugate symmetry, we get:
\[ \langle \bv, \bzero \rangle = \overline{\langle \bzero, \bv \rangle} = \overline{0} = 0. \]
Linearity in the first argument extends to arbitrary linear combinations:
\[ \left \langle \sum_{i=1}^n \alpha_i \bv_i, \bw \right \rangle = \sum_{i=1}^n \alpha_i \langle \bv_i, \bw \rangle. \]
Similarly, we have conjugate linearity in the second argument for arbitrary linear combinations:
\[ \left \langle \bv, \sum_{i=1}^n \alpha_i \bw_i \right \rangle = \sum_{i=1}^n \overline{\alpha_i} \langle \bv, \bw_i \rangle. \]
The standard inner product on \(\RR^n\) is defined as:
\[ \langle \bx, \by \rangle = \sum_{i=1}^n x_i y_i = \by^T \bx \quad \forall \bx, \by \in \RR^n. \]
This is often called the dot product or scalar product.
The standard inner product on \(\CC^n\) is defined as:
\[ \langle \bx, \by \rangle = \sum_{i=1}^n x_i \overline{y_i} = \by^H \bx \quad \forall \bx, \by \in \CC^n. \]
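The following NumPy snippet (an illustration, not part of the original text) evaluates both standard inner products; note that `np.vdot` conjugates its first argument, so `np.vdot(y, x)` computes \(\sum_i x_i \overline{y_i}\).

```python
# Illustrative check of the standard inner products (not part of the text).
import numpy as np

# Standard inner product on R^n: <x, y> = sum_i x_i y_i
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.5])
print(np.dot(x, y))                        # 1*4 + 2*(-1) + 3*0.5 = 3.5

# Standard inner product on C^n: <u, v> = sum_i u_i conj(v_i)
u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1j])
ip_uv = np.vdot(v, u)                      # np.vdot conjugates its first argument
ip_vu = np.vdot(u, v)
print(np.isclose(ip_uv, np.conj(ip_vu)))   # conjugate symmetry holds
```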
Let \(\bx, \by \in \RR^2\). Define:
\[ \langle \bx, \by \rangle = x_1 y_1 - x_2 y_1 - x_1 y_2 + 4 x_2 y_2. \]
Now:
\(\langle \bx, \bx \rangle = (x_1 - x_2)^2 + 3 x_2^2 \geq 0\), and \(\langle \bx, \bx \rangle = 0 \iff \bx = \bzero\). Thus, it is positive definite.
\(\langle \by, \bx \rangle = y_1 x_1 - y_2 x_1 - y_1 x_2 + 4 y_2 x_2 = \langle \bx, \by \rangle\). It is symmetric.
We can also verify that it is linear in the first argument.
Thus, it satisfies all the properties of an inner product.
Note that, in matrix notation, we can write this inner product as:
\[ \langle \bx, \by \rangle = \by^T \begin{bmatrix} 1 & -1 \\ -1 & 4 \end{bmatrix} \bx. \]
The matrix
\[ \bA = \begin{bmatrix} 1 & -1 \\ -1 & 4 \end{bmatrix} \]
is positive definite. Its trace is \(5\) and its determinant is \(3\). Its eigenvalues are approximately \(4.303\) and \(0.697\).
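As a quick numerical sanity check (illustrative only), the following NumPy snippet evaluates the inner product both directly and in the matrix form, and recovers the trace, determinant and eigenvalues quoted above.

```python
# Illustrative check of the R^2 example above (not part of the text).
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 4.0]])
x = np.array([2.0, -1.0])
y = np.array([0.5, 3.0])

direct = x[0]*y[0] - x[1]*y[0] - x[0]*y[1] + 4*x[1]*y[1]
matrix_form = y @ A @ x
print(np.isclose(direct, matrix_form))     # the two forms agree
print(np.trace(A), np.linalg.det(A))       # 5.0, 3.0 (approximately)
print(np.linalg.eigvalsh(A))               # approx [0.697, 4.303]
```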
Let \(\CC^{n \times n}\) be the space of \(n \times n\) matrices. For any \(\bA = (a_{ij})\) and \(\bB = (b_{ij})\) in \(\CC^{n \times n}\), we define the inner product as:
\[ \langle \bA, \bB \rangle = \sum_{i=1}^n \sum_{j=1}^n a_{ij} \overline{b_{ij}}. \]
It can be easily seen that:
\[ \langle \bA, \bB \rangle = \Trace (\bA \bB^H) \]
where \(\bB^H\) is the conjugate transpose of \(\bB\) and \(\Trace\) computes the trace of a matrix (sum of its diagonal values).
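A short NumPy check (illustrative, with randomly generated matrices assumed) that the entrywise sum agrees with the trace formula:

```python
# Illustrative check: entrywise definition vs. trace formula (random matrices assumed).
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

entrywise = np.sum(A * np.conj(B))          # sum_ij a_ij * conj(b_ij)
trace_form = np.trace(A @ B.conj().T)       # Trace(A B^H)
print(np.allclose(entrywise, trace_form))   # True
```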
Let \(\CC^{n \times 1}\) be the space of column vectors. Let \(\bQ\) be an arbitrary \(n \times n\) invertible matrix over \(\CC\).
For any \(\bx, \by \in \CC^{n \times 1}\), define
\[ \langle \bx, \by \rangle = (\bQ \by)^H (\bQ \bx) = \by^H \bQ^H \bQ \bx. \]
We identify the \(1 \times 1\) matrix on the R.H.S. with its single entry, a complex number in \(\CC\). This is a valid inner product.
When \(\bQ = \bI\), the identity matrix, the inner product reduces to:
\[ \langle \bx, \by \rangle = \by^H \bx. \]
This is the standard inner product on the space of column vectors.
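A hedged numerical sketch of this construction, assuming the form \(\langle \bx, \by \rangle = (\bQ \by)^H (\bQ \bx)\) shown above; it spot-checks positive definiteness and conjugate symmetry for a random invertible \(\bQ\):

```python
# Hedged sketch: assumes the construction <x, y> = (Q y)^H (Q x) for invertible Q.
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # almost surely invertible

def ip(x, y):
    return np.vdot(Q @ y, Q @ x)        # (Q y)^H (Q x)

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(ip(x, x).real > 0)                          # positive on a nonzero sample
print(np.allclose(ip(x, y), np.conj(ip(y, x))))   # conjugate symmetry
```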
For complex inner products, the inner product is completely determined by its real part.
This statement may be confusing. Let us unpack what it means. Let
\[ \langle \bx, \by \rangle = \Re \langle \bx, \by \rangle + i \Im \langle \bx, \by \rangle. \]
Then, computing the inner product involves computing the real part as well as the imaginary part. What the statement means is that, if we know how to compute \(\Re\langle \bx, \by \rangle\) for any \(\bx, \by \in \VV\), then, we can use the same method to compute \(\Im\langle \bx, \by \rangle\) too; but using different inputs. See below.
Proof. Let
\[ \langle \bx, \by \rangle = \Re \langle \bx, \by \rangle + i \Im \langle \bx, \by \rangle. \]
For any complex number \(z = x + i y \in \CC\), we have:
\[ \Re (-i z) = \Re (-i (x + i y)) = \Re (y - i x) = y = \Im (z). \]
Since \(\langle \bx, \by \rangle\) is a complex number, hence:
\[ \Im \langle \bx, \by \rangle = \Re \left ( -i \langle \bx, \by \rangle \right ) = \Re \langle \bx, i \by \rangle. \]
Thus,
\[ \langle \bx, \by \rangle = \Re \langle \bx, \by \rangle + i \Re \langle \bx, i \by \rangle. \]
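A numerical spot-check of this identity (using the standard inner product on \(\CC^n\) as an assumed example):

```python
# Spot-check of <x, y> = Re<x, y> + i Re<x, i y> for the standard inner product on C^n.
import numpy as np

rng = np.random.default_rng(2)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def ip(a, b):
    return np.vdot(b, a)       # <a, b> = sum_i a_i conj(b_i), linear in the first argument

lhs = ip(x, y)
rhs = ip(x, y).real + 1j * ip(x, 1j * y).real
print(np.allclose(lhs, rhs))   # True
```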
4.5.2. Real Inner Product#
From the perspective of convex analysis, the general inner product is not very useful. We prefer a special class of inner products whose value is always real. This is applicable to vector spaces where the field of scalars is \(\RR\).
(Real inner product)
A real inner product over an \(\RR\)-vector space \(\VV\) is any map \(\langle, \rangle : \VV \times \VV \to \RR\) mapping \((\bv_1, \bv_2) \mapsto \langle \bv_1, \bv_2 \rangle\) satisfying the following properties:
[Positive definiteness]
\[ \langle \bv, \bv \rangle \geq 0 \text{ and } \langle \bv, \bv \rangle = 0 \iff \bv = \bzero. \]
[Symmetry]
\[ \langle \bv_1, \bv_2 \rangle = \langle \bv_2, \bv_1 \rangle \quad \forall \bv_1, \bv_2 \in \VV. \]
[Linearity in the first argument]
\[\begin{split} \begin{aligned} &\langle \alpha \bv, \bw \rangle = \alpha \langle \bv, \bw \rangle \quad \forall \bv, \bw \in \VV; \forall \alpha \in \RR\\ &\langle \bv_1 + \bv_2, \bw \rangle = \langle \bv_1, \bw \rangle + \langle \bv_2, \bw \rangle \quad \forall \bv_1, \bv_2,\bw \in \VV \end{aligned} \end{split}\]
A real inner product is always real valued, whether the underlying vectors are real or complex.
Since a real inner product is symmetric and linear in the first argument, it is linear in the second argument too.
(A real inner product for \(\CC^n\) over \(\RR\))
In this example, we are dealing with \(n\)-tuples of complex numbers in \(\CC^n\) with the field of scalars being \(\RR\). It can be easily checked that \(\CC^n\) over \(\RR\) is a vector space.
Let \(z_1 = x_1 + i y_1\) and \(z_2 = x_2 + i y_2\) be complex numbers.
Then
\[ \Re (z_1 \overline{z_2}) = \Re \left ( (x_1 + i y_1)(x_2 - i y_2) \right ) = x_1 x_2 + y_1 y_2. \]
\(\Re (z \overline{z}) = x^2 + y^2\) is positive definite; i.e., \(\Re (z \overline{z}) = 0 \iff z = 0 + i 0\).
\(\Re (z_1 \overline{z_2}) = \Re (z_2 \overline{z_1})\) is symmetric.
For any \(\alpha \in \RR\), \(\Re (\alpha z_1 \overline{z_2}) = \alpha \Re (z_1 \overline{z_2})\). Thus, it is linear in the first argument.
Now, for any \(\bx, \by \in \CC^n\), define:
\[ \langle \bx, \by \rangle = \Re \left ( \sum_{i=1}^n x_i \overline{y_i} \right ). \]
Following the argument above, it is a real inner product on \(\CC^n\).
Interestingly, if \(\bu \in \CC^n\) is identified with \(\bv \in \RR^{2 n}\) by stacking the real and imaginary parts, then the real inner product defined above for \(\CC^n\) is nothing but the standard inner product for \(\RR^{2 n}\).
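A small NumPy sketch (illustrative only) of this identification: the real inner product on \(\CC^n\) agrees with the standard inner product of the stacked real and imaginary parts in \(\RR^{2 n}\).

```python
# Sketch: Re(sum_i x_i conj(y_i)) equals the R^{2n} inner product of stacked parts.
import numpy as np

rng = np.random.default_rng(3)
n = 5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

real_ip = np.vdot(y, x).real                 # Re(sum_i x_i conj(y_i))
xs = np.concatenate([x.real, x.imag])        # identify C^n with R^{2n}
ys = np.concatenate([y.real, y.imag])
print(np.isclose(real_ip, np.dot(xs, ys)))   # True
```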
While the presentation in the rest of the section will be based on the general conjugate symmetric inner product, it will be easy to extrapolate the results for the special case of real inner products.
4.5.3. Inner Product Space#
(Inner product space / Pre-Hilbert space)
An \(\FF\)-vector space \(\VV\) equipped with an inner product \(\langle, \rangle : \VV \times \VV \to \FF\) is known as an inner product space or a pre-Hilbert space.
4.5.4. Orthogonality#
Orthogonality is the generalization of the notion of perpendicularity from elementary geometry.
(Orthogonal vectors)
Any two vectors \(\bu , \bv \in \VV\) are called orthogonal to each other if \(\langle \bu, \bv \rangle = 0\).
We write \(\bu \perp \bv\) if \(\bu\) and \(\bv\) are orthogonal to each other.
(Set of orthogonal vectors)
A set of non-zero vectors \(\{\bv_1, \dots, \bv_p\}\) is called orthogonal or pairwise orthogonal if
\[ \langle \bv_i, \bv_j \rangle = 0 \quad \text{ whenever } i \neq j. \]
(Orthogonality implies independence)
A set of orthogonal vectors is linearly independent.
Proof. Let \(\bv_1, \dots, \bv_n\) be a set of orthogonal vectors. Suppose there is a linear combination:
\[ \alpha_1 \bv_1 + \dots + \alpha_n \bv_n = \bzero. \]
Taking inner product on both sides with \(\bv_j\), we get:
\[ 0 = \langle \bzero, \bv_j \rangle = \left \langle \sum_{i=1}^n \alpha_i \bv_i, \bv_j \right \rangle = \sum_{i=1}^n \alpha_i \langle \bv_i, \bv_j \rangle = \alpha_j \langle \bv_j, \bv_j \rangle. \]
Since \(\bv_j \neq \bzero\), hence \(\langle \bv_j, \bv_j \rangle > 0\), which forces \(\alpha_j = 0\) for every \(j\).
Thus, the only zero linear combination is the trivial combination. Thus, the vectors are linearly independent.
4.5.5. Norm Induced by Inner Product#
(Norm induced by inner product)
Every inner product \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \FF\) on a vector space \(\VV\) induces a norm \(\| \cdot \| : \VV \to \RR\) given by:
\[ \| \bv \| = \sqrt{\langle \bv, \bv \rangle} \quad \forall \bv \in \VV. \]
We shall justify that this function satisfies all the properties of a norm later. But before that, let us examine some implications of this definition which are useful in their own right.
Note that it is easy to see that \(\| \cdot \|\) is positive definite; i.e., \(\| \bzero \| = 0\) and \(\| \bv \| > 0\) if \(\bv \neq \bzero\).
Also, it is positively homogeneous, since:
\[ \| \alpha \bv \| = \sqrt{\langle \alpha \bv, \alpha \bv \rangle} = \sqrt{\alpha \overline{\alpha} \langle \bv, \bv \rangle} = | \alpha | \sqrt{\langle \bv, \bv \rangle} = | \alpha | \| \bv \|. \]
(Pythagoras theorem)
If \(\bu \perp \bv\) then
\[ \| \bu + \bv \|^2 = \| \bu \|^2 + \| \bv \|^2. \]
Proof. Expanding:
\[ \| \bu + \bv \|^2 = \langle \bu + \bv, \bu + \bv \rangle = \langle \bu, \bu \rangle + \langle \bu, \bv \rangle + \langle \bv, \bu \rangle + \langle \bv, \bv \rangle = \| \bu \|^2 + \| \bv \|^2 \]
where we used the fact that: \(\langle \bu, \bv \rangle = \langle \bv, \bu \rangle = 0\) since \(\bu \perp \bv\).
(Cauchy-Schwarz inequality)
For any \(\bu, \bv \in \VV\):
\[ | \langle \bu, \bv \rangle | \leq \| \bu \| \| \bv \|. \]
The equality holds if and only if \(\bu\) and \(\bv\) are linearly dependent.
Proof. If either \(\bu = \bzero\) or \(\bv = \bzero\) then the equality holds. So, suppose that neither of them are zero vectors. In particular \(\bv \neq \bzero\) means \(\| \bv \| > 0\).
Define
\[ \bw = \frac{\langle \bu, \bv \rangle}{\| \bv \|^2} \bv. \]
Then,
\[ \langle \bw, \bu - \bw \rangle = \langle \bw, \bu \rangle - \langle \bw, \bw \rangle = \frac{\langle \bu, \bv \rangle \langle \bv, \bu \rangle}{\| \bv \|^2} - \frac{| \langle \bu, \bv \rangle |^2}{\| \bv \|^4} \langle \bv, \bv \rangle = \frac{| \langle \bu, \bv \rangle |^2}{\| \bv \|^2} - \frac{| \langle \bu, \bv \rangle |^2}{\| \bv \|^2} = 0. \]
Thus, \(\bw \perp \bu - \bw\). Therefore, by Pythagorean theorem,
\[ \| \bu \|^2 = \| \bw \|^2 + \| \bu - \bw \|^2 \geq \| \bw \|^2 = \frac{| \langle \bu, \bv \rangle |^2}{\| \bv \|^2}. \]
Multiplying on both sides by \(\| \bv \|^2\), we obtain:
\[ \| \bu \|^2 \| \bv \|^2 \geq | \langle \bu, \bv \rangle |^2. \]
Taking square roots on both sides,
\[ | \langle \bu, \bv \rangle | \leq \| \bu \| \| \bv \|. \]
In the derivation above, the equality holds if and only if
\[ \| \bu - \bw \| = 0 \iff \bu = \bw = \frac{\langle \bu, \bv \rangle}{\| \bv \|^2} \bv, \]
which means that \(\bu\) and \(\bv\) are linearly dependent.
Conversely, if \(\bu\) and \(\bv\) are linearly dependent, then \(\bu = \alpha \bv\) for some \(\alpha \in \FF\), and
\[ \bw = \frac{\langle \alpha \bv, \bv \rangle}{\| \bv \|^2} \bv = \alpha \frac{\langle \bv, \bv \rangle}{\| \bv \|^2} \bv = \alpha \bv = \bu, \]
giving us \(\bu - \bw = \bzero\). Hence, the equality holds.
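A quick numerical illustration (standard inner product on \(\RR^n\) assumed) of the inequality and of the equality case for linearly dependent vectors:

```python
# Illustration of the Cauchy-Schwarz inequality for the standard inner product on R^n.
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal(6)
v = rng.standard_normal(6)
print(abs(np.dot(u, v)) <= np.linalg.norm(u) * np.linalg.norm(v))              # True

w = -2.5 * v   # linearly dependent on v: equality is attained
print(np.isclose(abs(np.dot(w, v)), np.linalg.norm(w) * np.linalg.norm(v)))   # True
```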
(Inner product induced norm justification)
The function \(\| \cdot \| : \VV \to \RR\) induced by the inner product \(\langle \cdot, \cdot \rangle : \VV \times \VV \to \FF\) as defined in Definition 4.76 is indeed a norm.
Proof. We need to verify that \(\| \cdot \|\) so defined is indeed a norm. We have already shown that it is positive definite and positive homogeneous. We now show the triangle inequality. We will take help of the Cauchy-Schwarz inequality shown above:
\[\begin{split} \begin{aligned} \| \bu + \bv \|^2 &= \langle \bu + \bv, \bu + \bv \rangle = \| \bu \|^2 + \langle \bu, \bv \rangle + \langle \bv, \bu \rangle + \| \bv \|^2\\ &= \| \bu \|^2 + 2 \Re \langle \bu, \bv \rangle + \| \bv \|^2\\ &\leq \| \bu \|^2 + 2 | \langle \bu, \bv \rangle | + \| \bv \|^2\\ &\leq \| \bu \|^2 + 2 \| \bu \| \| \bv \| + \| \bv \|^2 = \left ( \| \bu \| + \| \bv \| \right )^2. \end{aligned} \end{split}\]
Taking square root on both sides, we obtain:
\[ \| \bu + \bv \| \leq \| \bu \| + \| \bv \|. \]
Thus, \(\| \cdot \|\) is indeed a norm.
We recap the sequence of results to emphasize the logical flow:
We started with just the definition of \(\| \cdot \|\) in Definition 4.76.
We proved positive definiteness from the definition itself.
We proved positive homogeneity also from the definition itself.
We proved Pythagoras theorem utilizing previously established results for inner products.
We proved the Cauchy-Schwarz inequality using positive definiteness, positive homogeneity and the Pythagoras theorem.
We proved the triangle inequality using the Cauchy-Schwarz inequality.
(Inner product space to metric space)
Every inner product space is a normed space. Hence it is also a metric space.
Proof. An inner product induces a norm which makes the vector space a normed space. A norm induces a metric which makes the vector space a metric space.
4.5.6. Hilbert Spaces#
(Hilbert space)
An inner product space \(\VV\) that is complete with respect to the metric induced by the norm induced by its inner product is called a Hilbert space.
In other words, \(\VV\) is a Hilbert space if every Cauchy sequence of \(\VV\) converges in \(\VV\).
4.5.7. Orthonormality#
(Set of orthonormal vectors)
A set of non-zero vectors \(\{\be_1, \dots, \be_p\}\) is called orthonormal if
\[ \langle \be_i, \be_j \rangle = 0 \text{ whenever } i \neq j \quad \text{ and } \quad \langle \be_i, \be_i \rangle = 1 \quad \forall 1 \leq i, j \leq p; \]
i.e., \(\langle \be_i, \be_j \rangle = \delta(i, j)\).
In other words, the vectors are unit norm (\(\| \be_i \| = 1\)) and are pairwise orthogonal (\(\be_i \perp \be_j\) whenever \(i \neq j\)).
Since orthonormal vectors are orthogonal, hence they are linearly independent.
(Orthonormal basis)
A set of orthonormal vectors forms an orthonormal basis for its span.
(Expansion of a vector in an orthonormal basis)
Let \(\{\be_1, \dots, \be_n\}\) be an orthonormal basis for \(\VV\). Then, any \(\bv \in \VV\) can be written as:
\[ \bv = \sum_{i=1}^n \langle \bv, \be_i \rangle \be_i. \]
Proof. Since \(\{\be_1, \dots, \be_n\}\) forms a basis for \(\VV\), hence every \(\bv \in \VV\) can be written as:
\[ \bv = \sum_{i=1}^n \alpha_i \be_i \]
where \(\alpha_1, \dots, \alpha_n \in \FF\).
Taking inner product with \(\be_j\) on both sides, we get:
\[ \langle \bv, \be_j \rangle = \left \langle \sum_{i=1}^n \alpha_i \be_i, \be_j \right \rangle = \sum_{i=1}^n \alpha_i \langle \be_i, \be_j \rangle. \]
Since \(\langle \be_i, \be_j \rangle = \delta(i, j)\), hence the above reduces to:
\[ \langle \bv, \be_j \rangle = \alpha_j. \]
(Norm of a vector in an orthonormal basis)
Let \(\{\be_1, \dots, \be_n\}\) be an orthonormal basis for \(\VV\). For any \(\bv \in \VV\), let its expansion in the orthonormal basis be:
\[ \bv = \sum_{i=1}^n \alpha_i \be_i. \]
Then,
\[ \| \bv \|^2 = \sum_{i=1}^n | \alpha_i |^2. \]
Proof. Expanding the expression for norm squared:
\[ \| \bv \|^2 = \langle \bv, \bv \rangle = \left \langle \sum_{i=1}^n \alpha_i \be_i, \sum_{j=1}^n \alpha_j \be_j \right \rangle = \sum_{i=1}^n \sum_{j=1}^n \alpha_i \overline{\alpha_j} \langle \be_i, \be_j \rangle = \sum_{i=1}^n | \alpha_i |^2. \]
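A NumPy sketch (illustrative; the orthonormal basis is obtained from a QR factorization, an assumption beyond the text) of the expansion coefficients and of this norm identity:

```python
# Sketch: coefficients and norm in an orthonormal basis (basis from QR, an assumption).
import numpy as np

rng = np.random.default_rng(5)
n = 4
E, _ = np.linalg.qr(rng.standard_normal((n, n)))    # columns form an orthonormal basis of R^n
v = rng.standard_normal(n)

alpha = E.T @ v                                     # alpha_i = <v, e_i>
print(np.allclose(E @ alpha, v))                    # v = sum_i alpha_i e_i
print(np.isclose(np.sum(alpha**2), np.dot(v, v)))   # ||v||^2 = sum_i |alpha_i|^2
```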
Here are some interesting questions:
Can a basis in an inner product space be converted into an orthonormal basis?
Does a finite dimensional inner product space have an orthonormal basis?
Does every finite dimensional subspace of an inner product space have an orthonormal basis?
The answer to these questions is yes. We provide a constructive answer by the Gram-Schmidt algorithm described in the next section.
4.5.8. The Gram-Schmidt Algorithm#
The Gram-Schmidt algorithm (described below) constructs an orthonormal basis for the span of an arbitrary set of linearly independent vectors.
(The Gram-Schmidt algorithm)
Inputs \(\bv_1, \bv_2, \dots, \bv_n\), a set of linearly independent vectors
Outputs \(\be_1, \be_2, \dots, \be_n\), a set of orthonormal vectors
\(\bw_1 = \bv_1\).
\(\be_1 = \frac{\bw_1}{\| \bw_1 \|}\).
For \(j=2, \dots, n\):
\(\bw_j = \bv_j - \sum_{i=1}^{j-1} \langle \bv_j, \be_i \rangle \be_i\).
\(\be_j = \frac{\bw_j}{\| \bw_j \|}\).
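A minimal NumPy transcription of the steps listed above (a sketch for real vectors; column `j` of `V` plays the role of \(\bv_j\)):

```python
# A sketch of the Gram-Schmidt steps above for real vectors (columns of V are v_1..v_n).
import numpy as np

def gram_schmidt(V):
    """Orthonormalize the columns of V (assumed linearly independent)."""
    n = V.shape[1]
    E = np.zeros_like(V, dtype=float)
    for j in range(n):
        w = V[:, j].copy()
        for i in range(j):
            w -= np.dot(V[:, j], E[:, i]) * E[:, i]   # w_j = v_j - sum_i <v_j, e_i> e_i
        E[:, j] = w / np.linalg.norm(w)               # e_j = w_j / ||w_j||
    return E

rng = np.random.default_rng(6)
V = rng.standard_normal((5, 3))
E = gram_schmidt(V)
print(np.allclose(E.T @ E, np.eye(3)))   # the columns of E are orthonormal
```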
(Justification for Gram-Schmidt algorithm)
Let \(\bv_1, \bv_2, \dots, \bv_n\) be linearly independent. The Gram-Schmidt algorithm described above generates a set of orthonormal vectors.
Moreover, for each \(j = 1, \dots, n\), the set \(\be_1, \dots, \be_j\) is an orthonormal basis for the subspace: \(\span \{\bv_1, \dots, \bv_j \}\).
Proof. We prove this by mathematical induction. Consider the base case for \(j=1\).
\(\bw_1 = \bv_1\).
\(\be_1 = \frac{\bw_1}{\| \bw_1 \|} = \frac{\bv_1}{\| \bv_1 \|}\).
Thus, \(\| \be_1 \| = 1\).
\(\span \{ \be_1 \} = \span \{ \bv_1 \}\) because \(\be_1\) is a nonzero scalar multiple of \(\bv_1\).
Now, assume that the set \(\be_1, \dots, \be_{j-1}\) is an orthonormal basis for \(\span \{\bv_1, \dots, \bv_{j-1} \}\).
Thus, \(\span \{\be_1, \dots, \be_{j-1} \} = \span \{\bv_1, \dots, \bv_{j-1} \}\).
Since \(\bv_j\) is linearly independent from \(\bv_1, \dots, \bv_{j-1}\), hence \(\bv_j \notin \span \{\bv_1, \dots, \bv_{j-1} \}\).
Thus, \(\bv_j \notin \span \{\be_1, \dots, \be_{j-1} \}\).
Hence, \(\bw_j = \bv_j - \sum_{i=1}^{j-1} \langle \bv_j, \be_i \rangle \be_i \neq \bzero\). If it was \(\bzero\), then \(\bv_j\) would be linearly dependent on \(\be_1, \dots, \be_{j-1}\).
Thus, \(\| \bw_j \| > 0\).
Thus, \(\be_j = \frac{\bw_j}{\| \bw_j \|}\) is well-defined.
Also, \(\| \be_j \| = 1\) by construction, thus, \(\be_j\) is unit-norm.
Note that \(\bw_j\) is orthogonal to \(\be_1, \dots, \be_{j-1}\). For any \(1 \leq k < j\), we have:
\[\begin{split} \begin{aligned} \langle \bw_j, \be_k \rangle &= \left \langle \bv_j - \sum_{i=1}^{j-1} \langle \bv_j, \be_i \rangle \be_i, \be_k \right \rangle\\ &= \langle \bv_j, \be_k \rangle - \sum_{i=1}^{j-1} \langle \bv_j, \be_i \rangle \langle \be_i, \be_k \rangle\\ &= \langle \bv_j, \be_k \rangle - \langle \bv_j, \be_k \rangle \langle \be_k, \be_k \rangle\\ &= \langle \bv_j, \be_k \rangle - \langle \bv_j, \be_k \rangle = 0 \end{aligned} \end{split}\]
since \(\be_1, \dots, \be_{j-1}\) are orthonormal.
Thus, for any \(1 \leq k < j\):
\[ \langle \be_j, \be_k \rangle = \left \langle \frac{\bw_j}{\| \bw_j \|}, \be_k \right \rangle = \frac{\langle \bw_j, \be_k \rangle}{\| \bw_j \|} = 0. \]
Thus, \(\be_j\) is orthogonal to \(\be_1, \dots, \be_{j-1}\).
Since, all of them are unit norm, hence, \(\be_1, \dots, \be_{j-1}, \be_j\) are indeed orthonormal.
We also need to show that \(\span \{\be_1, \dots, \be_{j} \} = \span \{\bv_1, \dots, \bv_{j} \}\).
Note that \(\bw_j \in \span \{\bv_j, \be_1, \dots, \be_{j-1} \} = \span \{\bv_1, \dots, \bv_j \}\) since \(\span \{\be_1, \dots, \be_{j-1} \} = \span \{\bv_1, \dots, \bv_{j-1} \}\) by inductive hypothesis.
Thus, \(\be_j \in \span \{\bv_1, \dots, \bv_j \}\) since \(\be_j\) is just scaled \(\bw_j\).
Thus, \(\span \{\be_1, \dots, \be_{j} \} \subseteq \span \{\bv_1, \dots, \bv_{j} \}\).
For the converse, by definition \(\bv_j = \bw_j + \sum_{i=1}^{j-1} \langle \bv_j, \be_i \rangle \be_i\).
Hence, \(\bv_j \in \span \{\bw_j, \be_1, \dots, \be_{j-1}\} = \span \{\be_1, \dots, \be_{j} \}\).
Thus, \(\span \{\bv_1, \dots, \bv_{j} \} \subseteq \span \{\be_1, \dots, \be_{j} \}\).
Thus, \(\span \{\be_1, \dots, \be_{j} \} = \span \{\bv_1, \dots, \bv_{j} \}\) must be true.
(Existence of orthonormal basis)
Every finite dimensional inner product space has an orthonormal basis.
Proof. This is a simple application of the Gram-Schmidt algorithm.
Every finite dimensional vector space has a finite basis.
Every finite basis can be turned into an orthonormal basis by the Gram-Schmidt algorithm.
Thus, we have an orthonormal basis.
Every finite dimensional subspace of an inner product space has an orthonormal basis.
4.5.9. Orthogonal Complements#
(Orthogonal complement)
Let \(S\) be a subset of an inner product space \(\VV\). The orthogonal complement of \(S\) is the set of all vectors in \(\VV\) that are orthogonal to every element of \(S\). It is denoted by \(S^{\perp}\).
(Orthogonal complement of a vector)
Let \(\ba \in \VV\). The orthogonal complement of \(\ba\) is the set of all vectors in \(\VV\) that are orthogonal to \(\ba\). It is denoted by \(\ba^{\perp}\).
\(\ba^{\perp}\) is just a notational convenience.
(Orthogonal complement is a linear subspace)
If \(\VV\) is an inner product space and \(S \subseteq \VV\), then \(S^{\perp}\) is a subspace.
Proof. To verify that \(S^{\perp}\) is a subspace, we need to check the following.
It contains the zero vector.
It is closed under vector addition.
It is closed under scalar multiplication.
We proceed as follows:
\(\langle \bzero , \bs \rangle = 0\) holds for any \(\bs \in S\). Thus, \(\bzero \in S^{\perp}\).
Let \(\bu, \bv \in S^{\perp}\). Then,
\(\langle \bu, \bs \rangle = 0\) and \(\langle \bv, \bs \rangle = 0\) for every \(\bs \in S\).
Thus, \(\langle \bu + \bv, \bs \rangle = \langle \bu, \bs \rangle + \langle \bv, \bs \rangle = 0 + 0 = 0\) for every \(\bs \in S\).
Thus, \(\bu + \bv \in S^{\perp}\).
Similarly, if \(\bv \in S^{\perp}\) and \(\alpha \in \FF\), then \(\langle \alpha \bv, \bs \rangle = \alpha \langle \bv, \bs \rangle = 0\) for every \(\bs \in S\). Thus, \(\alpha \bv \in S^{\perp}\).
Thus, \(S^{\perp}\) is a subspace of \(\VV\).
The orthogonal complement of the inner product space \(\VV\) is its trivial subspace containing just the zero vector.
(Orthogonal complement and basis)
If \(S\) is a subspace of \(\VV\), then to show that some vector \(\bu \in S^{\perp}\), it is sufficient to show that \(\bu\) is orthogonal to all the vectors in some basis of \(S\).
Specifically, if \(S\) is a finite dimensional subspace of \(\VV\) and \(\BBB = \{ \bv_1, \dots, \bv_m \}\) is a basis for \(S\), then
\[ S^{\perp} = \{ \bu \in \VV \ST \langle \bu, \bv_i \rangle = 0 \text{ for } i=1,\dots,m \}. \]
Proof. Let \(\BBB\) be a basis for \(S\) (finite or infinite).
Then, for any \(\bs \in S\):
\[ \bs = \sum_{p=1}^k \alpha_p \be_p \]
where \(\alpha_p \in \FF\) and \(\be_p \in \BBB\).
Now, if \(\bu\) is orthogonal to every vector in \(\BBB\), then
\[ \langle \bs, \bu \rangle = \left \langle \sum_{p=1}^k \alpha_p \be_p, \bu \right \rangle = \sum_{p=1}^k \alpha_p \langle \be_p, \bu \rangle = 0. \]
Thus, \(\bu \perp \bs\). Since \(\bs\) was arbitrarily chosen from \(S\), hence \(\bu \in S^{\perp}\).
Now, assume \(S\) to be finite dimensional and \(\BBB = \{ \bv_1, \dots, \bv_m \}\) to be a basis of \(S\). Let
\[ T = \{ \bu \in \VV \ST \langle \bu, \bv_i \rangle = 0 \text{ for } i=1,\dots,m \}. \]
We first show that \(S^{\perp} \subseteq T\).
Let \(\bv \in S^{\perp}\).
Then, \(\bv \perp \bs\) for every \(\bs \in S\).
In particular, \(\bv \perp \bv_i\) for \(i=1,\dots,m\) since \(\BBB \subset S\).
Thus, \(\bv \in T\).
Thus, \(S^{\perp} \subseteq T\).
We next show that \(T \subseteq S^{\perp}\).
Let \(\bx \in T\).
Then, \(\bx \perp \bv_i\) for \(i=1,\dots, m\).
But then, for any \(\bs \in S\)
\[ \langle \bs, \bx \rangle = \left \langle \sum_{i=1}^m t_i \bv_i, \bx \right \rangle = \sum_{i=1}^m t_i \langle \bv_i, \bx \rangle = 0 \]
since \(\bs = \sum_{i=1}^m t_i \bv_i\) is a linear combination of \(\BBB\).
Thus, \(\bx \perp \bs\) for every \(\bs \in S\).
Thus, \(\bx \in S^{\perp}\).
Thus, \(T \subseteq S^{\perp}\).
Combining:
\[ S^{\perp} = T. \]
(Orthogonal decomposition)
Let \(\VV\) be an inner product space and \(S\) be a finite dimensional subspace of \(\VV\). Then, every \(\bv \in \VV\) can be written uniquely in the form:
\[ \bv = \bv_{\parallel} + \bv_{\perp} \]
where \(\bv_{\parallel} \in S\) and \(\bv_{\perp} \in S^{\perp}\).
Proof. Let \(\be_1, \dots, \be_p\) be an orthonormal basis for \(S\).
Define:
\[ \bv_{\parallel} = \sum_{i=1}^p \langle \bv, \be_i \rangle \be_i. \]
And
\[ \bv_{\perp} = \bv - \bv_{\parallel}. \]
By construction, \(\bv_{\parallel} \in \span \{ \be_1, \dots, \be_p\} = S\).
Now, for every \(1 \leq i \leq p\):
\[ \langle \bv_{\perp}, \be_i \rangle = \langle \bv, \be_i \rangle - \langle \bv_{\parallel}, \be_i \rangle = \langle \bv, \be_i \rangle - \langle \bv, \be_i \rangle = 0. \]
Thus, \(\bv_{\perp} \in S^{\perp}\).
We have shown the existence of the decomposition of a vector \(\bv\) into components which belong to \(S\) and \(S^{\perp}\). Next, we need to show that the decomposition is unique.
For contradiction, assume there was another decomposition:
\[ \bv = \bu_{\parallel} + \bu_{\perp} \]
such that \(\bu_{\parallel} \in S\) and \(\bu_{\perp} \in S^{\perp}\).
Then,
\[ \bv_{\parallel} + \bv_{\perp} = \bu_{\parallel} + \bu_{\perp} \]
gives us:
\[ \bw = \bv_{\parallel} - \bu_{\parallel} = \bu_{\perp} - \bv_{\perp}. \]
Thus, \(\bw \in S\) as well as \(\bw \in S^{\perp}\). But then, \(\bw \perp \bw\) giving us:
\[ \langle \bw, \bw \rangle = \| \bw \|^2 = 0. \]
This is possible only if \(\bv_{\parallel} - \bu_{\parallel} = \bzero\), thus, \(\bv_{\parallel} = \bu_{\parallel}\). Consequently, \(\bu_{\perp} = \bv_{\perp}\) too.
Thus,
\[ \bv = \bv_{\parallel} + \bv_{\perp} \]
is a unique decomposition.
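A NumPy sketch of this decomposition (illustrative; the subspace \(S\) is spanned by orthonormal columns obtained via QR, an assumption beyond the text):

```python
# Sketch of the orthogonal decomposition v = v_par + v_perp (orthonormal basis via QR assumed).
import numpy as np

rng = np.random.default_rng(7)
E, _ = np.linalg.qr(rng.standard_normal((6, 3)))   # orthonormal basis of a 3-dim subspace S
v = rng.standard_normal(6)

v_par = E @ (E.T @ v)                  # sum_i <v, e_i> e_i, lies in S
v_perp = v - v_par
print(np.allclose(v_par + v_perp, v))  # the decomposition reconstructs v
print(np.allclose(E.T @ v_perp, 0))    # v_perp is orthogonal to every basis vector of S
```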
(Intersection between a subspace and its complement)
If \(S\) is a finite dimensional subspace of an inner product space \(\VV\), then
\[ S \cap S^{\perp} = \{ \bzero \}. \]
In other words, the only vector common between \(S\) and its orthogonal complement is the zero vector.
(Vector space as direct sum)
If \(S\) is a finite dimensional subspace of an inner product space \(\VV\), then
\[ \VV = S \oplus S^{\perp}. \]
In other words, \(\VV\) is a direct sum of \(S\) and its orthogonal complement.
Proof. From Corollary 4.14, the intersection between \(S\) and \(S^{\perp}\) is the zero vector. Thus, by Definition 4.30, the direct sum between the two spaces \(S \oplus S^{\perp}\) is well defined.
By Theorem 4.85, every vector \(\bv \in \VV\) can be uniquely decomposed as
\[ \bv = \bv_{\parallel} + \bv_{\perp} \]
where \(\bv_{\parallel} \in S\) and \(\bv_{\perp} \in S^{\perp}\).
Thus, \(\VV \subseteq S \oplus S^{\perp}\).
However, since both \(S\) and \(S^{\perp}\) are subspaces of \(\VV\), hence
\[ S \oplus S^{\perp} \subseteq \VV. \]
Thus, \(\VV = S \oplus S^{\perp}\).
(Dimension of vector space as direct sum)
Let \(\VV\) be a finite dimensional inner product space. If \(S\) is a subspace of \(\VV\), then
\[ \dim \VV = \dim S + \dim S^{\perp}. \]
Proof. Since \(\VV\) is finite dimensional, hence both \(S\) and \(S^{\perp}\) are finite dimensional subspaces of \(\VV\).
By Theorem 4.86,
\[ \VV = S \oplus S^{\perp}. \]
Then, due to Theorem 4.20,
\[ \dim \VV = \dim S + \dim S^{\perp}. \]
(Orthogonal complement of orthogonal complement)
Let \(\VV\) be a finite dimensional inner product space. Let \(S\) be a subspace of \(\VV\) and let \(S^{\perp}\) be its orthogonal complement. Then
\[ \left ( S^{\perp} \right )^{\perp} = S. \]
In other words, in a finite dimensional space, the orthogonal complement of orthogonal complement is the original subspace itself.
Note that this result is valid only for finite dimensional spaces since in that case both \(S\) and \(S^{\perp}\) are finite dimensional.
Proof. Since \(\VV\) is finite dimensional, hence both \(S\) and \(S^{\perp}\) are finite dimensional.
We shall first show that \(S \subseteq \left (S^{\perp} \right )^{\perp}\).
Let \(\bs \in S\).
Then, by definition, \(\bs \perp \bu \Forall \bu \in S^{\perp}\).
Thus, \(\bs \in \left (S^{\perp} \right )^{\perp}\).
Thus, \(S \subseteq \left (S^{\perp} \right )^{\perp}\).
We now show that \(\left (S^{\perp} \right )^{\perp} \subseteq S\).
Let \(\bu \in \left (S^{\perp} \right )^{\perp}\).
By Theorem 4.86, \(\VV = S \oplus S^{\perp}\) since \(S\) is a finite dimensional subspace of \(\VV\).
Thus, \(\bu = \bv + \bw\) such that \(\bv \in S\) and \(\bw \in S^{\perp}\).
Since \(\bu - \bv = \bw\), hence \(\bu - \bv \in S^{\perp}\).
We have already shown above that \(S \subseteq \left (S^{\perp} \right )^{\perp}\). Hence \(\bv \in \left (S^{\perp} \right )^{\perp}\).
Thus, \(\bu - \bv = \bw \in \left (S^{\perp} \right )^{\perp}\) since both \(\bu\) and \(\bv\) belong to \(\left (S^{\perp} \right )^{\perp}\).
Thus, \(\bu - \bv \in \left (S^{\perp} \right )^{\perp} \cap S^{\perp}\) as \(\bw \in S^{\perp}\) by orthogonal decomposition above.
But, by Corollary 4.14 \(S^{\perp} \cap \left (S^{\perp} \right )^{\perp} = \{ \bzero \}\) since \(\left (S^{\perp} \right )^{\perp}\) is the orthogonal complement of \(S^{\perp}\) and \(S^{\perp}\) is finite dimensional.
Thus, \(\bu - \bv = \bzero\).
Thus, \(\bu = \bv\).
Thus, \(\bu \in S\).
Since \(\bu\) was an arbitrary element of \(\left (S^{\perp} \right )^{\perp}\), hence \(\left (S^{\perp} \right )^{\perp} \subseteq S\).
Combining the two:
\[ \left ( S^{\perp} \right )^{\perp} = S. \]
(n-1 dimensional subspaces)
Let \(\VV\) be a finite dimensional inner product space with \(\dim \VV = n\). Let \(S\) be an \(n-1\) dimensional subspace of \(\VV\). Then, there exists a nonzero vector \(\bb \in \VV\) such that
\[ S = \{ \bx \in \VV \ST \bx \perp \bb \}. \]
In other words, the \(n-1\) dimensional subspaces are the sets of the form \(\{ \bx \ST \bx \perp \bb \}\) where \(\bb \neq \bzero\).
Proof. Let \(S\) be \(n-1\) dimensional. Then, from Theorem 4.87
\[ \dim S + \dim S^{\perp} = \dim \VV = n. \]
This gives us \(\dim S^{\perp} = n - (n-1) = 1\).
Since \(S^{\perp}\) is one dimensional, we can choose a non-zero vector \(\bb \in S^{\perp}\) as its basis. Since \(\VV\) is finite dimensional, hence
\[ S = \left ( S^{\perp} \right )^{\perp}. \]
Thus, \(S\) consists of vectors which are orthogonal to a basis of \(S^{\perp}\). Thus,
\[ S = \{ \bx \in \VV \ST \bx \perp \bb \}. \]
4.5.10. Orthogonal Projection#
Recall that a projection operator \(P : \VV \to \VV\) is an operator which satisfies \(P^2 = P\).
The range of \(P\) is given by
\[ \Range(P) = \{ P \bv \ST \bv \in \VV \}. \]
The null space of \(P\) is given by
\[ \NullSpace(P) = \{ \bv \in \VV \ST P \bv = \bzero \}. \]
(Orthogonal projection operator)
A projection operator \(P : \VV \to \VV\) over an inner product space \(\VV\) is called an orthogonal projection operator if its range \(\Range(P)\) and null space \(\NullSpace(P)\) as defined above are orthogonal to each other; i.e.
\[ \br \perp \bn \Forall \br \in \Range(P), \Forall \bn \in \NullSpace(P). \]
(Orthogonal projection operator for a subspace)
Let \(S\) be a finite dimensional subspace of \(\VV\). Let \(\{\be_1, \dots, \be_p\}\) be an orthonormal basis of \(S\). Let the operator \(P_S: \VV \to \VV\) be defined as:
\[ P_S \bv = \bv_{\parallel} \]
where
\[ \bv = \bv_{\parallel} + \bv_{\perp} \]
is the unique orthogonal decomposition of \(\bv\) w.r.t. the subspace \(S\) as defined in Theorem 4.85. Then,
\(P_S \bv = \sum_{i=1}^p \langle \bv, \be_i \rangle \be_i\).
For any \(\bv \in \VV\), \(\bv - P_S \bv \perp S\).
\(P_S\) is a linear map.
\(P_S\) is the identity map when restricted to \(S\); i.e., \(P_S \bs = \bs \Forall \bs \in S\).
\(\Range(P_S) = S\).
\(\NullSpace(P_S) = S^{\perp}\).
\(P_S^2 = P_S\).
For any \(\bv \in \VV\), \(\| P_S \bv \| \leq \| \bv \|\).
For any \(\bv \in \VV\) and \(\bs \in S\):
\[ \| \bv - P_S \bv \| \leq \| \bv - \bs \| \]
with equality if and only if \(\bs = P_S \bv\).
\(P_S\) is indeed an orthogonal projection onto \(S\).
Proof. For the sake of brevity, we abbreviate \(P = P_S\).
Following (4.3), indeed:
\[ P_S \bv = \bv_{\parallel} = \sum_{i=1}^p \langle \bv, \be_i \rangle \be_i. \]
For any \(\bv \in \VV\) (due to Theorem 4.85):
\[ \bv - P \bv = \bv - \bv_{\parallel} = \bv_{\perp}. \]
Since \(\bv_{\perp} \in S^{\perp}\) hence \(\bv - P \bv \perp S\).
[Linear map]
Let \(\bu, \bv \in \VV\).
Let \(\bu = \bu_{\parallel} + \bu_{\perp}\) and \(\bv = \bv_{\parallel} + \bv_{\perp}\).
Consider \(\bu + \bv = (\bu_{\parallel} + \bv_{\parallel}) + (\bu_{\perp} + \bv_{\perp})\).
Then, \(\bu_{\parallel} + \bv_{\parallel} \in S\) and \(\bu_{\perp} + \bv_{\perp} \in S^{\perp}\).
Since, the orthogonal decomposition is unique, hence \(P (\bu + \bv) = \bu_{\parallel} + \bv_{\parallel} = P \bu + P \bv\).
Similarly, for \(\alpha \in \FF\), \(\alpha \bu = \alpha \bu_{\parallel} + \alpha\bu_{\perp}\).
With \(\alpha \bu_{\parallel} \in S\) and \(\alpha\bu_{\perp} \in S^{\perp}\), \(P (\alpha \bu) = \alpha \bu_{\parallel} = \alpha P \bu\).
Thus, \(P\) is a linear map.
For any \(\bs \in S\), we can write it as \(\bs = \bs + \bzero\). With \(\bs \in S\) and \(\bzero \in S^{\perp}\), we have: \(P \bs = \bs\).
[Range]
Since \(P\) maps \(\bv\) to a component in \(S\), hence \(\Range(P) \subseteq S\).
Since for every \(\bs \in S\), there is \(\bv \in S\) such that \(P \bv = \bs\) (specifically \(\bv = \bs\)), hence \(S \subseteq \Range(P)\).
Combining \(\Range(P) = S\).
[Null space]
Let \(\bv \in \NullSpace(P)\). Write \(\bv = \bv_{\parallel} + \bv_{\perp}\).
Then, \(P \bv = \bv_{\parallel} = \bzero\) as \(\bv\) is in the null space of \(P\).
Hence, \(\bv = \bv_{\perp} \in S^{\perp}\).
Thus, \(\NullSpace(P) \subseteq S^{\perp}\).
Now, let \(\bv \in S^{\perp}\).
We can write \(\bv\) as \(\bv = \bzero + \bv\) where \(\bzero \in S\) and \(\bv \in S^{\perp}\).
Thus, \(P \bv = \bzero\).
Thus, \(S^{\perp} \subseteq \NullSpace(P)\).
Combining, \(S^{\perp} = \NullSpace(P)\).
[\(P^2 = P\)]
For any \(\bv \in \VV\), we have, \(P \bv = \bv_{\parallel}\).
Since \(\bv_{\parallel} \in S\), hence \(P \bv_{\parallel} = \bv_{\parallel}\).
Thus, \(P^2 \bv = P \bv_{\parallel} = \bv_{\parallel} = P \bv\).
Since \(\bv\) was arbitrary, hence, \(P^2 = P\).
[\(\| P \bv \| \leq \| \bv \|\)]
We have \(\bv = \bv_{\parallel} + \bv_{\perp} = P \bv + \bv_{\perp}\).
By Pythagoras theorem: \( \| \bv \|^2 = \| P \bv \|^2 + \| \bv_{\perp}\|^2\).
Thus, \( \| \bv \|^2 \geq \| P \bv \|^2\).
Taking square root on both sides: \(\| P \bv \| \leq \| \bv \|\).
[\(\| \bv - P \bv \| \leq \| \bv - \bs \|\)]
Let \(\bv \in \VV\) and \(\bs \in S\).
Note that \(P \bv \in S\) hence \(P \bv - \bs \in S\).
By definition \(\bv - P \bv \in S^{\perp}\).
Thus, \(\bv - P \bv \perp P \bv - \bs\).
We have: \(\bv - \bs = (\bv - P \bv) + (P \bv - \bs)\).
Applying Pythagoras theorem:
\[ \| \bv - \bs \|^2 = \| \bv - P \bv\|^2 + \| P \bv - \bs \|^2 \geq \| \bv - P \bv\|^2. \]
Taking square root on both sides:
\[ \| \bv - P \bv \| \leq \| \bv - \bs \|. \]
Equality holds if and only if \(\| P \bv - \bs \|^2 = 0\) if and only if \(P \bv = \bs\).
In order to show that \(P\) is an orthogonal projection, we need to show that:
\(P\) is a projection operator.
\(\br \perp \bn \Forall \br \in \Range(P) , \Forall \bn \in \NullSpace(P)\).
We have shown that:
\(P^2 = P\). Hence \(P\) is a projection operator.
\(\Range(P) = S\) and \(\NullSpace(P) = S^{\perp}\).
By definition, for any \(\br \in S\) and \(\bn \in S^{\perp}\), \(\br \perp \bn\).
Thus, \(P\) is an orthogonal projection operator.
(Orthogonal projectors are self-adjoint)
A projection operator is orthogonal if and only if it is self-adjoint.
(Orthogonal projection on a line)
Consider a unit norm vector \(\bu \in \RR^N\).
Thus \(\bu^T \bu = 1\).
Consider the operator
\[ P_{\bu} = \bu \bu^T. \]
Now
\[ P_{\bu}^2 = (\bu \bu^T) (\bu \bu^T) = \bu (\bu^T \bu) \bu^T = \bu \bu^T = P_{\bu}. \]
Thus \(P_{\bu}\) is a projection operator.
Now,
\[ P_{\bu}^T = (\bu \bu^T)^T = \bu \bu^T = P_{\bu}. \]
Thus \(P_{\bu}\) is self-adjoint. Hence, \(P_{\bu}\) is an orthogonal projection operator.
Also,
\[ P_{\bu} \bu = (\bu \bu^T) \bu = \bu (\bu^T \bu) = \bu. \]
Thus \(P_{\bu}\) leaves \(\bu\) intact; i.e., the projection of \(\bu\) onto \(\bu\) is \(\bu\) itself.
Let \(\bv \in \bu^{\perp}\) i.e. \(\langle \bu, \bv \rangle = 0\).
Then,
\[ P_{\bu} \bv = (\bu \bu^T) \bv = \bu (\bu^T \bv) = \langle \bv, \bu \rangle \bu = \bzero. \]
Thus \(P_{\bu}\) annihilates all vectors orthogonal to \(\bu\).
Any vector \(\bx \in \RR^N\) can be broken down into two components
\[ \bx = \bx_{\parallel} + \bx_{\perp} \]
such that \(\langle \bu , \bx_{\perp} \rangle =0\) and \(\bx_{\parallel}\) is collinear with \(\bu\).
Then,
\[ P_{\bu} \bx = P_{\bu} \bx_{\parallel} + P_{\bu} \bx_{\perp} = \bx_{\parallel}. \]
Thus \(P_{\bu}\) retains the projection of \(\bx\) on \(\bu\) given by \(\bx_{\parallel}\).
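A NumPy illustration (a sketch, assuming the rank-one form \(P_{\bu} = \bu \bu^T\) discussed above) of these properties:

```python
# Sketch of the rank-one projector P_u = u u^T for a unit-norm u.
import numpy as np

rng = np.random.default_rng(8)
u = rng.standard_normal(4)
u = u / np.linalg.norm(u)                      # unit norm
P = np.outer(u, u)                             # P_u = u u^T

x = rng.standard_normal(4)
print(np.allclose(P @ P, P))                   # idempotent: P^2 = P
print(np.allclose(P @ u, u))                   # leaves u intact
print(np.isclose(np.dot(u, x - P @ x), 0.0))   # residual is orthogonal to u
```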
(Projections over the column space of a matrix)
Let \(\bA \in \RR^{M \times N}\) with \(N \leq M\) be a matrix given by
\[ \bA = \begin{bmatrix} \ba_1 & \ba_2 & \dots & \ba_N \end{bmatrix} \]
where \(\ba_i \in \RR^M \) are its columns which are linearly independent.
The column space of \(\bA\) is given by
\[ \span \{ \ba_1, \dots, \ba_N \} = \{ \bA \bx \ST \bx \in \RR^N \}. \]
It can be shown that \(\bA^T \bA\) is invertible.
Consider the operator
\[ P_{\bA} = \bA (\bA^T \bA)^{-1} \bA^T. \]
Now,
\[ P_{\bA}^2 = \bA (\bA^T \bA)^{-1} \bA^T \bA (\bA^T \bA)^{-1} \bA^T = \bA (\bA^T \bA)^{-1} \bA^T = P_{\bA}. \]
Thus \(P_{\bA}\) is a projection operator.
Further,
\[ P_{\bA}^T = \left ( \bA (\bA^T \bA)^{-1} \bA^T \right )^T = \bA (\bA^T \bA)^{-1} \bA^T = P_{\bA}. \]
Thus \(P_{\bA}\) is self-adjoint.
Hence \(P_{\bA}\) is an orthogonal projection operator on the column space of \(\bA\).
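A NumPy sketch (illustrative; a random \(\bA\) with almost surely independent columns is assumed) of this projector and its properties:

```python
# Sketch of the column-space projector P_A = A (A^T A)^{-1} A^T for full column rank A.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((6, 3))                # columns almost surely independent
P = A @ np.linalg.inv(A.T @ A) @ A.T

x = rng.standard_normal(6)
print(np.allclose(P @ P, P))                   # projection: P^2 = P
print(np.allclose(P, P.T))                     # self-adjoint
print(np.allclose(P @ A, A))                   # fixes every column of A
print(np.allclose(A.T @ (x - P @ x), 0))       # residual orthogonal to the column space
```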
4.5.11. Parallelogram Identity#
(Parallelogram identity)
\[ \| \bu + \bv \|^2 + \| \bu - \bv \|^2 = 2 \| \bu \|^2 + 2 \| \bv \|^2 \quad \forall \bu, \bv \in \VV. \]
Proof. Expanding:
\[ \| \bu + \bv \|^2 = \langle \bu + \bv, \bu + \bv \rangle = \| \bu \|^2 + \langle \bu, \bv \rangle + \langle \bv, \bu \rangle + \| \bv \|^2. \]
Also:
\[ \| \bu - \bv \|^2 = \langle \bu - \bv, \bu - \bv \rangle = \| \bu \|^2 - \langle \bu, \bv \rangle - \langle \bv, \bu \rangle + \| \bv \|^2. \]
Thus,
\[ \| \bu + \bv \|^2 + \| \bu - \bv \|^2 = 2 \| \bu \|^2 + 2 \| \bv \|^2. \]
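A quick numerical check of the identity (standard inner product on \(\RR^n\) assumed):

```python
# Numerical check of the parallelogram identity for the standard inner product on R^n.
import numpy as np

rng = np.random.default_rng(10)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

nsq = lambda a: np.dot(a, a)   # squared norm
print(np.isclose(nsq(u + v) + nsq(u - v), 2 * nsq(u) + 2 * nsq(v)))   # True
```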
When the inner product is real valued, the following identity is quite useful.
(Parallelogram identity for real inner product)
\[ \langle \bu, \bv \rangle = \frac{1}{4} \left ( \| \bu + \bv \|^2 - \| \bu - \bv \|^2 \right ) \quad \forall \bu, \bv \in \VV. \]
Proof. Expanding:
\[ \| \bu + \bv \|^2 = \langle \bu + \bv, \bu + \bv \rangle = \| \bu \|^2 + \langle \bu, \bv \rangle + \langle \bv, \bu \rangle + \| \bv \|^2. \]
Also,
\[ \| \bu - \bv \|^2 = \langle \bu - \bv, \bu - \bv \rangle = \| \bu \|^2 - \langle \bu, \bv \rangle - \langle \bv, \bu \rangle + \| \bv \|^2. \]
Thus,
\[ \| \bu + \bv \|^2 - \| \bu - \bv \|^2 = 2 \left ( \langle \bu, \bv \rangle + \langle \bv, \bu \rangle \right ) = 4 \langle \bu, \bv \rangle \]
since for real inner products
\[ \langle \bu, \bv \rangle = \langle \bv, \bu \rangle. \]
4.5.12. Polarization identity#
When the inner product is complex valued, the polarization identity is quite useful.
(Polarization identity for complex inner product)
\[ \langle \bu, \bv \rangle = \frac{1}{4} \left ( \| \bu + \bv \|^2 - \| \bu - \bv \|^2 + i \| \bu + i \bv \|^2 - i \| \bu - i \bv \|^2 \right ) \quad \forall \bu, \bv \in \VV. \]
Proof. Expanding
\[ \| \bu + \bv \|^2 = \| \bu \|^2 + \langle \bu, \bv \rangle + \langle \bv, \bu \rangle + \| \bv \|^2. \]
Also,
\[ \| \bu - \bv \|^2 = \| \bu \|^2 - \langle \bu, \bv \rangle - \langle \bv, \bu \rangle + \| \bv \|^2. \]
And,
\[ \| \bu + i \bv \|^2 = \| \bu \|^2 - i \langle \bu, \bv \rangle + i \langle \bv, \bu \rangle + \| \bv \|^2. \]
And,
\[ \| \bu - i \bv \|^2 = \| \bu \|^2 + i \langle \bu, \bv \rangle - i \langle \bv, \bu \rangle + \| \bv \|^2. \]
Thus,
\[ \| \bu + \bv \|^2 - \| \bu - \bv \|^2 + i \| \bu + i \bv \|^2 - i \| \bu - i \bv \|^2 = 2 \langle \bu, \bv \rangle + 2 \langle \bv, \bu \rangle + 2 \langle \bu, \bv \rangle - 2 \langle \bv, \bu \rangle = 4 \langle \bu, \bv \rangle. \]
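A numerical spot-check of the polarization identity (standard inner product on \(\CC^n\) assumed):

```python
# Numerical check of the polarization identity for the standard inner product on C^n.
import numpy as np

rng = np.random.default_rng(11)
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

ip = lambda a, b: np.vdot(b, a)        # <a, b>, linear in the first argument
nsq = lambda a: np.vdot(a, a).real     # squared norm

rhs = 0.25 * (nsq(u + v) - nsq(u - v) + 1j * nsq(u + 1j * v) - 1j * nsq(u - 1j * v))
print(np.allclose(ip(u, v), rhs))      # True
```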