4.12. Matrix Norms
This section reviews various matrix norms on the vector space $\mathbb{C}^{m \times n}$ of complex matrices over the field of complex numbers. We know that $\mathbb{C}^{m \times n}$ is a finite dimensional vector space. Matrix norms follow the usual definition of norms for a vector space.
Definition 4.139 (Matrix norm)

A function $\| \cdot \| : \mathbb{C}^{m \times n} \to \mathbb{R}$ is called a *matrix norm* if it satisfies the following properties for all $A, B \in \mathbb{C}^{m \times n}$ and all $\alpha \in \mathbb{C}$:

1. [Positivity] $\| A \| \geq 0$ with $\| A \| = 0 \iff A = 0$.
2. [Homogeneity] $\| \alpha A \| = | \alpha | \, \| A \|$.
3. [Triangle inequality] $\| A + B \| \leq \| A \| + \| B \|$.
We recall some of the standard results on finite dimensional normed vector spaces.

All matrix norms are equivalent. Let $\| \cdot \|_a$ and $\| \cdot \|_b$ be two norms on $\mathbb{C}^{m \times n}$. Then there exist constants $c_1, c_2 > 0$ such that

$$
c_1 \| A \|_a \leq \| A \|_b \leq c_2 \| A \|_a \quad \forall A \in \mathbb{C}^{m \times n}.
$$

A matrix norm is a continuous function $\| \cdot \| : \mathbb{C}^{m \times n} \to \mathbb{R}$.
4.12.1. Norms like $\ell_p$ Norms on the Complex Vector Space

The following norms treat a matrix $A \in \mathbb{C}^{m \times n}$ as a vector of its $m n$ entries; they are quite like the $\ell_1$, $\ell_2$ and $\ell_\infty$ norms on the vector space $\mathbb{C}^{m n}$.
Definition 4.140 (Sum norm)

Let $A \in \mathbb{C}^{m \times n}$. The matrix sum norm is defined as

$$
\| A \|_S = \sum_{i=1}^{m} \sum_{j=1}^{n} | a_{ij} |.
$$
Definition 4.141 (Frobenius norm)

Let $A \in \mathbb{C}^{m \times n}$. The matrix Frobenius norm is defined as

$$
\| A \|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} | a_{ij} |^2 \right)^{\frac{1}{2}}.
$$
Definition 4.142 (Max norm)

Let $A \in \mathbb{C}^{m \times n}$. The matrix max norm is defined as

$$
\| A \|_M = \max_{\substack{1 \leq i \leq m \\ 1 \leq j \leq n}} | a_{ij} |.
$$
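As a quick numerical sketch, the three entrywise norms above can be computed directly with NumPy; the matrix and variable names here are purely illustrative:

```python
import numpy as np

# A small complex matrix to illustrate the three entrywise norms above.
A = np.array([[1 + 2j, 0 - 1j],
              [3 + 0j, -2 + 2j]])

sum_norm = np.abs(A).sum()                   # ||A||_S : sum of |a_ij|
frobenius = np.sqrt((np.abs(A) ** 2).sum())  # ||A||_F : sqrt of sum of |a_ij|^2
max_norm = np.abs(A).max()                   # ||A||_M : largest |a_ij|

# The Frobenius norm agrees with NumPy's default matrix norm.
assert np.isclose(frobenius, np.linalg.norm(A))
```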
4.12.2. Properties of Frobenius Norm

We now prove some elementary properties of the Frobenius norm.

Lemma 4.86

The Frobenius norm of a matrix is equal to the Frobenius norm of its Hermitian transpose:

$$
\| A^H \|_F = \| A \|_F.
$$

Proof. This is valid since the norm involves only the absolute values of entries in the matrix. Let

$$
A = [ a_{ij} ] \in \mathbb{C}^{m \times n}.
$$

Then

$$
A^H = [ \overline{a_{ji}} ] \in \mathbb{C}^{n \times m}.
$$

Now

$$
\| A^H \|_F^2 = \sum_{j=1}^{n} \sum_{i=1}^{m} | \overline{a_{ij}} |^2
= \sum_{i=1}^{m} \sum_{j=1}^{n} | a_{ij} |^2 = \| A \|_F^2.
$$
Lemma 4.87 (Expansion in squared norms of columns)

Let $A \in \mathbb{C}^{m \times n}$ with columns $a_1, \dots, a_n$. Then

$$
\| A \|_F^2 = \sum_{j=1}^{n} \| a_j \|_2^2.
$$

Proof. We note that

$$
\| a_j \|_2^2 = \sum_{i=1}^{m} | a_{ij} |^2.
$$

Now

$$
\| A \|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} | a_{ij} |^2
= \sum_{j=1}^{n} \left( \sum_{i=1}^{m} | a_{ij} |^2 \right)
= \sum_{j=1}^{n} \| a_j \|_2^2.
$$

We thus showed that the square of the Frobenius norm of a matrix is nothing but the sum of squares of the $\ell_2$ norms of its columns.
Lemma 4.88

Let $A \in \mathbb{C}^{m \times n}$ with rows $\underline{a}^1, \dots, \underline{a}^m$. Then

$$
\| A \|_F^2 = \sum_{i=1}^{m} \| \underline{a}^i \|_2^2.
$$

Proof. We note that

$$
\| \underline{a}^i \|_2^2 = \sum_{j=1}^{n} | a_{ij} |^2.
$$

Now

$$
\| A \|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} | a_{ij} |^2
= \sum_{i=1}^{m} \| \underline{a}^i \|_2^2.
$$
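Both expansion lemmas are easy to check numerically; this sketch verifies them for a random complex matrix (the seed and shape are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

fro2 = np.linalg.norm(A) ** 2
col2 = sum(np.linalg.norm(A[:, j]) ** 2 for j in range(A.shape[1]))
row2 = sum(np.linalg.norm(A[i, :]) ** 2 for i in range(A.shape[0]))

# The squared Frobenius norm equals the sum of squared column norms
# (Lemma 4.87) and the sum of squared row norms (Lemma 4.88).
assert np.isclose(fro2, col2) and np.isclose(fro2, row2)
```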
We now consider how the Frobenius norm is affected by the action of unitary matrices. Let $A$ be any arbitrary matrix in $\mathbb{C}^{m \times n}$. Let $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ be some unitary matrices.
We present our first result: multiplication with unitary matrices doesn't change the Frobenius norm of a matrix.

Theorem 4.141

The Frobenius norm of a matrix is invariant to pre or post multiplication by a unitary matrix, i.e.

$$
\| U A \|_F = \| A \|_F
$$

and

$$
\| A V \|_F = \| A \|_F.
$$

Proof. We can write $A$ in terms of its columns:

$$
A = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix}.
$$

So

$$
U A = \begin{bmatrix} U a_1 & U a_2 & \dots & U a_n \end{bmatrix}.
$$

Then applying Lemma 4.87, clearly

$$
\| U A \|_F^2 = \sum_{j=1}^{n} \| U a_j \|_2^2.
$$

But we know that unitary matrices are norm preserving. Hence

$$
\| U a_j \|_2 = \| a_j \|_2.
$$

Thus

$$
\| U A \|_F^2 = \sum_{j=1}^{n} \| a_j \|_2^2 = \| A \|_F^2,
$$

which implies

$$
\| U A \|_F = \| A \|_F.
$$

Similarly, writing $A$ in terms of its rows $\underline{a}^1, \dots, \underline{a}^m$, we have that the rows of $A V$ are $\underline{a}^i V$. Then applying Lemma 4.88, clearly

$$
\| A V \|_F^2 = \sum_{i=1}^{m} \| \underline{a}^i V \|_2^2.
$$

But we know that unitary matrices are norm preserving. Hence

$$
\| \underline{a}^i V \|_2 = \| \underline{a}^i \|_2.
$$

Thus

$$
\| A V \|_F^2 = \sum_{i=1}^{m} \| \underline{a}^i \|_2^2 = \| A \|_F^2,
$$

which implies

$$
\| A V \|_F = \| A \|_F.
$$
An alternative approach for the 2nd part of the proof using the first part is just one line:

$$
\| A V \|_F = \| ( A V )^H \|_F = \| V^H A^H \|_F = \| A^H \|_F = \| A \|_F.
$$

In the above we use Lemma 4.86 and the fact that $V^H$ is also a unitary matrix.
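The invariance is easy to see numerically. A quick sketch, using random unitary matrices obtained from QR factorizations (seed and shapes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# Random unitary matrices from QR factorizations of random complex matrices.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))

f = np.linalg.norm(A)
assert np.isclose(np.linalg.norm(U @ A), f)      # pre-multiplication
assert np.isclose(np.linalg.norm(A @ V), f)      # post-multiplication
assert np.isclose(np.linalg.norm(U @ A @ V), f)  # both
```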
Theorem 4.142 (Consistency of Frobenius norm)

Let $A \in \mathbb{C}^{m \times n}$ and $B \in \mathbb{C}^{n \times p}$. Then

$$
\| A B \|_F \leq \| A \|_F \| B \|_F.
$$

Proof. We can write $A$ in terms of its rows and $B$ in terms of its columns:

$$
A = \begin{bmatrix} \underline{a}^1 \\ \vdots \\ \underline{a}^m \end{bmatrix},
\qquad
B = \begin{bmatrix} b_1 & \dots & b_p \end{bmatrix},
$$

where $\underline{a}^i$ are the rows of $A$ and $b_j$ are the columns of $B$. Then

$$
A B = [ \underline{a}^i b_j ]_{1 \leq i \leq m, \; 1 \leq j \leq p}.
$$

Now looking carefully,

$$
\| A B \|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{p} | \underline{a}^i b_j |^2.
$$

Applying the Cauchy-Schwarz inequality, we have

$$
| \underline{a}^i b_j |^2 \leq \| \underline{a}^i \|_2^2 \, \| b_j \|_2^2.
$$

Now

$$
\| A B \|_F^2
\leq \sum_{i=1}^{m} \sum_{j=1}^{p} \| \underline{a}^i \|_2^2 \, \| b_j \|_2^2
= \left( \sum_{i=1}^{m} \| \underline{a}^i \|_2^2 \right) \left( \sum_{j=1}^{p} \| b_j \|_2^2 \right)
= \| A \|_F^2 \, \| B \|_F^2,
$$

which implies

$$
\| A B \|_F \leq \| A \|_F \| B \|_F
$$

by taking square roots on both sides.
Corollary 4.28

Let $A \in \mathbb{C}^{m \times n}$ and $x \in \mathbb{C}^n$. Then

$$
\| A x \|_2 \leq \| A \|_F \| x \|_2.
$$

Proof. We note that the Frobenius norm for a column matrix is the same as its $\ell_2$ norm, i.e. $\| x \|_F = \| x \|_2$ for $x \in \mathbb{C}^{n \times 1}$. Now applying Theorem 4.142 we have

$$
\| A x \|_2 = \| A x \|_F \leq \| A \|_F \| x \|_F = \| A \|_F \| x \|_2.
$$
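Both inequalities can be sanity-checked numerically; the random matrices below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))
x = rng.standard_normal(5)

# Consistency (Theorem 4.142): ||AB||_F <= ||A||_F ||B||_F
assert np.linalg.norm(A @ B) <= np.linalg.norm(A) * np.linalg.norm(B)

# Subordination to the Euclidean norm (Corollary 4.28): ||Ax||_2 <= ||A||_F ||x||_2
assert np.linalg.norm(A @ x) <= np.linalg.norm(A) * np.linalg.norm(x)
```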
It turns out that the Frobenius norm is intimately related to the singular value decomposition of a matrix.

Lemma 4.89 (Frobenius norm and singular values)

Let $A \in \mathbb{C}^{m \times n}$ have the singular value decomposition $A = U \Sigma V^H$. Let the singular values of $A$ be $\sigma_1, \dots, \sigma_k$ with $k = \min(m, n)$. Then

$$
\| A \|_F = \sqrt{ \sum_{i=1}^{k} \sigma_i^2 }.
$$

Proof. This comes from the invariance of the Frobenius norm with unitary transformations (Theorem 4.141):

$$
\| A \|_F = \| U \Sigma V^H \|_F = \| \Sigma \|_F.
$$

But

$$
\| \Sigma \|_F = \sqrt{ \sum_{i=1}^{k} \sigma_i^2 },
$$

since $\Sigma$ is a (rectangular) diagonal matrix whose only nonzero entries are the singular values.
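A one-line numerical check of Lemma 4.89 (random matrix, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

# ||A||_F equals the l2 norm of the vector of singular values.
sigma = np.linalg.svd(A, compute_uv=False)
assert np.isclose(np.linalg.norm(A), np.sqrt(np.sum(sigma ** 2)))
```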
4.12.3. Consistency of a Matrix Norm

Definition 4.143 (Consistent matrix norm)

A matrix norm $\| \cdot \|$ is called consistent if

$$
\| A B \| \leq \| A \| \| B \|
$$

holds true for all matrices $A$ and $B$ for which the product $A B$ is defined. A consistent matrix norm is also known as a sub-multiplicative norm.

With this definition and the result in Theorem 4.142, we can see that the Frobenius norm is consistent.
4.12.4. Subordinate Matrix Norm

A matrix operates on vectors from one space to generate vectors in another space. It is interesting to explore the connection between the norm of a matrix and norms of vectors in the domain and co-domain of a matrix.

Definition 4.144

Let $\| \cdot \|$ be a matrix norm on $\mathbb{C}^{m \times n}$, and let $\| \cdot \|_\alpha$ and $\| \cdot \|_\beta$ be vector norms on $\mathbb{C}^m$ and $\mathbb{C}^n$ respectively. We say that the matrix norm is subordinate to the vector norms if

$$
\| A x \|_\alpha \leq \| A \| \, \| x \|_\beta
$$

for all $A \in \mathbb{C}^{m \times n}$ and all $x \in \mathbb{C}^n$. If the two vector norms are the same, we simply say that the matrix norm is subordinate to that vector norm.

We have shown earlier in Corollary 4.28 that the Frobenius norm is subordinate to the Euclidean norm.
4.12.5. Operator Norm

We now consider the maximum factor by which a matrix $A$ can lengthen the vectors it operates upon.

Definition 4.145 (Operator norm)

Let $\| \cdot \|_\alpha$ be a vector norm on $\mathbb{C}^n$ and $\| \cdot \|_\beta$ a vector norm on $\mathbb{C}^m$. For $A \in \mathbb{C}^{m \times n}$, define

$$
\| A \| \triangleq \max_{x \neq 0} \frac{\| A x \|_\beta}{\| x \|_\alpha}.
$$

The term $\frac{\| A x \|_\beta}{\| x \|_\alpha}$ measures the factor by which $A$ lengthens the vector $x$. The norm as defined above is known as the operator norm (or induced norm) with respect to the chosen vector norms.

We need to verify that this definition satisfies all properties of a norm.

Nonnegativity

Clearly, if $A = 0$ then $A x = 0$ always, hence $\| A \| = 0$. Conversely, if $\| A \| = 0$ then $\| A x \|_\beta = 0$, hence $A x = 0$ for every $x$. In particular this is true for the unit vectors $e_j \in \mathbb{C}^n$. The $j$-th column of $A$ is given by $A e_j$, which is $0$. Thus each column in $A$ is $0$. Hence $A = 0$.

Homogeneity

Consider $c \in \mathbb{C}$. Then

$$
\| c A \| = \max_{x \neq 0} \frac{\| c A x \|_\beta}{\| x \|_\alpha}
= | c | \max_{x \neq 0} \frac{\| A x \|_\beta}{\| x \|_\alpha}
= | c | \, \| A \|.
$$
We now present some useful observations on the operator norm before we can prove the triangle inequality for it.

For any $x \in \ker(A)$, $A x = 0$, so such vectors contribute only the value $0$ to the quantity being maximized; hence we only need to consider vectors which don't belong to the kernel of $A$. Thus we can write

$$
\| A \| = \max_{x \notin \ker(A)} \frac{\| A x \|_\beta}{\| x \|_\alpha}.
$$

We also note that for any $c \neq 0$,

$$
\frac{\| A ( c x ) \|_\beta}{\| c x \|_\alpha}
= \frac{| c | \, \| A x \|_\beta}{| c | \, \| x \|_\alpha}
= \frac{\| A x \|_\beta}{\| x \|_\alpha}.
$$

Thus, it is sufficient to find the maximum on unit norm vectors:

$$
\| A \| = \max_{\| x \|_\alpha = 1} \| A x \|_\beta.
$$

Note that since $\| x \|_\alpha = 1$, the term in the denominator goes away.
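The unit-sphere characterization suggests a simple (if crude) way to estimate an operator norm numerically: sample unit vectors and take the largest value of $\| A x \|$. The sketch below does this for the $\ell_2 \to \ell_2$ case; the sampled maximum is always a lower bound on the true norm:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

# Sample unit vectors on the l2 sphere and take the largest ||Ax||_2.
x = rng.standard_normal((3, 20000))
x /= np.linalg.norm(x, axis=0)
estimate = np.linalg.norm(A @ x, axis=0).max()

# The sampled maximum never exceeds the true operator norm.
assert estimate <= np.linalg.norm(A, 2) + 1e-12
```

With enough samples the estimate approaches the true value, but the closed forms derived later in this section are of course preferable.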
Lemma 4.90 (Operator norm is subordinate)

The operator norm is subordinate to the vector norms $\| \cdot \|_\alpha$ and $\| \cdot \|_\beta$, i.e.

$$
\| A x \|_\beta \leq \| A \| \, \| x \|_\alpha.
$$

Proof. For $x = 0$ the inequality holds trivially. For $x \neq 0$, by the definition of the operator norm,

$$
\frac{\| A x \|_\beta}{\| x \|_\alpha} \leq \| A \|,
$$

and multiplying both sides by $\| x \|_\alpha$ gives the result.
Theorem 4.143 (Existence of the maximizer for operator norm)

There exists a vector $x^* \in \mathbb{C}^n$ with unit norm ($\| x^* \|_\alpha = 1$) such that

$$
\| A \| = \| A x^* \|_\beta.
$$

Proof. Recall that

$$
\| A \| = \max_{\| x \|_\alpha = 1} \| A x \|_\beta.
$$

The norm function $x \mapsto \| x \|_\beta$ is continuous on $\mathbb{C}^m$. The mapping $x \mapsto A x$ is continuous. Hence the function $x \mapsto \| A x \|_\beta$ is continuous. The set $\{ x \in \mathbb{C}^n : \| x \|_\alpha = 1 \}$ is compact. Hence, the function attains a supremum value over this set.
We are now ready to prove the triangle inequality for the operator norm.

Lemma 4.91 (Triangle inequality for operator norm)

The operator norm as defined in Definition 4.145 satisfies the triangle inequality.

Proof. Let $A$ and $B$ be two matrices in $\mathbb{C}^{m \times n}$. Consider the operator norm of the matrix $A + B$. From previous remarks, there exists some vector $x^*$ with $\| x^* \|_\alpha = 1$ such that

$$
\| A + B \| = \| ( A + B ) x^* \|_\beta.
$$

Now

$$
\| ( A + B ) x^* \|_\beta = \| A x^* + B x^* \|_\beta \leq \| A x^* \|_\beta + \| B x^* \|_\beta.
$$

From the subordinate property of the operator norm, we have

$$
\| A x^* \|_\beta \leq \| A \| \, \| x^* \|_\alpha = \| A \|
$$

and

$$
\| B x^* \|_\beta \leq \| B \| \, \| x^* \|_\alpha = \| B \|,
$$

since $\| x^* \|_\alpha = 1$. Hence we have

$$
\| A + B \| \leq \| A \| + \| B \|.
$$
It turns out that the operator norm is also consistent under certain conditions.

Lemma 4.92 (Consistency of operator norm)

Let the same family of vector norms $\| \cdot \|_\alpha$ be used on both the domain and the co-domain for every dimension. Then the operator norm

$$
\| A \| = \max_{x \neq 0} \frac{\| A x \|_\alpha}{\| x \|_\alpha}
$$

is consistent.

Proof. We need to show that

$$
\| A B \| \leq \| A \| \| B \|.
$$

Now

$$
\| A B \| = \max_{x \neq 0} \frac{\| A B x \|_\alpha}{\| x \|_\alpha}.
$$

We note that if $B x = 0$, then $A B x = 0$. Hence we can rewrite the above as

$$
\| A B \| = \max_{B x \neq 0} \frac{\| A B x \|_\alpha}{\| x \|_\alpha}.
$$

Now if $B x \neq 0$, then

$$
\frac{\| A B x \|_\alpha}{\| x \|_\alpha}
= \frac{\| A B x \|_\alpha}{\| B x \|_\alpha} \cdot \frac{\| B x \|_\alpha}{\| x \|_\alpha}.
$$

Hence

$$
\| A B \|
\leq \left( \max_{B x \neq 0} \frac{\| A ( B x ) \|_\alpha}{\| B x \|_\alpha} \right)
\left( \max_{x \neq 0} \frac{\| B x \|_\alpha}{\| x \|_\alpha} \right).
$$

Clearly

$$
\max_{B x \neq 0} \frac{\| A ( B x ) \|_\alpha}{\| B x \|_\alpha}
\leq \max_{y \neq 0} \frac{\| A y \|_\alpha}{\| y \|_\alpha} = \| A \|.
$$

Furthermore

$$
\max_{x \neq 0} \frac{\| B x \|_\alpha}{\| x \|_\alpha} = \| B \|.
$$

Thus we have

$$
\| A B \| \leq \| A \| \| B \|.
$$
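This consistency is easy to observe for the common induced norms; `numpy.linalg.norm` with `ord` equal to `1`, `2` or `inf` computes exactly these induced norms:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# ||AB|| <= ||A|| ||B|| for the operator norms induced by l1, l2 and linf.
for p in (1, 2, np.inf):
    lhs = np.linalg.norm(A @ B, p)
    rhs = np.linalg.norm(A, p) * np.linalg.norm(B, p)
    assert lhs <= rhs + 1e-9
```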
4.12.6. $p$-norm for Matrices

We recall the definition of the $\ell_p$ norm for a vector $x \in \mathbb{C}^n$:

$$
\| x \|_p = \left( \sum_{i=1}^{n} | x_i |^p \right)^{\frac{1}{p}}, \quad 1 \leq p < \infty,
\qquad
\| x \|_\infty = \max_{1 \leq i \leq n} | x_i |.
$$

The operator norms induced by $\ell_p$ vector norms are of particular interest.

Definition 4.146 (Matrix $p$-norm)

The matrix $p$-norm of $A \in \mathbb{C}^{m \times n}$ is defined as

$$
\| A \|_p \triangleq \max_{x \neq 0} \frac{\| A x \|_p}{\| x \|_p},
$$

where $\| \cdot \|_p$ denotes the $\ell_p$ norm on the corresponding spaces.

As per Lemma 4.92, matrix $p$-norms are consistent. Special cases are considered for $p = 1$, $p = 2$ and $p = \infty$.
Theorem 4.144 (Max column sum, max row sum, spectral norms)

Let $A \in \mathbb{C}^{m \times n}$ with columns $a_1, \dots, a_n$ and rows $\underline{a}^1, \dots, \underline{a}^m$.

For $p = 1$:

$$
\| A \|_1 = \max_{1 \leq j \leq n} \| a_j \|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^{m} | a_{ij} |.
$$

This is also known as max column sum norm.

For $p = \infty$:

$$
\| A \|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} |.
$$

This is also known as max row sum norm.

Finally for $p = 2$:

$$
\| A \|_2 = \sigma_1,
$$

where $\sigma_1$ is the largest singular value of $A$. This is also known as the spectral norm.

Proof. Max column sum norm:

Let $x \in \mathbb{C}^n$ with $\| x \|_1 = 1$. Then

$$
\| A x \|_1 = \left\| \sum_{j=1}^{n} x_j a_j \right\|_1
\leq \sum_{j=1}^{n} | x_j | \, \| a_j \|_1
\leq \left( \max_{1 \leq j \leq n} \| a_j \|_1 \right) \sum_{j=1}^{n} | x_j |
= \max_{1 \leq j \leq n} \| a_j \|_1.
$$

Thus,

$$
\| A \|_1 \leq \max_{1 \leq j \leq n} \| a_j \|_1,
$$

which is the maximum column sum. We need to show that this upper bound is indeed an equality. Indeed, for any $x = e_j$, where $e_j$ is a unit vector with $1$ in the $j$-th entry and $0$ elsewhere,

$$
\| A e_j \|_1 = \| a_j \|_1.
$$

Thus

$$
\| A \|_1 \geq \| a_j \|_1 \quad \text{for every } j.
$$

Combining the two, we see that

$$
\| A \|_1 = \max_{1 \leq j \leq n} \| a_j \|_1.
$$
Max row sum norm:

For $p = \infty$, we proceed as follows. Let $x \in \mathbb{C}^n$ with $\| x \|_\infty = 1$. Then

$$
\| A x \|_\infty
= \max_{1 \leq i \leq m} \left| \sum_{j=1}^{n} a_{ij} x_j \right|
\leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} | \, | x_j |
\leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} |,
$$

where $\underline{a}^i$ are the rows of $A$. This shows that

$$
\| A \|_\infty \leq \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} |.
$$

We need to show that this is indeed an equality. Fix an $i$ attaining the maximum row sum and choose $x$ such that

$$
x_j = \begin{cases}
\dfrac{\overline{a_{ij}}}{| a_{ij} |} & \text{if } a_{ij} \neq 0, \\
1 & \text{otherwise}.
\end{cases}
$$

Clearly $\| x \|_\infty = 1$. Then

$$
| \underline{a}^i x | = \left| \sum_{j=1}^{n} a_{ij} x_j \right| = \sum_{j=1}^{n} | a_{ij} |.
$$

Thus,

$$
\| A \|_\infty \geq \| A x \|_\infty \geq \sum_{j=1}^{n} | a_{ij} | = \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} |.
$$

Combining the two inequalities we get:

$$
\| A \|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^{n} | a_{ij} |.
$$
Spectral norm:

The remaining case is $p = 2$. Let $A = U \Sigma V^H$ be the singular value decomposition of $A$. For any vector $x$ with $\| x \|_2 = 1$,

$$
\| A x \|_2 = \| U \Sigma V^H x \|_2 = \| \Sigma V^H x \|_2,
$$

since the $\ell_2$ norm is invariant to unitary transformations. Let $v = V^H x$. Then $\| v \|_2 = \| V^H x \|_2 = \| x \|_2 = 1$. Now

$$
\| \Sigma v \|_2^2 = \sum_{i} \sigma_i^2 | v_i |^2
\leq \sigma_1^2 \sum_{i} | v_i |^2 = \sigma_1^2.
$$

This shows that

$$
\| A \|_2 \leq \sigma_1.
$$

Now consider the vector $x = v_1$, the first column of $V$, so that $\| x \|_2 = 1$. Then

$$
\| A v_1 \|_2 = \| U \Sigma V^H v_1 \|_2 = \| U \Sigma e_1 \|_2 = \| \sigma_1 u_1 \|_2 = \sigma_1.
$$

Thus

$$
\| A \|_2 \geq \sigma_1.
$$

Combining the two, we get that $\| A \|_2 = \sigma_1$.
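All three closed forms can be verified directly against `numpy.linalg.norm` (random matrix, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))

max_col_sum = np.abs(A).sum(axis=0).max()          # ||A||_1
max_row_sum = np.abs(A).sum(axis=1).max()          # ||A||_inf
sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # ||A||_2

assert np.isclose(max_col_sum, np.linalg.norm(A, 1))
assert np.isclose(max_row_sum, np.linalg.norm(A, np.inf))
assert np.isclose(sigma_max, np.linalg.norm(A, 2))
```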
4.12.7. The 2-norm

Theorem 4.145

Let $A \in \mathbb{C}^{n \times n}$ have singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n$. Then

$$
\| A \|_2 = \sigma_1,
$$

and if $A$ is nonsingular,

$$
\| A^{-1} \|_2 = \frac{1}{\sigma_n}.
$$

If $A$ is Hermitian with eigenvalues ordered so that $| \lambda_1 | \geq \dots \geq | \lambda_n |$, then

$$
\| A \|_2 = | \lambda_1 |,
$$

and if $A$ is nonsingular,

$$
\| A^{-1} \|_2 = \frac{1}{| \lambda_n |}.
$$

If $A$ is normal with eigenvalues ordered so that $| \lambda_1 | \geq \dots \geq | \lambda_n |$, then

$$
\| A \|_2 = | \lambda_1 |,
$$

and if $A$ is nonsingular,

$$
\| A^{-1} \|_2 = \frac{1}{| \lambda_n |}.
$$
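The singular value characterization of the 2-norm of an inverse is easy to confirm numerically (a random square matrix is almost surely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4))  # almost surely nonsingular
sigma = np.linalg.svd(A, compute_uv=False)

assert np.isclose(np.linalg.norm(A, 2), sigma[0])
# For a nonsingular matrix, ||A^{-1}||_2 is the reciprocal of the smallest singular value.
assert np.isclose(np.linalg.norm(np.linalg.inv(A), 2), 1 / sigma[-1])
```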
4.12.8. Unitary Invariant Norms

Definition 4.147 (Unitary invariant norm)

A matrix norm $\| \cdot \|$ on $\mathbb{C}^{m \times n}$ is called unitary invariant if

$$
\| U A V \| = \| A \|
$$

for every $A \in \mathbb{C}^{m \times n}$ and every pair of unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$.

We have already seen in Theorem 4.141 that the Frobenius norm is unitary invariant. It turns out that the spectral norm is also unitary invariant, since pre or post multiplication by unitary matrices leaves the singular values of a matrix unchanged.
4.12.9. More Properties of Operator Norms

In this section we will focus on operator norms connecting the normed linear spaces $( \mathbb{C}^n, \| \cdot \|_p )$ and $( \mathbb{C}^m, \| \cdot \|_q )$. We recall that

$$
\| A \|_{p \to q} \triangleq \max_{x \neq 0} \frac{\| A x \|_q}{\| x \|_p}.
$$

The table below [78] shows how to compute $\| A \|_{p \to q}$ for different combinations of $p$ and $q$:

| $p$ | $q$ | Calculation of $\| A \|_{p \to q}$ |
|---|---|---|
| $1$ | $1$ | Maximum $\ell_1$ norm of a column |
| $1$ | $2$ | Maximum $\ell_2$ norm of a column |
| $1$ | $\infty$ | Maximum absolute entry of the matrix |
| $2$ | $1$ | NP hard |
| $2$ | $2$ | Maximum singular value |
| $2$ | $\infty$ | Maximum $\ell_2$ norm of a row |
| $\infty$ | $1$ | NP hard |
| $\infty$ | $2$ | NP hard |
| $\infty$ | $\infty$ | Maximum $\ell_1$ norm of a row |
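The tractable entries of the table have direct closed forms, while the NP hard entries do not. A small sketch of some of the easy cases (random matrix, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 3))

norm_1_to_2 = np.linalg.norm(A, axis=0).max()    # max l2 norm of a column
norm_1_to_inf = np.abs(A).max()                  # max absolute entry
norm_2_to_inf = np.linalg.norm(A, axis=1).max()  # max l2 norm of a row

# Sanity check: for any x with ||x||_1 = 1, ||Ax||_2 cannot exceed ||A||_{1->2}.
x = rng.standard_normal((3, 5000))
x /= np.abs(x).sum(axis=0)
assert np.linalg.norm(A @ x, axis=0).max() <= norm_1_to_2 + 1e-12
```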
The topological dual of the finite dimensional normed linear space $( \mathbb{C}^n, \| \cdot \|_p )$ is $( \mathbb{C}^n, \| \cdot \|_{p'} )$, where $p'$ is the conjugate exponent satisfying $\frac{1}{p} + \frac{1}{p'} = 1$. The $\ell_2$-norm is the dual of the $\ell_2$-norm; it is self dual. The $\ell_1$ norm and the $\ell_\infty$-norm are dual of each other. When a matrix maps between two normed spaces, its conjugate transpose maps between the corresponding dual spaces.
Theorem 4.146 (Operator norm and conjugate transpose)

The operator norm of a matrix always equals the corresponding operator norm of its conjugate transpose. In other words,

$$
\| A \|_{p \to q} = \| A^H \|_{q' \to p'},
$$

where

$$
\frac{1}{p} + \frac{1}{p'} = 1, \qquad \frac{1}{q} + \frac{1}{q'} = 1.
$$

Specific applications of this result are:

$$
\| A \|_{2 \to 2} = \| A^H \|_{2 \to 2}.
$$

This is obvious since the maximum singular value of a matrix and its conjugate transpose are same.

$$
\| A \|_{1 \to 1} = \| A^H \|_{\infty \to \infty}.
$$

This is also obvious since the max column sum of $A$ equals the max row sum of $A^H$. We now need to show the result for the general case (arbitrary $1 \leq p, q \leq \infty$).

Proof. TODO
Theorem 4.147 ($1 \to p$ operator norm)

$$
\| A \|_{1 \to p} = \max_{1 \leq j \leq n} \| a_j \|_p,
$$

where $a_1, \dots, a_n$ are the columns of $A$.

Proof. Expanding $A x$ in terms of the columns of $A$:

$$
\| A x \|_p = \left\| \sum_{j=1}^{n} x_j a_j \right\|_p
\leq \sum_{j=1}^{n} | x_j | \, \| a_j \|_p
\leq \left( \max_{1 \leq j \leq n} \| a_j \|_p \right) \| x \|_1.
$$

Thus,

$$
\| A \|_{1 \to p} \leq \max_{1 \leq j \leq n} \| a_j \|_p.
$$

We need to show that this upper bound is indeed an equality. Indeed, for any unit vector $e_j$,

$$
\| A e_j \|_p = \| a_j \|_p.
$$

Thus

$$
\| A \|_{1 \to p} \geq \| a_j \|_p \quad \text{for every } j.
$$

Combining the two, we see that

$$
\| A \|_{1 \to p} = \max_{1 \leq j \leq n} \| a_j \|_p.
$$
Theorem 4.148 ($p \to \infty$ operator norm)

$$
\| A \|_{p \to \infty} = \max_{1 \leq i \leq m} \| \underline{a}^i \|_{p'},
$$

where $\underline{a}^1, \dots, \underline{a}^m$ are the rows of $A$ and $p'$ is the conjugate exponent of $p$, i.e. $\frac{1}{p} + \frac{1}{p'} = 1$. This follows by applying Hölder's inequality to each entry $( A x )_i = \underline{a}^i x$.
Theorem 4.149 (Consistency of $p \to \infty$ norm)

For two matrices $A$ and $B$ with a valid product $A B$:

$$
\| A B \|_{p \to \infty} \leq \| A \|_{\infty \to \infty} \, \| B \|_{p \to \infty}.
$$
Theorem 4.150 (Consistency of $1 \to p$ norm)

For two matrices $A$ and $B$ with a valid product $A B$:

$$
\| A B \|_{1 \to p} \leq \| A \|_{1 \to p} \, \| B \|_{1 \to 1}.
$$
Theorem 4.151 (Dominance of Frobenius norm over spectral norm)

For every matrix $A$:

$$
\| A \|_2 \leq \| A \|_F.
$$

In particular, for every vector $x$,

$$
\| A x \|_2 \leq \| A \|_2 \| x \|_2 \leq \| A \|_F \| x \|_2.
$$

Also:

$$
\| A \|_F \leq \sqrt{k} \, \| A \|_2,
$$

where $k = \operatorname{rank}(A)$.

Proof. Choosing a unit norm vector $x^*$ such that $\| A \|_2 = \| A x^* \|_2$ (Theorem 4.143) and applying Corollary 4.28, we get

$$
\| A \|_2 = \| A x^* \|_2 \leq \| A \|_F \| x^* \|_2 = \| A \|_F.
$$

But from Lemma 4.89,

$$
\| A \|_F^2 = \sum_{i=1}^{k} \sigma_i^2 \leq k \sigma_1^2 = k \| A \|_2^2,
$$

which gives the second inequality.
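Both dominance inequalities are easy to check numerically (random matrix, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 6))
k = np.linalg.matrix_rank(A)

assert np.linalg.norm(A, 2) <= np.linalg.norm(A) + 1e-12               # ||A||_2 <= ||A||_F
assert np.linalg.norm(A) <= np.sqrt(k) * np.linalg.norm(A, 2) + 1e-12  # ||A||_F <= sqrt(k) ||A||_2
```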
Consider the expression

$$
\min_{\substack{x \in \mathcal{C}(A^H) \\ x \neq 0}} \frac{\| A x \|_2}{\| x \|_2}.
$$

The constraint $x \in \mathcal{C}(A^H)$ restricts $x$ to the row space of $A$ and thereby excludes the kernel of $A$. This expression measures the factor by which the non-singular part of $A$ can shrink a vector.
Theorem 4.152

The following bound holds for every matrix $A \in \mathbb{C}^{m \times n}$:

$$
\| A^{\dagger} y \|_2 \geq \frac{\| y \|_2}{\| A \|_2} \quad \text{whenever } y \in \mathcal{C}(A).
$$

If $A$ is surjective, the bound holds for every $y \in \mathbb{C}^m$.

Proof. The spaces $\mathcal{C}(A)$ and $\mathcal{C}(A^H)$ are connected through the pseudoinverse $A^{\dagger}$. We recall that $A A^{\dagger}$ is a projector onto the column space of $A$. As a result, we can write

$$
y = A A^{\dagger} y
$$

whenever $y \in \mathcal{C}(A)$. Now

$$
\| y \|_2 = \| A A^{\dagger} y \|_2 \leq \| A \|_2 \, \| A^{\dagger} y \|_2.
$$

When $A$ is surjective, then $\mathcal{C}(A) = \mathbb{C}^m$. Hence the identity $y = A A^{\dagger} y$, and with it the bound, holds for every $y$. Finally, dividing both sides by $\| A \|_2$ gives

$$
\| A^{\dagger} y \|_2 \geq \frac{\| y \|_2}{\| A \|_2},
$$

which completes the proof.
4.12.10. Row Column Norms

A common way of measuring the norm of a matrix is to compute the $\ell_p$ norm of each of its columns and then compute the $\ell_q$ norm of the resulting vector of column norms.

- If $p = q = 2$, then the resultant norm is same as the Frobenius norm.
- If $p = q = 1$, then the resultant norm is same as the sum norm.
- If $p = q = \infty$, then the resultant norm is same as the max norm.

Thus, one can think of the row column norms as a generalization of these norms.

Definition 4.148 (Row-column norms)

Let $A \in \mathbb{C}^{m \times n}$ with columns $a_1, \dots, a_n$. Then we define, for $1 \leq p, q < \infty$,

$$
\| A \|_{p, q} \triangleq \left( \sum_{j=1}^{n} \| a_j \|_p^q \right)^{\frac{1}{q}},
$$

where $\| a_j \|_p$ is the $\ell_p$ norm of the $j$-th column of $A$. We define

$$
\| A \|_{p, \infty} \triangleq \max_{1 \leq j \leq n} \| a_j \|_p.
$$

This is equivalent to taking the limit $q \to \infty$. For $q = \infty$, in other words, we compute the $\ell_p$ norm of each column and then take the maximum over all columns.

Note that the norm $\| \cdot \|_{p, q}$ is indeed a norm on the vector space $\mathbb{C}^{m \times n}$.
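A minimal sketch of the row column norm, using the column convention described above; `pq_norm` is a hypothetical helper name, not a library function:

```python
import numpy as np

def pq_norm(A, p, q):
    """(p, q) row column norm: l_p norm of each column, then l_q norm of the results."""
    col_norms = np.linalg.norm(A, ord=p, axis=0)
    return np.linalg.norm(col_norms, ord=q)

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

assert np.isclose(pq_norm(A, 2, 2), np.linalg.norm(A))          # Frobenius norm
assert np.isclose(pq_norm(A, 1, 1), np.abs(A).sum())            # sum norm
assert np.isclose(pq_norm(A, np.inf, np.inf), np.abs(A).max())  # max norm
```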
Theorem 4.153

$$
\| A \|_{p, \infty} = \| A \|_{1 \to p},
$$

where $\| A \|_{1 \to p}$ is the $1 \to p$ operator norm of Theorem 4.147. That is, the $(p, \infty)$ row column norm equals the maximum $\ell_p$ norm of a column.
Theorem 4.154

For $1 \leq p \leq q \leq \infty$:

$$
\| A \|_{q, \infty} \leq \| A \|_{p, \infty},
$$

since $\| v \|_q \leq \| v \|_p$ for every vector $v$.
Theorem 4.155

For any two matrices $A, B$ with a valid product $A B$:

$$
\| A B \|_{p, \infty} \leq \| A \|_{p, \infty} \, \| B \|_{1, \infty}.
$$

Proof. Let $A B$ be the product. From Theorem 4.150, we have

$$
\| A B \|_{1 \to p} \leq \| A \|_{1 \to p} \, \| B \|_{1 \to 1}.
$$

From Theorem 4.153,

$$
\| A B \|_{1 \to p} = \| A B \|_{p, \infty},
\qquad
\| A \|_{1 \to p} = \| A \|_{p, \infty},
$$

and

$$
\| B \|_{1 \to 1} = \| B \|_{1, \infty}.
$$

Thus

$$
\| A B \|_{p, \infty} \leq \| A \|_{p, \infty} \, \| B \|_{1, \infty}.
$$
Theorem 4.156 (Relations between row column norms and $p$-norms)

Relations between row column norms and matrix $p$-norms:

$$
\| A \|_{1, \infty} = \| A \|_1,
\qquad
\| A \|_{2, \infty} = \| A \|_{1 \to 2},
\qquad
\| A \|_{\infty, \infty} = \| A \|_{1 \to \infty},
$$

and

$$
\| A \|_{\infty, \infty} \leq \| A \|_{2, \infty} \leq \| A \|_{1, \infty}.
$$

Proof. The first three are straightforward applications of Theorem 4.153. The next two are applications of Theorem 4.154. See also Table 4.1.
4.12.11. Block Diagonally Dominant Matrices and Generalized Gershgorin Circle Theorem

In [35] the idea of diagonally dominant matrices (see Diagonally Dominant Matrices) has been generalized to block matrices using matrix norms. We consider the specific case with the spectral norm.

Definition 4.149 (Block diagonally dominant matrix)

Let $A \in \mathbb{C}^{n \times n}$ be partitioned in the form

$$
A = \begin{bmatrix}
A_{11} & A_{12} & \dots & A_{1k} \\
A_{21} & A_{22} & \dots & A_{2k} \\
\vdots & \vdots & \ddots & \vdots \\
A_{k1} & A_{k2} & \dots & A_{kk}
\end{bmatrix},
$$

where each of the diagonal submatrices $A_{ii}$ is square and nonsingular. Then $A$ is called block diagonally dominant if

$$
\left( \| A_{ii}^{-1} \|_2 \right)^{-1} \geq \sum_{j \neq i} \| A_{ij} \|_2
$$

holds true for all $1 \leq i \leq k$. If the inequality is satisfied strictly for all $i$, then $A$ is called block strictly diagonally dominant.
Theorem 4.157 (Nonsingularity of block strictly diagonally dominant matrices)

If the partitioned matrix $A$ of Definition 4.149 is block strictly diagonally dominant, then it is nonsingular. For proof see [35].

This leads to the generalized Gershgorin disc theorem.
Theorem 4.158 (Generalized Gershgorin disc theorem)

Let $A \in \mathbb{C}^{n \times n}$ be partitioned in the form

$$
A = \begin{bmatrix}
A_{11} & A_{12} & \dots & A_{1k} \\
A_{21} & A_{22} & \dots & A_{2k} \\
\vdots & \vdots & \ddots & \vdots \\
A_{k1} & A_{k2} & \dots & A_{kk}
\end{bmatrix},
$$

where each of the diagonal submatrices $A_{ii}$ is square. Then each eigenvalue $\lambda$ of $A$ satisfies

$$
\left( \| ( A_{ii} - \lambda I )^{-1} \|_2 \right)^{-1} \leq \sum_{j \neq i} \| A_{ij} \|_2
\quad \text{for some } i \in \{ 1, \dots, k \},
$$

where the left hand side is interpreted as $0$ when $A_{ii} - \lambda I$ is singular. For proof see [35].
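A small numerical illustration of the generalized discs, using the fact that $( \| ( A_{ii} - \lambda I )^{-1} \|_2 )^{-1}$ equals the smallest singular value of $A_{ii} - \lambda I$; the particular blocks below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)

# A 4x4 matrix partitioned into four 2x2 blocks with dominant diagonal blocks.
D1 = np.diag([5.0, 7.0]) + 0.1 * rng.standard_normal((2, 2))
D2 = np.diag([-6.0, -4.0]) + 0.1 * rng.standard_normal((2, 2))
E = 0.2 * rng.standard_normal((2, 2))
F = 0.2 * rng.standard_normal((2, 2))
A = np.block([[D1, E], [F, D2]])

# For each block row: the diagonal block and the spectral norm of the
# (single) off-diagonal block in that row.
blocks = [(D1, np.linalg.norm(E, 2)), (D2, np.linalg.norm(F, 2))]

for lam in np.linalg.eigvals(A):
    # sigma_min(A_ii - lam I) = 1 / ||(A_ii - lam I)^{-1}||_2
    in_some_disc = any(
        np.linalg.svd(Aii - lam * np.eye(2), compute_uv=False)[-1] <= radius + 1e-9
        for Aii, radius in blocks
    )
    assert in_some_disc  # every eigenvalue lies in some generalized disc
```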
Since the quantity $( \| ( A_{ii} - \lambda I )^{-1} \|_2 )^{-1}$ equals the smallest singular value of $A_{ii} - \lambda I$, which for a Hermitian block $A_{ii}$ is the distance of $\lambda$ from the spectrum of $A_{ii}$, we obtain the following corollary.

Corollary 4.29 (Gershgorin discs for Hermitian diagonal blocks)

Let $A$ be partitioned as in Theorem 4.158 with each diagonal block $A_{ii}$ Hermitian. Then every eigenvalue $\lambda$ of $A$ satisfies

$$
| \lambda - \mu | \leq \sum_{j \neq i} \| A_{ij} \|_2
$$

for some eigenvalue $\mu$ of some diagonal block $A_{ii}$.