18.6. Dictionaries#

In this section we review various properties of a dictionary $\mathcal{D}$ which are useful in understanding its behavior and capabilities.

We recall that a dictionary $\mathcal{D}$ consists of a finite number of unit-norm vectors in $\mathbb{C}^N$, called atoms, which span the signal space $\mathbb{C}^N$. The atoms of the dictionary are indexed by an index set $\Omega$; i.e.,

$$\mathcal{D} = \{ d_\omega : \omega \in \Omega \}$$

with $|\Omega| = D$ and $N \leq D$, where $\| d_\omega \|_2 = 1$ for every atom.

A vector $x \in \mathbb{C}^N$ can be represented in terms of the synthesis matrix consisting of the atoms of $\mathcal{D}$ and a coefficient vector $a \in \mathbb{C}^D$ as

$$x = \mathcal{D} a.$$

Note that we are using the same symbol $\mathcal{D}$ to represent the dictionary as a set of atoms as well as the corresponding synthesis matrix. We can write the matrix $\mathcal{D}$ in terms of its columns as

$$\mathcal{D} = \begin{bmatrix} d_1 & \dots & d_D \end{bmatrix}.$$

This shouldn't cause any confusion. When we write the subscript as $d_\omega$ with $\omega \in \Omega$, we are referring to an atom of the dictionary $\mathcal{D}$ indexed by the set $\Omega$; when we write the subscript as $d_i$, we are referring to a column of the corresponding synthesis matrix. In the latter case, $\Omega$ simply means the index set $\{1, \dots, D\}$, and $|\Omega| = D$ still holds.

Often we will work with a subset of atoms in a dictionary. Such a subset will be indexed by an index set $\Lambda \subseteq \Omega$. $\Lambda$ will take the form $\Lambda \subseteq \{\omega_1, \dots, \omega_D\}$ or $\Lambda \subseteq \{1, \dots, D\}$ depending upon whether we are talking about a subset of atoms in the dictionary or a subset of columns of the corresponding synthesis matrix.

Often we will need the notion of a subdictionary [79] described below.

18.6.1. Subdictionaries#

Definition 18.16 (Subdictionary)

A subdictionary is a linearly independent collection of atoms. Let $\Lambda \subseteq \{\omega_1, \dots, \omega_D\}$ be the index set for the atoms in the subdictionary. We denote the subdictionary by $\mathcal{D}_\Lambda$. We also use $\mathcal{D}_\Lambda$ to denote the corresponding matrix with $\Lambda \subseteq \{1, \dots, D\}$.

Remark 18.5 (Rank of subdictionary)

A subdictionary is full rank.

This is obvious since it is a collection of linearly independent atoms.

For a subdictionary we will often write $K = |\Lambda|$ and denote its Gram matrix by $G = \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda$. Sometimes we will also consider $G^{-1}$. $G^{-1}$ has a useful interpretation in terms of the dual vectors for the atoms in $\mathcal{D}_\Lambda$ [78].

Let $\{ d_\lambda \}_{\lambda \in \Lambda}$ denote the atoms in $\mathcal{D}_\Lambda$. Let $\{ c_\lambda \}_{\lambda \in \Lambda}$ be chosen such that

$$\langle d_\lambda, c_\lambda \rangle = 1$$

and

$$\langle d_\lambda, c_\omega \rangle = 0 \quad \text{for } \lambda, \omega \in \Lambda, \; \lambda \neq \omega.$$

Each dual vector $c_\lambda$ is orthogonal to the atoms in the subdictionary at all other indices and is long enough so that its inner product with $d_\lambda$ is one. In this sense, the dual system inverts the subdictionary. In fact, the dual vectors are nothing but the columns of the matrix $B = (\mathcal{D}_\Lambda^\dagger)^H$. Now, a simple calculation shows that

$$B^H B = \mathcal{D}_\Lambda^\dagger (\mathcal{D}_\Lambda^\dagger)^H = (\mathcal{D}_\Lambda^H \mathcal{D}_\Lambda)^{-1} \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda (\mathcal{D}_\Lambda^H \mathcal{D}_\Lambda)^{-1} = (\mathcal{D}_\Lambda^H \mathcal{D}_\Lambda)^{-1} = G^{-1}.$$

Therefore, the inverse Gram matrix lists the inner products between the dual vectors.
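
This identity is easy to check numerically. The following is a minimal sketch, assuming a numpy environment and an arbitrary randomly generated subdictionary: it computes the dual vectors as the columns of $B = (\mathcal{D}_\Lambda^\dagger)^H$ and verifies that $B^H B$ equals $G^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 5

# A random subdictionary: K unit-norm atoms in R^N (full rank with probability 1)
D_sub = rng.standard_normal((N, K))
D_sub /= np.linalg.norm(D_sub, axis=0)

# Dual vectors are the columns of B = (pinv(D_sub))^H
B = np.linalg.pinv(D_sub).conj().T
G = D_sub.conj().T @ D_sub                            # Gram matrix

print(np.allclose(D_sub.conj().T @ B, np.eye(K)))     # duality relations hold
print(np.allclose(B.conj().T @ B, np.linalg.inv(G)))  # B^H B = G^{-1}
```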

Sometimes we will discuss tools which apply to general matrices. We will use the symbol $\Phi$ to represent a general matrix. Whenever the dictionary is an orthonormal basis, we will use the symbol $\Psi$.

18.6.2. Spark#

Definition 18.17 (Spark)

The spark of a given matrix $\Phi$ is the smallest number of columns of $\Phi$ that are linearly dependent. If all columns are linearly independent, then the spark is defined to be the number of columns plus one.

Note that the definition of spark applies to all matrices (wide, tall or square). It is not restricted to the synthesis matrices for a dictionary.

Correspondingly, the spark of a dictionary is defined as the minimum number of atoms which are linearly dependent.

We recall that the rank of a matrix is defined as the maximum number of columns which are linearly independent. The definition of spark bears a remarkable resemblance, yet spark is very hard to compute, as it requires a combinatorial search over all possible subsets of columns of $\Phi$.

Example 18.21 (Spark)

  1. Spark of the $3 \times 3$ identity matrix

    $$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

    is 4 since all columns are linearly independent.

  2. Spark of the $2 \times 4$ matrix

    $$\begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{pmatrix}$$

    is 2 since columns 1 and 3 are linearly dependent.

  3. If a matrix has a column with all zero entries, then the spark of such a matrix is 1. This is a trivial case and we will not consider such matrices in the sequel.

  4. In general, for an $N \times D$ synthesis matrix, $\text{spark}(\mathcal{D}) \in [2, N+1]$.

A naive combinatorial algorithm to calculate the spark of a matrix is given below.
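
A minimal sketch of such a procedure, assuming numpy (the function name `spark` and the tolerance are illustrative choices): it checks subsets of columns of increasing size for linear dependence and is exponential in the number of columns.

```python
import itertools
import numpy as np

def spark(Phi, tol=1e-10):
    """Smallest number of linearly dependent columns of Phi (D + 1 if none exist)."""
    N, D = Phi.shape
    for p in range(1, min(N, D) + 2):
        # check every subset of p columns for linear dependence
        for cols in itertools.combinations(range(D), p):
            if np.linalg.matrix_rank(Phi[:, cols], tol=tol) < p:
                return p
    return D + 1

# the examples above: the identity matrix and the 2x4 matrix
print(spark(np.eye(3)))                       # 4
print(spark(np.array([[1., 0., 1., 0.],
                      [0., 1., 0., 1.]])))    # 2
```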

18.6.2.1. Spark and Nullspace#

Remark 18.6 (Spark and sparsity of null space vectors)

The $\ell_0$-"norm" of any non-zero vector belonging to the null space of a matrix $\Phi$ is greater than or equal to $\text{spark}(\Phi)$:

$$\|x\|_0 \geq \text{spark}(\Phi) \quad \forall \, x \in \mathcal{N}(\Phi), \; x \neq 0.$$

Proof. We proceed as follows:

  1. If $x \in \mathcal{N}(\Phi)$ then $\Phi x = 0$.

  2. Thus the non-zero entries in $x$ pick a set of columns in $\Phi$ which are linearly dependent.

  3. Clearly $\|x\|_0$ indicates the number of columns in this linearly dependent set.

  4. By definition, the spark of $\Phi$ indicates the minimum number of columns which are linearly dependent.

  5. Hence the result:

    $$\|x\|_0 \geq \text{spark}(\Phi) \quad \forall \, x \in \mathcal{N}(\Phi), \; x \neq 0.$$

18.6.2.2. Uniqueness-Spark#

Spark is useful in characterizing the uniqueness of the solution of a $(\mathcal{D}, K)$-EXACT-SPARSE problem (see Definition 18.8). We now present a criterion based on spark which characterizes the uniqueness of a sparse solution to the problem $y = \Phi x$.

Theorem 18.23 (Uniqueness of a sparse solution for an underdetermined system via spark)

Consider a solution $x$ to the underdetermined system $y = \Phi x$. If $x$ obeys

$$\|x\|_0 < \frac{\text{spark}(\Phi)}{2},$$

then it is necessarily the sparsest solution.

Proof. Let $x'$ be some other solution to the problem. Then

$$\Phi x = \Phi x' \implies \Phi (x - x') = 0 \implies (x - x') \in \mathcal{N}(\Phi).$$

Due to Remark 18.6 we have

$$\|x - x'\|_0 \geq \text{spark}(\Phi).$$

Now

$$\|x\|_0 + \|x'\|_0 \geq \|x - x'\|_0 \geq \text{spark}(\Phi).$$

Hence, if $\|x\|_0 < \frac{\text{spark}(\Phi)}{2}$, then we have

$$\|x'\|_0 > \frac{\text{spark}(\Phi)}{2}$$

for any other solution $x'$ of the equation $y = \Phi x$. Thus $x$ is necessarily the sparsest possible solution.

This result is quite useful as it establishes a global optimality criterion for the $(\mathcal{D}, K)$-EXACT-SPARSE problem.

As long as $K < \frac{1}{2}\text{spark}(\Phi)$, this theorem guarantees that the solution of the $(\mathcal{D}, K)$-EXACT-SPARSE problem is unique. This is quite a surprising result for a non-convex combinatorial optimization problem. We are able to guarantee global uniqueness of the solution based on a simple check on its sparsity.

Note that we are only saying that if a sufficiently sparse solution is found then it is unique. We are not claiming that it is possible to find such a solution.

Obviously, the larger the spark, the higher the sparsity levels for which we can guarantee uniqueness. So a natural question is: how large can the spark of a dictionary be? We consider a few examples.

Example 18.22 (Spark of Gaussian dictionaries)

Consider a dictionary $\mathcal{D}$ whose atoms $d_i$ are random vectors independently drawn from the normal distribution.

  1. Since a dictionary requires all its atoms to be unit norm, we divide each of the random vectors by its norm.

  2. We know that with probability 1 any set of $N$ independent Gaussian random vectors is linearly independent.

  3. Also, since $d_i \in \mathbb{R}^N$, any set of $N + 1$ atoms is always linearly dependent.

  4. Thus $\text{spark}(\mathcal{D}) = N + 1$ with probability 1.

Thus, if a solution of the EXACT-SPARSE problem contains $\frac{N}{2}$ or fewer non-zero entries, then it is necessarily unique with probability 1.
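
A sketch of such a construction, assuming numpy (the dimensions are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(42)
N, D = 8, 20

# Gaussian dictionary: i.i.d. normal atoms, normalized to unit norm
Dict = rng.standard_normal((N, D))
Dict /= np.linalg.norm(Dict, axis=0)

# Any N atoms are linearly independent with probability 1, so spark(Dict) = N + 1
print(np.linalg.matrix_rank(Dict[:, :N]))  # N
```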

Example 18.23 (Spark of Dirac Fourier basis)

For

$$\mathcal{D} = \begin{bmatrix} I & F \end{bmatrix} \in \mathbb{C}^{N \times 2N}$$

it can be shown that

$$\text{spark}(\mathcal{D}) = 2\sqrt{N}.$$

In this case, the sparsity level of a unique solution must be less than $\sqrt{N}$.

18.6.3. Coherence#

Finding the spark of a dictionary $\mathcal{D}$ is NP-hard, since it involves considering a combinatorially large number of selections of columns from $\mathcal{D}$. In this section we consider the coherence of a dictionary, which is computationally tractable and quite useful in characterizing the solutions of sparse approximation problems.

Definition 18.18 (Coherence of a dictionary)

The coherence of a dictionary D is defined as the maximum absolute inner product between two distinct atoms in the dictionary:

$$\mu = \max_{j \neq k} | \langle d_{\omega_j}, d_{\omega_k} \rangle | = \max_{j \neq k} | (\mathcal{D}^H \mathcal{D})_{jk} |.$$

If the dictionary consists of two orthonormal bases, then coherence is also known as mutual coherence or proximity; see Definition 18.1.

We note that $d_{\omega_i}$ is the $i$-th column of the synthesis matrix $\mathcal{D}$. Also, $\mathcal{D}^H \mathcal{D}$ is the Gram matrix of $\mathcal{D}$, whose elements are nothing but the inner products of the columns of $\mathcal{D}$.

We note that by definition $\|d_\omega\|_2 = 1$, hence $\mu \leq 1$; and since absolute values are considered, $\mu \geq 0$. Thus, $0 \leq \mu \leq 1$.

For an orthonormal basis Ψ all atoms are orthogonal to each other, hence

$$|\langle \psi_{\omega_j}, \psi_{\omega_k} \rangle| = 0 \text{ whenever } j \neq k.$$

Thus μ=0 for an orthonormal basis.

In the following, we will use the notation |A| to denote a matrix consisting of absolute values of entries in a matrix A; i.e.,

$$|A|_{ij} = |A_{ij}|.$$

The off-diagonal entries of the Gram matrix are captured by the matrix $\mathcal{D}^H \mathcal{D} - I$. Note that all diagonal entries in $\mathcal{D}^H \mathcal{D} - I$ are zero since the atoms of $\mathcal{D}$ are unit norm. Moreover, each of the entries in $|\mathcal{D}^H \mathcal{D} - I|$ is dominated by $\mu(\mathcal{D})$.

The inner product between two atoms, $|\langle d_{\omega_j}, d_{\omega_k} \rangle|$, is a measure of how much they look alike, i.e. how much they are correlated. Coherence simply picks the two atoms which are most alike and returns their correlation. In a way, $\mu$ is quite a blunt measure of the quality of a dictionary, yet it is quite useful.

If a dictionary is uniform in the sense that there is not much variation in $|\langle d_{\omega_j}, d_{\omega_k} \rangle|$, then $\mu$ captures the behavior of the dictionary quite well.

Definition 18.19 (Incoherent dictionary)

We say that a dictionary is incoherent if the coherence of the dictionary is small.

We are looking for dictionaries which are incoherent. In the sequel we will see how incoherence plays a role in sparse approximation.

Example 18.24 (Coherence of two ortho bases)

We established in Theorem 18.3 that coherence of two ortho-bases is bounded by

$$\frac{1}{\sqrt{N}} \leq \mu \leq 1.$$

In particular, we showed in Theorem 18.4 that the coherence of the Dirac-Fourier basis is $\frac{1}{\sqrt{N}}$.

Example 18.25 (Coherence: Multi-ONB dictionary)

A dictionary of concatenated orthonormal bases is called a multi-ONB. For some $N$, it is possible to build a multi-ONB which contains $\sqrt{N}$ or even $\sqrt{N} + 1$ bases yet retains the minimal possible coherence $\mu = \frac{1}{\sqrt{N}}$.

Theorem 18.24 (Coherence lower bound)

A lower bound on the coherence of a general dictionary is given by

$$\mu \geq \sqrt{\frac{D - N}{N(D - 1)}}.$$

Definition 18.20 (Grassmannian frame)

If each atomic inner product meets this bound, the dictionary is called an optimal Grassmannian frame.

The definition of coherence can be extended to arbitrary matrices $\Phi \in \mathbb{C}^{N \times D}$.

Definition 18.21 (Coherence for arbitrary matrices)

The coherence of a matrix $\Phi \in \mathbb{C}^{N \times D}$ is defined as the maximum absolute normalized inner product between two distinct columns of the matrix. Let

$$\Phi = \begin{bmatrix} \phi_1 & \phi_2 & \dots & \phi_D \end{bmatrix}.$$

Then the coherence of $\Phi$ is given by

$$\mu(\Phi) = \max_{j \neq k} \frac{|\langle \phi_j, \phi_k \rangle|}{\|\phi_j\|_2 \|\phi_k\|_2}. \tag{18.32}$$

It is assumed that none of the columns in Φ is a zero vector.
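
Computing (18.32) only requires normalizing the columns and scanning the off-diagonal entries of the Gram matrix. A minimal sketch, assuming numpy (the helper name `coherence` is an illustrative choice):

```python
import numpy as np

def coherence(Phi):
    """Coherence of a matrix with non-zero columns, as in (18.32)."""
    Phi = Phi / np.linalg.norm(Phi, axis=0)   # normalize the columns
    G = np.abs(Phi.conj().T @ Phi)            # absolute normalized inner products
    np.fill_diagonal(G, 0.0)                  # ignore the unit diagonal
    return G.max()

rng = np.random.default_rng(7)
print(coherence(rng.standard_normal((8, 20))))  # some value in (0, 1)
print(coherence(np.eye(5)))                     # 0.0 for an orthonormal basis
```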

18.6.3.1. Lower Bounds for Spark#

Coherence of a matrix is easy to compute. More interestingly it also provides a lower bound on the spark of a matrix.

Theorem 18.25 (Lower bound on spark in terms of coherence)

For any matrix $\Phi \in \mathbb{C}^{N \times D}$ (with non-zero columns) the following relationship holds:

$$\text{spark}(\Phi) \geq 1 + \frac{1}{\mu(\Phi)}.$$

Proof. We note that scaling of a column of Φ doesn’t change either the spark or coherence of Φ. Therefore, we assume that the columns of Φ are normalized.

  1. We construct the Gram matrix of $\Phi$ given by $G = \Phi^H \Phi$.

  2. We note that

    $$G_{kk} = 1 \quad \forall \, 1 \leq k \leq D$$

    since each column of $\Phi$ is unit norm.

  3. Also

    $$|G_{kj}| \leq \mu(\Phi) \quad \forall \, 1 \leq k, j \leq D, \; k \neq j.$$

  4. Consider any $p$ columns from $\Phi$ and construct their Gram matrix.

  5. This is nothing but a $p \times p$ principal submatrix of $G$ (up to a permutation of rows and columns).

  6. From the Gershgorin disc theorem, if this submatrix is diagonally dominant, i.e. if

    $$\sum_{j \neq i} |G_{ij}| < |G_{ii}| \quad \forall \, i,$$

    then this submatrix of $G$ is positive definite and so the corresponding $p$ columns of $\Phi$ are linearly independent.

  7. But

    $$|G_{ii}| = 1$$

    and

    $$\sum_{j \neq i} |G_{ij}| \leq (p - 1) \mu(\Phi)$$

    for the submatrix under consideration.

  8. Hence, for $p$ columns to be linearly independent, the following condition is sufficient:

    $$(p - 1) \mu(\Phi) < 1.$$

  9. Thus if

    $$p < 1 + \frac{1}{\mu(\Phi)},$$

    then every set of $p$ columns from $\Phi$ is linearly independent.

  10. Hence, the smallest possible set of linearly dependent columns must satisfy

    $$p \geq 1 + \frac{1}{\mu(\Phi)}.$$

  11. This establishes the lower bound:

    $$\text{spark}(\Phi) \geq 1 + \frac{1}{\mu(\Phi)}.$$

This bound on spark doesn’t make any assumptions on the structure of the dictionary. In fact, imposing additional structure on the dictionary can give better bounds. Let us look at an example for a two ortho-basis [28].

Theorem 18.26 (Lower bound on spark for two ortho bases)

Let $\mathcal{D}$ be a two-ortho basis. Then

$$\text{spark}(\mathcal{D}) \geq \frac{2}{\mu(\mathcal{D})}.$$

Proof. From Theorem 18.6 we know that for any non-zero vector $v \in \mathcal{N}(\mathcal{D})$,

$$\|v\|_0 \geq \frac{2}{\mu(\mathcal{D})}.$$

But

$$\text{spark}(\mathcal{D}) = \min_{v \in \mathcal{N}(\mathcal{D}), \, v \neq 0} \|v\|_0.$$

Thus

$$\text{spark}(\mathcal{D}) \geq \frac{2}{\mu(\mathcal{D})}.$$

For maximally incoherent two orthonormal bases, we know that $\mu = \frac{1}{\sqrt{N}}$. A perfect example is the pair of Dirac and Fourier bases. In this case, $\text{spark}(\mathcal{D}) \geq 2\sqrt{N}$.
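
These bounds are easy to compare numerically for the Dirac-Fourier pair. A sketch assuming numpy: with $N = 16$ the coherence is $1/\sqrt{N} = 0.25$, the general bound of Theorem 18.25 gives $\text{spark} \geq 1 + \sqrt{N} = 5$, while the two-ortho bound gives $\text{spark} \geq 2\sqrt{N} = 8$.

```python
import numpy as np

N = 16
F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # orthonormal Fourier basis
DF = np.hstack([np.eye(N), F])           # Dirac-Fourier dictionary

G = np.abs(DF.conj().T @ DF)
np.fill_diagonal(G, 0.0)
mu = G.max()                             # coherence, approximately 1/sqrt(N)

print(round(mu, 4))                      # 0.25
print(round(1 + 1 / mu, 4))              # general lower bound on spark: 5.0
print(round(2 / mu, 4))                  # two-ortho lower bound on spark: 8.0
```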

18.6.3.2. Uniqueness-Coherence#

We can now establish a uniqueness condition for a sparse solution of $y = \Phi x$.

Theorem 18.27 (Uniqueness of a sparse solution of an underdetermined system via coherence)

Consider a solution $x$ to the under-determined system $y = \Phi x$. If $x$ obeys

$$\|x\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(\Phi)}\right),$$

then it is necessarily the sparsest solution.

Proof. This is a straightforward application of Theorem 18.23 and Theorem 18.25.

It is interesting to compare the two uniqueness theorems: Theorem 18.23 and Theorem 18.27.

Theorem 18.23 uses spark, is sharp and is far more powerful than Theorem 18.27.

Coherence can never be smaller than $\frac{1}{\sqrt{N}}$, therefore the bound on $\|x\|_0$ in Theorem 18.27 can never be larger than $\frac{\sqrt{N} + 1}{2}$.

However, spark can easily be as large as $N$, and then the bound on $\|x\|_0$ can be as large as $\frac{N}{2}$.

We recall from Theorem 18.8 that the bound on the sparsity level of the sparsest solution in a two-ortho basis $H = [\Psi \; X]$ is given by

$$\|x\|_0 < \frac{1}{\mu(H)},$$

which is larger than the bound in Theorem 18.27 for general dictionaries by roughly a factor of 2.

Thus, we note that coherence gives a weaker bound than spark for supportable sparsity levels of unique solutions. The advantage that coherence has is that it is easily computable and doesn’t require any special structure on the dictionary (two ortho basis has a special structure).

18.6.3.3. Singular Values of Subdictionaries#

Theorem 18.28 (Singular values of subdictionaries and coherence)

Let $\mathcal{D}$ be a dictionary and $\mathcal{D}_\Lambda$ a subdictionary. Let $\mu$ be the coherence of $\mathcal{D}$ and let $K = |\Lambda|$. Then the eigenvalues of $G = \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda$ satisfy

$$1 - (K - 1)\mu \leq \lambda \leq 1 + (K - 1)\mu.$$

Moreover, the singular values of the subdictionary $\mathcal{D}_\Lambda$ satisfy

$$\sqrt{1 - (K - 1)\mu} \leq \sigma(\mathcal{D}_\Lambda) \leq \sqrt{1 + (K - 1)\mu}.$$

Proof. We recall from Gershgorin's circle theorem that for any square matrix $A \in \mathbb{C}^{K \times K}$, every eigenvalue $\lambda$ of $A$ satisfies

$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}| \text{ for some } i \in \{1, \dots, K\}.$$

  1. Now consider the matrix $G = \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda$ with diagonal elements equal to 1 and off-diagonal elements bounded by the coherence $\mu$.

  2. Then

    $$|\lambda - 1| \leq \sum_{j \neq i} |G_{ij}| \leq \sum_{j \neq i} \mu = (K - 1)\mu.$$

  3. Thus,

    $$-(K - 1)\mu \leq \lambda - 1 \leq (K - 1)\mu \implies 1 - (K - 1)\mu \leq \lambda \leq 1 + (K - 1)\mu.$$

  4. This gives us a lower bound on the smallest eigenvalue:

    $$\lambda_{\min}(G) \geq 1 - (K - 1)\mu.$$

  5. Since $G$ is positive definite ($\mathcal{D}_\Lambda$ is full rank), its eigenvalues are positive. Thus, the above lower bound is useful only if

    $$1 - (K - 1)\mu > 0 \iff (K - 1)\mu < 1 \iff \mu < \frac{1}{K - 1}.$$

  6. We also get an upper bound on the eigenvalues of $G$:

    $$\lambda_{\max}(G) \leq 1 + (K - 1)\mu.$$

  7. The bounds on the singular values of $\mathcal{D}_\Lambda$ follow by taking square roots, since the eigenvalues of $G$ are the squared singular values of $\mathcal{D}_\Lambda$.
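
A quick numerical check of these bounds, as a sketch assuming numpy and using a random subdictionary of the Dirac-Fourier dictionary (so that $\mu = 1/\sqrt{N}$):

```python
import numpy as np

N, K = 16, 4
mu = 1 / np.sqrt(N)                        # coherence of the Dirac-Fourier dictionary

F = np.fft.fft(np.eye(N)) / np.sqrt(N)
Dict = np.hstack([np.eye(N), F])

rng = np.random.default_rng(3)
D_sub = Dict[:, rng.choice(2 * N, size=K, replace=False)]   # random subdictionary
G = D_sub.conj().T @ D_sub

eigs = np.linalg.eigvalsh(G)               # real eigenvalues of the Hermitian Gram matrix
print(eigs.min() >= 1 - (K - 1) * mu - 1e-12)   # True
print(eigs.max() <= 1 + (K - 1) * mu + 1e-12)   # True
```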

18.6.3.4. Embeddings using Subdictionaries#

Theorem 18.29 (Norm bounds for embeddings with real dictionaries)

Let $\mathcal{D}$ be a real dictionary and $\mathcal{D}_\Lambda$ a subdictionary with $K = |\Lambda|$. Let $\mu$ be the coherence of $\mathcal{D}$ and let $v \in \mathbb{R}^K$ be an arbitrary vector. Then

$$|v|^T \left[ I - \mu (\mathbf{1} - I) \right] |v| \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq |v|^T \left[ I + \mu (\mathbf{1} - I) \right] |v|,$$

where $\mathbf{1}$ is a $K \times K$ matrix of all ones. Moreover,

$$(1 - (K - 1)\mu) \|v\|_2^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq (1 + (K - 1)\mu) \|v\|_2^2.$$

Proof. We can see that

$$\| \mathcal{D}_\Lambda v \|_2^2 = v^T \mathcal{D}_\Lambda^T \mathcal{D}_\Lambda v.$$

  1. Expanding, we have

    $$v^T \mathcal{D}_\Lambda^T \mathcal{D}_\Lambda v = \sum_{i=1}^{K} \sum_{j=1}^{K} v_i \, d_{\lambda_i}^T d_{\lambda_j} \, v_j.$$

  2. The terms on the R.H.S. for $i = j$ are given by

    $$v_i \, d_{\lambda_i}^T d_{\lambda_i} \, v_i = |v_i|^2.$$

  3. Summing over $i = 1, \dots, K$, we get

    $$\sum_{i=1}^{K} |v_i|^2 = \|v\|_2^2 = v^T v = |v|^T |v| = |v|^T I |v|.$$

  4. We are now left with the $K^2 - K$ off-diagonal terms.

  5. Each of these terms is bounded by

    $$-\mu |v_i| |v_j| \leq v_i \, d_{\lambda_i}^T d_{\lambda_j} \, v_j \leq \mu |v_i| |v_j|.$$

  6. Summing over the $K^2 - K$ off-diagonal terms, we get

    $$\sum_{i \neq j} |v_i| |v_j| = \sum_{i, j} |v_i| |v_j| - \sum_{i = j} |v_i| |v_j| = |v|^T (\mathbf{1} - I) |v|.$$

  7. Thus,

    $$-\mu \, |v|^T (\mathbf{1} - I) |v| \leq \sum_{i \neq j} v_i \, d_{\lambda_i}^T d_{\lambda_j} \, v_j \leq \mu \, |v|^T (\mathbf{1} - I) |v|.$$

  8. Thus,

    $$|v|^T I |v| - \mu \, |v|^T (\mathbf{1} - I) |v| \leq v^T \mathcal{D}_\Lambda^T \mathcal{D}_\Lambda v \leq |v|^T I |v| + \mu \, |v|^T (\mathbf{1} - I) |v|.$$

  9. We get the result by a slight reordering of terms:

    $$|v|^T \left[ I - \mu (\mathbf{1} - I) \right] |v| \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq |v|^T \left[ I + \mu (\mathbf{1} - I) \right] |v|.$$

  10. We note that, due to Theorem 18.15,

    $$|v|^T \mathbf{1} |v| = \|v\|_1^2.$$

  11. Thus, the inequalities can be written as

    $$(1 + \mu) \|v\|_2^2 - \mu \|v\|_1^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq (1 - \mu) \|v\|_2^2 + \mu \|v\|_1^2.$$

  12. Alternatively,

    $$\|v\|_2^2 - \mu \left( \|v\|_1^2 - \|v\|_2^2 \right) \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq \|v\|_2^2 + \mu \left( \|v\|_1^2 - \|v\|_2^2 \right).$$

  13. Finally, due to Theorem 18.11,

    $$\|v\|_1^2 \leq K \|v\|_2^2 \implies \|v\|_1^2 - \|v\|_2^2 \leq (K - 1) \|v\|_2^2.$$

  14. This gives us

    $$(1 - (K - 1)\mu) \|v\|_2^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq (1 + (K - 1)\mu) \|v\|_2^2.$$

We now present the above theorem for the complex case. The proof is based on singular values. This proof is simpler and more general than the one presented above.

Theorem 18.30 (Norm bounds for embeddings with complex dictionaries)

Let $\mathcal{D}$ be a dictionary and $\mathcal{D}_\Lambda$ a subdictionary with $K = |\Lambda|$. Let $\mu$ be the coherence of $\mathcal{D}$ and let $v \in \mathbb{C}^K$ be an arbitrary vector. Then

$$(1 - (K - 1)\mu) \|v\|_2^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq (1 + (K - 1)\mu) \|v\|_2^2.$$

Proof. Recall that

$$\sigma_{\min}^2(\mathcal{D}_\Lambda) \|v\|_2^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq \sigma_{\max}^2(\mathcal{D}_\Lambda) \|v\|_2^2.$$

Theorem 18.28 tells us that

$$1 - (K - 1)\mu \leq \sigma^2(\mathcal{D}_\Lambda) \leq 1 + (K - 1)\mu.$$

Thus,

$$\sigma_{\min}^2(\mathcal{D}_\Lambda) \|v\|_2^2 \geq (1 - (K - 1)\mu) \|v\|_2^2$$

and

$$\sigma_{\max}^2(\mathcal{D}_\Lambda) \|v\|_2^2 \leq (1 + (K - 1)\mu) \|v\|_2^2.$$

This gives us the result

$$(1 - (K - 1)\mu) \|v\|_2^2 \leq \| \mathcal{D}_\Lambda v \|_2^2 \leq (1 + (K - 1)\mu) \|v\|_2^2.$$
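
The embedding bounds can also be checked directly, as a sketch assuming numpy: apply a random subdictionary of the Dirac-Fourier dictionary to a random complex vector and compare $\|\mathcal{D}_\Lambda v\|_2^2$ with the two bounds.

```python
import numpy as np

N, K = 16, 4
mu = 1 / np.sqrt(N)

F = np.fft.fft(np.eye(N)) / np.sqrt(N)
Dict = np.hstack([np.eye(N), F])

rng = np.random.default_rng(5)
D_sub = Dict[:, rng.choice(2 * N, size=K, replace=False)]

v = rng.standard_normal(K) + 1j * rng.standard_normal(K)
embedded = np.linalg.norm(D_sub @ v) ** 2
norm_sq = np.linalg.norm(v) ** 2

print((1 - (K - 1) * mu) * norm_sq <= embedded <= (1 + (K - 1) * mu) * norm_sq)  # True
```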

18.6.4. Babel Function#

Recalling the definition of coherence, we note that it reflects only the most extreme correlation between atoms of the dictionary. If most of the inner products are small compared to one dominating inner product, then the value of coherence is highly misleading.

In [77], Tropp introduced the Babel function, which measures the maximum total coherence between a fixed atom and a collection of other atoms. The Babel function quantifies the extent to which the atoms of a dictionary are "speaking the same language".

Definition 18.22 (Babel function)

The Babel function for a dictionary $\mathcal{D}$ is defined by

$$\mu_1(p) \triangleq \max_{|\Lambda| = p} \max_{\psi} \sum_{\lambda \in \Lambda} |\langle \psi, d_\lambda \rangle|, \tag{18.33}$$

where the vector $\psi$ ranges over the atoms indexed by $\Omega \setminus \Lambda$. We define

$$\mu_1(0) = 0$$

for sparsity level $p = 0$.

Let us dig deeper into what is going on here. For each value of $p$ we consider all $\binom{D}{p}$ possible selections of $p$ atoms from $\mathcal{D}$.

Let the atoms in one such selection be identified by an index set $\Lambda \subset \Omega$.

All other atoms are indexed by the index set $\Gamma = \Omega \setminus \Lambda$. Let

$$\Psi = \{ \psi_\gamma \mid \gamma \in \Gamma \}$$

denote the atoms indexed by $\Gamma$. We pick a vector $\psi \in \Psi$ and compute its inner products with all the atoms indexed by $\Lambda$. We then compute the sum of the absolute values of these inner products over all $\{ d_\lambda : \lambda \in \Lambda \}$.

We repeat this for every $\psi \in \Psi$ and take the maximum of this sum over all $\psi$.

Finally, we compute the maximum over all possible selections of $p$ atoms. This number is the value of the Babel function for sparsity level $p$.

We first make a few observations about the properties of the Babel function. The Babel function is a generalization of coherence.

Remark 18.7 (Babel function for p=1)

For $p = 1$ we observe that

$$\mu_1(1) = \mu(\mathcal{D}),$$

the coherence of $\mathcal{D}$.

Theorem 18.31 (Monotonicity of babel function)

$\mu_1$ is a non-decreasing function of $p$.

Proof. This is easy to see since the sum

$$\sum_{\lambda \in \Lambda} |\langle \psi, d_\lambda \rangle|$$

cannot decrease as $p = |\Lambda|$ increases. The following argument provides the details.

  1. For some value of $p$, let $\Lambda_p$ and $\psi_p$ denote the index set and vector for which the maximum in (18.33) is attained.

  2. Now pick some atom which is neither $\psi_p$ nor indexed by $\Lambda_p$, and add its index to $\Lambda_p$ to form $\Lambda_{p+1}$.

  3. Note that $\Lambda_{p+1}$ and $\psi_p$ might not be the maximizers of $\mu_1$ for sparsity level $p + 1$ in (18.33).

  4. Clearly

    $$\sum_{\lambda \in \Lambda_{p+1}} |\langle \psi_p, d_\lambda \rangle| \geq \sum_{\lambda \in \Lambda_p} |\langle \psi_p, d_\lambda \rangle|.$$

  5. Hence $\mu_1(p + 1)$ cannot be less than $\mu_1(p)$.

Theorem 18.32 (An upper bound for Babel function)

The Babel function is upper bounded by coherence as per

$$\mu_1(p) \leq p \, \mu(\mathcal{D}).$$

Proof. Note that

$$\sum_{\lambda \in \Lambda} |\langle \psi, d_\lambda \rangle| \leq p \, \mu(\mathcal{D}).$$

This leads to

$$\mu_1(p) = \max_{|\Lambda| = p} \max_{\psi} \sum_{\lambda \in \Lambda} |\langle \psi, d_\lambda \rangle| \leq \max_{|\Lambda| = p} \max_{\psi} \left( p \, \mu(\mathcal{D}) \right) = p \, \mu(\mathcal{D}).$$

18.6.4.1. Computation of Babel Function#

It might seem at first that computation of the Babel function is combinatorial and hence prohibitively expensive. But this is not true.

Example 18.26 (Procedure for computing the Babel function)

We will demonstrate this through an example in this section. Our example synthesis matrix will be

$$\mathcal{D} = \begin{bmatrix}
0.5 & 0 & 0 & 0.6533 & 1 & 0.5 & 0.2706 & 0 \\
0.5 & 1 & 0 & 0.2706 & 0 & -0.5 & -0.6533 & 0 \\
0.5 & 0 & 1 & -0.2706 & 0 & -0.5 & 0.6533 & 0 \\
0.5 & 0 & 0 & -0.6533 & 0 & 0.5 & -0.2706 & 1
\end{bmatrix}.$$

From the synthesis matrix $\mathcal{D}$ we first construct its Gram matrix, given by

$$G = \mathcal{D}^H \mathcal{D}.$$

We then take the absolute value of each entry in $G$ to construct $|G|$. For the running example,

$$|G| = \begin{bmatrix}
1 & 0.5 & 0.5 & 0 & 0.5 & 0 & 0 & 0.5 \\
0.5 & 1 & 0 & 0.2706 & 0 & 0.5 & 0.6533 & 0 \\
0.5 & 0 & 1 & 0.2706 & 0 & 0.5 & 0.6533 & 0 \\
0 & 0.2706 & 0.2706 & 1 & 0.6533 & 0 & 0 & 0.6533 \\
0.5 & 0 & 0 & 0.6533 & 1 & 0.5 & 0.2706 & 0 \\
0 & 0.5 & 0.5 & 0 & 0.5 & 1 & 0 & 0.5 \\
0 & 0.6533 & 0.6533 & 0 & 0.2706 & 0 & 1 & 0.2706 \\
0.5 & 0 & 0 & 0.6533 & 0 & 0.5 & 0.2706 & 1
\end{bmatrix}.$$

We now sort every row in descending order to obtain a new matrix $G'$:

$$G' = \begin{bmatrix}
1 & 0.5 & 0.5 & 0.5 & 0.5 & 0 & 0 & 0 \\
1 & 0.6533 & 0.5 & 0.5 & 0.2706 & 0 & 0 & 0 \\
1 & 0.6533 & 0.5 & 0.5 & 0.2706 & 0 & 0 & 0 \\
1 & 0.6533 & 0.6533 & 0.2706 & 0.2706 & 0 & 0 & 0 \\
1 & 0.6533 & 0.5 & 0.5 & 0.2706 & 0 & 0 & 0 \\
1 & 0.5 & 0.5 & 0.5 & 0.5 & 0 & 0 & 0 \\
1 & 0.6533 & 0.6533 & 0.2706 & 0.2706 & 0 & 0 & 0 \\
1 & 0.6533 & 0.5 & 0.5 & 0.2706 & 0 & 0 & 0
\end{bmatrix}.$$

The first entry in each row is now 1. This corresponds to $\langle d_i, d_i \rangle$, which does not appear in the calculation of $\mu_1(p)$. Hence we disregard the whole first column.

Now look at column 2 of $G'$. In the $i$-th row it is nothing but

$$\max_{j \neq i} |\langle d_i, d_j \rangle|.$$

Thus,

$$\mu(\mathcal{D}) = \mu_1(1) = \max_{1 \leq j \leq D} G'_{j, 2};$$

i.e. the coherence is given by the maximum of the 2nd column of $G'$. In the running example,

$$\mu(\mathcal{D}) = \mu_1(1) = 0.6533.$$

Looking carefully, we can note that for $\psi = d_i$ the maximum value of the sum

$$\sum_{\lambda \in \Lambda} |\langle \psi, d_\lambda \rangle|$$

over all $\Lambda$ with $|\Lambda| = p$ is given by the sum of the entries from the 2nd to the $(p+1)$-th column in the $i$-th row of $G'$. Thus,

$$\mu_1(p) = \max_{1 \leq i \leq D} \sum_{j = 2}^{p + 1} G'_{ij}.$$

For the running example, the Babel function values are given by

$$\begin{pmatrix} 0.6533 & 1.3066 & 1.6533 & 2 & 2 & 2 & 2 \end{pmatrix}.$$

We see that the Babel function stops increasing after $p = 4$. In fact, $\mathcal{D}$ is constructed by shuffling the columns of two orthonormal bases; hence many of the inner products in $G$ are 0.
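
The sorting procedure above translates directly into a few lines of code. A sketch assuming numpy (the helper name `babel` is an illustrative choice); the dictionary below concatenates the $4 \times 4$ identity with the orthonormal 4-point DCT-II basis, which reproduces the Babel values of the running example (column order does not affect the result):

```python
import numpy as np

def babel(Dict):
    """Babel function values mu_1(1), ..., mu_1(D - 1) for unit-norm atoms."""
    G = np.abs(Dict.conj().T @ Dict)
    G_sorted = -np.sort(-G, axis=1)          # sort each row in descending order
    # skip column 0 (the unit diagonal); mu_1(p) is the largest sum of columns 1..p
    return np.cumsum(G_sorted[:, 1:], axis=1).max(axis=0)

# orthonormal 4-point DCT-II basis (atoms are the columns of C.T)
k = np.arange(4)
C = np.sqrt(2 / 4) * np.cos(np.pi * np.outer(k, 2 * k + 1) / 8)
C[0, :] /= np.sqrt(2)

Dict = np.hstack([np.eye(4), C.T])
print(np.round(babel(Dict), 4))
# [0.6533 1.3066 1.6533 2.     2.     2.     2.    ]
```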

18.6.4.2. Babel Function and Spark#

We first note that the Babel function tells us something about the linear independence of the columns of $\mathcal{D}$.

Theorem 18.33 (Linear independence of atoms and Babel function)

Let $\mu_1$ be the Babel function for a dictionary $\mathcal{D}$. If

$$\mu_1(p) < 1,$$

then all selections of $p + 1$ columns from $\mathcal{D}$ are linearly independent.

Proof. The argument parallels the proof of Theorem 18.25. Consider any set of $p + 1$ columns from $\mathcal{D}$ and form their Gram matrix $G$.

  1. Each diagonal entry of $G$ equals 1 since the atoms are unit norm.

  2. In each row, the sum of the absolute values of the $p$ off-diagonal entries is the total correlation between one atom and the $p$ remaining atoms of the selection; by the definition of the Babel function, this sum is at most $\mu_1(p)$.

  3. If $\mu_1(p) < 1$, every such row sum is strictly less than the diagonal entry, i.e. $G$ is strictly diagonally dominant and hence, by the Gershgorin disc theorem, positive definite.

Thus, if $\mu_1(p) < 1$, then all selections of $p + 1$ columns from $\mathcal{D}$ are linearly independent.

This leads us to a lower bound on spark from Babel function.

Lemma 18.1 (Lower bound on spark based on Babel function)

A lower bound on the spark of a dictionary $\mathcal{D}$ is given by

$$\text{spark}(\mathcal{D}) \geq \min_{1 \leq p \leq N} \{ p \mid \mu_1(p - 1) \geq 1 \}.$$

Proof. Let $p$ be the smallest index for which $\mu_1(p - 1) \geq 1$. Then for all $j \leq p - 2$ we have $\mu_1(j) < 1$. Thus all sets of $p - 1$ or fewer columns from $\mathcal{D}$ are linearly independent (using Theorem 18.33), so $\text{spark}(\mathcal{D}) \geq p$.

Since $\mu_1(p - 1) \geq 1$, we cannot say definitively whether some set of $p$ columns from $\mathcal{D}$ is linearly dependent or not. This establishes the lower bound on the spark.

An earlier version of this result also appeared in [28] theorem 6.
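
The lemma becomes a one-line scan over the Babel values. For the running example of the previous section, $\mu_1(1) < 1$ while $\mu_1(2) \geq 1$, so the bound gives $\text{spark}(\mathcal{D}) \geq 3$. A sketch assuming numpy, repeating the hypothetical `babel` helper and example dictionary from the earlier snippet so that it is self-contained:

```python
import numpy as np

def babel(Dict):
    G = np.abs(Dict.conj().T @ Dict)
    G_sorted = -np.sort(-G, axis=1)
    return np.cumsum(G_sorted[:, 1:], axis=1).max(axis=0)

# running example: identity + orthonormal 4-point DCT-II basis
k = np.arange(4)
C = np.sqrt(2 / 4) * np.cos(np.pi * np.outer(k, 2 * k + 1) / 8)
C[0, :] /= np.sqrt(2)
Dict = np.hstack([np.eye(4), C.T])

mu1 = babel(Dict)                      # mu_1(1), ..., mu_1(D - 1)
# smallest p with mu_1(p - 1) >= 1; mu_1(0) = 0, so the scan starts at p = 2
p = next(p for p in range(2, len(mu1) + 2) if mu1[p - 2] >= 1)
print(p)                               # 3, hence spark(Dict) >= 3
```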

18.6.4.3. Babel Function and Singular Values#

Theorem 18.34 (Subdictionary singular value bounds from Babel function)

Let $\mathcal{D}$ be a dictionary and $\Lambda$ an index set with $|\Lambda| = K$. The singular values $\sigma$ of $\mathcal{D}_\Lambda$ satisfy

$$1 - \mu_1(K - 1) \leq \sigma^2 \leq 1 + \mu_1(K - 1).$$

Proof. Consider the Gram matrix

$$G = \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda.$$

$G$ is a $K \times K$ square matrix.

Also let

$$\Lambda = \{ \lambda_1, \lambda_2, \dots, \lambda_K \}$$

so that

$$\mathcal{D}_\Lambda = \begin{bmatrix} d_{\lambda_1} & d_{\lambda_2} & \dots & d_{\lambda_K} \end{bmatrix}.$$

The Gershgorin disc theorem states that every eigenvalue of $G$ lies in one of the $K$ discs

$$\Delta_k = \left\{ z \;\middle|\; |z - G_{kk}| \leq \sum_{j \neq k} |G_{jk}| \right\}.$$

Since the atoms are unit norm, $G_{kk} = 1$.

Also, we note that

$$\sum_{j \neq k} |G_{jk}| = \sum_{j \neq k} |\langle d_{\lambda_j}, d_{\lambda_k} \rangle| \leq \mu_1(K - 1)$$

since there are $K - 1$ terms in the sum and $\mu_1(K - 1)$ is an upper bound on all such sums.

Thus, if $z$ is an eigenvalue of $G$, then we have

$$|z - 1| \leq \mu_1(K - 1) \implies -\mu_1(K - 1) \leq z - 1 \leq \mu_1(K - 1) \implies 1 - \mu_1(K - 1) \leq z \leq 1 + \mu_1(K - 1).$$

This is consistent, since $G$ is Hermitian and positive semi-definite, so the eigenvalues of $G$ are real and non-negative.

But the eigenvalues of $G$ are nothing but the squared singular values of $\mathcal{D}_\Lambda$. Thus we get

$$1 - \mu_1(K - 1) \leq \sigma^2 \leq 1 + \mu_1(K - 1).$$

Corollary 18.2

Let $\mathcal{D}$ be a dictionary and $\Lambda$ an index set with $|\Lambda| = K$. If $\mu_1(K - 1) < 1$, then the squared singular values of $\mathcal{D}_\Lambda$ are at least $1 - \mu_1(K - 1)$.

Proof. From previous theorem we have

1μ1(K1)σ21+μ1(K1).

Since the singular values are always non-negative, the lower bound is useful only when μ1(K1)<1. When it holds we have

σ(DΛ)1μ1(K1).

Theorem 18.35 (Uncertainty principle : Babel function)

Let $\mu_1(K - 1) < 1$. If a signal can be written as a linear combination of $k$ atoms, then any other exact representation of the signal requires at least $K - k + 1$ atoms.

Proof. If $\mu_1(K - 1) < 1$, then the singular values of any sub-matrix of $K$ atoms are non-zero; i.e. any $K$ atoms are linearly independent. Thus, the minimum number of atoms required to form a linearly dependent set is $K + 1$. Let the number of atoms in some other exact representation of the signal be $l$. The difference of the two representations is a vanishing non-trivial linear combination of at most $k + l$ atoms, which must therefore be a linearly dependent set. Then

$$k + l \geq K + 1 \iff l \geq K - k + 1.$$

18.6.4.4. Babel Function and Gram Matrix of Subdictionaries#

Let $\Lambda$ index a subdictionary and let $G = \mathcal{D}_\Lambda^H \mathcal{D}_\Lambda$ denote the Gram matrix of the subdictionary $\mathcal{D}_\Lambda$. Assume $K = |\Lambda|$.

Theorem 18.36 (A bound on the norms of Gram matrix)

$$\|G\|_{\infty} = \|G\|_{1} \leq 1 + \mu_1(K - 1),$$

where $\|\cdot\|_{\infty}$ and $\|\cdot\|_{1}$ denote the operator norms induced by the $\ell_\infty$ and $\ell_1$ vector norms.

Proof. Since $G$ is Hermitian, the two operator norms are equal:

$$\|G\|_{\infty} = \|G^H\|_{1} = \|G\|_{1}.$$

  1. Each row of $G$ consists of a diagonal entry equal to 1 and $K - 1$ off-diagonal entries.

  2. The absolute sum of all the off-diagonal entries in a row is upper bounded by $\mu_1(K - 1)$.

  3. Thus, the absolute sum of all the entries in a row is upper bounded by $1 + \mu_1(K - 1)$.

  4. $\|G\|_{\infty}$ is nothing but the maximum $\ell_1$ norm of the rows of $G$.

  5. Hence

    $$\|G\|_{\infty} \leq 1 + \mu_1(K - 1).$$

Theorem 18.37 (A bound on the norms of inverse Gram matrix)

Suppose that μ1(K1)<1. Then

G1=G1111μ1(K1).

Proof. Since $G$ (and hence $G^{-1}$) is Hermitian, the two operator norms are equal:

$$\|G^{-1}\|_{\infty} = \|G^{-1}\|_{1}.$$

  1. We can write $G$ as $G = I + A$, where $A$ consists of the off-diagonal entries of $G$.

  2. Recall that since the atoms are unit norm, the diagonal entries of $G$ are 1.

  3. Each row of $A$ lists the inner products between a fixed atom and $K - 1$ other atoms (with 0 at the diagonal entry).

  4. Therefore

    $$\|A\|_{\infty} \leq \mu_1(K - 1),$$

    since the $\ell_1$ norm of any row is upper bounded by the Babel number $\mu_1(K - 1)$.

  5. Now $G^{-1}$ can be written as a Neumann series:

    $$G^{-1} = \sum_{k=0}^{\infty} (-A)^k.$$

  6. Thus

    $$\|G^{-1}\|_{\infty} = \left\| \sum_{k=0}^{\infty} (-A)^k \right\|_{\infty} \leq \sum_{k=0}^{\infty} \|(-A)^k\|_{\infty} \leq \sum_{k=0}^{\infty} \|A\|_{\infty}^k = \frac{1}{1 - \|A\|_{\infty}},$$

    since $\|A\|_{\infty} < 1$.

  7. Finally,

    $$\|A\|_{\infty} \leq \mu_1(K - 1) \implies 1 - \|A\|_{\infty} \geq 1 - \mu_1(K - 1) \implies \frac{1}{1 - \|A\|_{\infty}} \leq \frac{1}{1 - \mu_1(K - 1)}.$$

  8. Thus

    $$\|G^{-1}\|_{\infty} \leq \frac{1}{1 - \mu_1(K - 1)}.$$
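
A numerical check of both bounds, as a sketch assuming numpy; for simplicity it uses the coherence-based estimate $\mu_1(K - 1) \leq (K - 1)\mu$ for the Dirac-Fourier dictionary, which only loosens the bounds being verified.

```python
import numpy as np

N, K = 16, 4
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
Dict = np.hstack([np.eye(N), F])

rng = np.random.default_rng(1)
D_sub = Dict[:, rng.choice(2 * N, size=K, replace=False)]
G = D_sub.conj().T @ D_sub
G_inv = np.linalg.inv(G)

def inf_norm(A):
    # operator norm induced by the l_inf vector norm: maximum absolute row sum
    return np.abs(A).sum(axis=1).max()

mu1_bound = (K - 1) / np.sqrt(N)                            # upper bound on mu_1(K - 1)
print(inf_norm(G) <= 1 + mu1_bound + 1e-12)                 # True
print(inf_norm(G_inv) <= 1 / (1 - mu1_bound) + 1e-12)       # True
```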

18.6.4.5. Quasi Incoherent Dictionaries#

Definition 18.23 (Quasi incoherent dictionary)

When the Babel function of a dictionary grows slowly, we say that the dictionary is quasi-incoherent.