PRODUCT DISTANCE MATRIX OF A GRAPH AND
SQUARED DISTANCE MATRIX OF A TREE
R. B. Bapat and S. Sivasubramanian
Let G be a strongly connected, weighted directed graph. We define a product
distance η(i, j) for pairs i, j of vertices and form the corresponding product
distance matrix. We obtain a formula for the determinant and the inverse of
the product distance matrix. The edge orientation matrix of a directed tree
is defined and a formula for its determinant and its inverse, when it exists, is
obtained. A formula for the determinant of the (entry-wise) squared distance
matrix of a tree is proved.
1. INTRODUCTION
Let G be a connected graph with vertex set V (G) = {1, . . . , n} and edge set E(G). The
distance between vertices i, j ∈ V (G), denoted d(i, j), is defined to be the minimum length
(the number of edges) of a path from i to j (or an ij-path). The distance matrix D(G), or
simply D, is the n × n matrix with (i, j)-element equal to 0 if i = j and d(i, j) if i ≠ j.
According to a well-known result due to Graham and Pollak [7], if T is a tree with n
vertices, then the determinant of the distance matrix D of T is (−1)^{n−1}(n − 1)2^{n−2}. Thus
the determinant depends only on the number of vertices in the tree and not on the tree
itself. A formula for the inverse of the distance matrix of a tree was given by Graham
and Lovasz[6]. These two results have generated considerable interest and a plethora of
extensions and generalizations have been proved (see, for example, [1],[2],[3],[4],[10] and
the references contained therein). Here we describe an early generalization due to Graham,
Hoffman and Hosoya[5].
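The Graham–Pollak formula is easy to test numerically. The following sketch (the two five-vertex trees are arbitrary choices for illustration, not taken from the paper) checks that a path and a star give the same determinant:

```python
import numpy as np

def tree_distance_matrix(n, edges):
    """All-pairs distances in a tree, via Floyd-Warshall on unit edge weights."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return np.array(d, dtype=float)

def graham_pollak(n):
    # (-1)^(n-1) (n-1) 2^(n-2): depends only on the number of vertices
    return (-1) ** (n - 1) * (n - 1) * 2 ** (n - 2)

path5 = [(i, i + 1) for i in range(4)]   # path on 5 vertices
star5 = [(0, i) for i in range(1, 5)]    # star on 5 vertices
for edges in (path5, star5):
    D = tree_distance_matrix(5, edges)
    assert round(np.linalg.det(D)) == graham_pollak(5)
```

Both trees pass the check, illustrating that the determinant does not depend on the shape of the tree.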
The set up of [5] is quite general and considers weighted, directed graphs. Let G be a
directed graph with vertex set V (G) = {1, . . . , n}. We assume G to be strongly connected.
The weight of a directed path is defined to be the sum of the weights of the edges on the path
and the distance d(i, j) between two vertices i and j is the minimum weight of a directed
path between i and j. The distance matrix of G is then defined as the n× n matrix D with
its (i, j)-element equal to d(i, j). Note that the distance matrix is not necessarily symmetric.
Recall that a block of G is defined to be a maximal subgraph with no cut-vertices. For the
purpose of defining blocks we consider the underlying undirected graph obtained from G.
Let B1, . . . , Bp be the blocks of G. It was shown in [5] that the determinant of D(G) depends
2010 Mathematics Subject Classification. 15A15, 05C05.
Keywords and Phrases. Distance, Trees, Degrees, Block, Line graph
only on the determinants of D(B1), . . . , D(Bp) but not in the manner in which the blocks
are assembled. If A is a square matrix then cofA will denote the sum of the cofactors of A.
Theorem 1. [Graham, Hoffman, Hosoya] Let G be a directed, weighted, strongly connected
graph and let B1, . . . , Bp be the blocks of G. Then
(i) cof D(G) = ∏_{i=1}^{p} cof D(Bi),

(ii) det D(G) = ∑_{i=1}^{p} det D(Bi) ∏_{j≠i} cof D(Bj).
If G is an undirected graph, then we may replace each edge of G by a pair of directed
edges in each direction and get a directed graph. Theorem 1, applied to the directed graph,
in fact gives a statement about the original undirected graph. Thus the formula for the
determinant of the distance matrix of a tree can be obtained from (ii) of Theorem 1.
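Theorem 1 can be checked mechanically. A sketch (the "bowtie" graph below, two triangles glued at a cut-vertex, is an arbitrary test case, not one from the paper):

```python
import numpy as np

def distance_matrix(n, edges):
    """Floyd-Warshall distances of an undirected, unit-weight graph."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return np.array(d, dtype=float)

def cof(A):
    """Sum of all cofactors of A, i.e. 1' (adj A) 1."""
    n = A.shape[0]
    return sum((-1) ** (i + j) *
               np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
               for i in range(n) for j in range(n))

# Bowtie on vertices 0..4: triangles {0,1,2} and {2,3,4}; 2 is the cut-vertex.
D = distance_matrix(5, [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)])
B = distance_matrix(3, [(0, 1), (1, 2), (0, 2)])   # each block is a K3

assert abs(cof(D) - cof(B) * cof(B)) < 1e-8                          # part (i)
assert abs(np.linalg.det(D) - 2 * np.linalg.det(B) * cof(B)) < 1e-8  # part (ii)
```

Here both blocks are copies of K3, so part (ii) reduces to det D(G) = 2 det D(B) cof D(B).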
Let T be a tree with vertex set {1, . . . , n}. The exponential distance matrix of T is
defined to be the n × n matrix E with (i, j)-element q^{d(i,j)}, where q is a parameter. It was
shown in [3] that the determinant of E is (1 − q^2)^{n−1}, which again is a function only of the
number of vertices. The result was generalized to directed trees in [4] as follows. Replace
each edge of T by a pair of directed edges in either direction. For each edge e in T, let the
corresponding directed edges be assigned weights qe, te. Define the product distance between
vertices i 6= j to be the product of the weights of the edges of the unique directed path from i
to j. The product distance between a vertex to itself is defined to be 1. The product distance
matrix E of T is the n × n matrix with its (i, j)-element equal to the product distance
between i and j. With this notation we state the following result from [4].
Theorem 2. Let T be a tree with vertex set {1, . . . , n} and let E be the product distance
matrix of T. Then det E = ∏_{i=1}^{n−1} (1 − qi ti).
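Theorem 2 can be verified symbolically for small trees. A sketch for a path with two edges, with symbolic weights q1, t1, q2, t2 on the two directed edge pairs:

```python
import sympy as sp

q1, t1, q2, t2 = sp.symbols("q1 t1 q2 t2")
# Path 1 - 2 - 3: edge i carries weight qi in one direction and ti in the
# other; the product distance is the product of weights along the path.
E = sp.Matrix([
    [1,       q1,  q1 * q2],
    [t1,      1,   q2],
    [t1 * t2, t2,  1],
])
# Theorem 2: det E = (1 - q1 t1)(1 - q2 t2)
assert sp.expand(E.det() - (1 - q1 * t1) * (1 - q2 * t2)) == 0
```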
In this paper we extend Theorem 2 to arbitrary graphs in the spirit of Theorem 1.
Thus we consider a product distance, to be precisely defined in the next section, on arbitrary
graphs and obtain a formula for the determinant of the product distance matrix of the graph
in terms of the determinants of the corresponding matrices of the blocks. It turns out that
we also have a formula for the inverse, when it exists, of the product distance matrix. These
results are obtained in the next section. In Section 3 we define the edge orientation matrix
of a directed tree and obtain a formula for its determinant and inverse using the results on
the product distance matrix.
In the final section we consider the entry-wise square of the distance matrix of a tree,
which we refer to as the squared distance matrix of a tree. Using the results in Section 3,
we obtain a formula for the determinant of the squared distance matrix as well as a formula
for the sum of its cofactors. Both the determinant and the sum of cofactors turn out to be
functions of the degree sequence of the tree.
2. PRODUCT DISTANCE
Let G be a directed graph with vertex set V (G) = {1, . . . , n}. We assume G to be
strongly connected. Recall that a block of G is defined to be a maximal subgraph with no
cut-vertices. For the purpose of defining blocks we consider the underlying undirected graph
obtained from G.
Definition A product distance on G is a function η : V (G) × V (G) → (−∞,∞) satisfying
the following properties:
(i) η(i, i) = 1, i = 1, . . . , n.
(ii) If i, j ∈ V (G) are vertices such that each directed path from i to j passes through the
cut-vertex k, then η(i, j) = η(i, k)η(k, j).
Recall that a graph G is called a block graph if each block of G is a complete graph
(a clique). We may construct a product distance on a block graph G as follows. We set
η(i, i) = 1, i = 1, . . . , n. If i, j belong to the same block, then η(i, j) is set equal to any real
number. If i, j are in different blocks of G, then there exists an ij-path, i − k1 − k2 − · · · − kr − j,
where k1, . . . , kr (and possibly i, j as well) are cut-vertices. We set η(i, j) to be the product
η(i, k1)η(k1, k2) · · · η(kr−1, kr)η(kr, j).
Another way in which a product distance might arise is the following. Let G be a
directed, strongly connected graph, which is not necessarily a block graph. We assume that
each edge of G is assigned a positive weight. We set η(i, i) = 1, i = 1, . . . , n. If e is an edge
from i to j, then η(i, j) is set equal to the weight of e. In general, for vertices i, j, set η(i, j)
equal to the minimum weight of a directed ij-path, where the weight of a path is defined to
be the product of the weights of the edges in the path. It can be verified that this distance
is a product distance in that it satisfies (i),(ii) in the definition.
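The construction above is a Floyd–Warshall computation in which the weight of a path is the product of its edge weights. A sketch (the digraph is an arbitrary illustration; all weights are taken ≥ 1 so that the minimum over walks is attained by a simple path):

```python
INF = float("inf")

def product_distance(n, weighted_arcs):
    """eta[i][j] = minimum, over directed i->j paths, of the product of weights."""
    eta = [[1.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in weighted_arcs:          # directed edge u -> v, weight w > 0
        eta[u][v] = min(eta[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                eta[i][j] = min(eta[i][j], eta[i][k] * eta[k][j])
    return eta

# Two triangles sharing the cut-vertex 2; each edge is present in both
# directions, with weight 2 one way and 3 the other.
arcs = []
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]:
    arcs.append((u, v, 2.0))
    arcs.append((v, u, 3.0))
eta = product_distance(5, arcs)

# Property (ii): every path between {0,1} and {3,4} passes through the
# cut-vertex 2, so eta factors through it.
for i in (0, 1):
    for j in (3, 4):
        assert eta[i][j] == eta[i][2] * eta[2][j]
        assert eta[j][i] == eta[j][2] * eta[2][i]
```

With positive weights the minimum over paths factors at every cut-vertex, which is exactly property (ii) of the definition.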
We remark that it is important to have the weights as positive numbers in the preceding
construction. Consider, for example, the following graph, with each edge carrying weight −1.
Then η(1, 2) = −1, η(2, 3) = −1, whereas η(1, 3) = −1 and (ii) in the definition of product
distance is violated.
[Figure: a graph on vertices 1, 2, 3 together with four auxiliary vertices, every edge carrying
weight −1.]
Let G be a directed graph with V (G) = {1, . . . , n}. Given a product distance η on G,
the product distance matrix of G is the n × n matrix E = ((eij)), with rows and columns
indexed by V (G), and with eij = η(i, j) for all i, j. We emphasize that the matrix E depends
on the product distance function η(i, j), though we suppress η in the notation and denote
the matrix as E rather than Eη. The distance function will be clear from the context.
If G is an undirected graph, we may replace each edge by a pair of directed edges,
oriented in either direction and obtain a directed graph, which we denote by G. A product
distance may then be defined on G. If the product distance η on G is symmetric, that is, if
it satisfies η(i, j) = η(j, i) for all i, j, then we may ignore the directed graph and work with
G instead.
We show that the determinant of the product distance matrix equals the product of
the determinants of the product distance matrices of the blocks in the graph. We first prove
a preliminary result.
Lemma 3. Let A be an n× n matrix with aii = 1, i = 1, . . . , n. Let j ∈ {1, . . . , n} be fixed.
For each i ∈ {1, . . . , n}, i 6= j, subtract aji times the j-th column from the i-th column. From
the resulting matrix, delete row j and column j and let B be the matrix obtained. Then
detA = detB.
Proof: Let C be the matrix obtained from A after subtracting aji times the j-th column
from the i-th column, i ∈ {1, . . . , n}, i 6= j. Clearly, detA = detC. Note that the j-th row of
C is (0, . . . , 0, 1, 0, . . . , 0), where the 1 occurs at the j-th place. Expanding detC along the
j-th row we see that detC = detB and the proof is complete.
Theorem 4. Let G be a strongly connected, directed graph with V (G) = {1, . . . , n} and let
B1, . . . , Bp be the blocks of G. Let η be a product distance on G. Let E be the product distance
matrix of G and let Ei be the product distance matrix of Bi, i = 1, . . . , p. Then
det E = ∏_{i=1}^{p} det Ei.
Proof: We prove the result by induction on p, the number of blocks of G. The base case
when p = 1 is trivial and so we assume p > 1. Every such graph has a leaf block, which is a
block with exactly one cut-vertex. Let Bk be a leaf block. We assume the unique cut-vertex
of Bk to be 1, without loss of generality. Let H = G \ (Bk \ {1}). We reorder the rows and
the columns of E so that 1 appears first, followed by the vertices of H in any order, followed
by the vertices of Bk \ {1} in any order. We assume that Bk has s vertices. Let EH be the
distance matrix of H and let
EH =
[ 1     a2 · · · at ]
[ b2                ]
[ :          P      ]
[ bt                ]

and Ek =

[ 1     f2 · · · fs ]
[ g2                ]
[ :          Q      ]
[ gs                ]
.
Then
E =
[ 1     a2 · · · at       f2 · · · fs     ]
[ b2                      b2f2 · · · b2fs ]
[ :          P            :            :  ]
[ bt                      btf2 · · · btfs ]
[ g2    g2a2 · · · g2at                   ]
[ :      :          :           Q         ]
[ gs    gsa2 · · · gsat                   ]
.
For each r ∈ Bk \ {1}, subtract d(1, r) times the first column of E from the r-th column.
The resulting matrix has the form

Ẽ =
[ 1     a2 · · · at       0 · · · 0 ]
[ b2                      0 · · · 0 ]
[ :          P            :       : ]
[ bt                      0 · · · 0 ]
[ g2    g2a2 · · · g2at             ]
[ :      :          :      Q − gf′  ]
[ gs    gsa2 · · · gsat             ]
,

where f = (f2, . . . , fs)′ and g = (g2, . . . , gs)′. Then

(1) det E = det Ẽ = (det EH) det(Q − gf′).

By Lemma 3, det(Q − gf′) = det Ek, while by the induction assumption,

det EH = ∏_{i≠k} det Ei.

Substituting in (1), the result is proved.
Our next objective is to obtain a formula for the inverse of the product distance matrix,
when it is nonsingular, in terms of the inverses of the product distance matrices of the blocks.
Lemma 5. Let A and B be matrices of order m × m and n × n respectively. Let x, y ∈ ℝ^m
and u, v ∈ ℝ^n. Assuming that all inverses exist,

(2)
[ 1  x′ ]^{−1}     [ α             −x′(A − yx′)^{−1} ]
[ y  A  ]       =  [ −αA^{−1}y     (A − yx′)^{−1}    ]
,

(3)
[ 1  u′ ]^{−1}     [ β             −βu′B^{−1}                ]
[ v  B  ]       =  [ −βB^{−1}v     B^{−1} + βB^{−1}vu′B^{−1} ]

and

(4)
[ 1  x′   u′  ]^{−1}     [ α + β(u′B^{−1}v)   −x′(A − yx′)^{−1}   −βu′B^{−1}                ]
[ y  A    yu′ ]       =  [ −αA^{−1}y          (A − yx′)^{−1}      0                         ]
[ v  vx′  B   ]          [ −βB^{−1}v          0                   B^{−1} + βB^{−1}vu′B^{−1} ]
,

where α = (1 − x′A^{−1}y)^{−1} and β = (1 − u′B^{−1}v)^{−1}.
Proof: A simple verification shows that

[ 1  x′ ] [ α            −x′(A − yx′)^{−1} ]
[ y  A  ] [ −αA^{−1}y    (A − yx′)^{−1}    ]

equals the identity matrix and thus (2) is proved. The proofs of (3) and (4) are similar.
Theorem 6. Let G be a strongly connected, directed graph with V (G) = {1, . . . , n} and let
B1, . . . , Bp be the blocks of G. Let η be a product distance on G. Let E be the product distance
matrix of G and let Ei be the product distance matrix of Bi, i = 1, . . . , p. Let Ei be nonsingular
and let Ni = E−1i , i = 1, . . . , p. Let Mi be the n× n matrix obtained by augmenting Ni with
zero rows and columns corresponding to indices in V (G) \ V (Bi), i = 1, . . . , p. Let R be the
n × n diagonal matrix with its (i, i)-entry equal to δi − 1, where δi is the number of blocks
containing the vertex i. Then E is nonsingular and E^{−1} = ∑_{i=1}^{p} Mi − R.
Proof: We set up the notation as in the proof of Theorem 4. The details are repeated here
for convenience. We prove the result by induction on p, the number of blocks of G. The
base case when p = 1 is trivial and so we assume p > 1. Every such graph has a leaf block,
which is a block with exactly one cut-vertex. Let Bk be a leaf block. We assume the unique
cut-vertex of Bk to be 1, without loss of generality. Let H = G \ (Bk \ {1}). We reorder
the rows and the columns of E so that 1 appears first, followed by the vertices of H in any
order, followed by the vertices of Bk \ {1} in any order. We assume that Bk has s vertices.
Let EH be the distance matrix of H and let
EH =
[ 1     a2 · · · at ]
[ b2                ]
[ :          P      ]
[ bt                ]

and Ek =

[ 1     f2 · · · fs ]
[ g2                ]
[ :          Q      ]
[ gs                ]
.

Then

E =
[ 1     a2 · · · at       f2 · · · fs     ]
[ b2                      b2f2 · · · b2fs ]
[ :          P            :            :  ]
[ bt                      btf2 · · · btfs ]
[ g2    g2a2 · · · g2at                   ]
[ :      :          :           Q         ]
[ gs    gsa2 · · · gsat                   ]
.
Let R̃ be the matrix which is the same as R except that the (1, 1)-element of R̃ is
δ1 − 2. By the induction assumption EH is nonsingular and

(5)
[ EH^{−1}  0 ]
[ 0        0 ]   =   ∑_{i≠k} Mi − R̃,
where the matrix on the left side of (5) is n×n. We assume the matrix Q to be nonsingular.
This will be true by perturbing the entries if necessary. The final result is then true without
this assumption by a continuity argument. Let f = (f2, . . . , fs)′, g = (g2, . . . , gs)′ and let
β = (1 − f′Q^{−1}g)^{−1}. If we apply Lemma 5 to get formulas for E^{−1} and EH^{−1}, then we see
that
(6) E^{−1} −
[ EH^{−1}  0 ]
[ 0        0 ]   =
[ β(f′Q^{−1}g)   0 · · · 0   −βf′Q^{−1}                ]
[ 0              0 · · · 0   0                         ]
[ :              :           :                         ]
[ 0              0 · · · 0   0                         ]
[ −βQ^{−1}g      0 · · · 0   Q^{−1} + βQ^{−1}gf′Q^{−1} ]
.
Note that
(7) Mk =
[ β            0 · · · 0   −βf′Q^{−1}                ]
[ 0            0 · · · 0   0                         ]
[ :            :           :                         ]
[ 0            0 · · · 0   0                         ]
[ −βQ^{−1}g    0 · · · 0   Q^{−1} + βQ^{−1}gf′Q^{−1} ]
.
From (6),(7) we see that
(8) E^{−1} −
[ EH^{−1}  0 ]
[ 0        0 ]   −   Mk   =
[ −1  0 · · · 0 ]
[ 0   0 · · · 0 ]
[ :   :       : ]
[ 0   0 · · · 0 ]
  =  R̃ − R.
Using (5),(8) we see that
E^{−1} = ∑_{i=1}^{p} Mi − R
and the proof is complete.
Example Consider the graph G as shown. We make G into a directed graph by replacing
each edge by two edges in either direction.
[Figure: the graph G on vertices 1, . . . , 7.]
Let B1, B2, B3, B4 be the blocks with V (B1) = {1, 2}, V (B2) = {2, 3, 4}, V (B3) = {2, 5},
V (B4) = {5, 6, 7}. Let the distance matrices of the blocks be given by
E1 =
[  1  −2 ]
[ −3   1 ]
,  E2 =
[ 1  3  4 ]
[ 2  1  1 ]
[ 1  0  1 ]
,  E3 =
[ 1  2 ]
[ 1  1 ]
,  E4 =
[ 1  2  3 ]
[ 1  1  1 ]
[ 2  3  1 ]
.

Then the distance matrix of the corresponding product distance is seen to be
E =
[  1   −2   −6   −8   −4   −8   −12 ]
[ −3    1    3    4    2    4     6 ]
[ −6    2    1    1    4    8    12 ]
[ −3    1    0    1    2    4     6 ]
[ −3    1    3    4    1    2     3 ]
[ −3    1    3    4    1    1     1 ]
[ −6    2    6    8    2    3     1 ]
.
For example, the distance d(1, 6) is given by the product d(1, 2)d(2, 5)d(5, 6), which is
(−2)(2)(2) = −8. We have
M1 =
[ −1/5  −2/5  0  0  0  0  0 ]
[ −3/5  −1/5  0  0  0  0  0 ]
[  0     0    0  0  0  0  0 ]
[  0     0    0  0  0  0  0 ]
[  0     0    0  0  0  0  0 ]
[  0     0    0  0  0  0  0 ]
[  0     0    0  0  0  0  0 ]
,  M2 =
[ 0   0     0     0    0  0  0 ]
[ 0  −1/6   1/2   1/6  0  0  0 ]
[ 0   1/6   1/2  −7/6  0  0  0 ]
[ 0   1/6  −1/2   5/6  0  0  0 ]
[ 0   0     0     0    0  0  0 ]
[ 0   0     0     0    0  0  0 ]
[ 0   0     0     0    0  0  0 ]
,

M3 =
[ 0   0  0  0   0  0  0 ]
[ 0  −1  0  0   2  0  0 ]
[ 0   0  0  0   0  0  0 ]
[ 0   0  0  0   0  0  0 ]
[ 0   1  0  0  −1  0  0 ]
[ 0   0  0  0   0  0  0 ]
[ 0   0  0  0   0  0  0 ]
,  M4 =
[ 0  0  0  0   0     0     0   ]
[ 0  0  0  0   0     0     0   ]
[ 0  0  0  0   0     0     0   ]
[ 0  0  0  0   0     0     0   ]
[ 0  0  0  0  −2/3   7/3  −1/3 ]
[ 0  0  0  0   1/3  −5/3   2/3 ]
[ 0  0  0  0   1/3   1/3  −1/3 ]
.
Finally, R is given by
R =
[ 0  0  0  0  0  0  0 ]
[ 0  2  0  0  0  0  0 ]
[ 0  0  0  0  0  0  0 ]
[ 0  0  0  0  0  0  0 ]
[ 0  0  0  0  1  0  0 ]
[ 0  0  0  0  0  0  0 ]
[ 0  0  0  0  0  0  0 ]
.
We leave it to the reader to verify that E−1 = M1 +M2 +M3 +M4 −R, thereby verifying
Theorem 6.
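The invited verification can be carried out mechanically; the matrices below are exactly those displayed above, and the check confirms both Theorem 4 and Theorem 6 for this example:

```python
import numpy as np

E = np.array([
    [ 1, -2, -6, -8, -4, -8, -12],
    [-3,  1,  3,  4,  2,  4,   6],
    [-6,  2,  1,  1,  4,  8,  12],
    [-3,  1,  0,  1,  2,  4,   6],
    [-3,  1,  3,  4,  1,  2,   3],
    [-3,  1,  3,  4,  1,  1,   1],
    [-6,  2,  6,  8,  2,  3,   1]], dtype=float)
E1 = np.array([[1, -2], [-3, 1]], dtype=float)
E2 = np.array([[1, 3, 4], [2, 1, 1], [1, 0, 1]], dtype=float)
E3 = np.array([[1, 2], [1, 1]], dtype=float)
E4 = np.array([[1, 2, 3], [1, 1, 1], [2, 3, 1]], dtype=float)

# Theorem 4: det E is the product of the block determinants.
block_dets = [np.linalg.det(M) for M in (E1, E2, E3, E4)]
assert abs(np.linalg.det(E) - np.prod(block_dets)) < 1e-6

# Theorem 6: pad each block inverse with zeros and subtract R.
def pad(N, idx, n=7):
    M = np.zeros((n, n))
    M[np.ix_(idx, idx)] = N
    return M

blocks = [(E1, [0, 1]), (E2, [1, 2, 3]), (E3, [1, 4]), (E4, [4, 5, 6])]
S = sum(pad(np.linalg.inv(Bi), idx) for Bi, idx in blocks)
R = np.diag([0.0, 2, 0, 0, 1, 0, 0])
assert np.allclose(np.linalg.inv(E), S - R)
```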
3. EDGE ORIENTATION MATRIX OF A TREE
Let T be a tree with V (T ) = {1, . . . , n}. We assign an orientation to each edge of T.
The distance between i, j ∈ V (T ) is defined to be the length (that is, the number of edges,
ignoring direction) of the unique (undirected) ij-path. We set d(i, i) = 0, i = 1, . . . , n. Let
D be the distance matrix of T. Thus the rows and the columns of D are indexed by V (T ),
and for i, j ∈ V (T ), the (i, j)-element of D is d(i, j). Let e = (ij) and f = (kℓ) be edges of
T. We say that e and f are similarly oriented if d(i, k) = d(j, ℓ). Otherwise e and f are said
to be oppositely oriented.
Definition The edge orientation matrix of T is the (n − 1) × (n − 1) matrix H defined as
follows. The rows and the columns of H are indexed by the edges of T. The (e, f)-element
of H, denoted h(e, f) is defined to be 1(−1) if the corresponding edges of T are similarly
(oppositely) oriented. The diagonal elements of H are set to be 1.
Let Q be the directed vertex-edge incidence matrix of T. The rows and the columns of
Q are indexed by V (T ) and E(T ) respectively. If i ∈ V (T ), e ∈ E(T ), then the (i, e)-element
of Q is 1 if i is the tail of e, −1, if i is the head of e, and 0 if i and e are not incident. Recall
that L = QQ′ is the Laplacian of T.
We may define a weighted analogue of H when the tree has weights on the edges.
Results analogous to those obtained in this section can be proved for the weighted version.
However for simplicity, we consider the unweighted case.
Let T be a directed tree and let G be the line graph of T. Thus the vertex set of G
is V (G) = E(T ). Two vertices of G are adjacent if the corresponding undirected edges of T
have a vertex in common. Let e = (ij) and f = (k`) be edges of T, also viewed as vertices
of G. With each edge of G we associate a weight as follows. If e and f are adjacent in G,
then the corresponding edge joining e and f is assigned weight 1(−1) if e and f are similarly
(oppositely) oriented. Let η be the product distance on G, which is well-defined since G is a
block graph. (See the discussion in Section 2, following the definition of product distance.)
Let E = ((η(e, f))) be the product distance matrix of G. With these definitions we have the
following.
Lemma 7. Let T be a directed tree and let G be the line graph of T. Let H be the edge
orientation matrix of T. Then H = E, the product distance matrix of G.
Proof: Note that if e and f are adjacent in G, then the corresponding entry h(e, f) =
η(e, f) in view of the definition of H. If e and f are not adjacent, then there exists a path
joining e and f in G. Let the shortest such path be e = e1, e2, . . . , ek = f. We prove the
result by induction on the length of the path. The result is true when the length is 1.
Assume the result to be true for paths of length less than k − 1 and proceed by induction.
Thus η(e, f) = η(e1, e2)η(e2, e3) · · · η(ek−1, ek) = η(e1, e2)η(e2, ek), and by the induction
assumption, this product equals η(e1, e2)h(e2, ek) = η(e, e2)h(e2, f). First suppose that e
and e2 are similarly oriented. Then η(e, e2) = 1. If e2 and f are similarly oriented as well,
then h(e2, f) = h(e, f) = 1 and hence η(e, f) = h(e, f). If e2 and f are oppositely oriented,
then h(e2, f) = h(e, f) = −1 and again η(e, f) = h(e, f). The case when e and e2 are
oppositely oriented is similar and the proof is complete.
Lemma 8. Let St be the star on t + 1 vertices. Let each edge of St be oriented and let X be
the edge orientation matrix. Then det X = 2^{t−1}(2 − t).
Proof: Note that if we reverse the orientation of an edge, then the corresponding row and
column of X both get multiplied by −1 and detX does not change. Thus we may assume
that all the edges in St are oriented away from the center. Then
X =
[  1  −1  · · ·  −1 ]
[ −1   1  · · ·  −1 ]
[  :   :          : ]
[ −1  −1  · · ·   1 ]
.

The eigenvalues of X are 2 − t with multiplicity 1 and 2 with multiplicity t − 1. Hence
det X = 2^{t−1}(2 − t) and the proof is complete.
Theorem 9. Let T be a directed tree with V (T ) = {1, . . . , n} and let H be the edge orientation
matrix of T. Let k1, k2, . . . , kn be the degree sequence of T. Then

(9) det H = 2^{n−2} ∏_{i=1}^{n} (2 − ki).
Proof: We assume, without loss of generality, that vertices 1, . . . , p are non-pendant vertices
and that vertices p+ 1, . . . , n are pendant. Let G be the line graph of T and orient the edges
of G as described earlier. As noted in Lemma 7, H is the product distance matrix of G. The
blocks of G correspond to the stars in T and thus the blocks of G are the complete graphs
with k1, . . . , kp vertices. From Theorem 4 and Lemma 8 it follows that
(10) det H = ∏_{i=1}^{p} 2^{ki−1}(2 − ki).
We have
(11) ∑_{i=1}^{p} (ki − 1) = ∑_{i=1}^{n} (ki − 1) = 2(n − 1) − n = n − 2.
Combining (10),(11) we obtain (9) and the proof is complete.
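Theorem 9 is easy to check computationally. The sketch below builds H for the ten-vertex tree of the example later in this section (0-based labels; the edge orientations are an arbitrary choice, which, by the remark in the proof of Lemma 8, does not affect det H):

```python
import numpy as np

def distances(n, edges):
    """Floyd-Warshall distances of the underlying undirected tree."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def edge_orientation_matrix(n, arcs):
    """h(e,f) = +1 iff d(tail_e, tail_f) = d(head_e, head_f), else -1."""
    d = distances(n, arcs)
    m = len(arcs)
    H = np.ones((m, m))
    for a, (i, j) in enumerate(arcs):
        for b, (k, l) in enumerate(arcs):
            if a != b and d[i][k] != d[j][l]:
                H[a, b] = -1
    return H

# Tree: vertex 0 adjacent to 1, 2, 3; vertex 3 to 4, 5, 6; vertex 6 to 7, 8, 9.
arcs = [(0, 1), (0, 2), (0, 3), (3, 4), (3, 5), (3, 6), (6, 7), (6, 8), (6, 9)]
n = 10
deg = [0] * n
for u, v in arcs:
    deg[u] += 1
    deg[v] += 1
H = edge_orientation_matrix(n, arcs)
assert round(np.linalg.det(H)) == 2 ** (n - 2) * np.prod([2 - k for k in deg])
```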
Corollary 10. H is nonsingular if and only if no vertex in T has degree 2.
Proof: The result easily follows from (9).
When H is nonsingular, we may construct H−1 using Theorem 6. We proceed to
explain the construction. For positive integers p, q, we define Jpq to be the p × q matrix of
all ones.
Lemma 11. Let p, q be positive integers with p+ q ≥ 3 and let
A =
[ 2Ip − Jp    Jpq      ]
[ Jqp         2Iq − Jq ]
.

Then

A^{−1} = 1/(2(p + q − 2)) ·
[ (p + q − 2)Ip − Jp    Jpq                 ]
[ Jqp                   (p + q − 2)Iq − Jq  ]
.
Proof: It is easily verified by multiplication that the product of the two matrices is I.
For each star of T write the inverse of the corresponding principal submatrix of H and
put zeros elsewhere. The principal submatrix corresponding to a star has the same form as
the matrix A in Lemma 11, after a relabeling of rows and columns and thus its inverse can
readily be written down using Lemma 11. Add all such matrices. Then, for each edge which
is not a pendant edge, subtract 1 from the corresponding diagonal position (such an edge lies
in two stars, so δe − 1 = 1 in the notation of Theorem 6). The resulting matrix is the inverse
of H. The proof follows from Theorem 6.
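A sketch of this construction, with the block inverses taken numerically rather than via the closed form of Lemma 11 (the tree is the ten-vertex example below, 0-based labels, and the orientations are an arbitrary choice):

```python
import numpy as np

def distances(n, edges):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

arcs = [(0, 1), (0, 2), (0, 3), (3, 4), (3, 5), (3, 6), (6, 7), (6, 8), (6, 9)]
n, m = 10, 9
d = distances(n, arcs)
H = np.ones((m, m))                       # edge orientation matrix
for a, (i, j) in enumerate(arcs):
    for b, (k, l) in enumerate(arcs):
        if a != b and d[i][k] != d[j][l]:
            H[a, b] = -1

deg = [0] * n
for u, v in arcs:
    deg[u] += 1
    deg[v] += 1

# One star per non-pendant vertex: invert the principal submatrix of H on
# the edges of that star, pad with zeros, and sum.
Hinv = np.zeros((m, m))
for v in range(n):
    if deg[v] >= 2:
        idx = [e for e, (a, b) in enumerate(arcs) if v in (a, b)]
        Hinv[np.ix_(idx, idx)] += np.linalg.inv(H[np.ix_(idx, idx)])
# Subtract 1 on the diagonal for every internal (non-pendant) edge.
for e, (a, b) in enumerate(arcs):
    if deg[a] >= 2 and deg[b] >= 2:
        Hinv[e, e] -= 1
assert np.allclose(Hinv @ H, np.eye(m))
```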
Example Consider the tree T as shown.
[Figure: a tree T on vertices 1, . . . , 10: vertex 1 is adjacent to 2, 3, 4; vertex 4 to 1, 5, 6, 7;
vertex 7 to 4, 8, 9, 10. The nine oriented edges are labeled i, ii, . . . , ix.]
For convenience, we have labeled the edges with roman numerals. The edge orientation
matrix H is given by
H =
[  1   1  −1   1   1  −1   1  −1   1 ]
[  1   1   1  −1  −1   1  −1   1  −1 ]
[ −1   1   1  −1  −1   1  −1   1  −1 ]
[  1  −1  −1   1  −1   1  −1   1  −1 ]
[  1  −1  −1  −1   1   1  −1   1  −1 ]
[ −1   1   1   1   1   1  −1   1  −1 ]
[  1  −1  −1  −1  −1  −1   1   1  −1 ]
[ −1   1   1   1   1   1   1   1   1 ]
[  1  −1  −1  −1  −1  −1  −1   1   1 ]
.
There is no vertex of degree 2 in T and hence H is nonsingular. The tree has 3 stars, which
we denote by S1, S2, S3 with vertex sets V (S1) = {1, 2, 3, 4}, V (S2) = {1, 4, 5, 6, 7} and
V (S3) = {4, 7, 8, 9, 10}. We may obtain H−1 from the inverses of the principal submatrices
corresponding to S1, S2, S3 as described earlier.
4. SQUARED DISTANCE MATRIX OF A TREE
Let T be a tree with vertex set {1, 2, . . . , n}. Let D be the distance matrix of T and
let ∆ be obtained by squaring each element of D. We will refer to ∆ = (fij), 1 ≤ i, j ≤ n, as the
squared distance matrix of T. The main objective of this section is to prove a formula for
det ∆. It turns out that det ∆ depends only on the degree sequence of T.
Example Consider the tree T as shown.
[Figure: the tree T on vertices 1, . . . , 7, with vertex 2 adjacent to 1, 3, 4, 5 and vertex 5
adjacent to 2, 6, 7.]

Then the squared distance matrix of T is:
∆ =
[ 0  1  4  4  4  9  9 ]
[ 1  0  1  1  1  4  4 ]
[ 4  1  0  4  4  9  9 ]
[ 4  1  4  0  4  9  9 ]
[ 4  1  4  4  0  1  1 ]
[ 9  4  9  9  1  0  4 ]
[ 9  4  9  9  1  4  0 ]
.
We first prove some results connecting ∆ with Q and H defined after orienting each
edge of T.
Lemma 12. Let T be a tree with vertex set {1, 2, . . . , n}. Assign an orientation to each edge
of T and let Q and H be the incidence matrix and the edge orientation matrix, respectively.
Let ∆ be the squared distance matrix of T. Then Q′∆Q = −2H.
Proof: Let e and f be edges of T where e is directed from i to j and f is directed from k
to ℓ. Then the (e, f)-element of Q′∆Q is given by

(12) d(i, k)² + d(j, ℓ)² − d(i, ℓ)² − d(j, k)².

We must consider cases. First suppose that e and f are similarly oriented, so that d(i, k) =
d(j, ℓ). As a subcase, assume that j is on the path from i to k. Then d(i, ℓ) − 1 = d(j, k) + 1 =
d(i, k). If we set d(i, k) = d(j, ℓ) = α, then d(i, ℓ) = α + 1 and d(j, k) = α − 1. It follows that
the expression in (12) equals 2α² − (α + 1)² − (α − 1)² = −2, which is the (e, f)-element of
−2H. The other subcase, when j is not on the (i, k)-path, as well as the case when e and f
are oppositely oriented, are handled similarly and the proof is complete.
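Lemma 12 can be checked directly. A sketch on the seven-vertex tree of the example above (0-based labels; the orientations are an arbitrary choice):

```python
import numpy as np

def distances(n, edges):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

arcs = [(0, 1), (1, 2), (1, 3), (1, 4), (4, 5), (4, 6)]
n, m = 7, 6
d = distances(n, arcs)
Delta = np.array([[d[i][j] ** 2 for j in range(n)] for i in range(n)], dtype=float)

Q = np.zeros((n, m))              # incidence matrix: +1 at tail, -1 at head
for e, (u, v) in enumerate(arcs):
    Q[u, e], Q[v, e] = 1, -1

H = np.ones((m, m))               # edge orientation matrix
for a, (i, j) in enumerate(arcs):
    for b, (k, l) in enumerate(arcs):
        if a != b and d[i][k] != d[j][l]:
            H[a, b] = -1

assert np.allclose(Q.T @ Delta @ Q, -2 * H)
```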
Lemma 13. Let T be a tree with vertex set {1, 2, . . . , n}. Let ∆ be the squared distance
matrix of T and let k1, . . . , kn be the degree sequence of T. Let τi = 2 − ki, i = 1, . . . , n. Then

(13) cof ∆ = (−1)^{n−1} 2 · 4^{n−2} ∏_{i=1}^{n} τi.
Proof: Assign an orientation to each edge of T and let Q and H be the incidence matrix
and the edge orientation matrix, respectively. Let adj A denote the adjoint (the transpose
of the cofactor matrix) of the square matrix A. By Lemma 12, Q′∆Q = −2H. Let γi denote
the determinant of the (n − 1) × (n − 1) submatrix of Q obtained by deleting its i-th row,
i = 1, . . . , n. We claim that (−1)^i γi is the same for all i. To see this, fix 1 ≤ i < n, and let Q̃
be the matrix obtained from Q by first replacing its i-th row by the sum of the i-th and the
(i + 1)-th rows and then deleting the (i + 1)-th row. The sum of the elements in any column
of Q̃ is zero and hence det Q̃ = 0. However det Q̃ = γi + γi+1 and hence γi = −γi+1. This
proves the claim. It is well-known that Q is totally unimodular (see, for example, [1], p. 13)
and hence the common value of (−1)^i γi is ±1. Let ∆(i, j) denote the submatrix obtained by
deleting row i and column j of ∆. A simple application of the Cauchy-Binet formula shows
that

det(Q′∆Q) = ∑_{i=1}^{n} ∑_{j=1}^{n} γi γj det(∆(i, j))

          = ∑_{i=1}^{n} ∑_{j=1}^{n} (−1)^i γi (−1)^j γj (−1)^{i+j} det(∆(i, j))

          = ∑_{i=1}^{n} ∑_{j=1}^{n} (−1)^{i+j} det(∆(i, j))

          = 1′(adj ∆)1.                                              (14)
It follows from Q′∆Q = −2H and (14) that

(15) 1′(adj ∆)1 = det(−2H) = (−2)^{n−1} det H.

By Theorem 9,

(16) det H = 2^{n−2} ∏_{i=1}^{n} (2 − ki) = 2^{n−2} ∏_{i=1}^{n} τi.
Since cof ∆ = 1′(adj ∆)1, the proof is complete in view of (15) and (16).
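A sketch checking Lemma 13 against the squared distance matrix displayed in the example at the start of this section (its tree has degree sequence 1, 4, 1, 1, 3, 1, 1):

```python
import numpy as np

Delta = np.array([
    [0, 1, 4, 4, 4, 9, 9],
    [1, 0, 1, 1, 1, 4, 4],
    [4, 1, 0, 4, 4, 9, 9],
    [4, 1, 4, 0, 4, 9, 9],
    [4, 1, 4, 4, 0, 1, 1],
    [9, 4, 9, 9, 1, 0, 4],
    [9, 4, 9, 9, 1, 4, 0]], dtype=float)

def cof(A):
    """Sum of all cofactors, i.e. 1' (adj A) 1."""
    n = A.shape[0]
    return sum((-1) ** (i + j) *
               np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))
               for i in range(n) for j in range(n))

n = 7
tau = [2 - k for k in (1, 4, 1, 1, 3, 1, 1)]
assert abs(cof(Delta) - (-1) ** (n - 1) * 2 * 4 ** (n - 2) * np.prod(tau)) < 1e-4
```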
Lemma 14. Let T be a tree and let v be a vertex of degree two in T . Let s and t be the
neighbors of v in T . Replace column s of ∆ by the sum of the columns s and t minus twice
the column v. Then in the resulting matrix the column s is (2, 2, . . . , 2)′.
Proof: After performing this elementary column operation, consider the entry in row w of
column s. The entry becomes f′(w, s) = d(w, s)² − 2 d(w, v)² + d(w, t)². If w is not the vertex v,
then assume that f(w, s) = (d − 1)², f(w, v) = d² and f(w, t) = (d + 1)² for some positive
integer d, i.e., we assume that w is closer to s than to t. The other case, where w is closer
to t, is proved analogously and hence omitted. It is clear that f′(w, s) = 2. It is easy to see
that when w = v, again f′(v, s) = 2, completing the proof.
Corollary 15. Let T be a tree on n vertices and let T have two distinct vertices a and b
both having degree two. Then, det ∆ = 0.
Proof: Let the neighbors of a and b be s, t and y, z respectively. The four vertices s, t, y, z
need not be distinct. However at least two of these four vertices must be at distance at
least 3, which we assume to be s and y, without loss of generality. We apply the elementary
column operations in Lemma 14 to both the vertices s and y, after which the columns s and
y both change to (2, 2, . . . , 2)′. Clearly, this operation leaves det ∆ unchanged. Since two
columns are now identical, det ∆ = 0.
The following result is easily proved using the multilinear property of the determinant
(see, for example, [1],p.97).
Lemma 16. Let A be an n × n matrix. Then for any real α,

det(A + αJ) = det A + α cof A.
The following is the main result of this section.
Theorem 17. Let T be a tree with V (T ) = {1, . . . , n} and let ∆ be the squared distance
matrix of T. Let k1, k2, . . . , kn be the degree sequence of T. Let τi = 2−ki, i = 1, . . . , n. Then
(17) det ∆ = (−1)^n 4^{n−2} { (2n − 1) ∏_{i=1}^{n} τi − 2 ∑_{i=1}^{n} ∏_{j≠i} τj }.
Proof: We induct on the number of vertices, with the case when T has at most 3 vertices
being clear. First suppose that T has two vertices both with degree 2. Then by Corollary
15, det ∆ = 0. The right side of (17) is also zero, since every product appearing in (17)
contains a factor τi = 0, and the result is proved. Therefore we assume that T has at most
one vertex of degree 2. Then we may assume that there exists
a vertex w of T adjacent to at least two pendant vertices. Let w be adjacent to t pendant
vertices in T where t ≥ 2. Thus the degree of w is t + 1, where w is adjacent to vertex u
apart from the t pendant vertices. Without loss of generality, assume that the t pendant
vertices are the vertices 1, 2, . . . , t.
For 1 ≤ i ≤ t, perform the following operations on ∆: replace row i by the sum of row
i and row u minus twice the row w and similarly, replace column i by the sum of column i
and column u minus twice the column w. These operations leave det ∆ unchanged and result
in the matrix

[ 4(J − I)   2J ]
[ 2J         ∆1 ]
,

where ∆1 is the squared distance matrix corresponding to the tree T̃ = T − {1, 2, . . . , t}, the
top left block X = 4(J − I) is of order t × t, and J denotes the matrix of all ones of the
appropriate size. It is easy to see that det X = (−1)^{t−1} 4^t (t − 1) and that 1′X^{−1}1 = t/(4(t − 1)).
We equivalently write det X = (−4)^t (1 − t). By the Schur complement formula (see [1], p. 4) we get,
(18) det ∆ = (det X) det(∆1 − (t/(t − 1)) J).

It follows from (18) and Lemma 16 that

(19) det ∆ = (−1)^t 4^t (1 − t) ( det ∆1 − (t/(t − 1)) cof ∆1 ).
In T, let τ = (τ1, . . . , τn)′. Since τi = 1 for i = 1, . . . , t and τw = 1 − t, we may write
τ = (1, 1, . . . , 1, (1 − t), z′)′, where z is the vector consisting of τi, i = t + 2, . . . , n. We write
the corresponding vector for T̃ as τ̃ = (1, z′)′. Note that the first component of τ̃ is 1 since
w is a pendant vertex in T̃.
Let

A = ∑_{i=t+1}^{n} ∏_{j≠i} τ̃j,   B = ∑_{i=1}^{n} ∏_{j≠i} τj   and   C = ∏_{i=1}^{n} τi,

the indices in A running over the vertices of T̃. Note that A is defined for T̃, while B, C are
defined for T. We claim that

(20) A = B/(1 − t) + ((t² − 2t)/(1 − t)²) C.
The claim is justified as follows. First suppose that no vertex of T has degree 2. Then

(21) A = ( ∏_{i=t+1}^{n} τ̃i ) ∑_{j=t+1}^{n} 1/τ̃j,

(22) B = ( ∏_{i=1}^{n} τi ) ∑_{j=1}^{n} 1/τj   and   C = (1 − t) ∏ zi.

Note that

(23) ∑_{i=1}^{n} 1/τi = t + 1/(1 − t) + ∑ 1/zi.

Using (21),(22),(23), we get (20). If T has a vertex of degree 2, then we may assume, without
loss of generality, that z1 = 0. In that case A = ∏_{i≠1} zi, B = (1 − t) ∏_{i≠1} zi and C = 0, and
again (20) is proved.
By the induction hypothesis and using (20),

(24) det ∆1 = (−1)^{n−t} 4^{n−t−2} { (2(n − t) − 1)/(1 − t) · C − 2A }
            = (−1)^{n−t} 4^{n−t−2} { (2n(1 − t) + 3t − 1)/(1 − t)² · C − 2B/(1 − t) }.

By Lemma 13,

(25) (t/(t − 1)) cof ∆1 = (t/(t − 1)) (−1)^{n−t−1} 4^{n−t−2} · 2C/(1 − t)
                        = (−1)^{n−t} 4^{n−t−2} · 2tC/(1 − t)².

It follows from (24),(25) that

(26) det ∆1 − (t/(t − 1)) cof ∆1 = (−1)^{n−t} 4^{n−t−2} { (2n − 1)C/(1 − t) − 2B/(1 − t) }.
Substituting (26) into (19), we see that det ∆ = (−1)^n 4^{n−2} {(2n − 1)C − 2B}, completing
the proof.
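A sketch checking Theorem 17 against the squared distance matrix from the example at the start of this section (degree sequence 1, 4, 1, 1, 3, 1, 1):

```python
import numpy as np

Delta = np.array([
    [0, 1, 4, 4, 4, 9, 9],
    [1, 0, 1, 1, 1, 4, 4],
    [4, 1, 0, 4, 4, 9, 9],
    [4, 1, 4, 0, 4, 9, 9],
    [4, 1, 4, 4, 0, 1, 1],
    [9, 4, 9, 9, 1, 0, 4],
    [9, 4, 9, 9, 1, 4, 0]], dtype=float)

n = 7
tau = [2 - k for k in (1, 4, 1, 1, 3, 1, 1)]
C = np.prod(tau)
B = sum(np.prod([tau[j] for j in range(n) if j != i]) for i in range(n))
rhs = (-1) ** n * 4 ** (n - 2) * ((2 * n - 1) * C - 2 * B)
assert abs(np.linalg.det(Delta) - rhs) < 1e-4
```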
The complete binary tree BTn with 2^n − 1 vertices is the tree with a root vertex having
degree 2, such that each vertex has either two or zero children (see, for example, [9], p. 51).
It is easy to see that BTn has 2^{n−1} pendant vertices. Below, we illustrate BT3.
[Figure: the complete binary tree BT3: a root, its two children, and four pendant vertices.]
As a simple consequence of Theorem 17 we have the following.
Corollary 18. Let T be the complete binary tree BTm+1 with n = 2^{m+1} − 1 vertices. Let
∆ be the squared distance matrix of T. Then det ∆ = 2 · 4^{n−2}. Indeed, the root has degree
2, so C = 0 in the notation of Theorem 17, while B reduces to the single term ∏_{j≠root} τj,
which equals 1 since the internal non-root vertices contribute τj = −1 an even number
(2^m − 2) of times; as n is odd, (17) gives det ∆ = (−1)^n 4^{n−2}(−2B) = 2 · 4^{n−2}.
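A sketch computing det ∆ for BT2 and BT3 directly; in both cases the determinant comes out positive, equal to 2 · 4^{n−2}:

```python
import numpy as np

def complete_binary_tree_edges(levels):
    """Edges of the complete binary tree with the given number of levels,
    vertices 0 .. 2^levels - 2 in breadth-first order."""
    n = 2 ** levels - 1
    return [(v, c) for v in range((n - 1) // 2) for c in (2 * v + 1, 2 * v + 2)], n

def squared_distance_matrix(n, edges):
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return np.array([[d[i][j] ** 2 for j in range(n)] for i in range(n)])

for m in (1, 2):
    edges, n = complete_binary_tree_edges(m + 1)
    Delta = squared_distance_matrix(n, edges)
    assert round(np.linalg.det(Delta)) == 2 * 4 ** (n - 2)
```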
Acknowledgement We sincerely thank two anonymous referees for their critical reading
of the manuscript and for several helpful suggestions. Some theorems in this work began as
conjectures that were tested using the computer package “Sage”. We thank the authors of “Sage” for
generously releasing their software as an open-source package. The first author acknowledges
support from the JC Bose Fellowship, Department of Science and Technology, Government
of India. The second author acknowledges support from project grant P07 IR052, given by
IIT Bombay.
REFERENCES
1. R. B. Bapat:Graphs and Matrices. Universitext. Springer, London; Hindustan Book Agency,
New Delhi, 2010.
2. R. B. Bapat, S. J. Kirkland, M. Neumann:On distance matrices and Laplacians. Linear
Algebra Appl., 401 (2005), 193–209.
3. R. B. Bapat, A. K. Lal, Sukanta Pati:A q-analogue of the distance matrix of a tree. Linear
Algebra Appl., 416 (2006), 799–814.
4. R. B. Bapat, Sivaramakrishnan Sivasubramanian:The product distance matrix of a tree and
a bivariate zeta function of a graph. Electron. J. Linear Algebra, 23 (2012), 275–286.
5. R. L. Graham, A. J. Hoffman, H. Hosoya:On the distance matrix of a directed graph. J.
Graph Theory, 1 (1977), 85–88.
6. R. L. Graham, L. Lovász: Distance matrix polynomials of trees. Adv. Math., 29 (1978),
60–88.
7. R. L. Graham, H. O. Pollak:On the addressing problem for loop switching. Bell System Tech.
J., 50 (1971), 2495–2519.
8. D. B. West: Introduction to Graph Theory. Prentice Hall, Inc., Upper Saddle River, NJ, 1996.
9. S. Gill Williamson: Combinatorics for Computer Science. Dover Publications Inc., Mineola,
NY, 2002.
10. Weigen Yan, Yeong-Nan Yeh: The determinants of q-distance matrices of trees and two
quantities relating to permutations. Adv. in Appl. Math., 39 (2007), 311–321.
R. B. Bapat
Stat-Math Unit
Indian Statistical Institute, Delhi
7-SJSS Marg, New Delhi 110 016, India.
email: [email protected]
Sivaramakrishnan Sivasubramanian
Department of Mathematics
Indian Institute of Technology, Bombay
Mumbai 400 076, India.
email: [email protected]